Month: February 2016

History of the Teapot

The “Utah teapot,” as it’s affectionately known, has had an enormous influence on the history of computing, dating back to 1974, when computer scientist Martin Newell was a Ph.D. student at the University of Utah. The U of U was a powerhouse of computer graphics research then, and Newell had some novel ideas for algorithms that could realistically display 3D shapes, rendering complex effects like shadows, reflective textures, or rotations that reveal obscured surfaces. But, to his chagrin, he struggled to find a digitized object worthy of his methods. The objects typically used for simulating reflections, like a chess pawn, a donut, and an urn, were too simple; he needed more interesting models. His wife, Sandra, suggested that he digitize the shapes of the tea service they were using, a simple Melitta set from a local department store. It was an auspicious choice: the curves, handle, lid, and spout of the teapot all conspired to make it an ideal object for graphical experiment. Unlike other objects, the teapot could, for instance, cast a shadow on itself in several places. Newell grabbed some graph paper and a pencil, sketched it, and then entered the points into the computer as a set of smooth Bézier patches.

The computer model proved useful for Newell’s own research, featuring prominently in his next few publications. But he and his colleague Jim Blinn also took the important step of sharing their model publicly. As it turned out, other researchers were also starved for interesting 3D models, and the digital teapot was exactly the experimental test bed they needed. At the same time, the shape was simple enough for Newell to input and for computers to process. (Rumor has it some researchers even had the data points memorized!) And unlike many household items, like furniture or fruit-in-a-bowl, the teapot’s simulated surface looked realistic without superimposing an artificial, textured pattern.

Journals fail to correct papers

RW: What’s been the most troubling incident(s) in the journals’ responses to your correspondence?

BG: I think it depends on perspective. NEJM have simply come out and said, effectively: “We don’t care about outcome switching and we don’t care about your letters correcting it”. While we disagree, and we think readers will be surprised to hear that NEJM take that view, it is at least straightforward. The responses from Annals have really surprised everyone, because they’ve been so confused, so internally contradictory, so riddled with factual errors, and then they’ve behaved very oddly around publishing responses to their “rebuttals”.

You’d think journals would have a vital interest in making papers as high-quality as possible, but apparently that’s not the case; this should make it easier to replace them with something better.

End of the uncanny valley

The era of “I can tell from the pixels” is coming to an end.

As computer-generated characters become increasingly photorealistic, people are finding it harder to distinguish between real and computer-generated imagery.

With photo retouching, postproduction in film, plastic surgery, and increasingly effective makeup & skin care products, we’re being bombarded with a growing amount of imagery featuring people who don’t appear naturally human.

Bye-bye, uncanny valley.
2021-10-17: Things are now at the point where you can win prestigious photography prizes for fake images:

The Book of Veles: How Jonas Bendiksen hoodwinked the photography industry. The photographer explains the many layers of intrigue that went into the creation of his book about misinformation in the contemporary media landscape:

“If computer-generated fake news pictures are accepted by the curators who have to pick the highlights of all the year’s best photojournalism, it shows that the whole industry is quite vulnerable. The big tech companies regularly recruit top-level hackers, even criminal ones, to try to break into their systems. They are called penetration testers. They are paid top dollar to hack as much as they can and search for weaknesses in the company’s system architecture, so that they can go fix the loopholes and protect themselves against being taken advantage of. I guess I see what I did as a similar service for documentary photography and photojournalism, just on a volunteer basis.”

Ramen Bloggers

Ramen bloggers aren’t just passive observers of the noodle-soup phenomenon: to be a ramen writer of Kamimura’s stature, you need to live in a ramen town, and there is unquestionably no town in Japan more dedicated to ramen than Fukuoka. This city of 1.5 million along the northern coast of Kyushu, the southernmost of Japan’s four main islands, is home to 2,000 ramen shops, representing Japan’s densest concentration of noodle-soup emporiums. While bowls of ramen are like snowflakes in Japan, Fukuoka is known as the cradle of tonkotsu, a pork-bone broth made milky white by the deposits of fat and collagen extracted during days of aggressive boiling. It is not simply a specialty of the city; it is the city, a distillation of all its qualities and calluses.

Tare is the flavour base that anchors each bowl, that special potion, usually just 30 ml of concentrated liquid, that bends ramen into one camp or another. In Sapporo, tare is made with miso. In Tokyo, soy sauce takes the lead. At enterprising ramen joints, you’ll find tare made with up to 24 ingredients, an apothecary’s stash of dried fish and fungus and esoteric add-ons. The objective of tare is essentially the core objective of Japanese food itself: to pack as much umami as possible into every bite.