Tag: graphics

5 technologies for 2022

In 5 years, new imaging devices using hyperimaging technology and AI will help us see broadly beyond the domain of visible light by combining multiple bands of the electromagnetic spectrum to reveal valuable insights or potential dangers that would otherwise be unknown or hidden from view. Most importantly, these devices will be portable, affordable and accessible, so superhero vision can be part of our everyday experiences.

Today, more than 99.9% of the electromagnetic spectrum cannot be observed by the naked eye. Over the last 100 years, scientists have built instruments that can emit and sense energy at different wavelengths. Today, we rely on some of these to take medical images of our body, see the cavity inside our tooth, check our bags at the airport, or land a plane in fog. However, these instruments are incredibly specialized and expensive and only see across specific portions of the electromagnetic spectrum.

Humans can’t tell from the pixels

In a 2016 paper, Hany Farid, a computer scientist at Dartmouth, along with some colleagues, found that “observers have considerable difficulty” telling computer-generated and real images apart—“more difficulty than we observed 5 years ago.” On the bright side, though, when the researchers provided 250 Mechanical Turk participants with a brief “training session”—by showing them 10 labeled computer-generated images and 10 original photographs—their ability to distinguish between the 2 types of images improved significantly.

DOOM 2016 Tech breakdown

The new DOOM is a worthy addition to the franchise, built on the new id Tech 6 engine, where ex-Crytek developer Tiago Sousa has taken over as lead renderer programmer following John Carmack's departure. Historically, id Software has been known for open-sourcing its engines after a few years, which often leads to nice remakes and breakdowns. Whether this will hold true for id Tech 6 remains to be seen, but we don't need the source code to appreciate the graphics techniques implemented in the engine.

fascinating how much work goes into this. see also the analysis of the 2020 successor, using id Tech 7.

100X faster metallic rendering

The standard approach to modeling the way surfaces reflect light assumes that surfaces are smooth at the pixel level. But in the real world that isn't the case for metallic materials, as well as fabrics, wood finishes, and wood grain, among others. As a result, current methods render these surfaces as noisy, grainy, or glittery. As the researchers put it: "There is currently no algorithm that can efficiently render the rough appearance of real specular surfaces. This is highly unusual in modern computer graphics, where almost any other scene can be rendered given enough computing power."
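The "smooth at the pixel level" assumption is easy to see in a standard microfacet normal distribution like GGX, which summarizes all sub-pixel roughness as a single statistical lobe per shading point. A rough Python sketch (this is textbook GGX, not the paper's new method):

```python
import math

def ggx_ndf(cos_theta_h, alpha):
    """GGX/Trowbridge-Reitz normal distribution function.

    Standard microfacet models describe roughness statistically: every
    shading point with the same roughness `alpha` gets the same smooth
    distribution, so individual sub-pixel glints can never show up.
    """
    a2 = alpha * alpha
    denom = cos_theta_h * cos_theta_h * (a2 - 1.0) + 1.0
    return a2 / (math.pi * denom * denom)

# The distribution peaks when the half-vector aligns with the normal
# (cos_theta_h = 1) and falls off smoothly away from it.
peak = ggx_ndf(1.0, 0.1)
off_peak = ggx_ndf(0.8, 0.1)
```

Real glinty metal or brushed wood has a spiky, spatially varying micro-normal distribution per pixel, which is exactly what this single smooth lobe throws away; recovering it efficiently is what the paper is about.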

History of the Teapot

The "Utah teapot," as it's affectionately known, has had an enormous influence on the history of computing, dating back to 1974, when computer scientist Martin Newell was a Ph.D. student at the University of Utah. The U of U was a powerhouse of computer graphics research then, and Newell had some novel ideas for algorithms that could realistically display 3D shapes, rendering complex effects like shadows, reflective textures, or rotations that reveal obscured surfaces. But, to his chagrin, he struggled to find a digitized object worthy of his methods. Objects that were typically used for simulating reflections, like a chess pawn, a donut, and an urn, were too simple. He needed more interesting models.

His wife, Sandra, suggested that he digitize the shapes of the tea service they were using, a simple Melitta set from a local department store. It was an auspicious choice: the curves, handle, lid, and spout of the teapot all conspired to make it an ideal object for graphical experiments. Unlike other objects, the teapot could, for instance, cast a shadow on itself in several places. Newell grabbed some graph paper and a pencil, and sketched it.

The computer model proved useful for Newell's own research, featuring prominently in his next few publications. But he and his colleague Jim Blinn also took the important step of sharing their model publicly. As it turned out, other researchers were also starved for interesting 3D models, and the digital teapot was exactly the experimental test bed they needed. At the same time, the shape was simple enough for Newell to input and for computers to process. (Rumor has it some researchers even had the data points memorized!) And unlike many household items, like furniture or fruit in a bowl, the teapot's simulated surface looked realistic without superimposing an artificial, textured pattern.
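The model Newell produced is famously a set of bicubic Bézier patches, which is part of why it was so easy for other researchers to input and process. A minimal sketch of evaluating one such patch in Python (using a made-up flat control grid for illustration, not the actual teapot data):

```python
def bernstein3(i, t):
    """Cubic Bernstein basis polynomial B_{i,3}(t)."""
    coeff = (1, 3, 3, 1)[i]
    return coeff * t**i * (1.0 - t)**(3 - i)

def bezier_patch_point(ctrl, u, v):
    """Evaluate a bicubic Bezier patch at parameters (u, v).

    ctrl is a 4x4 grid of (x, y, z) control points; the full teapot is
    stitched together from a few dozen such patches.
    """
    x = y = z = 0.0
    for i in range(4):
        for j in range(4):
            w = bernstein3(i, u) * bernstein3(j, v)
            px, py, pz = ctrl[i][j]
            x += w * px
            y += w * py
            z += w * pz
    return (x, y, z)

# A flat 4x4 grid of control points, purely to exercise the evaluator.
flat = [[(float(i), float(j), 0.0) for j in range(4)] for i in range(4)]
corner = bezier_patch_point(flat, 0.0, 0.0)  # interpolates the corner point
```

Sixteen control points per patch is a tiny amount of data, yet it encodes genuinely curved geometry, which is why the teapot hit the sweet spot between "interesting to render" and "trivial to share."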

End of the uncanny valley

"I can tell from the pixels" is coming to an end.

As computer-generated characters become increasingly photorealistic, people are finding it harder to distinguish between real and computer-generated imagery.

With photo retouching, postproduction in film, plastic surgery, and increasingly effective makeup & skin care products, we’re being bombarded with a growing amount of imagery featuring people who don’t appear naturally human.

bye bye uncanny valley.
2021-10-17: Things are now at the point where you can win prestigious photography prizes for fake images:

The Book of Veles: How Jonas Bendiksen hoodwinked the photography industry. The photographer explains the many layers of intrigue that went into the creation of his book about misinformation in the contemporary media landscape:

"If computer-generated fake news pictures are accepted by the curators who have to pick the highlights of all the year's best photojournalism, it shows that the whole industry is quite vulnerable. The big tech companies regularly recruit top-level hackers, even criminal ones, to try to break into their systems. They are called penetration testers. They are paid top dollar to hack as much as they can and search for weaknesses in a company's system architecture, so that they can go fix the loopholes and protect themselves against being taken advantage of. I guess I see what I did as a similar service for documentary photography and photojournalism, just on a volunteer basis."