Tag: augmentedreality

Telepresence

Pretty impressive

2007-11-02: Nice article about the organizational structure at Cisco, and how they paid for their collaboration tools by cutting travel budgets:

“This will shock you. The other day I started the morning with my top staff in India. Then I went to Japan and a meeting with Fujitsu, then on to Cleveland, then London and a meeting with BT. The whole trip took only 3.5 hours, and I was far more effective in the calls.” The reason: Chambers was traveling, of course, over Cisco’s latest gee-whiz product: telepresence, a high-def, life-sized, Internet-based communications system that is to traditional video-conferencing what the latest big-screen surround-sound plasma extravaganza would be to Grandma’s black-and-white set with rabbit ears. “When I asked the team to design this, I said, ‘Make it like Star Trek. You know, Beam me up, Scotty.'”

2008-05-28: Holy crap indeed.

The ‘Cisco On-Stage TelePresence Experience’ was an ambitious collaboration between Cisco and Musion Systems. Musion seamlessly integrated their 3D holographic display technology with Cisco’s TelePresence system to create the world’s first real-time virtual presentation.

2023-03-03: This whole area has not developed as quickly as hoped. Perhaps because regular video is good enough? Or because most people hadn’t even tried video calls before the pandemic? Anyway, here’s a late-2022 look at the state of Google Starline. The person in charge of this space has since left, pointing to an AR / VR winter to come.

The failed promise of VR

Anselm Hook on the failed promise of VR, and the dangers of AR / 3D geo

Capturing ‘appearance’ rather than ‘behavior’ was the death knell. Very good post:

My own sense has been that VR is something of a tar-pit and that Augmented Reality is too close to VR for comfort. Having watched VRML ensnare and sink so many ventures, I wonder if the same thing would happen with an intersection between cartography and visualization. I used to write video games, quite a few of them immersive 3D. In that role I used to hang out with the VRML community, watching them go through their contortions as they tried to define the VRML spec (and the atrocity that is now X3D). Oddly, the geo enthusiast get-togethers we see today are in fact almost a perfect mirror of the kinds of VRML get-togethers that used to happen back in the late 90’s: a variety of participants, some backed by ventures, others by passion, absorbed in the possibilities of a technology… but all mostly really just contributing to the heat death of the universe.

In the last go-round VR failed to succeed on the web for a variety of reasons (competition, lack of cohesion, internecine wars over an ideologically starved space), but perhaps mostly because the enthusiasts went after the lowest-hanging fruit: capturing ‘appearance’ rather than ‘behavior’. You can see this in the way the VRML grammar puts most of its emphasis on static geometry as opposed to parametric or procedural geometry, and in the way it has very little emphasis on constraints over time or on simulation at all. One would have imagined a very rich physical dynamics model for VRML defining many kinds of joint and contact constraints, but in fact it is impoverished in that regard. What they wanted was to upload themselves into a furry wonderland, and what they got was a simple grammar for defining buckets of vertices and polygons… a form versus function argument.

As for a geo 3D interest group: it seems like such a group should focus on modeling the behavior of systems, with trite ideas such as decorating 3D space with post-it notes or drawing static geometry in 3D space treated as something taken for granted; needed, yes, but aspirational, no. Basically (IMHO) if you want to build something truly durable, then you have to dig deep into the heart of where the value is. I don’t see a lot of value in just doing the world in 3D; it’s been articulated for years as a thesis, there’s a huge amount of expertise that should have done this already, and the digital landscape is riddled with half-hearted attempts to do just that, whose developers eventually walked away out of sheer boredom. But there is value in simulating the world: its behavior over time and the rules that drive the construction of the artifacts that we see in it. The difference is that in the former you are manually plunking down a bunch of buildings and calling it a city, and in the latter you are building a time machine. In the former you have buckets of points and polygons; in the latter you have scripts that can grow buildings and can be used to graph interactions between different phenomena.

The poignancy of our planet, its urban landscape, its beauty, was driven home to me while flying out of OAK last Friday just as the sun was setting. Looking down through rifts of cloud at trails of light and the shadows of buildings illuminated by the evening dusk, one really did have a sense of being a god over a next-generation video game. As the sun faded one was left only with an abstract sketch of human habitat in halogen and phosphor. At that moment I happened to be reading ‘In the Absence of the Sacred’, which talks about how quickly our urban culture has colonized the world… and I could see what he was referring to directly by just looking out the window. It would have been wonderful to have a smart window showing not just the digital facts (who was where, what was where) but how it came to be there, its flow over time, and where it was going. To see not just the present but to see through the layers of time as well…

I did like this earlier, appropriate comment cited by Kevin Kelly regarding situational awareness.
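The static-versus-procedural point in the quote is easy to make concrete. Here is a minimal sketch of my own (not from the post; every name and parameter is hypothetical): one block hard-codes a bucket of vertices the way a VRML-style scene does, the other keeps the generating rule, so a handful of parameters can grow many buildings or replay a city’s growth over time.

```python
# Illustrative contrast between "buckets of vertices" and procedural geometry.
# Names and parameters are hypothetical, not taken from VRML/X3D or the quoted post.
from typing import List, Tuple

Vertex = Tuple[float, float, float]

# Static approach: the data *is* the model. One fixed box, eight vertices.
STATIC_BOX: List[Vertex] = [
    (0, 0, 0), (10, 0, 0), (10, 8, 0), (0, 8, 0),      # ground-floor corners
    (0, 0, 30), (10, 0, 30), (10, 8, 30), (0, 8, 30),  # roof corners
]

def make_box_building(width: float, depth: float,
                      floors: int, floor_height: float = 3.0) -> List[Vertex]:
    """Procedural approach: the rule is the model.

    The same function can grow a whole skyline from a few parameters,
    or be re-run with different inputs to model change over time
    (e.g. a city densifying year by year).
    """
    height = floors * floor_height
    return [
        (0, 0, 0), (width, 0, 0), (width, depth, 0), (0, depth, 0),
        (0, 0, height), (width, 0, height),
        (width, depth, height), (0, depth, height),
    ]

if __name__ == "__main__":
    # "Growing" a block of buildings instead of plunking each one down by hand.
    skyline = [make_box_building(10, 8, floors) for floors in (2, 5, 12, 40)]
    print(len(skyline), "buildings,", sum(len(b) for b in skyline), "vertices")
```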

Motion tracking

Richard Marks just showed a couple of motion tracking demos they developed for EyeToy. Google has images. Some of the demos were so awesome that they drew loud cheering from this jaded crowd. Farther out are sensors with depth detection, which allow truly 3D integration, where you can “reach out” to interact with the system. The demo shows butterflies circling around a guy as he walks in front of the camera. This is truly awesome. Also, for the DDR crowd: it now supports EyeToy.
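For a rough sense of how a depth sensor turns “reach out” gestures into input, here is a hedged sketch (mine, not from the Sony demos): assume a per-pixel depth frame in metres, treat anything closer than a made-up threshold as the player reaching toward the screen, and use the blob’s centroid as a cursor.

```python
# Hedged sketch: detecting a "reach out" gesture from a depth image.
# The depth frame, threshold, and values below are illustrative assumptions,
# not taken from the EyeToy demos described above.
import numpy as np

REACH_THRESHOLD_M = 0.8  # anything closer than this counts as "reaching out"

def find_reach_point(depth: np.ndarray):
    """Return (row, col) of the centroid of pixels closer than the threshold,
    or None if nothing is within reach."""
    mask = (depth > 0) & (depth < REACH_THRESHOLD_M)  # 0 = no depth reading
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    return int(rows.mean()), int(cols.mean())

if __name__ == "__main__":
    # Fake a 240x320 depth frame: background ~2.5 m, a "hand" blob at 0.6 m.
    frame = np.full((240, 320), 2.5)
    frame[100:140, 150:190] = 0.6
    print(find_reach_point(frame))  # -> roughly (119, 169)
```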

AR

Steve Mann is exploring augmented reality. While a lack of implants disqualifies the cyborg moniker (cf. gargoyles), his experience is nevertheless one we will share in a few short years.
Increasing the range of sensory inputs while increasing filtering capabilities strikes me as an excellent way to redefine who we are. We are what we perceive.

In his 2000 book “Cyborg: Digital Destiny and Human Possibility in the Age of the Wearable Computer,” Mann wrote about the surreal beauty he experienced in programming the computer in his vision to alter colors, or alert him to objects behind him.
“The wearable computer allows me to explore my humanity, alter my consciousness, shift my perspectives so that I can choose — any given time — to see the world in very different, often quite liberating ways,” he wrote in “Cyborg.”
For example, Mann and his graduate students have developed software that can transform billboards or other rectangular shapes in the physical world — when viewed through the lens of a wearable computer — into virtual boxes for reading e-mail and other messages.
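The billboard trick comes down to a planar homography: find the four corners of a rectangle in the camera image and warp your own content onto it. A minimal OpenCV sketch under that assumption follows; the corner coordinates are hard-coded stand-ins for whatever detector finds the billboard, and none of this is Mann’s actual software.

```python
# Minimal sketch of overlaying virtual content onto a detected rectangle,
# the basic move behind turning billboards into message boxes.
# Corner coordinates are hard-coded stand-ins for a real rectangle detector;
# this is an illustration, not Steve Mann's software.
import cv2
import numpy as np

def overlay_on_quad(frame, content, quad_corners):
    """Warp `content` onto the quadrilateral `quad_corners` (4 points, clockwise
    from top-left) in `frame` and return the composited image."""
    h, w = content.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    dst = np.float32(quad_corners)
    H = cv2.getPerspectiveTransform(src, dst)
    warped = cv2.warpPerspective(content, H, (frame.shape[1], frame.shape[0]))
    # Mask of where the warped content landed, used to punch a hole in the frame.
    mask = cv2.warpPerspective(np.full((h, w), 255, np.uint8), H,
                               (frame.shape[1], frame.shape[0]))
    out = frame.copy()
    out[mask > 0] = warped[mask > 0]
    return out

if __name__ == "__main__":
    frame = np.zeros((480, 640, 3), np.uint8)                   # stand-in camera frame
    message = np.full((120, 320, 3), (40, 200, 40), np.uint8)   # stand-in "e-mail box"
    corners = [(200, 100), (500, 130), (490, 300), (210, 280)]  # "detected" billboard
    composited = overlay_on_quad(frame, message, corners)
    print(composited.shape)
```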

Holographic video

The diffraction pattern from just one high-resolution hologram can easily use up more than 1 terabyte of data. A moderately flicker-free holographic video would require at least 20 such holograms per second. Clearly, churning through 20 terabytes’ worth of information every second would require extraterrestrial technology: today’s fastest PCs operate at 0.001% of that rate.
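The arithmetic behind that claim, spelled out as a back-of-the-envelope check using only the figures in the quote (the “fastest PC” throughput is just the 0.001% figure turned around):

```python
# Back-of-the-envelope check of the holographic video numbers in the quote.
HOLOGRAM_SIZE_TB = 1.0   # >= 1 TB per high-resolution hologram frame
FRAMES_PER_SECOND = 20   # minimum for moderately flicker-free video

required_tb_per_s = HOLOGRAM_SIZE_TB * FRAMES_PER_SECOND    # 20 TB/s
pc_fraction = 0.001 / 100                                   # "0.001% of that rate"
pc_tb_per_s = required_tb_per_s * pc_fraction               # ~0.0002 TB/s

print(f"required: {required_tb_per_s:.0f} TB/s")
print(f"fastest PC (per the quote): {pc_tb_per_s * 1e6:.0f} MB/s")   # ~200 MB/s
print(f"shortfall factor: {required_tb_per_s / pc_tb_per_s:,.0f}x")  # 100,000x
```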