Actroid DER2 is an upgraded version of Kokoro’s previous fembot, Actroid DER, who has made quite a name for herself by providing services at a number of events, including the 2005 World Expo. Compared to the previous model, DER2 has thinner arms and a wider repertoire of expressions. The smoothness of her movement has also been improved, making it now even more likely for the uninitiated to confuse her with an actual human being.
i saw the predecessor, DER, at nextfest. it was pretty amazing. now they already supersede it?
At local.ch we made an XHTML version – m.local.ch – of the phone book and event search available in July 2006. Over the last few months we have been working with Endoxon to create a more advanced solution using J2ME technology. This allows us to do more with the map: zooming and panning, displaying multiple results, and reading the GPS position.
Hitachi has successfully tested a brain-machine interface that allows users to turn power switches on and off with their mind. Relying on optical topography, a neuroimaging technique that uses near-infrared light to map blood concentration in the brain, the system can recognize the changes in brain blood flow associated with mental activity and translate those changes into voltage signals for controlling external devices. In the experiments, test subjects were able to activate the power switch of a model train by performing mental arithmetic and reciting items from memory.
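Roughly, the decoding could look like this; a minimal sketch (my toy version, not Hitachi’s actual pipeline), assuming mental effort raises regional blood oxygenation, so the mean of a task window is compared against a resting baseline to trip the switch:

```python
# A minimal sketch (not Hitachi's actual pipeline) of turning an optical-topography
# signal into a binary switch: mental effort raises regional blood oxygenation,
# so we compare a task-window mean against a resting baseline.
import numpy as np

def decode_switch(oxy_hb, fs=10.0, baseline_s=10.0, task_s=10.0, k=2.0):
    """oxy_hb: 1-D oxygenated-hemoglobin trace from one NIRS channel.
    Returns True (switch ON) if the task window rises k standard
    deviations above the resting baseline."""
    n_base = int(baseline_s * fs)
    n_task = int(task_s * fs)
    baseline = oxy_hb[:n_base]
    task = oxy_hb[n_base:n_base + n_task]
    mu, sigma = baseline.mean(), baseline.std() + 1e-9
    return task.mean() > mu + k * sigma

# Toy demo: flat baseline, then a slow hemodynamic rise during mental arithmetic.
rng = np.random.default_rng(0)
t = np.arange(0, 20, 0.1)
signal = 0.02 * rng.standard_normal(t.size)
signal[100:] += np.linspace(0, 0.3, 100)   # oxygenation ramp while counting backward
print("train power:", "ON" if decode_switch(signal) else "OFF")
```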
We’ve learned more about the brain in the last 5 years than in the 50 before that. Lynch is working on a proposal for a 5-year National Neurotechnology Initiative with a budget of $200 million a year. It would identify projects to fund, such as the development of a “brain interface” device that would route signals from the muscles and sensory organs; technology that would allow nerves to control prosthetic devices; and a brain-simulation project that would replicate the way the brain works.
2007-08-25: Brainloop, Google Earth controlled by a brain-computer interface.
He inserts a 4 sq. mm array of 100 neural probes into the M1 arm knob of the cortex. With a random sample of neural signaling from that region of the brain, and some Kalman filtering, patients can control the cursor on screen instantly (unlike biofeedback or sensory remapping, which require training). Motor intent can be decoded from a sample averaging 24 neurons. When connected to a robot hand and asked to “make a fist,” the patient exclaimed “holy shit” as it worked on the first try. Prior to the experiments, open questions included: Do the neurons stay active (other work indicates that the motor cortex reorganizes within minutes of decoupled sensory input)? Can thinking still activate the motor neurons? The test patients had been in sensory deprivation for 2-9 years prior. Will there be scarring and degradation over time? 1 patient is 3 years in. What are the neural plasticity effects?
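Here is a minimal sketch of that kind of velocity Kalman-filter decoder. All parameters are made up; real systems fit the tuning matrix and noise covariances from calibration data.

```python
# A minimal sketch of a velocity-state Kalman-filter cursor decoder.
# Parameters here are invented; real systems fit A, H, Q, R from calibration.
import numpy as np

rng = np.random.default_rng(1)
n_neurons, dim = 24, 2          # ~24 sampled motor neurons, 2-D cursor velocity

A = np.eye(dim) * 0.95          # velocity smoothness prior
Q = np.eye(dim) * 0.03          # process noise
H = rng.standard_normal((n_neurons, dim))  # tuning: firing rate vs. velocity
R = np.eye(n_neurons) * 0.5     # observation noise

x = np.zeros(dim)               # decoded velocity estimate
P = np.eye(dim)

def kalman_step(x, P, z):
    # predict
    x = A @ x
    P = A @ P @ A.T + Q
    # update with this bin's firing rates z
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(dim) - K @ H) @ P
    return x, P

true_v = np.array([1.0, -0.5])  # intended cursor velocity
for _ in range(50):             # 50 time bins of spike counts
    z = H @ true_v + 0.7 * rng.standard_normal(n_neurons)
    x, P = kalman_step(x, P, z)
print("decoded velocity:", np.round(x, 2))  # should land near [1.0, -0.5]
```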
Researchers described the brain-computer interface that allowed Ms. Scheuermann to move an arm, turn and bend a wrist, and close a hand for the first time in 9 years. Less than 1 year after she told the research team, “I’m going to feed myself chocolate before this is over,” Ms. Scheuermann savored its taste and announced as they applauded her feat, “1 small nibble for a woman, 1 giant bite for BCI.”
Even though the animals were on different continents, with the resulting noisy transmission and signal delays, they could still communicate. This tells us that we could create a workable network of animal brains distributed across many different locations.
2013-03-17: Hive mind privacy. One of the most interesting arguments for privacy in our (near) hive mind: it cuts down on the quadratic communication overhead (full pairwise connectivity among n nodes needs n(n-1)/2 links). Even our brain isn’t fully connected; it’s quite sparse, in fact. 2014-03-04: I had somehow missed this 2 years ago. Mary Lou Jepsen’s estimation, in “Could future devices read images from our brains?”: it should be possible to increase resolution 1000x in the next few years.
A significant proportion of patients who were classified as vegetative in recent years have been misdiagnosed – Owen estimates perhaps 20%. Schiff, who weighs up the extent of misdiagnosis a different way, goes further. Based on recent studies, he says 40% of patients thought to be vegetative are, when examined more closely, partly aware. Among this group of supposedly vegetative patients are those who are revealed by scanners to be able to communicate and should be diagnosed as locked-in, if they are fully conscious, or minimally conscious, if their abilities wax and wane. But Schiff believes the remainder will have to be defined another way altogether, since being aware does not necessarily mean being able to use mental imagery. Nor does being aware enough to follow a command mean possessing the ability to communicate.
Another story:
For 12 years, Scott had remained silent, locked inside his body, quietly watching the world go by. Now, the fMRI had revealed a person: a living, breathing soul who had a life, attitudes, beliefs, memories and experiences, and who had the sense of being somebody who was alive and in the world – no matter how strange and limited that world had become.
On many occasions in the months that followed, we conversed with Scott in the scanner. He expressed himself, speaking to us through this magical connection we had made between his mind and our machine. Somehow, Scott came back to life. He was able to tell us that he knew who he was; he knew where he was; and he knew how much time had passed since his accident. And thankfully, he confirmed that he wasn’t in any pain.
Neuroethics, and the rules for when you are declared brain dead, are in for an upheaval.
After a major injury, some patients are in such serious condition that doctors deliberately place them in an artificial coma to protect their body and brain so they can recover. That could be a mistake: based on the cat experiments, an even deeper, extreme coma may actually be more protective. “Indeed, an organ or muscle that remains inactive for a long time eventually atrophies. It is plausible that the same applies to a brain kept for an extended period in a state corresponding to a flat EEG. An inactive brain coming out of a prolonged coma may be in worse shape than a brain that has had minimal activity. Research on the effects of extreme deep coma during which the hippocampus is active is absolutely vital for the benefit of patients.”
Intriguing new possibilities for computer-assisted communication of brain states between individuals. The brain-to-brain method may be used to augment this mutual coupling of the brains, and may have a positive impact on human social behavior.
In experiments, Brainet uses signals from electrode arrays implanted in the brains of multiple rodents to merge their collective brain activity and jointly control a virtual avatar arm, or even to perform sophisticated computations, including image pattern recognition and weather forecasting.
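A toy illustration of why merging brains helps at all (my sketch, not the actual Brainet pipeline): averaging independent noisy estimates of the same target shrinks the error roughly by the square root of the number of animals.

```python
# Toy demo: averaging independent, noisy per-animal estimates of the same
# target cuts the error roughly by sqrt(N), so pooled rats beat any one rat.
import numpy as np

rng = np.random.default_rng(2)
target = 1.0                          # e.g. desired avatar-arm displacement
noise = 0.5                           # per-animal decoding noise (invented)

for n_rats in (1, 2, 3, 4):
    estimates = target + noise * rng.standard_normal((10000, n_rats))
    pooled = estimates.mean(axis=1)   # merge the animals' decoded outputs
    rmse = np.sqrt(((pooled - target) ** 2).mean())
    print(f"{n_rats} brain(s): RMSE = {rmse:.3f}")
```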
A novel brain-computer interface has allowed a paraplegic man to walk for a short distance, unaided by an exoskeleton or other types of robotic support.
2016-06-01: Remote controlled insects. This is an improvement over the robo cockroach:
UC Berkeley researchers are developing “Neural Dust,” tiny wireless sensors for implanting in the brain, muscles, and intestines that could someday be used to control prosthetics or as “electroceuticals” to treat epilepsy or fire up the immune system. So far, they’ve tested a 3-millimeter-long version of the device in rats. “I think the long-term prospects for neural dust are not only within nerves and the brain, but much broader. Having access to in-body telemetry has never been possible because there has been no way to put something supertiny superdeep. But now I can take a speck of nothing and park it next to a nerve or organ, your GI tract or a muscle, and read out the data.”
2016-09-11: Do we really want to fuse our brains together?
If a rat can teach herself to use a completely new sensory modality – something the species has never experienced throughout the course of its evolutionary history – is there any cause to believe our own brains will prove any less capable of integrating novel forms of input?
Artificial Development is building CCortex, a massive spiking neural network simulation of the human cortex and peripheral systems. Upon completion, CCortex will represent up to 20b neurons and 20t connections, achieving a level of complexity that rivals the mammalian brain, and making it the largest, most biologically realistic neural network ever built. The system is up to 10k times larger than any previous attempt to replicate primary characteristics of human intelligence.
Antennas 100x smaller could lead to tiny brain implants, micro-medical devices, or phones you can wear on your finger. The antennas are expected to have sizes comparable to the acoustic wavelength, reducing antenna size by orders of magnitude compared to state-of-the-art compact antennas. These miniaturized ME antennas have drastically enhanced gain at small size, owing to acoustically actuated magnetoelectric (ME) receiving/transmitting mechanisms at RF frequencies.
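A back-of-the-envelope check on the size claim, with assumed numbers (not the paper’s): a resonant antenna scales with wavelength, and at the same RF frequency the acoustic wavelength in a solid is orders of magnitude shorter than the electromagnetic one.

```python
# Back-of-the-envelope check (my numbers, not the paper's): a resonant antenna
# scales with wavelength, and the acoustic wavelength at RF frequencies is
# vastly shorter than the electromagnetic wavelength in air.
c = 3.0e8        # speed of light, m/s
v_acoustic = 6e3 # rough speed of sound in a magnetoelectric film, m/s (assumed)
f = 2.5e9        # an RF frequency, Hz

lambda_em = c / f
lambda_ac = v_acoustic / f
print(f"EM wavelength:       {lambda_em * 100:.1f} cm")   # ~12 cm
print(f"acoustic wavelength: {lambda_ac * 1e6:.2f} um")   # ~2.4 um
print(f"ratio: {lambda_em / lambda_ac:.0f}x")             # ~50,000x
```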
The new technique “could provide a means of communication for people who are unable to verbally communicate. It could also have forensic uses for law enforcement in gathering eyewitness information on potential suspects, rather than relying on verbal descriptions provided to a sketch artist.”
But what about letting patients actively participate with AI in improving performance? To test that idea, researchers ran a “mutual learning” study between computer and humans, with 2 severely impaired (tetraplegic) participants with chronic spinal cord injury. The goal: win a live virtual racing game at an international event. After training for several months, on Oct. 8, 2016, the 2 pilots participated in Cybathlon in Zurich, Switzerland, the first international para-Olympics for disabled individuals in control of bionic assistive technology. 1 of those pilots won the gold medal and the other held the tournament record.
DARPA is funding development of high-resolution brain interfaces. At the same time, 2 companies have breakthrough technology for higher-resolution brain interfaces: Elon Musk’s Neuralink and Mary Lou Jepsen’s Openwater red-light scanner.
A system that translates thought into intelligible speech. Devices monitor brain activity, and artificial intelligence reconstructs the words a person hears. This breakthrough harnesses the power of speech synthesizers and artificial intelligence, and could lead to new ways for computers to communicate directly with the brain. The DNN-vocoder combination achieved the best performance (75% accuracy), 67% higher than the baseline system (linear regression with an auditory spectrogram).
An implanted brain-computer interface coupled with deep-learning algorithms can translate thought into computerized speech. The researchers asked native English speakers on Amazon’s Mechanical Turk crowdsourcing marketplace to transcribe the sentences they heard. The listeners accurately heard the sentences 43% of the time when given a set of 25 possible words to choose from, and 21% of the time when given 50 words. Although the accuracy rate remains low, it would be good enough to make a meaningful difference to a “locked-in” person, who is almost completely paralyzed and unable to speak.
The new documentary I Am Human chronicles how neurotechnology could restore sight, retrain the body, and treat diseases—then make us all more than human.
Here, human participants listened to questions and responded aloud with answers while we used high-density electrocorticography (ECoG) recordings to detect when they heard or said an utterance and to then decode the utterance’s identity. Because certain answers were only plausible responses to certain questions, we could dynamically update the prior probabilities of each answer using the decoded question likelihoods as context. We decode produced and perceived utterances with accuracy rates as high as 61% and 76%, respectively (chance is 7% and 20%). Contextual integration of decoded question likelihoods significantly improves answer decoding. These results demonstrate real-time decoding of speech in an interactive, conversational setting, which has important implications for patients who are unable to communicate.
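A minimal sketch of that contextual-integration trick, with invented questions, answers, and probabilities: the decoded question distribution becomes a prior over answers, which is then combined with the answer decoder’s likelihoods.

```python
# Minimal sketch of context integration: the decoded question distribution
# reweights the prior over answers before combining with the answer decoder's
# likelihoods. Questions, answers, and all numbers here are invented.
import numpy as np

questions = ["How is your room?", "How is your pain?"]
answers   = ["bright", "dark", "good", "bad"]

# Which answers plausibly follow which question (rows sum to 1).
p_answer_given_q = np.array([
    [0.5, 0.5, 0.0, 0.0],   # room -> bright/dark
    [0.0, 0.0, 0.5, 0.5],   # pain -> good/bad
])

# Decoded (noisy) question likelihoods from the question decoder:
p_q = np.array([0.8, 0.2])

# Context prior over answers = mixture over decoded questions:
prior = p_q @ p_answer_given_q            # [0.4, 0.4, 0.1, 0.1]

# Answer decoder's raw likelihoods (ambiguous between "dark" and "bad"):
likelihood = np.array([0.05, 0.45, 0.05, 0.45])

posterior = prior * likelihood
posterior /= posterior.sum()
print(dict(zip(answers, np.round(posterior, 2))))   # context favors "dark"
```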
2019-10-30: Brain-to-Brain communication for group problem-solving
The interface combines electroencephalography (EEG) to record brain signals and transcranial magnetic stimulation (TMS) to deliver information noninvasively to the brain. The interface allows 3 human subjects to collaborate and solve a task using direct brain-to-brain communication. 2 of the 3 subjects are designated as “Senders” whose brain signals are decoded using real-time EEG data analysis. The decoding process extracts each Sender’s decision about whether to rotate a block in a Tetris-like game before it is dropped to fill a line. The Senders’ decisions are transmitted via the Internet to the brain of a third subject, the “Receiver,” who cannot see the game screen. The Senders’ decisions are delivered to the Receiver’s brain via magnetic stimulation of the occipital cortex. The Receiver integrates the information received from the 2 Senders and uses an EEG interface to make a decision about either turning the block or keeping it in the same orientation. A second round of the game provides an additional chance for the Senders to evaluate the Receiver’s decision and send feedback to the Receiver’s brain, and for the Receiver to rectify a possible incorrect decision made in the first round.
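A toy simulation of the protocol, with all accuracies invented, just to show what the second feedback round buys:

```python
# Toy simulation of the two-Sender/one-Receiver protocol. All accuracies are
# invented; the point is how the second (feedback) round corrects errors.
import random
random.seed(3)

P_SEND = 0.85     # chance a Sender's decision is decoded/delivered correctly
P_FIX  = 0.85     # chance the feedback round corrects a wrong first decision

def play(truth, feedback):
    # two Senders each transmit a noisy 1-bit decision ("rotate" or not)
    bits = [truth if random.random() < P_SEND else (not truth) for _ in range(2)]
    # Receiver decides: agree -> take it; disagree -> pick one at random
    decision = bits[0] if bits[0] == bits[1] else random.choice(bits)
    # second round: Senders flag a wrong choice so the Receiver can rectify it
    if feedback and decision != truth and random.random() < P_FIX:
        decision = not decision
    return decision == truth

for fb in (False, True):
    ok = sum(play(random.random() < 0.5, fb) for _ in range(100000))
    print(f"feedback round {'on' if fb else 'off'}: {ok / 1000:.1f}% correct")
```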
Using an implant, a paralyzed individual achieved typing speeds of 90 characters per minute with 94.1% raw accuracy online, and greater than 99% accuracy offline with a general-purpose autocorrect. Despite working with a relatively small amount of data (only 242 sentences’ worth of characters), the system worked remarkably well. The lag between the thought and a character appearing on screen was ~500ms, and the participant was able to produce 90 characters per minute, easily topping the previous record for implant-driven typing, which was ~25 characters per minute.
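A minimal sketch of what a general-purpose autocorrect layer adds on top of a mostly-correct character stream (the vocabulary and errors here are stand-ins): snap each decoded word to the nearest dictionary entry by string similarity.

```python
# Minimal sketch of a general-purpose autocorrect pass over noisy decoded
# characters: snap each word to the closest dictionary entry. The vocabulary
# and the error pattern here are stand-ins, not the study's actual system.
from difflib import get_close_matches

VOCAB = ["hello", "world", "chocolate", "before", "this", "is", "over"]

def autocorrect(decoded_words):
    fixed = []
    for w in decoded_words:
        match = get_close_matches(w, VOCAB, n=1, cutoff=0.6)
        fixed.append(match[0] if match else w)
    return fixed

raw = ["helko", "wprld", "chocolxte"]    # ~94% of characters right
print(autocorrect(raw))                  # ['hello', 'world', 'chocolate']
```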
2022-04-15: EEG electrodes are terrible sensors. In-ear designs may fix that, allow for continuous readings, and perhaps writing too.
But while the immediate uses of NextSense’s earbuds are medical, Berent hopes to eventually build a mass-market brain monitor that, if enough people start using it, can generate enormous quantities of day-to-day brain performance data. The catch, of course, is that since no one has ever done that, it’s not yet obvious what most people would get out of the information. That’s also what’s exciting. “We don’t necessarily know what we would learn because we’ve never had access to that type of data”.
Berent and his team envision a multipurpose device that can stream music and phone calls like AirPods; boost local sound like a hearing aid; and monitor your brain to provide a window into your moods, attention, sleep patterns, and periods of depression. He also hopes to zero in on a few sizes that would fit a vast majority of people, to dispense with all the ear-scanning.
Far along on the NextSense road map is something unproven, and kind of wild. If AI can decode tons of brain data, the next step would be to change those patterns, perhaps by doing something as simple as playing a well-timed sound. “It’s almost a transformative moment in history,” says Berent, fascinated by the prospect of using audio to nudge someone into a deeper sleep state. “It’s so convenient, it doesn’t bother you. People are wearing stuff in the ear typically anyway, right?”
Our BCI decoded speech at 62 words per minute, which is 3.4x faster than the prior record for any kind of BCI and begins to approach the speed of natural conversation (160 words per minute). We highlight 2 aspects of the neural code for speech that are encouraging for speech BCIs: spatially intermixed tuning to speech articulators that makes accurate decoding possible from only a small region of cortex, and a detailed articulatory representation of phonemes that persists years after paralysis. These results show a feasible path forward for using intracortical speech BCIs to restore rapid communication to people with paralysis who can no longer speak.
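Quick arithmetic on the quoted numbers:

```python
# implied prior record and remaining gap to natural speech
prior_record = 62 / 3.4     # ~18 words per minute
gap = 160 / 62              # natural conversation is still ~2.6x faster
print(f"prior record = {prior_record:.1f} wpm; speech is {gap:.1f}x faster than this BCI")
```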
Global Imagination makes the Magic Planet digital video globe – the digital display with a sphere-shaped screen. We also supply software, content and services that enable you to present global information and promotional media in the most compelling and interactive way possible.
3d globes you can visualize images on. this rocks super hard, and i want one
Heh. Sarah and I were told in Costa Rica that we are the worst kayaking crew ever. Maybe some practice would help.
A real-time water simulator backed by a pre-computed database of 3D fluid dynamics. The system runs a real-time wave model and consults the database for complex, fast-flow areas around objects, producing realistic wakes and force feedback of water resistance.
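A toy sketch of the hybrid idea, assuming a simple height-field wave solver for open water plus a stored velocity patch standing in for the pre-computed database (none of this is the product’s actual code):

```python
# Toy hybrid water sim: a cheap real-time height-field wave solver for open
# water, with a precomputed patch blended in near an obstacle, standing in
# for the stored fast-flow solutions the article mentions.
import numpy as np

N = 64
h = np.zeros((N, N))        # water height
v = np.zeros((N, N))        # vertical velocity
h[32, 32] = 1.0             # a paddle strike / wake source

# "Database": a precomputed velocity patch around an obstacle at (16,16).
obstacle_patch = 0.05 * np.ones((8, 8))

def step(h, v, c2=0.25, damping=0.995):
    # discrete wave equation: Laplacian of height drives vertical velocity
    lap = (np.roll(h, 1, 0) + np.roll(h, -1, 0) +
           np.roll(h, 1, 1) + np.roll(h, -1, 1) - 4 * h)
    v = (v + c2 * lap) * damping
    h = h + v
    h[16:24, 16:24] += obstacle_patch * 0.01   # blend in precomputed flow
    return h, v

for _ in range(100):
    h, v = step(h, v)
print("wave field energy:", float((h ** 2).sum()))
```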
A fleet of 100 robotic submarines could in 5 years’ time be roaming the vast unexplored stretches of the world’s seafloors and helping unlock their mysteries. “The pace of exploration in the ocean is going a little too slowly”. Only 5% of the ocean floor has been explored in detail, which means there may be numerous new species and geothermal processes waiting to be discovered.
NOAA plans to map the ocean floors with unmanned vehicles. 2010-10-25: Antarctica ocean UAV. Such a baby step. we should have fleets of fully autonomous ocean robots by now, mapping the sea floors.
Gavia, a bullet-shaped robot developed by the University of British Columbia, is currently in Antarctica on a mission to explore heretofore uncharted areas of the ocean.
Excavating the past will mean deploying teams of remote-sensing robotic machines semi-autonomously flying, crawling, gridding, scanning, squeezing, and non-destructively burrowing their way into lost rooms and buried cities, perhaps even translating ancient languages along the way.
The wreckage of the ARA San Juan (S-42) was found by Ocean Infinity. Ocean Infinity used 5 Autonomous Underwater Vehicles (AUVs) to carry out the search. Ocean Infinity’s ocean search capability is the most advanced in the world. Their AUVs are capable of operating in depths from 5 meters to 6000 meters and covering vast areas of the seabed at unparalleled speed. The AUVs are not tethered, allowing them to go deeper and collect higher-quality data. They are equipped with side-scan sonar, a multi-beam echo-sounder, an HD camera, and synthetic aperture sonar. Ocean Infinity is able to deploy 2 work-class ROVs and heavy lifting equipment capable of retrieving objects weighing up to 45T from 6000 meters.
A particular class of functional equations, called comparametric equations, is introduced as a basis for quantigraphic image processing. Comparametric equations are fundamental to the analysis and processing of multiple images differing only in exposure. The well-known “gamma correction” of an image is presented as a simple example of a comparametric equation, for which it is shown that the underlying quantigraphic function does not pass through the origin.
For this reason it is argued that exposure adjustment by gamma correction is inherently flawed, and alternatives are provided. These alternatives, when applied to a plurality of images that differ only in exposure, give rise to a new kind of processing in the “amplitude domain” (as opposed to the time domain or the frequency domain).
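A worked version of the gamma example, following my reading of Mann’s comparametric formulation (notation assumed): a comparametric equation relates the camera response f at two exposures, q and kq, of the same photoquantity.

```latex
% Sketch of the gamma-correction example, per (my reading of) Mann's
% comparametric formulation: g relates two exposures of the same scene.
\[
  g\bigl(f(q)\bigr) = f(kq), \qquad k = \text{exposure ratio}
\]
% Take gamma correction as the comparametric function, g(f) = f^{\gamma}.
% Trying the family f(q) = e^{\beta q^{a}} gives
\[
  f(kq) = e^{\beta k^{a} q^{a}} \stackrel{!}{=} f(q)^{\gamma}
        = e^{\gamma \beta q^{a}}
  \;\Longrightarrow\; a = \log_k \gamma .
\]
% So one solution family is f(q) = exp(beta * q^{log_k gamma}), and
% f(0) = 1, not 0: the underlying quantigraphic response cannot pass
% through the origin, which is the flaw the text points to.
```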
The so-called biohybrid system sports a power pack and computer all contained within the prosthesis and uses sensors to allow more realistic movements than static, strap-on devices. The first systems have noninvasive sensors attached to the prostheses. In 2 years scientists will implant sensors into study volunteers’ nervous systems.
the state of the art of medical implants / prosthetics
overview of exhibits at nextfest. i’m going, bitches. 2006-09-29: check this shit out. Super awesome. 2006-10-01: wired is back. after years of being lost in the dotcom woods, i had written them off, but their nextfest taught me better today. i will be thinking about the exhibits for a long time: a wonderful departure from the daily stream of bad news out of the religio-political corner. watch my linkstream for more in the coming days. in the meantime, here are some of my favorite exhibits: