Tag: googleearth

Google Earth Archaeology

I found more in the first 5, 6, 7 hours than I’ve found in 25 years of traditional field surveys and aerial archaeology.

2007-01-07: Overview article. A bit thin on detail, unfortunately.
2014-05-05: holy shit:

A study of Cold War spy-satellite photos has tripled the number of known archaeological sites across the Middle East, revealing thousands of ancient cities, roads, canals, and other ruins.

Brain Communication

A game where you compete in relaxation. The players’ brainwaves control a ball on a table, and the more relaxed player scores a goal against their opponent.

2006-11-15: Slow but steady progress

Hitachi has successfully tested a brain-machine interface that allows users to turn power switches on and off with their mind. Relying on optical topography, a neuroimaging technique that uses near-infrared light to map blood concentration in the brain, the system can recognize the changes in brain blood flow associated with mental activity and translate those changes into voltage signals for controlling external devices. In the experiments, test subjects were able to activate the power switch of a model train by performing mental arithmetic and reciting items from memory.

2007-05-29: National Neurotechnology Initiative

We’ve learned more about the brain in the last 5 years than we did in the last 50 years. Lynch is working on a proposal for a 5-year National Neurotechnology Initiative with a budget of $200 million a year. It would identify projects to fund, such as the development of a “brain interface” device that would route signals from the muscles and sensory organs; technology that would allow nerves to control prosthetic devices; and a brain-simulation project that would replicate the way the brain works.

2007-08-25: Brainloop, Google Earth controlled by a brain computer interface.

2008-02-20: EEG startup.

Emotiv has created technologies that allow machines to take both conscious and non-conscious inputs directly from your mind.

I think I have future shock with this one.

Between this and haptic interfaces… woah.

2008-10-23: An update on the neuro cyborgs

He inserts a 4 sq. mm array of 100 neural probes into the M1 arm knob of the cortex. With a random sample of neural signaling from that region of the brain, and some Kalman filtering, patients can instantly control the cursor on screen (unlike biofeedback or sensory remapping, which require training). Motor intent can be deduced from a sample of 24 neurons on average. When connected to a robot hand and asked to “make a fist”, the patient exclaimed “holy shit” as it worked on the first try. Prior to the experiments, open questions included: Do the neurons stay active (other work indicates that the motor cortex reorganizes within minutes of decoupled sensory input)? Can thinking still activate the motor neurons? The test patients had been in sensory deprivation for 2-9 years prior. Will there be scarring and degradation over time? 1 patient is 3 years in. What are the neural plasticity effects?
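
The decoding recipe above is standard enough to sketch. A minimal toy version, assuming simulated firing rates and a made-up tuning matrix rather than real recordings: a Kalman filter tracks 2-D cursor velocity from the 24-neuron sample.

```python
# Minimal sketch (assumed setup, not the lab's actual decoder): decoding 2-D
# cursor velocity from ~24 neurons' firing rates with a Kalman filter.
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 24

A = np.eye(2) * 0.95                 # velocity dynamics: smooth, slightly damped
W = np.eye(2) * 0.01                 # process noise
H = rng.normal(size=(n_neurons, 2))  # each neuron's (here random) tuning to velocity
Q = np.eye(n_neurons) * 0.5          # observation (spiking) noise

x, P = np.zeros(2), np.eye(2)        # state estimate and its covariance
true_v = np.array([1.0, -0.5])       # the intended movement we hope to recover

for t in range(200):
    rates = H @ true_v + rng.normal(scale=0.7, size=n_neurons)  # observed firing rates
    # Predict
    x, P = A @ x, A @ P @ A.T + W
    # Update with the neural observation
    S = H @ P @ H.T + Q
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x = x + K @ (rates - H @ x)
    P = (np.eye(2) - K @ H) @ P

print("decoded velocity:", x, "intended:", true_v)
```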

2012-07-01: Brain in a vat is here!

The first real-time brain-scanning speller will allow people in an apparent vegetative state to communicate

2012-12-21: HCI chocolate

Researchers described the brain-computer interface that allowed Ms. Scheuermann to move an arm, turn and bend a wrist, and close a hand for the first time in 9 years. Less than a year after she told the research team, “I’m going to feed myself chocolate before this is over,” Ms. Scheuermann savored its taste and announced as they applauded her feat, “One small nibble for a woman, one giant bite for BCI.”

2013-03-01: Brain to brain communication

Even though the animals were on different continents, with the resulting noisy transmission and signal delays, they could still communicate. This tells us that we could create a workable network of animal brains distributed across many different locations.

2013-03-17: Hive mind privacy. One of the most interesting arguments for privacy in our (near) hive mind: it cuts down on the quadratic communication overhead. Even our brain isn’t fully connected; in fact, its wiring is quite sparse.
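
The quadratic overhead is just the pairwise-link count; a quick back-of-the-envelope (with a made-up n):

```latex
% Links needed for full pairwise connectivity among n brains:
\binom{n}{2} = \frac{n(n-1)}{2} \approx \frac{n^2}{2}
% e.g. n = 1000 participants already implies ~500{,}000 channels,
% which is why sparse topologies (like the brain's own wiring) scale better.
```
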
2014-03-04: I had somehow missed this 2 years ago. Mary Lou Jepsen asks: could future devices read images from our brains? In her estimation, it should be possible to increase resolution 1000x in the next few years.

2014-04-27: Vegetative patients may be aware

a significant proportion of patients who were classified as vegetative in recent years have been misdiagnosed – Owen estimates perhaps 20%. Schiff, who weighs up the extent of misdiagnosis a different way, goes further. Based on recent studies, he says 40% of patients thought to be vegetative are, when examined more closely, partly aware. Among this group of supposedly vegetative patients are those who are revealed by scanners to be able to communicate and should be diagnosed as locked-in, if they are fully conscious, or minimally conscious, if their abilities wax and wane. But Schiff believes the remainder will have to be defined another way altogether, since being aware does not necessarily mean being able to use mental imagery. Nor does being aware enough to follow a command mean possessing the ability to communicate.

Another story:

For 12 years, Scott had remained silent, locked inside his body, quietly watching the world go by. Now, the fMRI had revealed a person: a living, breathing soul who had a life, attitudes, beliefs, memories and experiences, and who had the sense of being somebody who was alive and in the world – no matter how strange and limited that world had become.

On many occasions in the months that followed, we conversed with Scott in the scanner. He expressed himself, speaking to us through this magical connection we had made between his mind and our machine. Somehow, Scott came back to life. He was able to tell us that he knew who he was; he knew where he was; and he knew how much time had passed since his accident. And thankfully, he confirmed that he wasn’t in any pain.

Neuroethics, and the criteria by which you are declared brain dead, are in for an upheaval.

After a major injury, some patients are in such serious condition that doctors deliberately place them in an artificial coma to protect their body and brain so they can recover. That could be a mistake. An extreme deep coma — based on the experiment on the cats — may actually be more protective. “Indeed, an organ or muscle that remains inactive for a long time eventually atrophies. It is plausible that the same applies to a brain kept for an extended period in a state corresponding to a flat EEG. An inactive brain coming out of a prolonged coma may be in worse shape than a brain that has had minimal activity. Research on the effects of extreme deep coma during which the hippocampus is active is absolutely vital for the benefit of patients.”

2014-09-11: Brain coupling

intriguing new possibilities for computer-assisted communication of brain states between individuals. The brain-to-brain method may be used to augment this mutual coupling of the brains, and may have a positive impact on human social behavior

2015-07-10: Rat onemind.

Brainet uses signals from an array of electrodes implanted in the brains of multiple rodents in experiments to merge their collective brain activity and jointly control a virtual avatar arm or even perform sophisticated computations — including image pattern recognition and even weather forecasting

2015-09-26: Unaided paraplegic walking

A novel brain-computer interface has allowed a paraplegic man to walk for a short distance, unaided by an exoskeleton or other types of robotic support.

2016-06-01: Remote controlled insects. This is an improvement over the robo cockroach:

The rapid pace of miniaturization is swiftly blurring the line between the technological base we’ve created and the technological base that created us. Extreme miniaturization and advanced neural interfaces have enabled us to explore the remote control of insects in free flight via implantable radio-equipped miniature neural stimulating systems

2016-08-04: Neural Dust

UC Berkeley researchers are developing “Neural Dust,” tiny wireless sensors for implanting in the brain, muscles, and intestines that could someday be used to control prosthetics or act as “electroceuticals” to treat epilepsy or fire up the immune system. So far, they’ve tested a 3-millimeter-long version of the device in rats. “I think the long-term prospects for neural dust are not only within nerves and the brain, but much broader. Having access to in-body telemetry has never been possible because there has been no way to put something supertiny superdeep. But now I can take a speck of nothing and park it next to a nerve or organ, your GI tract or a muscle, and read out the data.”

2016-09-11: Do we really want to fuse our brains together?

If a rat can teach herself to use a completely new sensory modality – something the species has never experienced throughout the course of its evolutionary history – is there any cause to believe our own brains will prove any less capable of integrating novel forms of input?

2016-10-04: CCortex

Artificial Development is building CCortex, a massive spiking neural network simulation of the human cortex and peripheral systems. Upon completion, CCortex will represent up to 20 billion neurons and 20 trillion connections, achieving a level of complexity that rivals the mammalian brain, and making it the largest, most biologically realistic neural network ever built. The system is up to 10,000 times larger than any previous attempt to replicate primary characteristics of human intelligence.

2017-03-23: Our Future Cyborg Brains

2017-09-05: 100x smaller Antennas

Antennas 100x smaller could lead to tiny brain implants, micro-medical devices, or phones you can wear on your finger. The antennas are expected to have sizes comparable to the acoustic wavelength, thus leading to orders of magnitude reduced antenna size compared to state-of-the-art compact antennas. These miniaturized ME (magnetoelectric) antennas have drastically enhanced antenna gain at small size owing to the acoustically actuated ME effect based receiving/transmitting mechanisms at RF frequencies.
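
The size win comes from resonating acoustically instead of electromagnetically. With illustrative numbers (assuming 2.4 GHz operation and a ~6 km/s acoustic velocity in the film):

```latex
\lambda_{\mathrm{EM}} = \frac{c}{f} = \frac{3\times10^{8}\ \mathrm{m/s}}{2.4\times10^{9}\ \mathrm{Hz}} \approx 12.5\ \mathrm{cm}
\qquad
\lambda_{\mathrm{ac}} = \frac{v_s}{f} \approx \frac{6\times10^{3}\ \mathrm{m/s}}{2.4\times10^{9}\ \mathrm{Hz}} \approx 2.5\ \mu\mathrm{m}
```

The ratio c/v_s of roughly 10^4 to 10^5 is what lets a resonant structure shrink far below a conventional antenna at the same RF frequency.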

2018-02-27: EEG image reconstruction:

The new technique “could provide a means of communication for people who are unable to verbally communicate. It could also have forensic uses for law enforcement in gathering eyewitness information on potential suspects, rather than relying on verbal descriptions provided to a sketch artist.”

2018-05-14: Tetraplegics win race

But what about letting patients actively participate with AI in improving performance? To test that idea, researchers ran a “mutual learning” experiment between computer and humans, with 2 severely impaired (tetraplegic) participants with chronic spinal cord injury. The goal: win a live virtual racing game at an international event. After training for several months, on Oct. 8, 2016, the 2 pilots participated in Cybathlon in Zurich, Switzerland, the first international competition for disabled individuals piloting bionic assistive technology. 1 of those pilots won the gold medal and the other held the tournament record.

2018-09-11: DARPA Neurotechnology

DARPA is funding development of high-resolution brain interfaces. At the same time, there are 2 companies that have breakthrough technology for higher-resolution brain interfaces: Elon Musk’s Neuralink and Mary Lou Jepsen’s Openwater red-light scanner.

2019-02-09: 75% Thought to Speech

A system that translates thought into intelligible speech. Devices monitor brain activity and artificial intelligence reconstructs the words a person hears. This breakthrough harnesses the power of speech synthesizers and artificial intelligence, and could lead to new ways for computers to communicate directly with the brain. The DNN-vocoder combination achieved the best performance (75% accuracy), 67% higher than the baseline system (linear regression on the auditory spectrogram).
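
For a feel of what the baseline is doing, here is a toy ridge regression from neural features to spectrogram bins, with synthetic stand-in data (the real system decodes ECoG recordings and feeds a DNN plus vocoder instead):

```python
# Sketch of the *baseline* mentioned above: ridge regression from neural
# features to auditory-spectrogram bins (synthetic stand-in data; all
# dimensions are assumptions for illustration).
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_features, n_bins = 2000, 128, 32   # assumed sizes

true_W = rng.normal(size=(n_features, n_bins))
X = rng.normal(size=(n_samples, n_features))                      # neural features
S = X @ true_W + rng.normal(scale=0.5, size=(n_samples, n_bins))  # spectrogram

lam = 10.0  # ridge penalty
W_hat = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ S)

S_pred = X @ W_hat
corr = np.corrcoef(S_pred.ravel(), S.ravel())[0, 1]
print(f"reconstruction correlation: {corr:.2f}")
```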


2019-04-24: 43% Thought to speech

An implanted brain-computer interface coupled with deep-learning algorithms can translate thought into computerized speech. The researchers asked native English speakers on Amazon’s Mechanical Turk crowdsourcing marketplace to transcribe the sentences they heard. The listeners transcribed the sentences accurately 43% of the time when given a set of 25 possible words to choose from, and 21% of the time when given 50 words. Although the accuracy rate remains low, it would be good enough to make a meaningful difference to a “locked-in” person, who is almost completely paralyzed and unable to speak.


2019-05-02: HCI Superpowers

The new documentary I Am Human chronicles how neurotechnology could restore sight, retrain the body, and treat diseases—then make us all more than human.

2019-08-01: Facebook has a 76% system:

Here, human participants listened to questions and responded aloud with answers while we used high-density electrocorticography (ECoG) recordings to detect when they heard or said an utterance and to then decode the utterance’s identity. Because certain answers were only plausible responses to certain questions, we could dynamically update the prior probabilities of each answer using the decoded question likelihoods as context. We decode produced and perceived utterances with accuracy rates as high as 61% and 76%, respectively (chance is 7% and 20%). Contextual integration of decoded question likelihoods significantly improves answer decoding. These results demonstrate real-time decoding of speech in an interactive, conversational setting, which has important implications for patients who are unable to communicate.
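
The context trick in that quote is plain Bayes: decoded question likelihoods set a prior over plausible answers, which is then combined with the answer decoder’s own likelihoods. A toy version with invented questions, answers, and numbers:

```python
# Toy version (made-up numbers) of the paper's context trick: decoded
# question likelihoods become a prior over the answers that plausibly
# follow each question.
import numpy as np

questions = ["How is your room?", "How do you feel?"]
answers   = ["bright", "dark", "good", "bad"]
# Which answers plausibly follow which question (assumed mapping):
plaus = np.array([[1, 1, 0, 0],      # room -> bright/dark
                  [0, 0, 1, 1]],     # feel -> good/bad
                 dtype=float)

q_like = np.array([0.8, 0.2])            # decoder's belief about the heard question
a_like = np.array([0.3, 0.4, 0.2, 0.1])  # answer decoder's raw likelihoods

prior = q_like @ plaus               # context prior over answers
post = prior * a_like
post /= post.sum()
print(dict(zip(answers, post.round(3))))
```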

2019-10-30: Brain-to-Brain communication for group problem-solving

The interface combines electroencephalography (EEG) to record brain signals and transcranial magnetic stimulation (TMS) to deliver information noninvasively to the brain. The interface allows 3 human subjects to collaborate and solve a task using direct brain-to-brain communication. 2 of the 3 subjects are designated as “Senders” whose brain signals are decoded using real-time EEG data analysis. The decoding process extracts each Sender’s decision about whether to rotate a block in a Tetris-like game before it is dropped to fill a line. The Senders’ decisions are transmitted via the Internet to the brain of a third subject, the “Receiver,” who cannot see the game screen. The Senders’ decisions are delivered to the Receiver’s brain via magnetic stimulation of the occipital cortex. The Receiver integrates the information received from the 2 Senders and uses an EEG interface to make a decision about either turning the block or keeping it in the same orientation. A second round of the game provides an additional chance for the Senders to evaluate the Receiver’s decision and send feedback to the Receiver’s brain, and for the Receiver to rectify a possible incorrect decision made in the first round.
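
A tiny simulation of that two-round protocol, under an assumed 10% bit-flip noise model for each EEG/TMS channel:

```python
# Toy simulation (assumed noise model, not the study's data): 2 Senders
# transmit a binary "rotate?" decision, the Receiver integrates both, and
# a feedback round lets it rectify a first-round mistake.
import random

def noisy(bit, p_flip=0.1):
    """Transmit a bit over a channel that flips it with probability p_flip."""
    return bit ^ (random.random() < p_flip)

def trial(truth):
    r1, r2 = noisy(truth), noisy(truth)          # round 1: both Senders
    decision = r1 if r1 == r2 else random.randint(0, 1)
    # Round 2: Senders see the Receiver's move on screen and flag errors.
    flags = [noisy(int(decision != truth)) for _ in range(2)]
    if all(flags):                               # both flag -> rectify
        decision ^= 1
    return decision == truth

random.seed(0)
acc = sum(trial(random.randint(0, 1)) for _ in range(10_000)) / 10_000
print(f"accuracy with feedback round: {acc:.3f}")
```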

2021-05-14: 94% Thought to text

Using an implant, a paralyzed individual achieved typing speeds of 90 characters per minute with 94.1% raw accuracy online, and greater than 99% accuracy offline with a general-purpose autocorrect. Despite working with a relatively small amount of data (only 242 sentences’ worth of characters), the system worked remarkably well. The lag between the thought and a character appearing on screen was ~500ms, and the participant was able to produce 90 characters per minute, easily topping the previous record for implant-driven typing, which was ~25 characters per minute.

2022-04-15: EEGs are terrible sensors. In-ear ones may fix that, allow for continuous readings, and perhaps enable writing too.

But while the immediate uses of NextSense’s earbuds are medical, Berent hopes to eventually build a mass-market brain monitor that, if enough people start using it, can generate enormous quantities of day-to-day brain performance data. The catch, of course, is that since no one has ever done that, it’s not yet obvious what most people would get out of the information. That’s also what’s exciting. “We don’t necessarily know what we would learn because we’ve never had access to that type of data”.

Berent and his team envision a multipurpose device that can stream music and phone calls like AirPods; boost local sound like a hearing aid; and monitor your brain to provide a window into your moods, attention, sleep patterns, and periods of depression. He also hopes to zero in on a few sizes that would fit a vast majority of people, to dispense with all the ear-scanning.

Far along on the NextSense road map is something unproven, and kind of wild. If AI can decode tons of brain data, the next step would be to then change those patterns, perhaps by doing something as simple as playing a well-timed sound. “It’s almost a transformative moment in history,” says Berent, fascinated by the prospect of using audio to nudge someone into a deeper sleep state. “It’s so convenient, it doesn’t bother you. People are wearing stuff in the ear typically anyway, right?”


2023-01-24: Faster speech to text

Our BCI decoded speech at 62 words per minute, which is 3.4x faster than the prior record for any kind of BCI and begins to approach the speed of natural conversation (160 words per minute). We highlight 2 aspects of the neural code for speech that are encouraging for speech BCIs: spatially intermixed tuning to speech articulators that makes accurate decoding possible from only a small region of cortex, and a detailed articulatory representation of phonemes that persists years after paralysis. These results show a feasible path forward for using intracortical speech BCIs to restore rapid communication to people with paralysis who can no longer speak.

Atom

I support the Log Format Roadmap because it has a fighting chance to become the first practical step to achieve CMS content interop. Blogs will drive adoption of the principles stated in “against the grain”. As a weblog vendor, I support it because it will drive the adoption of better tools, and will increase the market for everyone.
2003-06-27: Sam Ruby has been spearheading a major standardization effort in the blog world recently, and he has this to say about his motivations:

About a month ago, my interest and activity in this space kicked into high gear. I started attending weblogging conferences.

I’m far from claiming to have been the inspiration, but it is still very nice to think that OSCOM was able to contribute to the drive towards standardization. This is the stuff we are talking about.
2004-06-08: So that is what Greg Stein has been up to. The sprint was much fun, as were the drinks.

Ever since Atom first popped up, I’ve been interested in it, and even attempted to join a small sprint/discussion at Seybold last year to talk about WebDAV. The bomb threat shut that down, but we simply moved locations for drinks rather than hacking 🙂 So while I’ve been tracking it generally, my specific current interest is through my work at Google. I’m the engineering manager for the Blogger group, so I’ve gotta pay some attention to what we’re signing up for 🙂

2005-09-06: All feeds for this blog now serve Atom 1.0. It will be interesting to watch if anyone notices / cares. Longer term, /atom.xml is the canonical url if you want to subscribe.
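
For reference, here is roughly what a minimal Atom 1.0 feed at /atom.xml looks like, generated with nothing but the Python stdlib (titles, ids, and dates are illustrative):

```python
# Build a minimal Atom 1.0 feed with the stdlib; the real feed carries
# more entries and metadata (author, content, alternate links, ...).
import xml.etree.ElementTree as ET

ATOM = "http://www.w3.org/2005/Atom"
ET.register_namespace("", ATOM)

feed = ET.Element(f"{{{ATOM}}}feed")
ET.SubElement(feed, f"{{{ATOM}}}title").text = "example blog"
ET.SubElement(feed, f"{{{ATOM}}}id").text = "https://example.org/"
ET.SubElement(feed, f"{{{ATOM}}}updated").text = "2005-09-06T00:00:00Z"
ET.SubElement(feed, f"{{{ATOM}}}link", rel="self",
              href="https://example.org/atom.xml")

entry = ET.SubElement(feed, f"{{{ATOM}}}entry")
ET.SubElement(entry, f"{{{ATOM}}}title").text = "All feeds now serve Atom 1.0"
ET.SubElement(entry, f"{{{ATOM}}}id").text = "https://example.org/2005/09/06/atom"
ET.SubElement(entry, f"{{{ATOM}}}updated").text = "2005-09-06T00:00:00Z"

print(ET.tostring(feed, encoding="unicode"))
```
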
2006-10-18: RSS / Atom / OPML schematron is much easier to work with than the mysterious feed validator code. Plus it works for really huge feeds. This has the README for the RSS validator. Pretty out of date, but a good starting point. For one, you need the latest schematron from Rick Jelliffe, not the old one on this site.
2006-11-21: Gdata JSON. They also do jsonp, and reuse the Atom serialization.
2006-12-01: GData for Google Spreadsheets. The data web circle gets more complete. This is (one) counterpart to the web formulas in Google Sheets. Now as to how GData can play in the semweb space. Maybe via Queso.
2007-01-31: Tim wonders how to use Atom categories properly. Link to the wikipedia url of the tag I’d say.
2007-02-14: If you browse to a page with an RSS or Atom feed, you get the option to immediately add that feed to Google Reader for mobile via Mobile Proxy Feed Discovery.
2007-03-31: Some Atom extensions by Nature to encourage text mining. I don’t know… they do not seem to reuse core Atom in their examples. Plus I am not sure how useful a word count really is.
2007-05-16: GdataServer

Generally speaking, the Lucene GData Server is an extensible syndication format server providing CRUD actions to alter feed content, authentication, optimistic concurrency and full text search based on Apache Lucene.

2007-05-25: APP frontend to LDAP. This might enable some interesting scenarios.
2007-06-01: Opensearch / Atom interface for Swiss whitepages. Nice!

We have built an interface for our phone book that allows our telephone data to be integrated into other applications or websites. The interface is based on the REST concept. Results are delivered as an Atom feed, augmented with OpenSearch- and tel.search.ch-specific fields. With the help of a key, the results are also returned in structured form. The number of results is limited to 200 entries per query.

2007-06-08: GData Fails as a Protocol

Gregor Rothfuss wondered whether I couldn’t influence people at Microsoft to also standardize on GData. The fact is that I’ve actually tried to do this with different teams on multiple occasions, and each time I’ve tried, certain limitations…

2007-06-10: Oy. And all this because I asked Dare why Microsoft doesn’t use APP.

There was quite a flurry of blogging about the Atom Publishing Protocol (APP) over the weekend, all kicked off by Dare Obasanjo’s criticisms of the protocol. Some of the posts were critical of Dare and his motives, but I’m thankful he started the conversation.

2007-07-26: The chorus for putting more REST into GIS / mapping gets louder, yay

The only thing needed to bring together this messy new world Atlas, is a global agreement about the structure of the data used to annotate the maps, as well as agreement on the format for retrieving such.

2007-07-28: WFS simple was hijacked, as usual, by people who don’t understand why worse is better. This is why I am not in the least interested in WFS and am betting on APP instead.

if the geospatial standards community continues on this path of isolating itself, of looking upstream to the ISO rather than downstream to the distributed neogeo developer community, it will miss out on being connected to amazing things.

Here’s a Feature Demo of a RESTful WFS-T with a call for GE to support posting of features. I would go further and ask for APP support.
Version control for Collaborative Mapping. Calls for diffs and patches. Might be built on top of an APP infrastructure, imho

The next major area of tool improvement I see is expanding the wiki notion of editing to more of a merging revision control model, with branches, versions, patches and eventually expanding into distributed repositories. The ‘patch’ is a small piece of code that can be applied to a computer program to fix something. They are widely used in the open source software world, both to get the latest improvements, and to allow those who have commit rights to a source repository to review outside improvements before putting them in. This helps create the meritocracy around projects, as they don’t let just anyone into the repository as they might break the build. Such a case is less likely with maps, but sometimes core contributors might want to see a couple sample patches before letting a new member in.

In the GeoServer versioning WFS work we have a GetDiff operation that returns a WFS Transaction that can then be applied to another WFS. This fits in with the technical part of how a patch works: they’re really easy to apply to one’s dataset. But unfortunately a WFS transaction is not as easy to read as a code patch. The other great thing about patches is that when leaf nodes are updating their data they can just request the change set (the patches) instead of having to do a full check out.

So I’m still not sure how to solve this problem; the WFS Transaction is the best I’ve got, but I think we can do better: have a nice little format that just describes what changed.
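
The patch idea translates directly to map data. A toy changeset format (invented here, not GeoServer’s actual GetDiff output) applied to a feature collection keyed by feature id:

```python
# Toy sketch of a "patch for maps": a changeset as a list of ops applied
# to a feature collection. Format and field names are made up.
def apply_patch(features, patch):
    for op in patch:
        if op["op"] == "add":
            features[op["id"]] = op["feature"]
        elif op["op"] == "delete":
            features.pop(op["id"], None)
        elif op["op"] == "modify":
            features[op["id"]].update(op["changes"])
    return features

features = {"n1": {"name": "Cafe", "lat": 47.37, "lon": 8.54}}
patch = [
    {"op": "modify", "id": "n1", "changes": {"name": "Cafe Zurich"}},
    {"op": "add", "id": "n2", "feature": {"name": "Kiosk", "lat": 47.38, "lon": 8.55}},
]
print(apply_patch(features, patch))
```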

Better UIs for Collaborative Mapping. More calls for rollback tools, and would like to see GE post to geoserver, etc

I think we need more user friendly options for collaborative editing. Not just putting some points on a map, but being able to get a sense of the history of the map, getting logs of changes and diffs of certain actions. Editing should be a breeze, and there should be a number of tools that enable this. Google’s MyMaps starts to get at the ease of editing, but I want it collaborative, able to track the history of edits and give you a visual diff of what’s changed. Rollbacks should also be a breeze – if you have really easy tools to edit it’s also going to be easier for people to vandalize. So you need to make tools that are even easier to rollback.

2007-07-29: Atom Futures

AtomPub sits in a very strange place, as it has the potential to disrupt 6 or more industry sectors: Enterprise Content Management, Blogging, Digital/Desktop Publishing and Archiving, Mobile Web, EAI/WS-* messaging, Social Networks, Online Productivity tools. As interesting as the adoption rates will be the people and sectors finding reasons not to use it, to protect distribution channels and data lock-ins with more complicated solutions. Any kind of data garden is fair game for AtomPub to rationalize.

2007-07-30: Towards signed feeds

Why Digital Signature? This idea was first proposed by James Snell, and it’s a good one. Mind you, the benefits are a little bit theoretical, since no feed-reading clients that I’ve seen actually check a digital signature. The argument for this is similar to that for TLS; a bad guy who could somehow insert a fake press release into the feed could make zillions by gaming the share price. A verifiable digital signature would let someone reading the feed know that the news in it really truly did come from Sun.
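
The sign/verify principle, reduced to bytes: real Atom signing would use XML-DSig over canonicalized XML, but this sketch, using Ed25519 from the third-party cryptography package, shows what a checking client would gain.

```python
# Toy illustration of a signed feed (NOT XML-DSig; just sign/verify over
# the raw feed bytes with Ed25519 via the `cryptography` package).
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

feed_bytes = b"<feed>...press release...</feed>"

key = Ed25519PrivateKey.generate()
signature = key.sign(feed_bytes)            # publisher signs the feed

public_key = key.public_key()               # readers hold the public key
try:
    public_key.verify(signature, feed_bytes)  # raises if tampered with
    print("feed is authentic")
except InvalidSignature:
    print("feed was tampered with")
```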

2007-07-31: Atom for KML. Nice. I want to do more, but this is a good start. The Atom / KML meme spreads. Perception is reality, and I approve.
2007-08-03: Appfs

appfs can mount remote resources exposed via the Atom Publishing Protocol as a local filesystem.

2007-08-07: RESTful partial updates. Maybe useful for APP / KML to supplement update

over the past couple of months, there’s been a lot of discussion about the problem of partial updates in REST-over-HTTP. The problem is harder than it appears at first glance. The canonical scenario is that you’ve just retrieved a complicated resource, like an address book entry, and you decide you want to update just one small part, like a phone number. The canonical way to do this is to update your representation of the resource and then PUT the whole thing back, including all of the parts you didn’t change. If you want to avoid the lost update problem, you send back the ETag you got from the GET with your PUT inside an If-Match: header, so that you know that you’re not overwriting somebody else’s change.
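
That GET, modify, conditional-PUT dance looks like this with requests, against a hypothetical address-book resource:

```python
# The lost-update guard from the quote: PUT the full representation back,
# but only if nobody changed it since our GET. URL is hypothetical.
import requests

url = "https://example.org/contacts/42"

r = requests.get(url)
etag = r.headers["ETag"]                  # the version our edit is based on
contact = r.json()
contact["phone"] = "+41 44 123 45 67"     # change just one field...

# ...but send the whole thing back, guarded by If-Match:
resp = requests.put(url, json=contact, headers={"If-Match": etag})
if resp.status_code == 412:               # Precondition Failed
    print("someone else changed it first; re-GET and retry")
```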

Zend Google Data Client

The Zend Google Data Client provides a PHP 5 component to execute queries and commands against the Google Data APIs.

2007-08-14: Winer on Atom. Sore loser.
2007-08-19: How to deal with the sliding window problem where feed producers update more often than consumers, and consumers thus might miss entries.
A standardized way to get at previous entries that have scrolled out of a feed, and at the complete archive.
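
This is essentially what RFC 5005 (Feed Paging and Archiving) standardized: each archive document links to the previous one. A stdlib sketch of walking the chain, with a hypothetical feed URL:

```python
# Walk a feed's archive chain via RFC 5005's rel="prev-archive" links.
import urllib.request
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

url = "https://example.org/atom.xml"      # hypothetical feed
while url:
    with urllib.request.urlopen(url) as resp:
        feed = ET.parse(resp).getroot()
    for entry in feed.iter(f"{ATOM}entry"):
        print(entry.findtext(f"{ATOM}title"))
    # follow the chain back until the oldest archive document
    url = next((l.get("href") for l in feed.iter(f"{ATOM}link")
                if l.get("rel") == "prev-archive"), None)
```
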
2007-08-28: YouTube GData. Nice to see more media-heavy usages. Now we have pretty much all of them, only KML is missing.
2007-10-29: APP Lock-In. So cute. Microsoft is in a tight spot: Admit they have no strategy and use APP, or invent their own. It seems they are trying to build a case to do just that.

It seems that while we weren’t looking, Google moved us a step away from a world of simple, protocol-based interoperability on the Web to one based on running the right platform with the right libraries. Usually I wouldn’t care about whatever bad decisions the folks at Google are making with their API platform. However, the problem is that it sends the wrong message to other Web companies that are building Web APIs. The message that it’s all about embracing and extending Internet standards, with interoperability based on everyone running sanctioned client libraries instead of simple, RESTful protocols, is harmful to the Internet. Unfortunately, this harkens back to the bad old days of Microsoft, and I’d hate for us to begin a race to the bottom in this arena.

2007-12-06: FeedSync. The full syncing requirement makes this heavyweight.

Although FeedSync is capable of full-blown multi-master synchronization, there are all kinds of interesting uses, including simple one-way uses. Consider, for example, how RSS typically has no memory. Most blogs publish items into a rolling window. If you subscribe after items have scrolled out of view, you can’t syndicate them. A FeedSync implementation could enable you to synchronize a whole feed when you first subscribe, then update items moving forward. It could also enable the feed provider to delete items, which you might not want if the items are blog postings, but would want if they’re calendar items representing cancelled events.