Month: July 2010

Modern Male Sati

jealousy: you can’t undertake technical means to outlive your partner.

Peggy’s initial response to this ambition, rooted less in scientific skepticism than in her personal judgments about the quest for immortality, has changed little in the past 20-odd years. Robin, a deep thinker most at home in thought experiments, believes that there is some small chance his brain will be resurrected, that its time in cryopreservation will be merely a brief pause in the course of his life. Peggy finds the quest an act of cosmic selfishness.

Group think

To understand why this technology is so important, and so dangerous, you need to understand its patrimony. First, although the technology is brand new, the idea is a classic, long-standing geek trope. It shows up, for example, in Isaac Asimov’s Foundation Trilogy, the best-selling albeit thinly plotted space opera, in which protagonist Hari Seldon develops the science of “psychohistory”. Just as physics can predict the mass motion of a gas, even though any individual molecule is unpredictable, psychohistory allows us to predict the future of large groups of people. (It’s not hard to see why this sort of thing appeals to the socially maladroit. Forming cliques, establishing social ties: it’s complicated and messy stuff. If only there were a mathematics that laid it all out…)
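The gas analogy is really just the law of large numbers. A minimal sketch (hypothetical, not anything from Asimov or from the research it inspired) shows the idea: each simulated person flips a coin, so no individual is predictable, yet the aggregate fraction barely budges once the crowd is large enough.

```python
import random

def simulate(num_agents, trials=100):
    """For each trial, return the fraction of agents who 'choose yes'.

    Each agent is an independent coin flip -- individually unpredictable.
    """
    return [
        sum(random.random() < 0.5 for _ in range(num_agents)) / num_agents
        for _ in range(trials)
    ]

for n in (10, 1_000, 100_000):
    outcomes = simulate(n)
    spread = max(outcomes) - min(outcomes)
    # The spread of the aggregate shrinks roughly as 1/sqrt(n):
    # the crowd becomes predictable even though no one person is.
    print(f"{n:>7} agents: aggregate varies by only {spread:.3f} across trials")
```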

But why is this technology only emerging now, not 15 or 20 years ago? For any technology, there are only three possible answers to this question: Moore’s law, the Internet, or the government. In the case of crowd dynamics, we have the last two to thank. The Internet has made the problem tractable by providing huge, easily collected data sets of social interactions. But the government has been the real enabler. Just follow the money: nearly every relevant research project received funding from DARPA, the Defense Advanced Research Projects Agency.

the state of “human terrain” research, and its applications. shades of psychohistory.

New developments in AI

What to expect in the near term is less clear. While strong AI still lies safely beyond the Maes-Garreau horizon (a vanishing point, perpetually 50 years ahead), a host of important new developments in weak AI are poised to be commercialized in the next few years. But because these developments are a paradoxical mix of intelligence and stupidity, they defy simple forecasts and resist hype. They are not unambiguously better, cheaper, or faster. They are something new.

What are the implications of a car that adjusts its speed to avoid collisions … but occasionally mistakes the guardrail along a sharp curve for an oncoming obstacle and slams on the brakes? What will it mean when our computers know everything — every single fact, the entirety of human knowledge — but can only reason at the level of a cockroach?
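To make the first scenario concrete, here is a toy sketch (entirely hypothetical, not any manufacturer’s actual control logic) of a naive collision-avoidance rule: brake whenever any detection falls within stopping distance, with no judgment about whether the object is actually in the car’s path. A guardrail seen across a sharp curve triggers exactly the same response as a stalled truck.

```python
# Hypothetical sketch of a naive collision-avoidance rule; not real automotive code.
# The controller brakes for ANY detection within stopping range -- it never asks
# whether the object actually lies in the vehicle's path.

from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # what the sensor thinks it sees
    distance_m: float  # range to the object
    in_path: bool      # ground truth the naive rule never consults

def stopping_distance_m(speed_mps: float, decel_mps2: float = 6.0) -> float:
    """Distance needed to stop from a given speed: v^2 / (2a)."""
    return speed_mps ** 2 / (2 * decel_mps2)

def naive_controller(speed_mps: float, detections: list[Detection]) -> str:
    threshold = stopping_distance_m(speed_mps)
    for d in detections:
        if d.distance_m < threshold:
            return f"BRAKE HARD (saw {d.label} at {d.distance_m:.0f} m)"
    return "maintain speed"

# At 30 m/s the stopping distance is 75 m, so both of these trigger braking --
# the stalled truck (a real hazard) and the guardrail on a curve (a false alarm).
print(naive_controller(30.0, [Detection("stalled truck", 40.0, in_path=True)]))
print(naive_controller(30.0, [Detection("guardrail on curve", 40.0, in_path=False)]))
```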

autonomous driving and weak omniscience closer than you think. a joy to read.