For Honda, intelligence is a technology
Tag: ai
Brain simulation
IBM researchers assembled a simulated mouse cortical hemisphere on one of the smaller BlueGene/L supercomputers. They ran the simulation at one-tenth of real time: 10 seconds of computation for every 1 second of simulated brain activity.
2008-03-04: Blue Brain
The brain has 100 billion neurons and 1 trillion synapses, so they need to scale up the neurons by 10 million× and the synapses by 33,000×. By 2017 there should be a single whole-brain simulation; personalized whole-brain simulation would follow by 2027-2037.
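A quick back-of-the-envelope check of those factors; the implied base-model sizes below are inferred from the quoted numbers, not stated in the source:

```python
# Implied size of the starting model, given the quoted scale-up factors.
neurons_target, synapses_target = 100e9, 1e12  # 100 billion neurons, 1 trillion synapses
print(neurons_target / 10e6)   # => 10,000 neurons in the base model
print(synapses_target / 33e3)  # => ~30 million synapses in the base model
```

Those base figures match the rough size of Blue Brain's simulated neocortical column, which is presumably the starting point the projection assumes.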
Blue Brain scientists are confident that, at some point in the next few years, they will be able to start simulating an entire brain.
2008-10-29: The scale needed
There are petaflop supercomputers now, so if such a system were dedicated to brain emulation, a system 10× larger than the rat brain could be simulated.
2014-03-14: A fun Monte Carlo simulation of when whole brain emulation will become feasible:
50% chance of WBE (if it ever arrives) before 2059, with the 25th percentile in 2047 and the 75th in 2074. WBE before 2030 looks very unlikely, and it is only 10% likely before 2040.
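A minimal sketch of the kind of Monte Carlo timing model that could produce percentiles like these. All the distributions below are invented placeholders, not the linked post's actual inputs:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

start_log10_flops = 15.0                    # ~petaflop machines circa 2014
req_log10_flops = rng.uniform(18, 25, N)    # compute needed for WBE (very uncertain)
doubling_years = rng.uniform(1.0, 3.0, N)   # hardware doubling time

# Years until enough compute exists: doublings needed * years per doubling.
doublings = (req_log10_flops - start_log10_flops) / np.log10(2)
arrival_year = 2014 + doublings * doubling_years

print(np.percentile(arrival_year, [25, 50, 75]))  # 25th/50th/75th percentile arrival
```

The point of such a model is that wide uncertainty over both the compute requirement and the growth rate yields a long right tail, which is why the 75th percentile lands decades after the median.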
2015-09-11: Ethics of brain simulations
What I hope happens is that computational neuroscientists think a bit about the issue of suffering in their simulations rather than slip into the comfortable “It is just a simulation, it cannot feel anything” mode of thinking by default.
It is easy to tell oneself that simulations do not matter because not only do we know how they work when we make them (giving us the illusion that we actually know everything there is to know about the system – obviously not true since we at least need to run them to see what happens), but institutionally it is easier to regard them as non-problems in terms of workload, conflicts and complexity (let’s not rock the boat at the planning meeting, right?) And once something is in the “does not matter morally” category it becomes painful to move it out of it – many will now be motivated to keep it there.
2021-06-18: Towards the mouse brain connectome
“A connectomic study of a petascale fragment of human cerebral cortex”, Shapson-Coe et al 2021 (“…This ‘digital tissue’ is a ~660K× scale-up of an earlier saturated reconstruction from a small region of mouse cortex, published in 2015 (Kasthuri et al 2015). Although this scale-up was difficult, it was not 100,000× more difficult and took about the same amount of time as the previous data set (~4 years)…The rapid improvements over the past few years…argues that analyzing volumes that are even 3 orders of magnitude larger, such as an exascale whole mouse brain connectome, will likely be in reach within 10 years.”)
2023-08-31: Human Brain Project retrospective
It took 10 years, 500 scientists and €600m, and now the Human Brain Project — one of the biggest research endeavors ever funded by the European Union — is coming to an end. Its audacious goal was to understand the human brain by modelling it in a computer.
During its run, scientists under the umbrella of the Human Brain Project (HBP) have published 1000s of papers and made significant strides in neuroscience, such as creating detailed 3D maps of at least 200 brain regions, developing brain implants to treat blindness and using supercomputers to model functions such as memory and consciousness and to advance treatments for various brain conditions. The project did not achieve its goal of simulating the whole human brain — an aim that many scientists regarded as far-fetched in the first place. It changed direction several times, and its scientific output became “fragmented and mosaic-like”.
Other Kinds of Minds

Given that we don’t know what intelligence is (in any detailed way), it is hard to say exactly how diverse the space of intelligent minds is.
The Power of Babble
But the robots trained by his father might live a thousand versions of Dwayne’s life, babbling tirelessly, until one of them finally learns to talk.
Once you have enough data, hard problems become easier.
Supervised labeling

A probabilistic formulation for semantic image annotation and retrieval is proposed. Annotation and retrieval are posed as classification problems where each class is defined as the group of database images labeled with a common semantic label. It is shown that, by establishing this one-to-one correspondence between semantic labels and semantic classes, a minimum probability of error annotation and retrieval are feasible with algorithms that are 1) conceptually simple, 2) computationally efficient, and 3) do not require prior semantic segmentation of training images. In particular, images are represented as bags of localized feature vectors, a mixture density estimated for each image, and the mixtures associated with all images annotated with a common semantic label pooled into a density estimate for the corresponding semantic class. This pooling is justified by a multiple instance learning argument and performed efficiently with a hierarchical extension of expectation-maximization. The benefits of the supervised formulation over the more complex, and currently popular, joint modeling of semantic label and visual feature distributions are illustrated through theoretical arguments and extensive experiments. The supervised formulation is shown to achieve higher accuracy than various previously published methods at a fraction of their computational cost. Finally, the proposed method is shown to be fairly robust to parameter tuning.
This system can produce tags on par with humans for many types of images.
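A minimal sketch of the supervised formulation under heavy simplifying assumptions: one Gaussian mixture fit directly on the pooled local features per label (standing in for the paper's hierarchical EM over per-image mixtures), with annotation by total class log-likelihood. The function names and `features_by_label` structure are invented for illustration:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_class_models(features_by_label, n_components=8, seed=0):
    """features_by_label: {label: (n_vectors, d) array} of localized feature
    vectors pooled from every training image carrying that label."""
    return {label: GaussianMixture(n_components=n_components,
                                   random_state=seed).fit(X)
            for label, X in features_by_label.items()}

def annotate(models, image_features, top_k=5):
    """Score one image (a bag of feature vectors) under each class density
    and return the top-k labels by total log-likelihood (uniform priors)."""
    scores = {label: gmm.score_samples(image_features).sum()
              for label, gmm in models.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_k]
```

This captures the paper's key simplification: one density per semantic class, so annotation is just minimum-error classification, with no joint modeling of labels and features and no prior segmentation.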
Victim of the Brain
Docudrama on the ideas of Douglas Hofstadter.
Robot Self Modeling
Higher animals use some form of an “internal model” of themselves for planning complex actions and predicting their consequence, but it is not clear if and how these self-models are acquired or what form they take. Analogously, most practical robotic systems use internal mathematical models, but these are laboriously constructed by engineers. While simple yet robust behaviors can be achieved without a model at all, here we show how low-level sensation and actuation synergies can give rise to an internal predictive self-model, which in turn can be used to develop new behaviors. We demonstrate, both computationally and experimentally, how a legged robot automatically synthesizes a predictive model of its own topology (where and how its body parts are connected) through limited yet self-directed interaction with its environment, and then uses this model to synthesize successful new locomotive behavior before and after damage. The legged robot learned how to move forward based on only 16 brief self-directed interactions with its environment. These interactions were unrelated to the task of locomotion, driven only by the objective of disambiguating competing internal models. These findings may help develop more robust robotics, as well as shed light on the relation between curiosity and cognition in animals and humans: Creating models through exploration, and using them to create new behaviors through introspection.
In grad school the hot topic was embodiment; this seems slightly related.
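A toy sketch of the "disambiguate competing internal models" loop from the abstract. The 1-D dynamics, the model class, and all names here are invented for illustration; the paper's robot modeled its own body topology, not a scalar function:

```python
import numpy as np

rng = np.random.default_rng(1)

def true_robot(action):
    # Unknown dynamics the learner is trying to model (hidden from it).
    return 0.7 * action + 0.1 * action ** 2

class CandidateModel:
    def __init__(self):
        self.a, self.b = rng.uniform(-1, 1, 2)
    def predict(self, action):
        return self.a * action + self.b * action ** 2
    def error(self, data):
        return sum((self.predict(u) - y) ** 2 for u, y in data)

models = [CandidateModel() for _ in range(50)]
data = []

for step in range(16):  # the paper's robot used only 16 interactions
    # Pick the action over which the current best-fitting models disagree most.
    best = sorted(models, key=lambda m: m.error(data))[:10]
    action = max(np.linspace(-1, 1, 21),
                 key=lambda u: np.var([m.predict(u) for m in best]))
    data.append((action, true_robot(action)))
    # Keep the best fits, replace the rest with fresh random candidates.
    models = best + [CandidateModel() for _ in range(40)]

winner = min(models, key=lambda m: m.error(data))
print(winner.a, winner.b)  # should approach the true 0.7 and 0.1
```

The essential idea survives the simplification: actions are chosen not to perform a task but to maximize disagreement among candidate self-models, so each interaction carries maximal information about which model is right.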
Law AI busted
An expert system that helped users prepare bankruptcy filings for a fee made too many decisions to be considered a clerical tool.
Jason Rennie
Recommendations / machine learning guy (now working for Phil).
Evolution of a Search Engine
Mock-ups of Google search evolving towards sapience. Fun!