Tag: books

lean production

Paul Erdos, the most prolific mathematician of all time, defied the conventional wisdom that mathematics was just a young man’s game. For the last 25 years of his life, Erdos raced against the specter of old age to prove as many mathematical theorems as possible. “The first sign of senility is when a man forgets his theorems. The second sign is when he forgets to zip up. The third sign is when he forgets to zip down.” Erdos never experienced the first sign. He managed to think about more problems than any other mathematician in history and could recite the details of all 1475 of the papers he had written or co-authored. Fortified by espresso and amphetamines, Erdos did mathematics 19 hours a day, 7 days a week. “A mathematician is a machine for turning coffee into theorems.” When friends urged him to slow down, he always had the same response: “There’ll be plenty of time to rest in the grave.”

i’m currently reading the man who loved only numbers. i dig erdos’ way of living out of a suitcase, something i aspire to as well. it takes lean production to entirely new levels.

Symbolism

Mr. Bayley started out researching the meanings of watermarks found on old (9th through 13th century) papers. Examining the designs for the root meanings, he delved deep into myths, stories, language, religion and root meanings of words and syllabic sounds that he thought might compose surviving parts of the oldest spoken language. The religious composition of the language and the constructs of words indicating an omnipresence of religion within the spoken society demonstrate that the roots of all human language and societies have been Edenite.

i just finished the lost language of symbolism by harold bayley, written in 1912. a wonderful book that intrigued me with its startling revelations about the origins of words and symbols, and how truly universal some of them are.

problems in solution

i ran across this beautiful example of the power of metaphors in metaphors we live by (page 143ff):

Another example of how a metaphor can create new meaning for us came about by accident. An Iranian student, shortly after his arrival in Berkeley, took a seminar on metaphor from one of us. Among the wondrous things that he found in Berkeley was an expression that he heard over and over and understood as a beautifully sane metaphor. The expression was “the solution of my problems” –which he took to be a large volume of liquid, bubbling and smoking, containing all of your problems, either dissolved or in the form of precipitates, with catalysts constantly dissolving some problems (for the time being) and precipitating out others. He was terribly disillusioned to find that the residents of Berkeley had no such chemical metaphor in mind. And well he might be, for the chemical metaphor is both beautiful and insightful. It gives us a view of problems as things that never disappear utterly and that cannot be solved once and for all. All of your problems are always present, only they may be dissolved and in solution, or they may be in solid form. The best you can hope for is to find a catalyst that will make one problem dissolve without making another one precipitate out. And since you do not have complete control over what goes into the solution, you are constantly finding old and new problems precipitating out and present problems dissolving, partly because of your efforts and partly despite anything you do.
The chemical metaphor gives us a new view of human problems. It is appropriate to the experience of finding that problems which we once thought were “solved” turn up again and again. The chemical metaphor says that problems are not the kind of things that can be made to disappear forever. To treat them as things that can be “solved” once and for all is pointless. To live by the chemical metaphor would be to accept it as a fact that no problem disappears forever. Rather than direct your energies towards solving your problems once and for all, you would direct your energies towards finding out what catalysts will dissolve your most pressing problems for the longest time without precipitating out worse ones. The reappearance of a problem is viewed as a natural occurrence rather than a failure on your part to “find the right way to solve it.”
To live by the chemical metaphor would mean that your problems have a different kind of reality for you. A temporary solution would be an accomplishment rather than a failure. Problems would be part of the natural order of things rather than disorders to be “cured.” The way you would understand your everyday life and the way you would act in it would be different if you lived by the chemical metaphor.

i have long been fascinated by the power mental concepts hold over our perception of reality. many would argue that metaphors like the chemical one cited above cannot shape our reality because they are constructs of language, while our reality is real. while this is true from an objectivist viewpoint, there is a lot more to human perception than objective reality, or what our sensory organs perceive. our daily experience is shaped by motivations, categorizations and other mental models, and this daily experience is what matters most to us.
imagination is key.

Great books

there is an awesome bookstore in cambridge called harvard bookstore. they are open until 10 or 11pm every day, so of course i spent some quality time there. here is a list (ironically, linking to the one store that has decent ASIN / ISBN coverage):
It Must Be Beautiful: Great Equations of Modern Science
Flu: The Story of the Great Influenza Pandemic of 1918 and the Search for the Virus That Caused It
The FUTURE AND ITS ENEMIES: The Growing Conflict Over Creativity, Enterprise, and Progress
The End of Work: The Decline of the Global Labor Force and the Dawn of the Post-Market Era
Beauty of Another Order: Photography in Science
The Atoms of Language: The Mind’s Hidden Rules of Grammar

Plague

It was about the beginning of September, 1664, that I, among the rest of my neighbors, heard in ordinary discourse that the plague was returned again in Holland; for it had been very violent there, and particularly at Amsterdam and Rotterdam, in the year 1663, whither, they say, it was brought, some said from Italy, others from the Levant, among some goods which were brought home by their Turkey fleet; others said it was brought from Candia; others from Cyprus. It mattered not from whence it came; but all agreed it was come into Holland again.

I just finished Daniel Defoe’s a journal of the plague year.
2014-09-30: this is an awesome poster visualizing the plague, walking you through how real science is done, in an entertaining and informative way.


2015-10-06: Plague is one of the most virulent pathogens.

the acquisition of a single gene named pla gave Y. pestis the ability to cause pneumonia, causing a form of plague so lethal that it kills essentially all of those infected who don’t receive antibiotics. In addition, it is among the most infectious bacteria known. “Yersinia pestis is a pretty kick-ass pathogen. A single bacterium can cause disease in mice. It’s hard to get much more virulent than that.”

2021-03-30: Pushing the Black Death origin back.

Monica Green published a landmark article, The 4 Black Deaths, in the American Historical Review, that rewrites our narrative of this brutal and transformative pandemic. In it, she identifies a “big bang” that created 4 distinct genetic lineages that spread separately throughout the world and finds concrete evidence that the plague was already spreading from China to central Asia in the 1200s. This discovery pushes the origins of the Black Death back by over 100 years, meaning that the first wave of the plague was not a decades-long explosion of horror, but a disease that crept across the continents for over 100 years until it reached a crisis point.

2022-02-16: Black Death mortality rates varied widely.

“The data is sufficiently widespread and numerous to make it likely that the Black Death swept away 65% of Europe’s population”. But those figures, based on historical documents from the time, greatly overestimate the true toll of the plague. By analyzing ancient deposits of pollen as markers of agricultural activity, researchers found that the Black Death caused a patchwork of destruction. Some regions of Europe did indeed suffer devastating losses, but other regions held stable, and some even boomed. It’s possible that the ecology of rats and fleas that spread the bacteria was different from country to country. The ships that brought Yersinia to Europe may have come to some ports at a bad time of the year for spreading the plague, and to others at a better time.


2022-08-12: The plague may have had a role in the collapse of Egypt’s Old Kingdom and the Akkadian Empire in Mesopotamia

During the late 3rd millennium BCE, the Eastern Mediterranean and Near East witnessed societal changes in many regions, which are usually explained with a combination of social and climatic factors. However, recent archaeogenetic research forces us to rethink models regarding the role of infectious diseases in past societal trajectories. The plague bacterium Yersinia pestis, which was involved in some of the most destructive historical pandemics, circulated across Eurasia at least from the onset of the 3rd millennium BCE but the challenging preservation of ancient DNA in warmer climates has restricted the identification of Y. pestis from this period to temperate climatic regions. As such, evidence from culturally prominent regions such as the Eastern Mediterranean is currently lacking. Here, we present genetic evidence for the presence of Y. pestis and Salmonella enterica, the causative agent of typhoid/enteric fever, from this period of transformation in Crete, detected at the cave site Hagios Charalambos. We reconstructed 1 Y. pestis genome that forms part of a now-extinct lineage of Y. pestis strains from the Late Neolithic and Bronze Age that were likely not yet adapted for transmission via fleas. Furthermore, we reconstructed 2 ancient S. enterica genomes from the Para C lineage, which cluster with contemporary strains that were likely not yet fully host adapted to humans. The occurrence of these 2 virulent pathogens at the end of the Early Minoan period in Crete emphasizes the necessity to re-introduce infectious diseases as an additional factor possibly contributing to the transformation of early complex societies in the Aegean and beyond.

Planck Dives


The polis of Cartan has been in orbit around the black hole Chandrasekhar for almost 3 centuries. Now, a group of polis citizens are preparing to encode clones of themselves into a form that will be able to travel inside the hole and explore the nature of space time at the Planck scale, 10⁻³⁵ meters.

i have to read some of the works of greg egan after i finish flatland: a romance of many dimensions.

2022-02-08: Speaking of Flatland, Planiverse sounds interesting as well:

I recently praised Planiverse as peak hard science fiction. But as I hadn’t read it in decades, I thought maybe I should reread it to see if it really lived up to my high praise.

The basic idea is that a computer prof and his students in our universe create a simulated 2D universe, which then somehow becomes a way to view and talk to 1 particular person in a real 2D universe. This person is contacted just as they begin a mystical quest across their planet’s one continent, which lets the reader see many aspects of life there. Note there isn’t a page-turning plot nor interesting character development; the story is mainly an excuse to describe its world.

The book seems crazy wrong on how its mystical quest ends, and on its assumed connection to a computer simulation in our universe. But I presume that the author would admit to those errors as the cost of telling his story. However, the book does very well on physics, chemistry, astronomy, geology, and low level engineering. That is, on noticing how such things change as one moves from our 3D world to this 2D world, including via many fascinating diagrams. In fact this book does far better than most “hard” science fiction. Which isn’t so surprising as it is the result of a long collaboration between 10s of scientists. But alas no social scientists seem to have been included, as the book seems laughably wrong there. Let me explain.

Digital Philosophy

i’m still reading a new kind of science. one concept that wolfram advocates is that of finite nature.

The basic idea is that of ‘Finite Nature’. There is some discussion of the exact meaning of this phrase; my preferred definition is, the proposition that a finite quantity of space/time, containing a finite amount of matter/energy, is capable (in principle) of being simulated EXACTLY using a finite amount of computing power on a Universal Turing Machine.
The implications of the Finite Nature concept are pretty far-reaching. Among other things, any physical principle that is presently theorized to be based on continuous functions of any kind, must be supposed to actually be an approximation of an underlying computational process. This reverses the normal relationship, where digital processes (such as CA’s, or generally any quantized matrix calculations such as those used in computational dynamics) are seen as approximations of an underlying, continuous reality.
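
to make the “digital processes” side concrete, here is a minimal 1D cellular automaton in python — a toy illustration of the kind of discrete rule the Finite Nature view takes as fundamental (a sketch of the general CA idea, not wolfram’s own code):

```python
# Minimal elementary cellular automaton: each cell's next state is a
# lookup on its (left, self, right) neighborhood. Rule 90 (new cell =
# left XOR right) grows the Sierpinski triangle from a single seed.
def step(cells, rule=90):
    n = len(cells)
    out = []
    for i in range(n):
        # encode neighborhood as a 3-bit index, with wraparound edges
        idx = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        out.append((rule >> idx) & 1)  # rule number is the lookup table
    return out

cells = [0] * 31
cells[15] = 1  # single seed cell in the middle
for _ in range(8):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```

everything here is exact integer arithmetic on a finite grid — exactly the kind of process the quote imagines underlying apparently continuous physics.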

although wolfram has his detractors, his book is highly inspirational. he certainly has done much to publicize the field of digital philosophy.

Quantum Computing

It is becoming increasingly clear that, if a useful device for quantum computation will ever be built, it will be embodied by a classical computing machine with control over a truly quantum subsystem, this apparatus performing a mixture of classical and quantum computation. This paper investigates a possible approach to the problem of programming such machines: a template high level quantum language is presented which complements a generic general purpose classical language with a set of quantum primitives.

A very interesting paper, basically stating that any quantum computer will need a classical front end to deal with data pre- and post-processing. even the very pragmatic distinction between call-by-value and call-by-reference needs to be rethought:

It is well known that the no-cloning theorem excludes the possibility of replicating the state of a generic quantum system. Since the call-by-value paradigm is based on the copy primitive, this means that quantum programming can not use call-by-value; therefore a mechanism for addressing parts of already allocated quantum data must be supplied by the language.
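
a toy sketch of what that discipline could look like in practice: since qubits cannot be copied, a “quantum argument” is passed as an index into an already-allocated register, never as a value. all the names below are hypothetical illustrations, not the paper’s actual language:

```python
# No-cloning forces call-by-reference: subroutines receive wire
# indices into a shared quantum register and operate on it in place.
# (Toy classical model -- we only log which gates touch which wires.)
class QuantumRegister:
    def __init__(self, n):
        self.n = n
        self.ops = []  # classical log of gates applied

    def apply(self, gate, *wires):
        for w in wires:
            assert 0 <= w < self.n  # references must point at real qubits
        self.ops.append((gate, wires))

def entangle_pair(reg, a, b):
    # a and b are references (indices), not qubit values --
    # there is no way to hand this function a *copy* of a qubit
    reg.apply("H", a)
    reg.apply("CNOT", a, b)

reg = QuantumRegister(3)
entangle_pair(reg, 0, 1)  # mutates qubits 0 and 1 in place
print(reg.ops)            # [('H', (0,)), ('CNOT', (0, 1))]
```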

2007-02-12: D-Wave 16 qubit prototype, with hopes for a 1024 qubit system in late 2008. funny: they are not sure if it is a quantum computer at all; it might be an analog computer.
2007-04-22: the quantum computation version of the stacked turtle

But it was still pretty exciting stuff. Holy Zarquon, they said to one another, an infinitely powerful computer? It was like a 1000 Christmases rolled into 1. Program going to loop forever? You knew for a fact: this thing could execute an infinite loop in less than 10 seconds. Brute force primality testing of every single integer in existence? Easy. Pi to the last digit? Piece of cake. Halting Problem? Sa-holved.

They hadn’t announced it yet. They’d been programming. Obviously they hadn’t built it just to see if they could. They had had plans. In some cases they had even had code ready and waiting to be executed. One such program was Diane’s. It was a universe simulator. She had started out with a simulated Big Bang and run the thing forwards in time by 13.6b years, to just before the present day, watching the universe develop at every stage – taking brief notes, but knowing full well there would be plenty of time to run it again later, and mostly just admiring the miracle of creation.

2007-08-30: What Google Won’t Find

For “generic” problems of finding a needle in a haystack, most of us believe that quantum computers will give at most a polynomial advantage over classical ones.
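
the quadratic (rather than exponential) nature of that advantage is easy to put in numbers: grover’s algorithm finds a marked item among N with about (π/4)·√N queries, versus ~N/2 classically on average. a quick back-of-envelope sketch:

```python
import math

# Unstructured search over N items: classical expected queries ~N/2,
# Grover's algorithm ~(pi/4)*sqrt(N) -- a quadratic speedup only.
N = 2 ** 30
classical = N // 2                              # ~5.4e8 queries
quantum = math.ceil(math.pi / 4 * math.sqrt(N)) # ~2.6e4 queries
print(classical, quantum)
```

big, but polynomial: doubling the number of search-register qubits squares N, and the quantum query count still grows as √N rather than staying flat.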

2011-01-20: 10b qubits is very significant. i am sure there are all sorts of caveats, but still: wow
2011-10-04: Philosophy and Theoretical Computer Science class by Scott Aaronson.

This new offering will examine the relevance of modern theoretical computer science to traditional questions in philosophy, and conversely, what philosophy can contribute to theoretical computer science. Topics include: the status of the Church-Turing Thesis and its modern polynomial-time variants; quantum computing and the interpretation of quantum mechanics; complexity aspects of the strong-AI and free-will debates; complexity aspects of Darwinian evolution; the claim that “computation is physical”; the analog/digital distinction in computer science and physics; Kolmogorov complexity and the foundations of probability; computational learning theory and the problem of induction; bounded rationality and common knowledge; new notions of proof (probabilistic, interactive, zero-knowledge, quantum) and the nature of mathematical knowledge. Intended for graduate students and advanced undergraduates in computer science, philosophy, mathematics, and physics. Participation and discussion are an essential part of the course.

2013-04-13: Quantum computing since Democritus. Written in the spirit of the likes of Richard Feynman, Carl Sagan, and Douglas Hofstadter, and touching on some of the most fundamental issues in science, the unification of computation and physics. kind of like a new kind of science was, without the bs. Plus Scott is a funny guy, so even if you only understand 5% (likely, given the deep topics), seems worth it. If you want to get a taste, try this paper: NP-complete Problems and Physical Reality
2017-07-09: Multi-colored photons

the technology developed is readily extendable to create 2-quDit systems with more than 9000 dimensions (corresponding to 12 qubits and beyond, comparable to the state of the art in significantly more expensive/complex platforms).

2018-10-09: Quantum Verification. How do you know whether a quantum computer has done anything quantum at all?

After 8 years of graduate school, Mahadev has succeeded. She has come up with an interactive protocol by which users with no quantum powers of their own can nevertheless employ cryptography to put a harness on a quantum computer and drive it wherever they want, with the certainty that the quantum computer is following their orders. Mahadev’s approach gives the user “leverage that the computer just can’t shake off.” For a graduate student to achieve such a result as a solo effort is “pretty astounding”. Quantum computation researchers are excited not just about what Mahadev’s protocol achieves, but also about the radically new approach she has brought to bear on the problem. Using classical cryptography in the quantum realm is a “truly novel idea. I expect many more results to continue building on these ideas.”

2019-01-19: Spacetime QEC

space-time achieves its “intrinsic robustness,” despite being woven out of fragile quantum stuff. “We’re not walking on eggshells to make sure we don’t make the geometry fall apart. I think this connection with quantum error correction is the deepest explanation we have for why that’s the case.”

2019-04-19: Quantum Diff Privacy. On connections between differential privacy and quantum computing
2019-06-18: Neven’s Law

That rapid improvement has led to what’s being called “Neven’s law,” a new kind of rule to describe how quickly quantum computers are gaining on classical ones. Quantum computers are gaining computational power relative to classical ones at a “doubly exponential” rate — a staggeringly fast clip. With double exponential growth, “it looks like nothing is happening, nothing is happening, and then whoops, suddenly you’re in a different world.”

This is certainly the most extreme of the nerd rapture curves i have seen:

the very near future should be the watershed moment, where quantum computers surpass conventional computers and never look back. Moore’s Law cannot catch up. A year later, it outperforms all computers on Earth combined. Double qubits again the following year, and it outperforms the universe.
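
to see why doubly exponential growth feels like “nothing is happening, then whoops”: if qubit counts double every period and simulating n qubits classically costs ~2^n, the classical-equivalent power grows like 2^(2^t). toy numbers, my own illustration of the claim rather than google’s data:

```python
# Neven's law sketch: qubit count n doubles each period t, and the
# classical cost of matching n qubits is ~2^n, so classical-equivalent
# power grows like 2^(2^t) -- doubly exponential.
for t in range(1, 6):
    n_qubits = 2 ** t                 # exponential growth in qubits
    classical_states = 2 ** n_qubits  # state-vector size ~2^n
    print(t, n_qubits, classical_states)
```

by t = 5 the toy device has only 32 qubits, but the matching classical state space has already passed 4 billion amplitudes; a few more doublings and no conventional machine keeps up.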

2019-06-23: Diamonds and Ion Qubits

In the mid-2000s a small diamond was mined from the Ural Mountains. It was called the ‘magic Russian sample’. The diamond was extremely pure—almost all carbon, which isn’t common—but had a few impurities that gave it strange quantum mechanical properties. Now anyone can go online and buy a $500 quantum-grade diamond for an experiment. The diamonds have nitrogen impurities—but what Schloss’s group needs is a hole right next to one, called a nitrogen vacancy. Such “magic diamonds” can hold qubits in place and thus act the same way that a trapped-ion rig does. By replacing a single carbon atom in a diamond’s atomic lattice with a nitrogen atom and leaving a neighboring lattice node empty, engineers can create what’s called a nitrogen-vacancy (NV) center. This is generally inexpensive since it’s derived from nature.

2019-07-04: John Wright joins UT Austin

With no evaluative judgment attached, this is an unprecedented time for quantum computing as a field. Where once faculty applicants struggled to make a case for quantum computing (physics departments: “but isn’t this really CS?” / CS departments: “isn’t it really physics?” / everyone: “couldn’t this whole QC thing, like, all blow over in a year?”), today departments are vying with each other and with industry players and startups to recruit talented people. In such an environment, we’re fortunate to be doing as well as we are. We hope to continue to expand.

2019-07-26: Quantum hardware should make monte carlo methods more powerful & accurate.
2019-08-20: 1 Million Qubits

Fujitsu has a Digital Annealer with 8192 qubits and a 1M qubit system in the lab. Digital Annealer is a new technology that is used to solve large-scale combinatorial optimization problems instantly. Digital Annealer uses a digital circuit design inspired by quantum phenomena and can solve problems which are difficult and time consuming for classical computers.

2019-11-12: Topological Quantum Computer

Microsoft is developing Majorana-based topological quantum computer qubits which will be higher-quality and lower error rate qubits. A high-quality hybrid system made of InSb nanowires with epitaxial-grown Al shells has revealed ballistic superconductivity and quantized zero-bias conductance peak. This holds great promise for making the long-sought topological quantum qubits.

2019-12-13: 10000x Faster Quantum Simulations

they have made the simulation of the quantum electrons so fast that it can run for extremely long times without restrictions, making the effect of the electrons’ motion on the movement of the slow ions visible

2020-01-14: MIP*=RE

easily one of the biggest complexity-theoretic surprises so far in this century

2020-05-07: Room Temperature QC

Transparent crystals with optical nonlinearities could enable quantum computing at room temperature by 2030

2020-12-03: BosonSampling. A second method achieves quantum supremacy.

Do you have any amusing stories? When I refereed the Science paper, I asked why the authors directly verified the results of their experiment only for up to 26-30 photons, relying on plausible extrapolations beyond that. While directly verifying the results of n-photon BosonSampling takes ~2^n time for any known classical algorithm, I said, surely it should be possible with existing computers to go up to n=40 or n=50? A couple weeks later, the authors responded, saying that they’d now verified their results up to n=40, but it burned $400000 worth of supercomputer time so they decided to stop there. This was by far the most expensive referee report I ever wrote!
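
the economics of that referee request follow directly from the ~2^n verification cost: every extra photon doubles the bill. a rough back-of-envelope (my extrapolation, not the authors’ accounting):

```python
# Classical verification of n-photon BosonSampling scales like ~2^n,
# so going from n=40 to n=50 multiplies the cost by 2^10 = 1024.
cost_40 = 2 ** 40
cost_50 = 2 ** 50
factor = cost_50 // cost_40
print(factor)                 # each of the 10 extra photons doubles the cost

# If n=40 burned $400000 of supercomputer time, naive 2^n scaling
# puts n=50 at roughly $400 million.
print(400_000 * factor)
```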

2021-12-06: Quantum Computing Overview. A really good overview of the field of quantum computing, with a clear explanation of how quantum computers work; why people are excited about quantum algorithms and their value; the potential applications of quantum computers, including quantum simulation, artificial intelligence and more; the different models and physical implementations people are using to build them, like superconducting devices, quantum dots, trapped ions, photons or neutral atoms; and the challenges they face.


2023-02-23: Quantum Error Correction breakthrough

Here we report the measurement of logical qubit performance scaling across several code sizes, and demonstrate that our system of superconducting qubits has sufficient performance to overcome the additional errors from increasing qubit number. We find that our distance-5 surface code logical qubit modestly outperforms an ensemble of distance-3 logical qubits on average, in terms of both logical error probability over 25 cycles and logical error per cycle ((2.914 ± 0.016)% compared to (3.028 ± 0.023)%). To investigate damaging, low-probability error sources, we run a distance-25 repetition code and observe a 1.7 × 10⁻⁶ logical error per cycle floor set by a single high-energy event (1.6 × 10⁻⁷ excluding this event). We accurately model our experiment, extracting error budgets that highlight the biggest challenges for future systems. These results mark an experimental demonstration in which quantum error correction begins to improve performance with increasing qubit number, illuminating the path to reaching the logical error rates required for computation.

2023-06-19: It might be possible to work around noise, making quantum computing practical.

IBM physicist Abhinav Kandala conducted precise measurements of the noise in each of their qubits, which can follow relatively predictable patterns determined by their position inside the device, microscopic imperfections in their fabrication and other factors. Using this knowledge, the researchers extrapolated back to what their measurements — in this case, of the full state of magnetization of a 2D solid — would look like in the absence of noise. They were then able to run calculations involving all of Eagle’s 127 qubits and up to 60 processing steps — more than any other reported quantum-computing experiment. The results validate IBM’s short-term strategy, which aims to provide useful computing by mitigating, as opposed to correcting, errors. Over the longer term, IBM and most other companies hope to shift towards quantum error correction, a technique that will require large numbers of additional qubits for each data qubit.