Tag: science

PDE AI

Researchers at Caltech have introduced a new deep-learning technique for solving PDEs that is dramatically more accurate than previous deep-learning methods. It’s also much more generalizable, capable of solving entire families of PDEs, such as the Navier-Stokes equation for any type of fluid, without retraining. Finally, it is 1000x faster than traditional mathematical solvers. Here’s the crux of the paper: neural networks are usually trained to approximate functions between inputs and outputs defined in Euclidean space (your classic graph with x, y, and z axes). This time, the researchers instead defined the inputs and outputs in Fourier space. It’s far easier to approximate a Fourier function in Fourier space than to wrangle with PDEs in Euclidean space, which greatly simplifies the neural network’s job. Cue major accuracy and efficiency gains: in addition to its huge speed advantage over traditional methods, the technique achieves a 30% lower error rate on Navier-Stokes than previous deep-learning methods.
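The core move can be sketched in a few lines: transform the input function to Fourier space with an FFT, apply a learned linear map to the lowest frequency modes, and transform back. A toy numpy sketch of one such spectral layer follows; this illustrates the idea only, not the paper’s actual architecture, and the “weights” here are random rather than trained:

```python
import numpy as np

def spectral_layer(u, weights, n_modes):
    """One 'Fourier layer': go to frequency space, mix the lowest
    n_modes with a (learned) complex weight matrix, come back."""
    u_hat = np.fft.rfft(u)                          # physical -> Fourier space
    out_hat = np.zeros_like(u_hat)
    out_hat[:n_modes] = weights @ u_hat[:n_modes]   # linear map on low modes only
    return np.fft.irfft(out_hat, n=len(u))          # Fourier -> physical space

rng = np.random.default_rng(0)
n, n_modes = 64, 8
u = np.sin(2 * np.pi * np.arange(n) / n)            # toy 1-D input function
W = rng.normal(size=(n_modes, n_modes)) + 1j * rng.normal(size=(n_modes, n_modes))
v = spectral_layer(u, W, n_modes)
print(v.shape)
```

Because the learned map acts on frequency modes rather than on grid points, the same weights can be applied to the function at any resolution, which is part of why this family of models generalizes so well.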

Essential gene evolution

Essential genes are often thought to be frozen in evolutionary time, evolving only very slowly if at all, because any change could kill the organism. Hundreds of millions of years of evolution separate insects and mammals, yet experiments show that the Hox genes guiding the development of body plans in Drosophila fruit flies and in mice are so similar that they can be swapped without a hitch. This remarkable evolutionary conservation is a foundational concept in genome research.

But a new study turns this rationale for genetic conservation on its head. Researchers at the Fred Hutchinson Cancer Research Center in Seattle reported last week in eLife that a large class of genes in fruit flies is both essential for survival and evolving extremely rapidly. In fact, the scientists’ analysis suggests that the genes’ ability to keep changing is the key to their essential nature. “Not only is this questioning the dogma, it is blowing the dogma out of the water.”

Metagenomic testing

Scientists have developed a single clinical laboratory test capable of zeroing in on the microbial miscreant afflicting a patient in as little as 6 hours – irrespective of what body fluid is sampled, the type or species of infectious agent, or whether physicians start out with any clue as to what the culprit may be.

The test will be a lifesaver, speeding appropriate drug treatment for the seriously ill, and should transform the way infectious diseases are diagnosed. Conventional diagnostic tests are designed to detect only one pathogen, or at most a small panel of candidates. In contrast, the new protocol employs powerful “next-generation” DNA-sequencing technology to survey all the DNA in a sample, which may come from any species: human, bacterial, viral, parasitic, or fungal. Clinicians do not need to have a suspect in mind. To identify a match, the new test relies on specially developed analytical software to compare DNA sequences in the sample against massive genomic databases covering all known pathogens.
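The matching step can be illustrated with a toy k-mer comparison. The reference names and sequence fragments below are made up for illustration, and real pipelines index full genomic databases with far more sophisticated software, but the sketch shows the shape of the idea:

```python
from collections import Counter

def kmers(seq, k=8):
    """All length-k substrings of a DNA sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

# Toy reference "database": tiny made-up fragments, purely illustrative.
# A real pipeline would index full genomes of all known pathogens.
references = {
    "pathogen_A": "ATGGCGTACGTTAGCATCGATCGGATCCTAG",
    "pathogen_B": "TTGACCGGTTAACCGGATATCCGGAATTCCG",
}
index = {name: kmers(g) for name, g in references.items()}

def classify(read, k=8):
    """Assign a sequencing read to the reference sharing the most k-mers."""
    hits = Counter({name: len(kmers(read, k) & km) for name, km in index.items()})
    name, score = hits.most_common(1)[0]
    return name if score > 0 else None

read = "GCGTACGTTAGCATCG"   # a read copied from the pathogen_A fragment above
print(classify(read))
```

Because the comparison is against everything in the database at once, the same code path handles a bacterium, a virus, or a fungus with no prior hypothesis, which is the point of the clinical test.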

T-Cell testing

T cell assays are very labor-intensive indeed, and the sample sizes in the papers on them tend to be in the tens. The Oxford Immunotec people are trying to improve that. “There has…never been great demand for wading into the intricacies of T cell tests.” The test is definitely better than antibody measurements at determining whether a person has had a previous coronavirus infection, and if we put that together with the other papers mentioned, it could be that this extends to saying how much protection these people retain. So the story is coming together. And just as vaccine work is never going to be the same after the huge amount of work done during this pandemic, it looks like T-cell research is never going to be the same, either. They’re both going to be better, faster, and more detailed, and that’s good. Because we’re going to need all of this again some day.

Indistinguishability Obfuscation

The scheme’s security rests on 4 mathematical assumptions that have been widely used in other cryptographic contexts. And even the assumption that has been studied the least, called the “learning parity with noise” assumption, is related to a problem that has been studied since the 1950s. “You could imagine that maybe 50 years from now the crypto textbooks will basically say, ‘OK, here is a very simple construction of iO, and from that we’ll now derive all of the rest of crypto.’”
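To give a feel for the least-studied of those assumptions, here is a toy sketch of the learning-parity-with-noise problem itself (not of the iO construction): the solver sees noisy parities of random subsets of a secret bit-vector’s bits, and recovering the secret is believed to be hard. The parameter choices below are purely illustrative:

```python
import numpy as np

def lpn_samples(secret, m, noise_rate, rng):
    """Generate m LPN samples (a, <a, s> XOR e) over GF(2)."""
    n = len(secret)
    A = rng.integers(0, 2, size=(m, n))            # random bit-vectors a
    e = (rng.random(m) < noise_rate).astype(int)   # sparse random noise bits
    b = (A @ secret + e) % 2                       # noisy parities
    return A, b

rng = np.random.default_rng(1)
s = rng.integers(0, 2, size=16)                    # the hidden secret
A, b = lpn_samples(s, m=100, noise_rate=0.1, rng=rng)
# Without the noise bits e, Gaussian elimination over GF(2) would
# recover s from (A, b); the noise is what makes the problem hard.
print(A.shape, b.shape)
```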

Makapansgat pebble

A 260-gram, 8.3-cm-long, reddish-brown jasperite cobble, dating to roughly 3 million years before present, with natural chipping and wear patterns that make it look like a crude rendition of a human face. It has been suggested that some australopithecine recognized it as a symbolic face, in possibly the earliest example of symbolic thinking or aesthetic sense in the human heritage, and brought the pebble back to the cave. That would make it a candidate for the oldest known manuport.

DRL sample efficiency

We find considerable progress in the sample efficiency of DRL, at rates comparable to the progress in algorithmic efficiency seen in deep learning. If the trends we observed prove robust and continue, the huge amounts of simulated data currently necessary to achieve state-of-the-art results in DRL might not be required for future applications, and training in real-world contexts could become feasible.

DL generalize to brains

Last year, DiCarlo’s team published results that took on both the opacity of deep nets and their alleged inability to generalize. The researchers used a version of AlexNet to model the ventral visual stream of macaques and figured out the correspondences between the artificial neuron units and neural sites in the monkeys’ V4 area. Then, using the computational model, they synthesized images that they predicted would elicit unnaturally high levels of activity in the monkey neurons. In one experiment, when these “unnatural” images were shown to monkeys, they elevated the activity of 68% of the neural sites beyond their usual levels; in another, the images drove up activity in one neuron while suppressing it in nearby neurons. Both results were predicted by the neural-net model.

To the researchers, these results suggest that the deep nets do generalize to brains and are not entirely unfathomable. “However, we acknowledge that … many other notions of ‘understanding’ remain to be explored to see whether and how these models add value,” they wrote.
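The image-synthesis step is a form of activation maximization: gradient ascent on the input to drive up a chosen unit’s response. A toy numpy sketch with a single linear “unit” standing in for the network (purely illustrative; the actual work optimized images against the trained AlexNet-based model):

```python
import numpy as np

def synthesize(unit_weights, steps=200, lr=0.1):
    """Gradient ascent on the input to maximize one linear 'unit',
    projecting back to the unit sphere (a stand-in for pixel constraints)."""
    rng = np.random.default_rng(0)
    x = rng.normal(size=unit_weights.shape)
    x /= np.linalg.norm(x)
    for _ in range(steps):
        x += lr * unit_weights          # gradient of w.x with respect to x is w
        x /= np.linalg.norm(x)          # project back onto the constraint set
    return x

w = np.random.default_rng(2).normal(size=64)   # a toy "neuron"
x_start = np.random.default_rng(0).normal(size=64)
x_start /= np.linalg.norm(x_start)             # same random starting input
x_opt = synthesize(w)
# The optimized input drives the unit harder than the random start did.
print(float(w @ x_opt), float(w @ x_start))
```

In the real experiments the “unit” is a model of a recorded neural site and the input is an image, so the synthesized stimuli become testable predictions about the monkey’s neurons, which is what makes the result a genuine check on the model.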