Tag: math

Busy Beaver

The goal of the “busy beaver” game is to find the longest-running computer program of a given size. Its pursuit has surprising connections to some of the most profound questions and concepts in mathematics. For instance, researchers have constructed a 27-state Turing machine that halts if and only if Goldbach’s conjecture is false, so knowing BB(27), the maximum number of steps any halting 27-state machine can take, would in principle settle the conjecture: run the machine that long and see whether it has halted. The rub is that BB(27) is such an incomprehensibly huge number that even writing it down, much less running the Goldbach-falsifying machine for that many steps, isn’t remotely possible in our physical universe. Nevertheless, that incomprehensibly huge number is still an exact figure whose magnitude represents “a statement about our current knowledge” of number theory.
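To make the game concrete, here is a minimal sketch (my own Python, not from the article) that simulates the known 2-state, 2-symbol busy beaver champion. No 2-state machine that halts runs longer, which is why BB(2) = 6:

```python
def run_turing_machine(rules, max_steps=10_000):
    """Simulate a 2-symbol Turing machine on an all-zero tape.

    rules maps (state, symbol) -> (write, move, next_state), where move
    is +1 (right) or -1 (left) and next_state may be "HALT".
    Returns (steps_taken, number_of_ones_left_on_tape).
    """
    tape = {}  # sparse tape: position -> symbol (default 0)
    pos, state, steps = 0, "A", 0
    while state != "HALT" and steps < max_steps:
        write, move, state = rules[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
        steps += 1
    return steps, sum(tape.values())

# The published 2-state busy beaver champion.
bb2 = {
    ("A", 0): (1, +1, "B"), ("A", 1): (1, -1, "B"),
    ("B", 0): (1, -1, "A"), ("B", 1): (1, +1, "HALT"),
}

steps, ones = run_turing_machine(bb2)
print(steps, ones)  # 6 4 — halts after 6 steps with 4 ones, so BB(2) = 6
```

The same simulator would, in principle, decide Goldbach’s conjecture if fed the 27-state machine and run for BB(27) steps; the point of the excerpt is that this is physically impossible.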

PDE AI

Researchers at Caltech have introduced a new deep-learning technique for solving PDEs that is dramatically more accurate than previous deep-learning methods. It’s also much more generalizable, capable of solving entire families of PDEs—such as the Navier-Stokes equation for any type of fluid—without needing retraining. Finally, it is 1,000x faster than traditional numerical solvers. Now here’s the crux of the paper. Neural networks are usually trained to approximate functions between inputs and outputs defined in Euclidean space, your classic graph with x, y, and z axes. But this time, the researchers decided to define the inputs and outputs in Fourier space. It’s far easier to approximate a Fourier function in Fourier space than to wrangle with PDEs in Euclidean space, which greatly simplifies the neural network’s job. Cue major accuracy and efficiency gains: in addition to its huge speed advantage over traditional methods, the technique achieves a 30% lower error rate when solving Navier-Stokes than previous deep-learning methods.
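The core building block can be sketched as a “spectral convolution” layer: transform the input field to Fourier space, apply learned weights to the lowest frequency modes, and transform back. This is a minimal NumPy illustration of that idea (the real model is a trained deep network; the weights here are random placeholders):

```python
import numpy as np

def spectral_conv_1d(u, weights, n_modes):
    """One Fourier layer: FFT the signal, scale the lowest n_modes
    frequencies by learned complex weights, inverse FFT back.
    u: real signal of shape (n,); weights: complex array of shape (n_modes,)."""
    u_hat = np.fft.rfft(u)                            # to Fourier space
    out_hat = np.zeros_like(u_hat)
    out_hat[:n_modes] = weights * u_hat[:n_modes]     # mix only low modes
    return np.fft.irfft(out_hat, n=len(u))            # back to physical space

rng = np.random.default_rng(0)
n, n_modes = 256, 16
u = np.sin(2 * np.pi * np.arange(n) / n)              # toy input field
w = rng.normal(size=n_modes) + 1j * rng.normal(size=n_modes)  # placeholder weights
v = spectral_conv_1d(u, w, n_modes)
print(v.shape)  # (256,)
```

Truncating to a few Fourier modes is also what makes the operator resolution-independent: the same learned weights apply whatever grid the input is sampled on.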

AI Symbolic Mathematics

For almost all the problems, the program took less than 1 second to generate correct solutions. And on the integration problems, it outperformed some solvers in the popular software packages Mathematica and Matlab in terms of speed and accuracy. The Facebook team reported that the neural net produced solutions to problems that neither of those commercial solvers could tackle.
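For a sense of what the classical baseline does, here is symbolic integration in the open-source SymPy library (chosen purely for illustration; the article’s comparisons were against Mathematica and Matlab):

```python
import sympy as sp

x = sp.symbols("x")
# A symbolic integration problem of the kind posed to the neural net.
expr = x * sp.cos(x)
antiderivative = sp.integrate(expr, x)
print(antiderivative)  # x*sin(x) + cos(x)

# Verify by differentiating back: d/dx result must equal the integrand.
assert sp.simplify(sp.diff(antiderivative, x) - expr) == 0
```

Classical solvers search through rule tables and algorithms such as Risch’s; the neural net instead treats the expression tree as a “sentence” and translates it, which is why it can sometimes answer where rule-based search gives up.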

Fluid Equation Singularity

In the same way that eddies in a stream alter downstream currents, Elgindi’s work itself prompted a new round of mathematical discovery. In October 2019, Hou and Jiajie Chen adapted some of Elgindi’s methods to create a rigorous mathematical proof of a scenario closely related to the one in the 2013 experiment. They proved that in this slightly modified scenario, the singularity they’d observed forming in the Euler equations really does occur.

“They took Elgindi’s ideas and applied them to the scenario from 2013.” The circle was complete.

There’s still more work to be done, of course. Hou’s new proof has some technical qualifications that prevent it from establishing the existence of the singularity in the exact situation he modeled in 2013. But after a remarkable 6-year run and with renewed momentum, Hou believes he’ll soon surmount those challenges, too. “I think we’re very close.”

2022-04-12:

Now another group has joined the hunt. They’ve found an approximation of their own — one that closely resembles Hou and Luo’s result — using a completely different approach. They’re currently using it to write their own computer-assisted proof. The team’s answer looked a lot like the solution that Hou and Luo had arrived at in 2013. But the mathematicians hope that their approximation paints a more detailed picture of what’s happening, since it marks the first direct calculation of a self-similar solution for this problem. “The new result specifies more precisely how the singularity is formed. You’re really extracting the essence of the singularity. It was very difficult to show this without neural networks. It’s clear as night and day that it’s a much easier approach than traditional methods.”

Interpretable models

Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead

2023-03-01: Math approaches can help with interpretation

Let’s take the set of all cat images and the set of all images that aren’t cats. We’re going to view them as topological shapes, or manifolds. One is the manifold of cats and the other is the manifold of non-cats. These are going to be intertwined in some complicated way. Why? Because certain things that are not cats look very much like cats: mountain lions sometimes get mistaken for cats, and so do replicas of cats. The big thing is that the two manifolds are intertwined in some very complex manner.
I measure the shape of the manifold as it passes through the layers of a neural network. Ultimately, I can show that it reduces to the simplest possible form. You can view a neural network as a device for simplifying the topology of the manifolds under study.
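A toy illustration of that topological picture (my own sketch, not the researcher’s code): two classes arranged as concentric circles in the plane cannot be separated by any straight line, but one nonlinear feature map — here simply the radius — untangles them into two clusters that a single threshold separates, much as a network’s layers simplify the manifolds’ shape:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200

# Class 0: points near a circle of radius 1; class 1: near radius 3.
theta = rng.uniform(0, 2 * np.pi, size=2 * n)
radii = np.concatenate([np.full(n, 1.0), np.full(n, 3.0)])
radii += rng.normal(scale=0.1, size=2 * n)  # small noise
points = np.stack([radii * np.cos(theta), radii * np.sin(theta)], axis=1)
labels = np.concatenate([np.zeros(n), np.ones(n)])

# In (x, y) coordinates the classes are not linearly separable, but the
# nonlinear feature r = sqrt(x^2 + y^2) makes one threshold suffice.
r = np.linalg.norm(points, axis=1)
predictions = (r > 2.0).astype(float)
accuracy = (predictions == labels).mean()
print(accuracy)  # 1.0 at this noise level
```

The circles here are linked only in a weak sense; real image manifolds are vastly higher-dimensional, which is why measuring how their shape changes layer by layer is the interesting part.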

Approximating Pi

Under what circumstances is it possible to represent irrational numbers that go on forever—like pi—with simple fractions, like 22/7? The proof establishes that the answer to this very general question turns on the outcome of a single calculation. The Duffin-Schaeffer conjecture has you add up the measures of the sets of irrational numbers captured by each approximating fraction. It represents this number as a large arithmetic sum. Then it makes its key prediction: If that sum goes off to infinity, then you have approximated virtually all irrational numbers; if that sum instead stops at a finite value, no matter how many measures you sum together, then you’ve approximated virtually no irrational numbers.
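That criterion can be illustrated numerically (my own sketch). With approximation quality |x − p/q| < ψ(q)/q over reduced fractions p/q, the decisive sum is Σ_q φ(q)·ψ(q)/q, where φ is Euler’s totient (it counts the reduced fractions with denominator q). For ψ(q) = 1/q the partial sums grow without bound, so almost every irrational is approximated; for ψ(q) = 1/q² they level off at a finite value, so almost none are:

```python
def totient_sieve(n):
    """Euler's totient phi(q) for q = 0..n, computed by a sieve."""
    phi = list(range(n + 1))
    for p in range(2, n + 1):
        if phi[p] == p:  # p is prime (untouched so far)
            for multiple in range(p, n + 1, p):
                phi[multiple] -= phi[multiple] // p
    return phi

N = 10_000
phi = totient_sieve(N)

# The Duffin-Schaeffer criterion sum: S = sum_q phi(q) * psi(q) / q.
diverging = sum(phi[q] * (1 / q) / q for q in range(1, N + 1))      # psi(q) = 1/q
converging = sum(phi[q] * (1 / q**2) / q for q in range(1, N + 1))  # psi(q) = 1/q^2

print(diverging, converging)
# The first grows like log(N); the second approaches zeta(2)/zeta(3) ≈ 1.368.
```

So with this dichotomy, deciding “almost all” versus “almost none” reduces to checking whether one explicit series diverges.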

Universal Scaling

And this scrambling process happens very early indeed. In their papers this spring, Berges, Gasenzer and their collaborators independently described prescaling for the first time, a period before universal scaling that their papers predicted for nuclear collisions and ultracold atoms, respectively. Prescaling suggests that when a system first evolves from its initial, far-from-equilibrium condition, scaling exponents don’t yet perfectly describe it. The system retains some of its previous structure — remnants of its initial configuration. But as prescaling progresses, the system assumes a more universal form in space and time, essentially obscuring irrelevant information about its own past. If this idea is borne out by future experiments, prescaling may be the nocking of time’s arrow onto the bowstring.
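Universal scaling itself can be demonstrated with a toy calculation (my own sketch, not the groups’ simulations): if a distribution evolves self-similarly as f(x, t) = t^(−α)·f_s(x·t^(−β)), then snapshots at different times all collapse onto the single universal curve f_s once rescaled by the exponents α and β:

```python
import numpy as np

alpha, beta = 0.5, 0.25           # toy scaling exponents
f_s = lambda y: np.exp(-y**2)     # the universal scaling function

def f(x, t):
    """A distribution evolving self-similarly in time."""
    return t**(-alpha) * f_s(x * t**(-beta))

# Snapshots at different times look different in the original variables...
y = np.linspace(-3, 3, 101)
collapsed = [t**alpha * f(y * t**beta, t) for t in (1.0, 2.0, 5.0, 10.0)]

# ...but after rescaling by t^alpha and t^beta they all lie on f_s(y).
spread = max(np.max(np.abs(c - f_s(y))) for c in collapsed)
print(spread)  # ~0: perfect collapse onto the universal curve
```

Prescaling, in this picture, would be the early period when the rescaled snapshots have not yet collapsed: the exponents are not yet exact because the system still remembers its initial configuration.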