Leapfrogging Moore’s Law

Fun idea: save compute energy by selectively using lower-precision math, then reinvest the savings:

We run the algorithm in double precision to a given error bound and measure energy consumption; this is our energy budget. We then run the algorithm in single precision for a number of iterations, followed by double precision for a number of iterations, consuming the same energy as before, and measure the new error bound. The ratio between the first error bound and the second is the achieved improvement factor. For Laplace, we were able to achieve an improvement factor of 10^4; for Rosenbrock, 10^8.
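To make the protocol concrete, here is a minimal sketch (not the authors' code) using Jacobi iteration on a 1-D Laplace problem. Iteration cost stands in for measured energy, the relative costs in `COST` and the 50/50 budget split are assumptions, and the printed improvement factor is whatever this toy setup happens to produce, not the 10^4 figure quoted above.

```python
import numpy as np

# Assumed relative energy cost per Jacobi sweep (not measured values).
COST = {np.float64: 1.0, np.float32: 0.5}

def jacobi(u, n_iters, dtype):
    """Run n_iters Jacobi sweeps on the interior of u in the given precision."""
    u = u.astype(dtype)
    for _ in range(n_iters):
        u[1:-1] = 0.5 * (u[:-2] + u[2:])   # 1-D Laplace stencil, zero right-hand side
    return u

def error(u, exact):
    """Max-norm error against the exact solution, evaluated in double."""
    return np.abs(u.astype(np.float64) - exact).max()

n = 256
exact = np.linspace(0.0, 1.0, n)        # exact solution: linear ramp between the boundaries
u0 = np.zeros(n); u0[-1] = 1.0          # boundary conditions u(0)=0, u(1)=1

# Step 1: double-precision reference run; its cost defines the energy budget.
iters_double = 20_000
budget = iters_double * COST[np.float64]
err_ref = error(jacobi(u0, iters_double, np.float64), exact)

# Step 2: spend the same budget as single-precision sweeps followed by
# double-precision sweeps (the 50/50 split here is an arbitrary choice).
iters_single = int(0.5 * budget / COST[np.float32])
iters_refine = int((budget - iters_single * COST[np.float32]) / COST[np.float64])
u = jacobi(u0, iters_single, np.float32)
u = jacobi(u, iters_refine, np.float64)
err_mixed = error(u, exact)

# Step 3: the ratio of the two error bounds is the achieved improvement factor.
print(f"double-only error : {err_ref:.3e}")
print(f"mixed-precision   : {err_mixed:.3e}")
print(f"improvement factor: {err_ref / err_mixed:.2f}")
```

The cheaper single-precision sweeps buy extra iterations within the same budget, and the final double-precision sweeps recover accuracy the lower precision cannot reach on its own.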
