Computation, AI and the future of mathematics


Optimal stacking of oranges.

Computer tools in mathematics: From foot dragging to full embrace

The present author recalls, while in graduate school at Stanford nearly 50 years ago, hearing two senior mathematicians (including the present author’s advisor) briefly discuss some other researcher’s usage of computation in a mathematical investigation. Their comments and body language clearly indicated a profound disdain for such efforts. Indeed, the prevailing mindset at the time was “real mathematicians don’t compute.”

Some researchers at the time attempted to draw more attention to the potential of computer tools in mathematics. In the early 1990s, Keith Devlin edited a column in the Notices of the American Mathematical Society on applications of computers in the field. Journals such as Mathematics of Computation and the newly founded Experimental Mathematics received a growing number of submissions. But in general, computation and computer tools still played a relatively muted role in the field.

How times have changed! Nowadays most mathematicians are comfortable using mathematical software such as Mathematica, Maple or Sage. Many have either personally written computer code in some language as part of a mathematical investigation, or, at the least, have worked closely with a colleague who writes such code. Most present-day mathematicians are familiar with using LaTeX to typeset papers and produce presentation viewgraphs, and many are also familiar with collaboration tools such as blogs, FaceTime, Skype, Zoom, Slack and Microsoft Teams.

One important development is that a growing number of mathematicians have experience using proof-assistant software to encode and rigorously verify every step of a mathematical proof. Some of the more widely used systems include Lean, Coq, HOL and Isabelle.
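To give a flavor of what such a formalization looks like, here is a minimal Lean 4 sketch, a toy illustration only and not drawn from any of the projects discussed below. Each statement is checked mechanically by the Lean kernel, so a collaborator can accept it without re-verifying the reasoning by hand.

```lean
-- Two toy theorems in Lean 4. The first is closed by definitional
-- computation (`rfl`); the second appeals to the library lemma
-- `Nat.add_comm`. Once Lean accepts the proofs, no human needs to
-- re-check the individual steps.
theorem two_plus_two_eq_four : 2 + 2 = 4 := rfl

theorem add_comm_of_nat (m n : Nat) : m + n = n + m := Nat.add_comm m n
```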

Indeed, in sharp contrast to 50 years ago, perhaps today one would say “real mathematicians must compute,” particularly if they wish to pursue state-of-the-art research in collaboration with other talented mathematicians. Here we mention just a few of the interesting developments in this arena.

Hales’ proof of the Kepler conjecture

The Kepler conjecture is the assertion that the simple scheme of stacking oranges typically seen in a supermarket has the highest possible average density, namely $\pi/(3 \sqrt{2}) = 0.740480489\ldots$, among all possible arrangements, regular or irregular. It is named after the 17th-century astronomer Johannes Kepler, best known as the first to propose that the planets orbit the sun in elliptical paths, who posed the packing question in 1611.
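The value $\pi/(3\sqrt{2})$ itself comes from a short calculation for the face-centered cubic ("supermarket") arrangement: in a cubic cell of side $a$, the spheres touch along each face diagonal, and the cell contains the equivalent of four spheres, so
$$4r = a\sqrt{2} \;\Longrightarrow\; r = \frac{a}{2\sqrt{2}}, \qquad \text{density} = \frac{4 \cdot \frac{4}{3}\pi r^3}{a^3} = \frac{\pi}{3\sqrt{2}} \approx 0.7405.$$
The hard part of the conjecture is showing that no other arrangement, however irregular, can do better.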

In the early 1990s, Thomas Hales, following an approach first suggested by Laszlo Fejes Toth in 1953, determined that the maximum density of all possible arrangements could be found by minimizing a certain function of 150 variables. In 1992, assisted by his graduate student Samuel Ferguson (son of famed mathematician-sculptor Helaman Ferguson), Hales embarked on a multi-year effort to establish a lower bound for this function on each of roughly 5,000 different configurations of spheres, using a linear programming approach. In 1998, Hales and Ferguson announced that their project was complete, documented by 250 pages of notes and three gigabytes of computer code, data and results.
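As a rough illustration of the linear-programming step (a toy sketch with made-up numbers, not Hales' actual configurations or inequalities): if the nonlinear score of every arrangement in some region is known to dominate a linear function $c \cdot x$ over a polytope $\{x : Ax \le b\}$, then the optimal value of the corresponding linear program is a rigorous lower bound for the score over that entire region.

```python
# Toy sketch of certifying a lower bound via linear programming.
# The numbers below are invented for illustration; they are not taken
# from Hales' proof.
import numpy as np
from scipy.optimize import linprog

c = np.array([1.0, 2.0, -1.0])            # coefficients of a linear underestimator
A_ub = np.array([[1.0, 1.0, 1.0],         # constraints A x <= b describing
                 [-1.0, 2.0, 0.0]])       # the region of configurations
b_ub = np.array([4.0, 3.0])
bounds = [(0.0, 2.0)] * 3                 # box constraints on each variable

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print("certified lower bound over this region:", res.fun)
```

In Hales' proof, roughly 5,000 such subproblems, one per configuration class, had to be bounded in this spirit, which is what made the computation so large.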

Hales’ computer-assisted proof generated some controversy: the journal Annals of Mathematics initially demurred, although it ultimately accepted his paper. In the wake of this controversy, Hales embarked on a collaborative effort to certify the proof via automated techniques. This project, named Flyspeck, employed the proof-checking software HOL Light and Isabelle. The effort commenced in 2003, but was not completed until 2014.

Results on the twin-prime conjecture

Another good example of computer-based tools in modern mathematics is recent work on the twin prime conjecture. The twin prime conjecture is the assertion that there are infinitely many pairs of primes differing by just two, such as $(11, 13), (101, 103)$, and $(4799, 4801)$. Although many eminent mathematicians have investigated the conjecture, few solid results have been obtained, and the conjecture remains open.
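For small ranges such pairs are easy to enumerate by computer. Here is a minimal Python sketch (illustrative only) that uses a Sieve of Eratosthenes to list the twin-prime pairs below 5,000; the conjecture, of course, concerns whether such pairs continue to appear forever.

```python
# List twin-prime pairs below a bound using a Sieve of Eratosthenes.
def primes_below(n):
    sieve = bytearray([1]) * n
    sieve[0:2] = b"\x00\x00"                      # 0 and 1 are not prime
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = b"\x00" * len(range(p * p, n, p))
    return [i for i in range(n) if sieve[i]]

ps = primes_below(5000)
twins = [(p, q) for p, q in zip(ps, ps[1:]) if q - p == 2]
print(twins[:4])                # [(3, 5), (5, 7), (11, 13), (17, 19)]
print((4799, 4801) in twins)    # True: one of the pairs mentioned above
```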

In 2013, the previously unheralded mathematician Yitang Zhang posted a paper proving that there are infinitely many pairs of primes that differ by no more than 70,000,000. While this certainly did not prove the full conjecture (the 70,000,000 figure would have to be reduced to just two), the paper and its underlying techniques constituted what one reviewer termed “a landmark theorem in the distribution of prime numbers.”

Zhang based his work on a 2005 paper by Dan Goldston, Janos Pintz and Cem Yildirim. Their paper employed a sieving method, analogous to the Sieve of Eratosthenes, to identify pairs of primes that are closer together than average. They then proved that for any fraction, no matter how small, there are infinitely many pairs of consecutive primes closer together than that fraction of the average gap.

Immediately after Zhang’s result was made public, researchers wondered whether the figure 70,000,000 could be lowered. Soon Terence Tao, James Maynard and several other prominent mathematicians instituted an online collaboration known as Polymath to further reduce the gap. (The Polymath system had previously been developed by Timothy Gowers and others, and has an impressive list of results.) When the dust had settled, the collaboration had lowered the limit to just 246, using results developed by Zhang, Maynard and others. The resulting paper was listed as authored by “D.H.J. Polymath.”

New research results using data mining, machine learning and artificial intelligence

It is no secret that the methods of data mining, machine learning and artificial intelligence have made dramatic advances in recent years. Here are some well-known examples:

  1. In 2011, IBM’s “Watson” computer system defeated two premier champions of the American quiz show Jeopardy!.
  2. In 2017, AlphaGo Zero, developed by DeepMind, a subsidiary of Alphabet (Google’s parent company), defeated an earlier program, also developed by DeepMind, which in turn had defeated the world’s best Go player, a feat that many observers had not expected to see for decades. By one measure, AlphaGo Zero’s performance is as far above the world’s best Go player as the world’s best Go player is above a typical amateur.
  3. In 2020, AlphaFold 2, also developed by DeepMind, scored 92% on the 2020 Critical Assessment of Protein Structure Prediction (CASP) test, far above the 62% achieved by the second-best program in the competition. Nobel laureate Venki Ramakrishnan of Cambridge University exulted, “This computational work represents a stunning advance on the protein-folding problem, a 50-year-old grand challenge in biology. It has occurred decades before many people in the field would have predicted.”
  4. The financial industry already relies heavily on financial machine learning methods, and a major expansion of these technologies is coming, possibly displacing thousands of highly paid workers or rendering their skills obsolete.
  5. In January 2023, researchers with the Search for Extraterrestrial Intelligence (SETI) project announced that they are deploying machine-learning techniques to sift through large datasets of microwave data. As Alexandra Witze asks in Nature, “Will an AI Be the First to Discover Alien Life?”
  6. In 2022, OpenAI launched ChatGPT, a software system with a remarkable facility for generating surprisingly cogent text on many topics and for interacting with humans. This in turn has launched an extraordinary boom in AI research and development by several competing laboratories and firms, a boom that shows no sign of abating. In June 2024, Apple announced that it would soon provide OpenAI software in iPhones and other devices. Firms such as Nvidia that produce chips for AI have seen explosive demand for their products.

Advanced computational methods in mathematical research

Advanced machine learning, data mining and AI software is also being applied in mathematical research. One example is known as the Ramanujan Machine. This is a systematic computational approach, developed by a consortium of researchers mostly based in Israel, that attempts to discover formulas for fundamental mathematical constants and to reveal their underlying structure. Their techniques have rediscovered numerous well-known formulas and produced several new ones, notably some continued fraction representations of $\pi$, $e$, Catalan’s constant, and values of the Riemann zeta function at integer arguments.
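A small numerical experiment conveys the flavor of this kind of search: evaluate a candidate continued fraction to high accuracy and compare it with the target constant. The sketch below (illustrative only) uses the classical, long-known simple continued fraction of $e$, namely $[2; 1, 2, 1, 1, 4, 1, 1, 6, \ldots]$. This is not one of the Ramanujan Machine's new results, but it is the same style of formula that such algorithms generate and test numerically.

```python
# Evaluate the simple continued fraction of e, [2; 1, 2, 1, 1, 4, 1, 1, 6, ...],
# and compare the result with math.e.
import math

def e_continued_fraction(n_terms=30):
    terms = []
    k = 1
    while len(terms) < n_terms:          # partial quotients repeat as 1, 2k, 1
        terms += [1, 2 * k, 1]
        k += 1
    value = terms[n_terms - 1]           # evaluate from the bottom up
    for a in reversed(terms[:n_terms - 1]):
        value = a + 1 / value
    return 2 + 1 / value

approx = e_continued_fraction()
print(approx, abs(approx - math.e))      # agreement to the limit of double precision
```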

A related but quite distinct approach has been taken by a separate group of researchers, including the famed mathematician brothers Jonathan Borwein and Peter Borwein (with some participation by the present author). By employing, in part, very high-precision computations, these researchers discovered a large number of new infinite series formulas for various mathematical constants, including $\pi$ and $\log(2)$, among others. Some of these formulas have the intriguing property that they permit one to directly compute binary or hexadecimal digits beginning at an arbitrary starting position, without needing to compute any of the digits that come before. Two example references are here and here.
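The best known of these results is the BBP (Bailey-Borwein-Plouffe) formula
$$\pi = \sum_{k=0}^{\infty} \frac{1}{16^k} \left( \frac{4}{8k+1} - \frac{2}{8k+4} - \frac{1}{8k+5} - \frac{1}{8k+6} \right),$$
which underlies the digit-extraction property. The following minimal Python sketch uses it to compute hexadecimal digits of $\pi$ beginning at an arbitrary position; it works in ordinary double precision, so it is reliable only for modest positions and a handful of digits, but it illustrates the idea.

```python
# Hexadecimal digits of pi starting at an arbitrary position, via the
# BBP formula and modular exponentiation (double-precision sketch only).
def bbp_sum(j, d, tail_terms=8):
    """Fractional part of sum_{k >= 0} 16^(d-k) / (8k + j)."""
    s = 0.0
    for k in range(d + 1):                       # 16^(d-k) reduced mod (8k + j)
        s = (s + pow(16, d - k, 8 * k + j) / (8 * k + j)) % 1.0
    for k in range(d + 1, d + 1 + tail_terms):   # rapidly shrinking tail
        s += 16.0 ** (d - k) / (8 * k + j)
    return s % 1.0

def pi_hex_digits(d, n=8):
    """n hex digits of pi starting at hex position d + 1 after the point."""
    x = (4 * bbp_sum(1, d) - 2 * bbp_sum(4, d)
         - bbp_sum(5, d) - bbp_sum(6, d)) % 1.0
    out = []
    for _ in range(n):
        x *= 16
        out.append("0123456789ABCDEF"[int(x)])
        x -= int(x)
    return "".join(out)

# In hexadecimal, pi = 3.243F6A8885A308D3...
print(pi_hex_digits(0))   # 243F6A88
print(pi_hex_digits(8))   # 85A308D3
```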

Along this line, one very useful online reference for computational mathematics is N.J.A. Sloane’s Online Encyclopedia of Integer Sequences, which at present contains over 375,000 entries, many of which include numerous references to places where a specific sequence appears in the mathematical literature. With the rise of serious data-mining technology, we may see remarkable new mathematical results discovered by mining this invaluable dataset.

Terence Tao on proof checkers, AI and the future of mathematical research

UCLA Fields Medalist mathematician Terence Tao recently sat down with Christoph Drosser, a writer for Spektrum der Wissenschaft, the German edition of Scientific American. Drosser and Tao discussed at length the current status of computer tools in mathematical research, including proof-checking software such as Lean and also AI-based tools, and what is likely to be the future course of the field with the ascendance of these tools. Here are some excerpts from this remarkable interview (with some minor editing by the present author):

Drosser: With the advent of automated proof checkers, how is [the trust between mathematicians] changing?

Tao: Now you can really collaborate with hundreds of people that you’ve never met before. And you don’t need to trust them, because they upload code and the Lean compiler verifies it. You can do much larger-scale mathematics than we do normally. When I formalized our most recent results on what is called the Polynomial Freiman-Ruzsa (PFR) conjecture, [I was working with] more than 20 people. We had broken up the proof into lots of little steps, and each person contributed a proof of one of these little steps. And I didn’t need to check line by line that the contributions were correct. I just needed to sort of manage the whole thing and make sure everything was going in the right direction. It was a different way of doing mathematics, a more modern way.

Drosser: German mathematician and Fields Medalist Peter Scholze collaborated in a Lean project — even though he told me he doesn’t know much about computers.

Tao: With these formalization projects, not everyone needs to be a programmer. Some people can just focus on the mathematical direction; you’re just splitting up a big mathematical task into lots of smaller pieces. And then there are people who specialize in turning those smaller pieces into formal proofs. We don’t need everybody to be a programmer; we just need some people to be programmers. It’s a division of labor.

[Continuing after some skip:]

Drosser: I heard about machine-assisted proofs 20 years ago, when it was a very theoretical field. Everybody thought you have to start from square one — formalize the axioms and then do basic geometry or algebra — and to get to higher mathematics was beyond people’s imagination. What has changed that made formal mathematics practical?

Tao: One thing that changed is the development of standard math libraries. Lean, in particular, has this massive project called mathlib. All the basic theorems of undergraduate mathematics, such as calculus and topology, and so forth, have one by one been put into this library. So people have already put in the work to get from the axioms to a reasonably high level. And the dream is to actually get [the libraries] to a graduate level of education. Then it will be much easier to formalize new fields [of mathematics]. There are also better ways to search, because if you want to prove something, you have to be able to find the things that have already been confirmed to be true. So the development of really smart search engines has also been a major new development.

Drosser: So it’s not a question of computing power?

Tao: No, once we had formalized the whole PFR project, it only took like half an hour to compile it to verify. That’s not the bottleneck — it’s getting the humans to use it, the usability, the user friendliness. There’s now a large community of thousands of people, and there’s a very active online forum to discuss how to make the language better.

Drosser: Is Lean the state of the art, or are there competing systems?

Tao: Lean is probably the most active community. For single-author projects, maybe there are some other languages that are slightly better, but Lean is easier to pick up in general. And it has a very nice library and a nice community. It may eventually be replaced by an alternative, but right now it is the dominant formal language.

[Continuing after some skip:]

Drosser: So far, the idea for the proof still has to come from the human mathematician, doesn’t it?

Tao: Yes, the fastest way to formalize is to first find the human proof. Humans come up with the ideas, the first draft of the proof. Then you convert it to a formal proof. In the future, maybe things will proceed differently. There could be collaborative projects where we don’t know how to prove the whole thing. But people have ideas on how to prove little pieces, and they formalize that and try to put them together. In the future, I could imagine a big theorem being proven by a combination of 20 people and a bunch of AIs each proving little things. And over time, they will get connected, and you can create some wonderful thing. That will be great. It’ll be many years before that’s even possible. The technology is not there yet, partly because formalization is so painful right now.

Drosser: I have talked to people that try to use large language models or similar machine-learning technologies to create new proofs. Tony Wu and Christian Szegedy, who recently co-founded the company xAI, with Elon Musk and others, told me that in two to three years mathematics will be “solved” in the same sense that chess is solved — that machines will be better than any human at finding proofs.

Tao: I think in three years AI will become useful for mathematicians. It will be a great co-pilot. You’re trying to prove a theorem, and there’s one step that you think is true, but you can’t quite see how it’s true. And you can say, “AI, can you do this stuff for me?” And it may say, “I think I can prove this.” I don’t think mathematics will become solved. If there was another major breakthrough in AI, it’s possible, but I would say that in three years you will see notable progress, and it will become more and more manageable to actually use AI. And even if AI can do the type of mathematics we do now, it means that we will just move to a higher type of mathematics. So right now, for example, we prove things one at a time. It’s like individual craftsmen making a wooden doll or something. You take one doll and you very carefully paint everything, and so forth, and then you take another one. The way we do mathematics hasn’t changed that much. But in every other type of discipline, we have mass production. And so with AI, we can start proving hundreds of theorems or thousands of theorems at a time. And human mathematicians will direct the AIs to do various things. So I think the way we do mathematics will change, but their time frame is maybe a little bit aggressive.

Drosser: I interviewed Peter Scholze when he won the Fields Medal in 2018. I asked him, How many people understand what you’re doing? And he said there were about 10 people.

Tao: With formalization projects, what we’ve noticed is that you can collaborate with people who don’t understand the entire mathematics of the entire project, but they understand one tiny little piece. It’s like any modern device. No single person can build a computer on their own, mine all the metals and refine them, and then create the hardware and the software. We have all these specialists, and we have a big logistics supply chain, and eventually we can create a smartphone or whatever. Right now, in a mathematical collaboration, everyone has to know pretty much all the mathematics, and that is a stumbling block, as [Scholze] mentioned. But with these formalizations, it is possible to compartmentalize and contribute to a project only knowing a piece of it. I think also we should start formalizing textbooks. If a textbook is formalized, you can create these very interactive textbooks, where you could describe the proof of a result in a very high-level sense, assuming lots of knowledge. But if there are steps that you don’t understand, you can expand them and go into details, all the way down to the axioms if you want to. No one does this right now for textbooks because it’s too much work. But if you’re already formalizing it, the computer can create these interactive textbooks for you. It will make it easier for a mathematician in one field to start contributing to another because you can precisely specify subtasks of a big task that don’t require understanding everything.

[Continuing after some skip:]

Drosser: By breaking down a problem and exploring it, you learn a lot of new things on the way, too. Fermat’s Last Theorem, for example, was a simple conjecture about natural numbers, but the math that was developed to prove it isn’t necessarily about natural numbers anymore. So tackling a proof is much more than just proving this one instance.

Tao: Let’s say an AI supplies an incomprehensible, ugly proof. Then you can work with it, and you can analyze it. Suppose this proof uses 10 hypotheses to get one conclusion — if I delete one hypothesis, does the proof still work? That’s a science that doesn’t really exist yet because we don’t have so many AI-generated proofs, but I think there will be a new type of mathematician that will take AI-generated mathematics and make it more comprehensible. Like, we have theoretical and experimental science. There are lots of things that we discover empirically, but then we do more experiments, and we discover laws of nature. We don’t do that right now in mathematics. But I think there’ll be an industry of people trying to extract insight from AI proofs that initially don’t have any insight.

Drosser: So instead of this being the end of mathematics, would it be a bright future for mathematics?

Tao: I think there’ll be different ways of doing mathematics that just don’t exist right now. I can see project manager mathematicians who can organize very complicated projects — they don’t understand all the mathematics, but they can break things up into smaller pieces and delegate them to other people, and they have good people skills. Then there are specialists who work in subfields. There are people who are good at trying to train AI on specific types of mathematics, and then there are people who can convert the AI proofs into something human-readable. It will become much more like the way almost any other modern industry works. Like, in journalism, not everyone has the same set of skills. You have editors, you have journalists, and you have businesspeople, and so forth — we’ll have similar things in mathematics eventually.

The full text of the interview is available here.
