Are humans or computers better at mathematics?

Computer proofs

Considerable attention has been drawn to the discovery and proof of mathematical theorems by computer.

Perhaps the first major result proved by computer came in 1976, with the four-color theorem: the assertion that any map (under certain reasonable conditions) can be colored with just four distinct colors, in such a way that no two adjacent regions share the same color. Flaws were later found in the original proof, and a corrected proof was not completed until 1995.

In 2003, Thomas Hales of the University of Pittsburgh published a computer-based proof of Kepler’s conjecture, namely the assertion that the familiar method of stacking oranges in the supermarket is the most space-efficient way of arranging equal-diameter spheres. The conjecture was first proposed by the 17th-century astronomer Johannes Kepler, the same Kepler who concluded that planets move in elliptical orbits.

Although Hales published a proof of Kepler’s conjecture in 2003, many mathematicians were not satisfied, because the “proof” was accompanied by two Gbytes of computer output (a large amount at the time). In response, in 2014 Hales produced a computer-verified formal proof.
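
To give a flavor of what a computer-verified formal proof involves: in a proof assistant, every logical step, however small, is checked mechanically by a trusted kernel. Hales’s Flyspeck project used the HOL Light and Isabelle proof assistants; the toy theorem below, written in Lean merely for illustration, shows the genre:

```lean
-- A toy formal proof in Lean 4 (illustrative only; Flyspeck itself
-- used HOL Light and Isabelle). The kernel mechanically certifies
-- that the cited lemma really does prove the stated claim.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```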

The new kid on the block

The latest development along this line is the May 2016 announcement of a computer proof of what is known as the Boolean Pythagorean triples problem. This is the assertion that the integers from 1 to 7824 can be colored either red or blue with the property that no three integers a, b and c satisfying a^2 + b^2 = c^2 (i.e., a, b and c form the sides of a right triangle) are all the same color; for the integers from 1 to 7825, however, this cannot be done.

As it turns out, the number of possible ways to color the integers from 1 to 7825 is gigantic — over 10^2300 (a one followed by 2300 zeroes). This number is far, far greater than, say, the number of fundamental particles in the visible universe, which is a mere 10^85. However, the researchers were able to sharply reduce the number of cases to examine, by taking advantage of various symmetries and number-theoretic properties, to “only” one trillion. The computer run to examine each of these one trillion cases required two days on 800 processors of the University of Texas Stampede supercomputer.
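
For tiny ranges, though, one can get a feel for the problem by direct search. The sketch below (the function names are ours, for illustration; the actual proof used a SAT solver together with the reductions just described, not brute force) tries all 2^n colorings:

```python
# Toy brute-force search: does some red/blue coloring of 1..n avoid
# a Pythagorean triple that is entirely one color? Tiny n only.
from itertools import product

def pythagorean_triples(n):
    """All (a, b, c) with a <= b <= c <= n and a^2 + b^2 = c^2."""
    return [(a, b, c)
            for a in range(1, n + 1)
            for b in range(a, n + 1)
            for c in range(b, n + 1)
            if a * a + b * b == c * c]

def colorable(n):
    """True if some 2-coloring of 1..n leaves no triple monochromatic."""
    triples = pythagorean_triples(n)
    for colors in product((0, 1), repeat=n):       # 2^n candidates
        if all(len({colors[a - 1], colors[b - 1], colors[c - 1]}) > 1
               for (a, b, c) in triples):
            return True
    return False

print(colorable(20))   # True: small ranges admit a valid coloring
```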

The Texas computation, which we estimate performed roughly 10^19 arithmetic operations, is not really the “largest” mathematical computation. A 2013 computation of digits of pi^2 by the present authors and two IBM researchers did twice this many operations. The Great Internet Mersenne Prime Search (GIMPS), a global network of computers searching for the largest known prime numbers, routinely performs a total of 450 trillion calculations per second; at that rate (4.5 x 10^14 operations per second times 21,600 seconds), every six hours it exceeds the number of operations performed by the Texas calculation. In computer output, though, the Texas calculation takes the cake — a staggering 200 Tbytes, namely 2 x 10^14 bytes, or 30,000 bytes for every human being on Earth.

This mountain of output is less daunting than it sounds, however, because the program produced a solution certificate that can be checked by a much smaller program. This is akin to factoring a very large number c into two smaller factors a and b by computer, so that c = a x b. It is often quite difficult to find the two factors a and b, but once found it is a trivial task to multiply them together and verify that they work.
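
The asymmetry is easy to see in code. In this minimal sketch (the function names are ours), finding a factor takes many trial divisions, while checking a claimed factorization takes a single multiplication:

```python
# Finding a factor is slow; verifying a claimed factorization is fast.

def find_factor(c):
    """Smallest nontrivial factor of c, by naive trial division."""
    d = 2
    while d * d <= c:
        if c % d == 0:
            return d
        d += 1
    return c   # c itself is prime

def verify_factorization(c, a, b):
    """Check the claimed factorization with one multiplication."""
    return a * b == c and a > 1 and b > 1

c = 104723 * 104729                    # product of two six-digit primes
a = find_factor(c)                     # roughly 10^5 trial divisions
b = c // a
print(verify_factorization(c, a, b))   # True, checked in one step
```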

In this sense the Boolean Pythagorean triples result is less philosophically challenging than Clement Lam’s computer proof that there is no projective plane of order ten — a configuration of 111 points and 111 lines, with exactly 11 points on each line and 11 lines through each point. In this case the computer labored mightily before simply returning “no,” leaving no compact certificate to check. The question of why we humans should believe such a verdict is central to Lam’s work (1989–91).
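
To make “projective plane of order n” concrete, the smallest case, order 2, is the famous Fano plane: 2^2 + 2 + 1 = 7 points and 7 lines, with 3 points on each line, 3 lines through each point, and every pair of points lying on exactly one common line. A short sketch verifying those incidence counts:

```python
# The Fano plane (projective plane of order 2), with points 1..7.
from itertools import combinations

FANO_LINES = [
    {1, 2, 3}, {1, 4, 5}, {1, 6, 7},
    {2, 4, 6}, {2, 5, 7}, {3, 4, 7}, {3, 5, 6},
]

assert len(FANO_LINES) == 7                          # 7 lines
assert all(len(line) == 3 for line in FANO_LINES)    # 3 points per line
for p in range(1, 8):                                # 3 lines per point
    assert sum(p in line for line in FANO_LINES) == 3
for p, q in combinations(range(1, 8), 2):            # unique joining line
    assert sum({p, q} <= line for line in FANO_LINES) == 1
print("Fano plane incidence axioms verified")
```

Lam’s search, by contrast, had to rule out every candidate configuration at order ten, a vastly larger undertaking.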

Are mathematicians obsolete?

So what do these developments mean? Are research mathematicians soon to join the ranks of chess grandmasters, Jeopardy champions, retail clerks, taxi drivers, truck drivers, radiologists and other professionals threatened with obsolescence due to rapidly advancing technology?

Not really. Mathematicians, like many other professionals, have for the most part embraced computation as a new mode of mathematical research, a development known as experimental mathematics, which has far-reaching implications.

Experimental mathematics

So what exactly is “experimental mathematics”? It is best defined as a mode of research that employs computers as a “laboratory,” in the same sense that a physicist, chemist, biologist or engineer performs an experiment, to:

  1. Gain insight and intuition.
  2. Discover new patterns and relationships.
  3. Use graphics to suggest underlying principles.
  4. Test and falsify conjectures.
  5. Explore a result to see if it is worth formal proof.
  6. Replace lengthy hand derivations with computer-based manipulations.
  7. Confirm results proved by conventional means.

The present authors have written on this topic at some length — see our books and papers for full technical details.
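
As a concrete taste of item 2 in the list above, consider integer relation detection, a workhorse of experimental mathematics. In this standard demonstration (assuming the open-source mpmath Python package is installed), the PSLQ algorithm “discovers” that a numerically computed constant is a simple rational multiple of pi^2:

```python
# Integer relation detection with PSLQ: given high-precision values
# x and y, search for small integers (m, n) with m*x + n*y = 0.
from mpmath import mp, pslq, zeta, pi

mp.dps = 50                # work to 50 significant digits
x = zeta(2)                # treat this value as an "unknown" constant
rel = pslq([x, pi**2])     # search for an integer relation
print(rel)                 # [6, -1] (up to sign): 6*zeta(2) = pi^2
```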

In one sense, there is nothing fundamentally new in the experimental methodology of mathematical research. In the third century BCE, the great Greek mathematician Archimedes wrote:

For it is easier to supply the proof when we have previously acquired, by the [experimental] method, some knowledge of the questions than it is to find it without any previous knowledge.

Galileo once reputedly wrote, “All truths are easy to understand once they are discovered; the point is to discover them.” The nineteenth-century mathematician-physicist Carl Friedrich Gauss frequently employed computations to motivate his remarkable discoveries. He once wrote, “I have the result, but I do not yet know how to get [prove] it.”

Computer-based experimental mathematics certainly has technology on its side. With every passing year, computer hardware advances in accordance with Moore’s Law, and mathematical computing software packages such as Maple, Mathematica, Sage and others become ever more powerful. These systems are already powerful enough to solve virtually any equation, compute virtually any derivative or integral, and perform virtually any other routine task of undergraduate mathematics.
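
For instance, here is the sort of undergraduate-level exercise such a system dispatches instantly (shown here with the open-source SymPy package; the equivalent one-liners in Maple, Mathematica or Sage are entirely similar):

```python
# Symbolic calculus and equation solving in a few lines.
import sympy as sp

x = sp.symbols('x')
print(sp.integrate(sp.exp(-x**2), (x, -sp.oo, sp.oo)))   # sqrt(pi)
print(sp.diff(sp.sin(x)**2, x))                          # 2*sin(x)*cos(x)
print(sp.solve(x**2 - 5*x + 6, x))                       # [2, 3]
```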

So while ordinary human-based proofs are still essential, the computer leads the way in assisting mathematicians to identify new theorems and chart a route to formal proof.

What’s more, one can argue that in many cases computations are more compelling than human-based proofs. Human proofs, after all, are subject to mistakes, oversights, and reliance on earlier results by others that may be unsound. Recall, as a single example, that Andrew Wiles’ initial proof of Fermat’s Last Theorem was later found to be flawed.

Along this line, Alexander Yee and Shigeru Kondo recently computed 12.1 trillion digits of pi. To do this, they first computed somewhat more than 10 trillion base-16 digits; they then checked their computation by recomputing a section of base-16 digits near the end with a completely different algorithm and comparing the results. The two computations matched perfectly.
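
The key tool behind such spot checks is a digit-extraction formula, which yields base-16 digits of pi beginning at an arbitrary position without computing any of the earlier digits. Below is a minimal sketch of the idea using the BBP formula; note that double-precision arithmetic limits this toy version to modest positions (the production verifications use more careful arithmetic):

```python
# Hex digits of pi starting at position d, via the BBP formula. Modular
# exponentiation lets us skip directly to position d.

def bbp_series(j, d, tail_terms=25):
    """Fractional part of 16^d * sum_{k>=0} 1/(16^k * (8k + j))."""
    s = 0.0
    for k in range(d + 1):                       # head: exact modular arithmetic
        s = (s + pow(16, d - k, 8 * k + j) / (8 * k + j)) % 1.0
    for k in range(d + 1, d + 1 + tail_terms):   # tail: rapidly vanishing terms
        s = (s + 16.0 ** (d - k) / (8 * k + j)) % 1.0
    return s

def pi_hex_digits(d, n=6):
    """n hexadecimal digits of pi starting at position d after the point."""
    x = (4 * bbp_series(1, d) - 2 * bbp_series(4, d)
         - bbp_series(5, d) - bbp_series(6, d)) % 1.0
    out = []
    for _ in range(n):
        x *= 16
        digit = int(x)
        out.append('0123456789abcdef'[digit])
        x -= digit
    return ''.join(out)

print(pi_hex_digits(0))      # '243f6a': pi = 3.243f6a8885... in hex
print(pi_hex_digits(1000))   # six digits at position 1000, computed directly
```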

So which is more reliable, a human-proved theorem hundreds of pages long, which only a handful of other mathematicians have read and verified in detail, or the Yee-Kondo result? Let’s face it, computation is arguably more reliable than proof in many cases. Additionally, since the days of Gödel and Turing, we have known that there is no reason to assume that a beautiful truth has any formal proof, let alone an elegant proof or even a humanly comprehensible one.

What does the future hold?

So to return to our original question, are human mathematicians destined to be subservient to our new computer overlords, as Jeopardy champion Ken Jennings lamented in accepting defeat to IBM’s Watson computer?

Probably not. There is every indication that research mathematicians will continue to work in respectful symbiosis with computers for the foreseeable future. Indeed, as this relationship and computer technology mature, mathematicians will become more comfortable leaving certain parts of a proof to computers.

This very question was discussed in a June 2014 panel discussion by the five inaugural recipients of the Breakthrough Prize in Mathematics, namely Simon Donaldson, Maxim Kontsevich, Jacob Lurie, Terence Tao and Richard Taylor. Tao expressed their consensus in these terms:

Computers will certainly increase in power, but I expect that much of mathematics will continue to be done with humans working with computers.

So don’t toss your algebra textbook quite yet. You will need it!

[This article was co-authored with Jonathan M. Borwein before his death in August 2016. A condensed version of this article appeared in The Conversation.]
