In his latest book, Life 3.0: Being Human in the Age of Artificial Intelligence, Tegmark surveys some of the recent advances of the AI field, and then asks what kind of future we want. He argues that both utopia and dystopia are possible, depending on some difficult and wide-ranging decisions that we must make in the coming decade or two.

Progress in AI in the past few years has been undeniably breathtaking. Some of the more notable milestones include: (a) the 1997 defeat of chess champion Garry Kasparov by IBM’s Deep Blue system; (b) the 2011 defeat of two Jeopardy! champions by IBM’s Watson system; (c) the emergence of effective language translation facilities, exemplified by Google’s translate app; (d) the rise of practical voice recognition and natural language understanding, exemplified by Apple’s Siri and Amazon’s Echo; and, most recently, (e) the 2016 defeat of the world champion Go player, Lee Se-dol, by DeepMind’s AlphaGo program, a feat that had not been expected for decades, if ever.

It is worth mentioning here that since the publication of Tegmark’s book, DeepMind has developed a new Go-playing program, named AlphaGo Zero. This program taught itself to play Go with no human input (other than specifying the rules of the game). After just a few days of playing games against itself, it defeated the earlier AlphaGo program 100 games to zero. After one month, AlphaGo Zero’s Elo rating was as far above the world champion Go player’s as the world champion’s is above a typical amateur’s.

With such examples, Tegmark argues that we can no longer dismiss the possibility of the emergence of an “artificial general intelligence” or “superintelligence,” namely a general-purpose intelligent system that exceeds human capabilities across a broad spectrum of potential applications. Such systems would have massive and possibly unimaginable consequences. For example, in one chapter Tegmark argues that advanced AI technology hugely changes the debate over Fermi’s paradox — if we can send intelligent software and nanotech spacecraft (such as those proposed by Yuri Milner and Stephen Hawking), instead of humans, to distant star systems, then the exploration of the universe is dramatically more feasible.

Closer to home, the rise of AI is raising concerns about how humans will find jobs in this brave new world. In the short run, self-driving cars and trucks will displace many thousands of human drivers, and increasingly capable intelligent robots are certain to displace humans both in manufacturing plants and also at construction sites. In the medium term, many white collar jobs will disappear, in the same way that many tax preparers have been replaced by software downloads and online tools. Already, IBM’s Watson system is being applied in medicine, such as in the treatment of cancer, and AI and big data are making huge waves in the finance world (see also this Bloomberg article). In the long term, it is not clear that any job categories will be unaffected. Will we face a future without jobs? How will we find purpose in life without meaningful work?

Tegmark illustrates the issues with an intriguing scenario: A team of computer scientists at a large corporation secretly develops an AI system, called “Prometheus.” To begin with, it becomes quite skilled in programming computer games that sell well, thus providing a significant income stream. From there the system starts producing online movies and TV shows, first entirely animated but later involving human actors, who use scripts and set designs produced by Prometheus, whose existence is hidden behind an opaque human organization. Then the Prometheus system expands its domain to engineer more powerful computer hardware (thus further enhancing its own capabilities), batteries with more storage life, powerful new pharmaceuticals (e.g., new anti-cancer drugs), remarkably capable robots, remarkably efficient vehicles, advanced educational systems and even political persuasion tools. Within just a few years, Prometheus effectively takes over the world.

Does this scenario seem futuristic and far-fetched, more in the realm of science fiction than reality? Don’t count on it.

Tegmark outlines a number of possible paths for the future of society:

- *Libertarian utopia*: Humans and superintelligent AI systems coexist peacefully, thanks to rigorously enforced property rights that cordon off the two domains.
- *Benevolent dictator*: Everyone knows that a superintelligent AI runs society, but it is tolerated because it does a rather good job.
- *Egalitarian utopia*: Humans and intelligences coexist peacefully, thanks to a guaranteed income and the abolition of private property.
- *Gatekeeper*: A superintelligent AI is created that interferes as little as possible in human affairs, except to prevent the creation of a competing superintelligent AI. Human needs are met, but technological progress may be forever stymied.
- *Enslaved god*: A superintelligent AI is created to produce amazing technology and wealth, but it is strictly confined and controlled by humans.
- *Conquerors*: A superintelligent AI takes control, decides that humans are a threat, a nuisance and a waste of resources, and then gets rid of us, possibly by a means that we do not even understand until it is too late.
- *Descendants*: Superintelligent AIs replace humans, but give us a gradual and graceful exit.
- *Zookeeper*: One or more superintelligent AI systems take control, but keep some humans around as amiable pets.
- *Reversion*: An Orwellian surveillance state blocks humans from engaging in advanced AI research and development.
- *Self-destruction*: Humanity extinguishes itself before superintelligent AI systems are deployed.

So what do we need to do to ensure a utopia and not a dystopia? To begin with, Tegmark argues, we need to very carefully define what our goals and values are as a society. Then we need to ensure that AI systems learn, adopt and retain our goals and values — this is the goal alignment problem of AI. Note that there is an inherent tension here, in that as a superintelligent AI improves its capability and world model, it may shed its previously established set of goals.

All of this raises some of the most fundamental questions of philosophy and religion, for example how to objectively define an ethical worldview. Utilitarianism, diversity, autonomy and some sense of legacy must all be part of this discussion. When will intelligent AI systems be granted legal rights? Even more fundamentally, how do we determine whether or not an AI system is a conscious entity? What is consciousness, anyway?

The recent developments in AI raise these and a host of similar questions, whose solution or at least better understanding is urgently needed. As Tegmark notes, we have before us some of the most difficult and basic questions ever posed by humans, and a ticking time limit in which to answer them.

Along this line, Tegmark recommends that we study long-lasting human organizations (he mentions the Roman Catholic Church, among others) for clues into how to craft a society that would be stable for hundreds, thousands or millions of years, at the same time accommodating unimaginable advances in technology.

In his Epilogue, Tegmark argues that it is no longer sufficient to merely highlight these challenges; real action is needed. To that end, he has organized the Future of Life Institute to focus efforts to understand these issues and to take positive steps to guide future development. The institute’s activities include research programs on AI safety, ensuring explainability in AI software, aligning goals, managing impact on jobs, reducing potential economic inequality, and balancing benefits and risks.

In the end, everyone will have to participate in examining these issues. This is one discussion we can’t sit out.

Ever since the time of Copernicus, the overriding worldview of scientific discovery has been that there is nothing special about Earth and humanity: the Earth is not the center of the solar system — we are merely one of several planets orbiting the Sun; the Sun is not the center of the Milky Way — it is merely one of over 100 billion stars in the galaxy; the Milky Way is not the center of the universe — it is merely one of over 100 billion galaxies in the universe; etc. Indeed, this “Copernican principle” has been assumed to apply very generally in all fields of science.

Yet some major cracks have arisen in this edifice in the past few years. Most notably, we appear to reside in a universe that is astoundingly fine-tuned for intelligent life. This counter-intuitive fact was presented and discussed at length in the new book by Australian astronomers Geraint F. Lewis and Luke A. Barnes, A Fortunate Universe: Life in a Finely Tuned Cosmos (see also [Rees2000]). Here is a brief summary of some of these “cosmic coincidences” (see also Fine tuned):

- The synthesis of carbon depends sensitively on the value of the strong force.
- The existence of both protons and neutrons depends sensitively on the strong and weak forces.
- If the electromagnetic force were not roughly 10^{40} times stronger than gravity, the heavier elements would not form.
- If the neutron mass were very slightly less, the universe would be entirely protons.
- The cosmic microwave background is just nonuniform enough (one part in 10^{5}) to permit galaxies.
- The positive and negative contributions to the vacuum energy density cancel to within one part in 10^{120} (the cosmological constant paradox).
- The positive and negative contributions to the Higgs boson mass cancel to one part in 10^{19}, giving the anomalously low value we observe.
- In the first few minutes after the big bang, the universe must have been flat to within one part in 10^{15}.
- The overall entropy of the universe is “freakishly lower than life requires.”

In 1950, while having lunch with colleagues Emil Konopinski, Edward Teller and Herbert York, physicist Enrico Fermi suddenly blurted out, “Where is everybody?” Behind Fermi’s question was this line of reasoning: (a) there are likely numerous other technological civilizations in the Milky Way galaxy; (b) if a society is less advanced than us by even a few decades, it would not be technological, so any other technological civilization is almost certainly many thousands or millions of years more advanced; (c) within a few million years after becoming technological (an eye-blink in cosmic time), a society could have explored and/or colonized most if not all of the Milky Way; (d) so why don’t we see evidence of the existence of even a single extraterrestrial civilization?

Numerous scientists have examined Fermi’s paradox since it was first posed, and many solutions have been proposed. Below is a brief listing of some of these solutions, together with common rejoinders [Webb2002, pg. 27-231]:

- *They exist, but are under strict orders not to communicate with a civilization such as Earth (the “zookeeper” solution).* Rejoinder: Among numerous vast, diverse ET civilizations (or even within just one), each spanning multiple planets or stars and each comprising millions of individuals, it is hardly credible that a galactic society could impose a ban on communication with Earth that is absolutely 100% effective. Note that once a signal has been broadcast and is on its way to Earth, there is no way to call it back, within known laws of physics. And for a civilization thousands or millions of years more advanced than us, such communication would be vanishingly cheap.
- *They exist, but have lost interest in scientific research, exploration and expansion (the “beach bum” solution).* Rejoinder: Darwinian evolution strongly favors organisms that think, explore and expand. Thus it is hardly credible that *every* individual in *every* ET civilization has lost interest in scientific research, exploration and expansion, or that a global ban on such activities is absolutely 100% effective. What’s more, any ET society’s long-term existence crucially hinges on having an in-depth scientific understanding of all potential perils in its cosmic environment, including asteroids, meteorites, solar flares, supernovas, gamma-ray bursts, neutron star mergers, potentially dangerous biological systems and potentially hostile neighbors.
- *They exist, but have no interest in a primitive, backward society such as ours; to them, we are as ants (the “humans are ants” solution).* Rejoinder: Perhaps 99.99% of an ET society is not interested in primitive societies such as ours. But, as before, it is hardly credible that *every* individual in *every* ET civilization has no interest. In our society, perhaps 99.99% of the public has little or no interest in ants, but many thousands do. There is even a full-fledged scientific field (myrmecology) devoted to ants, and researchers have meticulously catalogued and studied every known species.
- *They exist, but have progressed to more sophisticated communication technologies (the “advanced communication” solution).* Rejoinder: This does not apply to signals specifically targeted at societies such as ours, in a form (optical, microwave) that could be easily recognized by a newly technological society. Again, it is hardly credible that a galactic society could enforce, over a vast array of inhabited planets, each with millions of individuals, a ban on communication targeted at emerging technological civilizations that is absolutely 100% effective. As noted before, once a signal is on its way to Earth, it cannot be called back, within known laws of physics. Similar diversity arguments defeat a broad range of other proposed solutions (see below).
- *They exist, but are not aware of our existence yet — our first TV signals have only traveled 80 light-years (the “no evidence of humans” solution)* [Reynolds2017]. Rejoinder: Ample evidence of an emerging technological civilization on Earth has been on display for much longer: (a) our atmosphere has contained methane, oxygen and other chemical signs of life for at least three billion years; (b) images of Earth would have shown dinosaurs and countless other large species for at least 300 million years; (c) images of Earth would have shown bipedal hominins for at least 5 million years, and humans for at least 200,000 years; (d) images of Earth would have shown large human structures (Mesopotamia, Egypt, China, Rome) for at least 5,000 years; (e) urban lights have been on display for at least 2,000 years, particularly in the past 200 years; and (f) atmospheric carbon dioxide has been on the rise for 200 years.
- *They exist, but travel and communication are too difficult (the “technological” solution).* Rejoinder: Recent dramatic and largely unanticipated developments in technology have all but destroyed this solution: new energy sources, including various forms of fusion [Bailey2015]; new propulsion systems [Ion2016, Foster2004, Slough2013]; new space exploration vehicles [Drake2017]; fleets of nanocraft to visit nearby stars such as Alpha Centauri [Billings2016]; supercomputers (which currently run at 10^{17} flop/s); quantum computing; artificial intelligence [AlphaGo2017, Parloff2016]; robotics; 3-D printing and nanotechnology; exoplanet search and imaging technology; gravitational lenses (see below); and von Neumann probes (see below). If we are on the verge of deploying such technologies today, what is stopping societies and even individuals that are thousands or millions of years more advanced than us?
- *Civilizations like us invariably self-destruct before becoming a space-faring society (the “self-destruct” solution).* Rejoinder: In 200 years of technological adolescence, we have not yet destroyed ourselves through a nuclear, environmental or biological catastrophe. Further, we have developed sophisticated supercomputer simulations to foresee and control future perils. Thus it is hardly credible that societies such as ours *invariably* self-destruct before becoming space-faring, without any exceptions whatsoever. In any event, within a few years human civilization will spread to the Moon, Mars and elsewhere, and then its long-term survival will be largely impervious to calamities on the home planet. As before, galloping technology is destroying this solution to Fermi’s paradox.
- *Earth is a unique planet with characteristics fostering a long-lived biological regime leading to intelligent life (the “rare earth” solution)* [Ward2000]. Rejoinder: Perhaps, although many recent discoveries point in the *opposite* direction: the universe contains over 100 billion galaxies; the Milky Way contains over 100 billion stars; thousands of exoplanets have been found (more than 40 in the habitable zone) (see below); and recent work in biogenesis indicates that the origin of life was not a particularly unlikely event (also indicated by recent fossil finds, which show life arose almost immediately after the formation of Earth, over 3.8 billion years ago) — see Origin.
- *WE ARE ALONE, within the Milky Way galaxy if not beyond (the “solitary” solution).* Rejoinder: It hardly seems credible that we are unique even in the Milky Way (with over 100 billion stars and planets), much less in the entire universe (with over 100 billion galaxies). This solution may be consistent with Occam’s razor, but it is an extreme violation of the “Copernican principle,” namely the hypothesis that there is nothing special about Earth or humanity. Has the Copernican principle been completely overturned? Many recoil at this solution (including the author), but what is the alternative?

Numerous other proposed solutions and rejoinders are given at [Webb2002]. A more recent review of these issues is given in [Gribbin2011].

Let p be the probability that an individual on a given planet in a given year launches an interstellar exploration, m be the number of individuals on a typical planet, n be the number of planets, and t be the number of years. Then the probability P that a civilization has explored the Milky Way can be estimated as P = 1 – (1 – p)^{m n t}. Conservative estimates for the Milky Way are m > 10^{9}, n > 10^{11} and t > 5 x 10^{9}. For the universe as a whole, n > 10^{22} and t > 10^{10}.
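The estimate is easy to evaluate numerically, although naively computing (1 – p)^{m n t} underflows for realistic inputs. The sketch below (a Python illustration; the value of p is an arbitrary assumption, not a figure from the text) uses log1p and expm1 for numerical stability:

```python
import math

def prob_explored(p, m, n, t):
    """P = 1 - (1 - p)^(m*n*t), computed stably for tiny p."""
    return -math.expm1(m * n * t * math.log1p(-p))

# Conservative Milky Way figures from the text: m = 10^9 individuals,
# n = 10^11 planets, t = 5 x 10^9 years.  Even an assumed, absurdly
# tiny per-individual, per-year launch probability p = 10^-30 gives:
P = prob_explored(1e-30, 1e9, 1e11, 5e9)  # about 0.39
```

Any appreciably larger p drives P to essentially 1, which is the quantitative heart of the “Where is everybody?” puzzle.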

In other words, if the probability of the rise of a space-faring civilization anywhere is even microscopically nonzero (given the instance of human civilization), then after billions of years, on many billions of planets, with billions of individuals, ET should be everywhere. Where is everybody?

As mentioned above, diversity arguments defeat a wide range of proposed solutions. Consider:

- Darwinian evolution is the only known or hypothesized mechanism whereby high-information organisms and species (carbon-based or not) can form.
- Diversity is a fundamental, inescapable law of Darwinian evolution.
- Diversity is also a law of economics, political science, organizational behavior, and even physics (quantum superposition, sum over histories, chaos, anisotropy in the cosmic microwave background, etc.).
- Highly conformist species, societies and organizations inevitably fail.
- All great figures of history were nonconformists: Albert Einstein, Martin Luther King, Susan B. Anthony, Nelson Mandela, Steve Jobs. Jobs’ motto was “think different.”

In a vast, diverse society, there will be exceptions to any rule. Thus claims that “all ET are like X” have no credibility, no matter what “X” is. It is ironic that while most scientists would reject stereotypes of religious, ethnic or national groups, some seem willing to hypothesize sweeping, ironclad stereotypes for ET societies.

As mentioned above, the “technological” solution argues that exploration and communication are simply too difficult. However, in addition to the developments listed above, a distant society could deploy “von Neumann probes,” self-replicating robotic spacecraft that travel to a star system, send video and scientific data back to the home planet, and then manufacture several copies of themselves, which are launched to even more distant systems. One recent analysis of the “slingshot” scenario found that 99% of all star systems in the Milky Way could be explored in only about five million years, which is an eye-blink in the multi-billion-year age of the Milky Way [Nicholson2013].
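A back-of-the-envelope model shows why self-replication covers the galaxy so quickly. The replication factor and hop time below are illustrative assumptions, not figures from [Nicholson2013]:

```python
import math

def coverage(n_stars, k=2, t_hop=1e5):
    """Generations of probes (each visiting one system, then launching k
    copies onward) and total years needed to reach n_stars systems,
    assuming t_hop years per hop for travel plus self-replication."""
    generations = math.ceil(math.log(n_stars, k))
    return generations, generations * t_hop

gens, years = coverage(1e11)  # 37 generations, 3.7 million years
```

Even with these crude assumptions the total is a few million years, the same order as the five-million-year slingshot estimate, and an eye-blink against the multi-billion-year age of the galaxy.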

“Exploring” the Milky Way telescopically is even easier, by means of a “gravitational lens,” namely using the Sun’s gravitational field as a lens, as predicted by general relativity. Magnifications of up to 10^{15} could be attained. With such a facility, which is nearly feasible with present technology, we could obtain rather high-resolution images of distant planets, and even listen in on their microwave transmissions (or other forms of electromagnetic communication). What’s more, we could also send messages to them using the same facility [Landis2016].

As mentioned above, if we are on the verge of deploying such technologies today, what is stopping societies and even individuals that are thousands or millions of years more advanced than us? Are other civilizations using gravitational lenses to see close-up images of Earth? Or even to send messages to Earth? Why not?

With every new research finding of extrasolar planets, potential life-friendly environments within the solar system, and, especially, with every new advance of human technology, the mystery of Fermi’s paradox deepens. Indeed, “Where is everybody?” has emerged as one of the most significant scientific and philosophical questions of our time. Numerous scientists have traditionally opined that in such an enormous galaxy (and universe), there must be countless instances of extraterrestrial life, and almost as many full-fledged technological civilizations. But other scientists are beginning to question this orthodoxy, saying out loud that we may be alone, at least in the Milky Way galaxy if not beyond.

Max Tegmark, a prominent Swedish-American cosmologist, argues [Tegmark2017, pg. 241] that “this assumption that we’re not alone in our Universe is not only dangerous but also probably false.” He adds, “This is a minority view, and I may well be wrong, but it’s at the very least a possibility that we can’t currently dismiss, which gives us a moral imperative to play it safe and not drive our civilization extinct.”

Paul Davies, a prominent British-American physicist, concludes his latest book on the topic by stating his own assessment [Davies2010, pg. 207-208]: “[M]y answer is that we are probably the only intelligent beings in the observable universe and I would not be very surprised if the solar system contains the only life in the observable universe.”

John Gribbin, a prominent British astrophysicist, concludes his recent book on the topic in these uncompromising terms [Gribbin2011, pg. 205]: “They are not here, because they do not exist. The reasons why we are here form a chain so improbable that the chance of any other technological civilization existing in the Milky Way Galaxy at the present time is vanishingly small. We are alone, and we had better get used to it.”

If we are truly alone in the Milky Way or beyond, this greatly magnifies the paradox of universal fine tuning. Not only do we reside in an incredibly fortunate universe, but we occupy a seemingly unique time and place within that universe. Even if we are “only” extremely rare in the universe, this is a most important finding, with truly cosmic implications.

Was the universe made for us? Or is our understanding of the laws of physics fundamentally in error?

Either way, human existence is far more significant than anyone could have imagined even a few years ago.

Now a new computer program, called “AlphaGo Zero,” which taught itself to play from scratch without any human input, has defeated the previous program 100 games to zero.

What does this mean? First, some background:

Many are familiar with the 1997 defeat of Garry Kasparov, then the world’s reigning chess champion, by IBM’s “Deep Blue” computer. Commenting on his experience, Kasparov later reflected, “It was my luck (perhaps my bad luck) to be the world chess champion during the critical years in which computers challenged, then surpassed, human chess players.”

Deep Blue employed some new techniques, but for the most part it simply applied enormous computing power, so that it could store many openings, look ahead many moves, and — except when the programmers erred — never make mistakes.

An even more impressive performance was the 2011 defeat of two champion contestants on the American quiz show Jeopardy! by an IBM-developed computer system named “Watson.” The Watson achievement involved natural language understanding, namely the intelligent “understanding,” in some sense, of ordinary (often tricky) English text.

For example, in “Final Jeopardy” culminating the Jeopardy! match, in the category “19th century novelists,” the clue was “William Wilkinson’s ‘An Account of the Principalities of Wallachia and Moldavia’ inspired this author’s most famous novel.” Watson correctly responded “Who is Bram Stoker?” [the author of Dracula], thus sealing the victory. Legendary Jeopardy! champ Ken Jennings conceded by writing on his tablet, “I for one welcome our new computer overlords.”

In the years since the Jeopardy! demonstration, IBM has deployed its Watson AI technology in the health care field. In a recent test of its cancer-diagnosing facility, Watson recommended treatment plans that matched oncologists’ recommendations in 99 percent of cases, and in 30 percent of cases it offered options the doctors had missed.

In the ancient Chinese game of Go, players place black and white beads on a 19×19 grid. The game is notoriously complicated, with strategies that can only be described in vague, subjective terms. For these reasons, in spite of advances such as the Deep Blue chess-playing program and even the Watson Jeopardy!-playing program, many observers did not expect Go-playing computer programs to be able to beat the best human players for many years.
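The game’s intractability is easy to quantify. A crude upper bound on the number of board configurations (ignoring capture rules and legality) allows three states for each of the 19×19 = 361 points:

```python
# Each of the 361 points is empty, black or white: at most 3^361 configurations.
positions = 3 ** 361
digits = len(str(positions))  # 173 digits, i.e. roughly 10^172
```

For comparison, the number of chess positions is often estimated at around 10^{43}, dozens of orders of magnitude smaller; exhaustive look-ahead of the Deep Blue variety is hopeless for Go.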

Thus it came as a considerable surprise when, in March 2016, a computer program named AlphaGo, developed by researchers at DeepMind, a subsidiary of Alphabet (Google’s parent company), defeated Lee Se-dol, a South Korean Go master, 4-1 in a five-game tournament. Not resting on its laurels, DeepMind further enhanced the program, which on 23 May 2017 defeated Ke Jie, a 19-year-old Chinese Go master thought to be the world’s best human Go player.

In developing the program that defeated Lee and Ke, DeepMind researchers fed their program 100,000 top amateur games and “taught” it to imitate what it observed. Then they had the program play itself and learn from the results, slowly increasing its skill.

In the latest development, the new program, called AlphaGo Zero, bypassed the first step. The DeepMind researchers merely programmed the rules of Go, with a simple reward function that rewarded games won, and had it play games against itself. Initially, the program merely scattered pieces seemingly at random across the board. But it quickly got better at evaluating board positions, and substantially increased its level of skill.

Interestingly, along the way the program rediscovered many basic elements of Go strategies used by human players, including anticipating its opponent’s probable next moves. But unshackled from the experience of humans, it then developed new complex strategies never before seen in human Go games.

After just three days of training and 4.9 million training games (with the program playing against itself), the AlphaGo Zero program had advanced to the point that it defeated the earlier version of the program 100 games to zero.
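The training recipe just described (only the rules, a win/loss reward, and self-play) can be illustrated on a far simpler game. The sketch below is an assumed analogue, not DeepMind’s algorithm: Nim stands in for Go, a table of position values for the neural network, and plain Monte Carlo value updates for the actual training procedure. In this Nim variant, players alternately take one or two stones, and whoever takes the last stone wins.

```python
import random

def train(n_stones=9, episodes=20000, eps=0.1, lr=0.1, seed=0):
    """Learn Nim purely from self-play, with winning as the only reward."""
    rng = random.Random(seed)
    # value[s]: estimated chance that the player to move with s stones wins
    value = {s: 0.5 for s in range(1, n_stones + 1)}
    value[0] = 0.0  # no stones left: the player to move has already lost
    for _ in range(episodes):
        s, visited = n_stones, []
        while s > 0:
            moves = [m for m in (1, 2) if m <= s]
            if rng.random() < eps:  # occasional random exploration
                m = rng.choice(moves)
            else:  # greedy: leave the opponent the worst position
                m = min(moves, key=lambda m: value[s - m])
            visited.append(s)
            s -= m
        outcome = 1.0  # whoever made the last move has just won
        for s in reversed(visited):  # back up the result, alternating players
            value[s] += lr * (outcome - value[s])
            outcome = 1.0 - outcome
    return value

values = train()
```

Optimal Nim play leaves the opponent a multiple of three stones, and the learned table recovers this pattern: positions 3, 6 and 9 end up with low values (losses for the player to move) while the others end up high, even though the program is never told anything beyond the rules and the result of each game.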

Skill at Go (and several other games) is quantified by the Elo rating, which is based on a player’s record in past games. Lee’s rating is 3526, while Ke’s is 3661. After 40 days of training, AlphaGo Zero’s Elo rating was over 5000. Thus AlphaGo Zero was as far ahead of Ke as Ke is ahead of a good amateur player.
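The Elo scale translates a rating gap d into an expected score of 1/(1 + 10^{-d/400}) for the stronger player. A quick check with the ratings quoted above:

```python
def elo_expected(r_a, r_b):
    """Expected score (roughly, win probability) of A against B under Elo."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

ke_vs_lee = elo_expected(3661, 3526)   # about 0.69: a modest edge
zero_vs_ke = elo_expected(5000, 3661)  # about 0.9996: near-certain victory
```

A 135-point gap is an ordinary rivalry between top professionals; a 1339-point gap leaves the human with well under a one-in-a-thousand expected score per game.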

Additional details are available in a Scientific American article and in a Nature article.

So where is all this heading? A recent Time article features an interview with futurist Ray Kurzweil, who predicts an era, roughly in 2045, when machine intelligence will meet, then transcend human intelligence. Such future intelligent systems will then design even more powerful technology, resulting in a dizzying advance that we can only dimly foresee at the present time. Kurzweil outlines this vision in his book The Singularity Is Near.

Futurists such as Kurzweil certainly have their skeptics and detractors. Sun Microsystems co-founder Bill Joy is concerned that humans could be relegated to minor players in the future, and that out-of-control, nanotech-produced “grey goo” could destroy life on our fragile planet. But even setting such fears aside, there is considerable concern about the societal, legal, financial and ethical challenges of these technologies, as exhibited by the increasingly strident social backlash against technology, science and “elites” that we see today.

As a single example, the technology of self-driving vehicles, long thought to be the stuff of science fiction, has dramatically advanced in the past two years. Prototype vehicles are already plying the streets of major U.S. and European cities. Some observers now estimate that self-driving vehicles could replace 1.7 million truckers in the next decade. Even drivers of delivery vehicles could see their jobs replaced, such as by Amazon drones.

One implication of all this is that education programs in engineering, finance, medicine, law and other fields will need to change dramatically to train students in the use of emerging technology. The educational system itself will need to change as well, as evidenced by the rise of massive open online courses (MOOCs). Along this line, the big tech firms are aggressively luring top AI talent, including university faculty, with huge salaries. But clearly the field cannot eat its seed corn in this way; some solution is needed to permit faculty to continue teaching while still participating in commercial R&D work.

But one way or the other, intelligent computers are coming. Society must find a way to accommodate this technology, and to deal respectfully with the many people whose lives will be affected.

Welcome to the Brave New World!

Devlin’s books are:

- The Man of Numbers: Fibonacci’s Arithmetic Revolution
- Finding Fibonacci: The Quest to Rediscover the Forgotten Mathematical Genius Who Changed the World

Leonardo was hardly the first to discover decimal arithmetic. That honor goes to still-unknown mathematicians in India, at least by the year 300 CE and most likely earlier. One key source on the Indian origin of decimal arithmetic is the Bakhshali manuscript, an ancient mathematical treatise found in 1881 in the village of Bakhshali, Pakistan. The document presents numerous sophisticated mathematical methods, all illustrated with extensive decimal arithmetic calculations. Until recently, the consensus of scholars who had studied the manuscript was that it was written in either the 7th or the 12th century, but recent radiocarbon dating tests conducted by the Bodleian Library in Oxford, where the document has been kept, show that at least part of it dates to as early as 300 CE. This confirms that mathematicians in India were familiar with decimal arithmetic by this date, and probably earlier. See our previous blog for details.

The Indian system was further developed by Islamic scholars in the Arab world in the 9th through 12th centuries. One of these was Muḥammad ibn Mūsā al-Khwārizmī (c. 780-850 CE), whose name is the origin of the English word “algorithm.” He developed sophisticated techniques to solve equations and is often regarded as the founder of algebra. Another prominent Eastern mathematician of this period was Omar Khayyam (1048-1131 CE), today better known for his poetry.

Leonardo Pisano was born in roughly the year 1175 in Pisa, Italy, the son of a wealthy merchant. His father directed a trading post in Bugia (now Bejaia) in Algeria, and took his son with him on at least one visit. It was in Bugia where Leonardo learned about Hindu-Arabic decimal arithmetic, having observed first-hand how the system was being used by traders and merchants there.

When he returned to Pisa, Leonardo vowed to bring these mathematical tools to a wider European audience. So in 1202 he wrote the book *Liber Abbaci* (“Book of Calculation”), a 600-page Latin treatise packed with hundreds of detailed problems and solutions, and then promoted it to the Italian scholarly community. He subsequently wrote additional works, including *Practica Geometriae* (a compendium of applications in practical geometry) and *Liber Quadratorum* (a compendium of techniques for solving Diophantine equations).

Of particular interest to modern readers was Leonardo’s treatment of topics in business and finance. Among other things, Leonardo introduced the technique of what we now call “present value analysis.” Additional details are given in Devlin’s books.

Sadly, not much is known about Leonardo’s personal life beyond these few facts. The last mention of his name during his lifetime was a note dated 1240 in the records of the Republic of Pisa, which recognized Leonardo for the services that he had given to the city. Thereafter he was largely forgotten to history for several centuries. From 1240 until the 19th century, the only mention of his name was in *Summa de arithmetica geometria proportioni et proportionalità* (“All That Is Known About Arithmetic, Geometry, Proportions, and Proportionality”), dated 1494, where the author Luca Pacioli concluded by writing, “Since we follow for the most part Leonardo Pisano, I intend to clarify now that any enunciation mentioned with the name of the author is to be attributed to Leonardo.” In 1838, the French historian Guillaume Libri read this note and vowed to learn more about this Leonardo Pisano, and it is largely through Libri that the modern world has learned about him.

One puzzle has long remained, however: None of the hundreds of books and tutorials on decimal arithmetic that proliferated in the century or two after 1202 followed Leonardo’s *Liber Abbaci* very closely. This fact has led Keith Devlin and other scholars to conclude that Leonardo must have written some other book, a simplified version of *Liber Abbaci* written in vernacular Italian, that was the source for these subsequent works.

In fact, Leonardo himself mentioned an additional work, now lost, named *Liber minors guise* or *Libro di merchaanti detto diminor guisa* (“Book in a lesser manner or book for merchants”). But no such manuscript has ever been found.

A breakthrough occurred in 2003, when Italian scholar Rafaella Franci published her analysis of a remarkable manuscript she found in the Biblioteca Riccardiana in Florence. The manuscript is anonymous, but some details in the manuscript suggest that it was written in 1290 or so. Its author began by declaring “This is the book of abacus according to the opinion of master Leonardo of the house of sons of Bonacie from Pisa”. Roughly 3/4 of the problems presented in the book are Italian translations of problems from Chapters 8 through 11 in *Liber Abbaci*. Leonardo’s famous rabbit problem, which leads to the Fibonacci sequence, is included here, although recast in terms of pigeons.

The author of this manuscript did not appear to be a highly skilled mathematician, given some errors and other problems. It appears that for the most part he merely copied the entire book from some other work with at most minor changes. From her analysis, Franci concluded that this other work must have been Leonardo’s lost *Libro di merchaanti detto diminor guisa*. In other words, the anonymous manuscript is very likely a copy (but not a very good copy) of Leonardo’s original simplified book.

In his book *Finding Fibonacci*, Devlin recounted his personal decades-long search for Leonardo. Eventually Devlin was rewarded by personally handling two of the handful of existing original copies of *Liber Abbaci*, as well as the manuscript, mentioned above, that is now thought to be copied from Leonardo’s simplified work for merchants and traders. Devlin describes his experience at finally seeing the simplified manuscript in person in these terms: “See Florence and die, the saying goes. Well, I had just seen something that, for me, had an impact far exceeding anything to be found on the streets outside.”

There can be no doubt that Leonardo’s writings were pivotal in the explosion of intercultural exchange, trade and scientific advancement that took place in the 13th, 14th and 15th centuries, the period we now call the Renaissance. We can only wonder how history would have changed if Fibonacci had lived earlier.

Suppose, for instance, that Fibonacci (and the Indian mathematicians that preceded him) had lived 1000 years earlier. Would the resulting infusion of decimal computation have rejuvenated European trade, science and technology, possibly reversing its decline and fall during the dark ages?

Suppose further that Fibonacci had been contemporary with Archimedes. What might have happened if Archimedes’ mathematical brilliance (he grasped and applied the basics of integral calculus 1800 years before Newton and Leibniz) had been combined with Hindu-Arabic decimal arithmetic? Our modern technological age might have been accelerated by nearly 2000 years. History would have been different, to say the least.

Here are the titles and abstracts of these talks, plus URLs for the complete PDF viewgraph files:

1. What is experimental mathematics? (15 minutes)

This overview briefly summarizes what is meant by “experimental mathematics”, as pioneered in large part by the late Jonathan Borwein. We also explain why experimental mathematics offers a unique opportunity to involve a much broader community in the process of mathematical discovery and proof — high school students, undergraduate students, computer scientists, statisticians and data scientists. It also presents opportunities for outreach to the public in a way that traditional mathematics has not.

2. Pi and normality: Are the digits of pi “random”? (50 minutes)

Abstract: In this talk we review the history of pi, including recently discovered formulas such as the Borwein quartic formula (each iteration of which roughly quadruples the number of correct digits). We then describe the Bailey-Borwein-Plouffe (BBP) formula (which permits one to directly calculate base-16 or binary digits of pi beginning at an arbitrary starting point), which was discovered by a computer program, arguably the first major success of the experimental paradigm in modern mathematics. We then explain why the existence of BBP-type formulas for pi and other mathematical constants has an interesting implication for the age-old question of whether and why the digits of pi and other constants are “random” — i.e., the property that every m-long string of base-b digits appears, in the limit, with frequency 1/b^m. By extending these techniques, and by using a “hot spot lemma” proved using ergodic theory methods, we are able to prove normality for a large class of specific explicit constants (sadly not yet including pi), and also to present specific examples of why normality in one number base does not necessarily imply normality in other bases.
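As an illustrative sketch (not code from the talk), the BBP digit-extraction idea can be demonstrated in a few lines of Python. The function name and fixed tolerances below are hypothetical choices, and double-precision arithmetic limits how many digits can reliably be extracted at a time:

```python
def pi_hex_digits(d, n=8):
    """Return n hexadecimal digits of pi starting at hex position d+1
    after the point, using the BBP digit-extraction formula."""
    def frac_series(j):
        # fractional part of sum_{k >= 0} 16^(d-k) / (8k + j)
        s = 0.0
        for k in range(d + 1):                  # head terms: modular exponentiation
            s = (s + pow(16, d - k, 8 * k + j) / (8 * k + j)) % 1.0
        k, term = d + 1, 1.0
        while term > 1e-17:                     # rapidly vanishing tail terms
            term = 16.0 ** (d - k) / (8 * k + j)
            s += term
            k += 1
        return s % 1.0

    # BBP: pi = sum_k 16^(-k) * (4/(8k+1) - 2/(8k+4) - 1/(8k+5) - 1/(8k+6))
    x = (4 * frac_series(1) - 2 * frac_series(4)
         - frac_series(5) - frac_series(6)) % 1.0
    digits = []
    for _ in range(n):
        x *= 16
        digits.append("0123456789abcdef"[int(x)])
        x -= int(x)
    return "".join(digits)

print(pi_hex_digits(0))   # the hex expansion of pi begins 3.243f6a88...
```

The key point is the three-argument `pow`: the head of each series is computed modulo the denominators, so digits at position d can be found without computing any of the preceding digits.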

3. High-precision arithmetic and PSLQ (30 minutes)

Abstract: This talk describes the mathematics and computational techniques employed to compute with numbers of very high numeric precision — typically thousands or millions of digits. One key technique is the usage of fast Fourier transforms to accelerate multiplication, typically by a factor of many thousands. Other algorithms permit one to evaluate the common transcendental functions (e.g., cos, sin, exp, log, etc.) to high precision. The talk then discusses the PSLQ algorithm, which is one of the key tools of experimental mathematics, and gives a variety of examples of these techniques in use.
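PSLQ itself is beyond a short sketch, but the goal of integer-relation detection is easy to illustrate. The toy brute-force search below (a hypothetical stand-in for PSLQ, which handles the same task at hundreds of digits of precision and much larger coefficient ranges) looks for small integer coefficients that annihilate a vector of real numbers:

```python
import math
from itertools import product

def toy_relation(values, bound=5, tol=1e-9):
    """Exhaustive search for a small integer relation among the given reals:
    nonzero integers c_i with |sum c_i * values_i| < tol.
    (A toy stand-in for PSLQ; the search space grows exponentially.)"""
    best = None
    for coeffs in product(range(-bound, bound + 1), repeat=len(values)):
        if not any(coeffs):                    # skip the all-zero vector
            continue
        err = abs(sum(c * v for c, v in zip(coeffs, values)))
        if err < tol and (best is None or err < best[1]):
            best = (coeffs, err)
    return best

# Pretend x is a mystery constant; the search recovers x = 2*pi - 3*log(2).
x = 2 * math.pi - 3 * math.log(2)
rel = toy_relation([x, math.pi, math.log(2)])
```

In practice one feeds PSLQ a numerically computed constant together with a basis of candidate constants, and a detected relation (confirmed to far higher precision) becomes a conjectured closed form.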

4. Experimental mathematics and integration (50 minutes)

Abstract: One of the most common applications of the experimental methodology in mathematics is to computationally evaluate a definite integral to high precision and then use the PSLQ algorithm to recognize its value in terms of well-known mathematical constants and formulas. The key challenge here is to compute integrals (finite or infinite interval; real line or multidimensional) to very high precision — typically hundreds or thousands of digits. Fortunately, some rather effective algorithms, notably the tanh-sinh scheme, are known for this purpose. The talk then presents several examples of this methodology in action, including the evaluation of Ising integrals and box integrals. We also present some examples showing how these methods can fail unless performed carefully.
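A minimal tanh-sinh sketch in double precision (production implementations use multihundred-digit arithmetic and adaptive step halving; the fixed step size h below is a hypothetical choice):

```python
import math

def tanh_sinh(f, h=2.0 ** -5):
    """Approximate the integral of f over [-1, 1] by tanh-sinh quadrature:
    substitute x = tanh((pi/2) sinh t), then apply the trapezoidal rule in t."""
    total = (math.pi / 2) * f(0.0)            # k = 0 term (weight pi/2)
    k = 1
    while True:
        t = k * h
        u = (math.pi / 2) * math.sinh(t)
        x = math.tanh(u)
        w = (math.pi / 2) * math.cosh(t) / math.cosh(u) ** 2
        if x == 1.0 or w < 1e-17:             # abscissas/weights below double precision
            break
        total += w * (f(x) + f(-x))
        k += 1
    return h * total

# Smooth test integrand: the integral of 1/(1+x^2) over [-1, 1] equals pi/2.
approx = tanh_sinh(lambda x: 1.0 / (1.0 + x * x))
```

The change of variable pushes the integrand's weights to zero doubly exponentially as t grows, which is why the simple trapezoidal rule converges so quickly, and why endpoint singularities are handled gracefully.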

5. Ancient Indian mathematics (40 minutes)

Abstract: It has been commonly thought that our modern system of positional decimal arithmetic with zero arose in the 13th century, with the writings of Fibonacci. In fact, it arose at least 1000 years earlier, possibly before the start of the Common Era. One of the most interesting early artifacts exhibiting decimal arithmetic in use is the Bakhshali manuscript, an ancient Indian mathematical treatise that was discovered in 1881 near Peshawar (then in India, but now in Pakistan). In the early 20th century a British scholar assigned the Bakhshali manuscript to the 12th century, because he was convinced that it was derived from Greek mathematics, but others have argued that it was several centuries older. In September 2017, the Bodleian Library in Oxford, which houses the manuscript, announced the results of radiocarbon tests which show that at least part of the Bakhshali manuscript dates back to 300 CE or so. This talk describes the Bakhshali manuscript in detail, including examples of solutions of linear equations, second-degree Diophantine equations, arithmetic progressions, and iterative approximations of square roots. The talk then mentions some analysis, by the present author and the late Jonathan Borwein, on the square root methods in the Bakhshali manuscript and other ancient Indian documents.

6. Computational and experimental evaluation of large Poisson polynomials (50 minutes)

Abstract: In some earlier studies of lattice sums arising from the Poisson equation of mathematical physics, it was proven that

Sum (over m,n odd) of cos(m*pi*x) * cos(n*pi*y) / (m^2 + n^2)

is always 1/pi * log A, where A is an algebraic number. By means of some very large computations with the PSLQ algorithm, polynomials associated with A were computed for numerous rational arguments x and y. Based on early results, Jason Kimberley of the University of Newcastle, Australia, conjectured a number-theoretic formula for the degree of A in the case x = y = 1/s for some integer s. In a subsequent study, co-authored with Jonathan Borwein, Jason Kimberley and Watson Ladd, the Poisson polynomial problem was addressed with significantly more capable computational tools. As a result of this improved capability, we confirmed that Kimberley’s formula holds for most integers s up to 52, and also for s = 60 and s = 64. As far as we are aware, these computations, which employed up to 64,000-digit precision, producing polynomials with degrees up to 512 and integer coefficients up to 10^229, constitute the largest successful integer relation computations performed to date. Finally, by applying some advanced algebraic techniques, we were able to prove Kimberley’s conjecture and also affirm the fact that when s is even, the polynomial is palindromic.

[Note: Due to time constraints, talk #5 above was not presented. However, the viewgraph file for this talk is available at the link above.]

The Bakhshali manuscript is an ancient mathematical treatise that was found in 1881 in the village of Bakhshali, approximately 80 kilometers northeast of Peshawar (then in India, now in Pakistan). Among the topics covered in this document, at least in the fragments that have been recovered, are solutions of systems of linear equations, indeterminate (Diophantine) equations of the second degree, arithmetic progressions of various types, and rational approximations of square roots (more on this below).

The manuscript features an extensive usage of decimal arithmetic — the same full-fledged positional decimal arithmetic with zero system that we use today (although the symbols for the digits are a bit different).

The manuscript appears to be a copy of an even earlier work. As Japanese scholar Takao Hayashi has noted, the manuscript includes the statement “sutra bhrantim asti” (“there is a corruption in the numbering of this sutra”), indicating that the work is a commentary on an earlier work.

Ever since its discovery in 1881, scholars have debated its age. British scholar G. R. Kaye assigned the manuscript to the 12th century, in part because he believed that its mathematical content was derivative of Greek sources. In contrast, Rudolf Hoernle assigned the underlying manuscript to the “3rd or 4th century CE.” Similarly, Bibhutibhusan Datta concluded that the older document was dated “towards the beginning of the Christian era.” Gurjar placed it between the second century BCE and the second century CE. In a more recent analysis, Hayashi assigned the commentary to the seventh century, with the underlying original not much older. (See this paper for references.)

Recently the Bodleian Library in Oxford, where the Bakhshali manuscript has been housed for decades, commissioned a radiocarbon dating study of the manuscript. The test results, which were announced on 14 September 2017, are quite surprising.

These tests found that the samples examined dated from three different time periods: one from 885-993 CE, one from 680-779 CE and a third from 224-383 CE. The latter date means that at least some of the manuscript is hundreds of years older than Hayashi’s consensus date of the seventh century. Indeed, the Bakhshali manuscript’s numerous usages of zero (represented by a centered dot) mean that it is now the oldest known ancient artifact employing zero.

One particularly intriguing item in the Bakhshali manuscript is the following algorithm for computing square roots:

In the case of a number whose square root is to be found, divide it by the approximate root [the root of the nearest square number]; multiply the denominator of the resulting [ratio of the remainder to the divisor] by two; square it [the fraction just obtained]; halve it; divide it by the composite fraction [the first approximation]; subtract [from the composite fraction]; [the result is] the refined root. [Translation due to M. N. Channabasappa]

In modern notation, this algorithm is as follows. To obtain the square root of a number q, start with an approximation x_{0} and then calculate, for n >= 0,

a_{n} = (q – x_{n}^{2}) / (2 x_{n})

x_{n+1} = x_{n} + a_{n} – a_{n}^{2} / (2 (x_{n} + a_{n}))

In the examples presented in the Bakhshali manuscript, this formula is used to obtain rational approximations to square roots only for integer arguments q, only for integer-valued starting values x_{0}, and is only applied once in each case (even though the result after one iteration is described as the “refined root,” possibly suggesting it could be repeated). But from a modern perspective, the scheme clearly can be repeated, and in fact converges very rapidly to sqrt(q), as we shall see below.

Several explicit applications of this scheme are presented in the Bakhshali manuscript. One example is to find an accurate rational approximation to the solution of the quadratic equation 3 x^{2} / 4 + 3 x / 4 = 7000. The manuscript notes that x = (sqrt(336009) – 3) / 6, and then calculates an accurate value for sqrt(336009), starting with the approximation 579. The result obtained is

579 + 515225088 / 777307500 = 450576267588 / 777307500

This is 579.66283303325903841…, which agrees with sqrt(336009) = 579.66283303313487498… to 12-significant-digit accuracy. From a modern perspective, this happens because the Bakhshali square root algorithm is *quartically convergent* — each iteration approximately quadruples the number of correct digits in the result, provided that either exact rational arithmetic or sufficiently high precision floating-point arithmetic is used.
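The manuscript’s calculation can be reproduced exactly with rational arithmetic. Here is a sketch using Python’s fractions module (the function name is ours):

```python
from fractions import Fraction

def bakhshali_step(q, x):
    """One application of the Bakhshali refinement for sqrt(q),
    starting from the approximation x:
    a = (q - x^2)/(2x);  next = x + a - a^2 / (2(x + a))."""
    a = Fraction(q - x * x, 2 * x)
    return x + a - a * a / (2 * (x + a))

# The manuscript's example: refine sqrt(336009) starting from 579.
x = bakhshali_step(336009, 579)
print(x, float(x))
```

Python reduces the fraction to lowest terms, but the result is the same rational number as the manuscript’s 450576267588/777307500, and repeating the step (which the manuscript does not do) roughly quadruples the number of correct digits each time.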

For additional details see the paper Ancient Indian square roots: An exercise in forensic paleo-mathematics.

Discoveries such as these underscore the regrettable legacy of a Eurocentric bias in traditional studies on the history of mathematics and science. Western scholars such as G. R. Kaye (mentioned above) quickly convinced themselves that artifacts such as the Bakhshali manuscript, which clearly contain sophisticated mathematical work, must have been derivative of western sources, e.g., Greek mathematics, and were unwilling to accept that groundbreaking work could have arisen elsewhere.

Most likely the redating of the Bakhshali manuscript is just the first step in the rectifying these errors and granting full recognition to early mathematical and scientific work in India, China and the Middle East. It’s about time.

Here are the details of the talk, including the Abstract:

Conant Prize lecture
Eloy Ortiz Oakley, the Chancellor of the California Community College system, recently recommended that intermediate algebra should no longer be required to earn an associate degree, except for students majoring in some field of mathematics, science or engineering (see also this Physics Today report):

College-level algebra is probably the greatest barrier for students — particularly first-generation students, students of color — obtaining a credential. … [I]f we know we’re disadvantaging large swaths of students who we need in the workforce, we have to question why. And is algebra really the only means we have to determine whether a student is going to be successful in their life?

Another Los Angeles Times report describes a “growing number” of educators who have been challenging the “gold standard” of mathematics education in the California community college system, which in 2009 raised its elementary algebra minimum standard. The article asks

How necessary is intermediate algebra, a high school-level course on factoring trinomials, graphing exponential functions and memorizing formulas that most non-math or science students will rarely use in everyday life or for the rest of college?

Along this line, is it realistic to train students in low-income areas to be proficient in mathematics?

Recently this issue was discussed in a National Public Radio segment. It mentioned Bob Moses, a black civil rights activist, who started the Algebra Project about 30 years ago. His goal was to take students (mostly black) who score in the bottom tier on state mathematics tests, then double up on the subject for four years, preparing them to do college-level mathematics by the time they graduate from high school. Moses says that “this newfound competence is more than just empowering. It’s how these kids can avoid being second-class citizens when they finish high school, destined for low-wage, low-skill work on the second tier of an Information Age economy.”

So does mathematics training really pay off? Is it worth all the effort, time and trouble, both for students and for educators? In particular, does mathematics training pay off for blacks and other low-income minorities? A new report published by the National Bureau of Economic Research provides some answers (see also this synopsis).

In this study, Harvard scholar Joshua Goodman examined students whose high schools back in the 1980s changed their graduation requirements to require more mathematics. He found that 15 years after graduation, those African-American high school graduates who went to school when these changes were enacted earned on average 10% extra for every year of mathematics coursework.

Goodman noted that these students didn’t necessarily become rocket scientists, because the coursework was not at a particularly high level, but their familiarity with basic algebra and mathematics concepts allowed them to pursue and do well in jobs that required some level of quantitative and/or computational skill.

Other studies say basically the same thing. A 2014 study by Harvard scholars Shawn Cole, Anna Paulson and Gauri Kartini Shastry found that familiarity with mathematics helps in other aspects of life — those who finish more mathematics courses are less likely to experience foreclosure or become delinquent on credit card accounts.

Recent survey data from Glassdoor confirm that mathematics training is indispensable for high-paying careers. In its 2017 listing of the 25 highest-paying jobs in the U.S., 19 involve mathematical proficiency (according to a count by the present author). These jobs range from nuclear engineer and corporate controller to software engineering manager and data architect (a new and rapidly expanding occupational category).

One can argue about how much mathematics is required in various occupations, and what percentage of the future economy will require strong mathematical proficiency.

But for anyone who has any aspiration to pursue a career in science or technology, mathematics is a must. As the present author and the late Jonathan Borwein argued in response to a claim by the eminent biologist E.O. Wilson, limited mathematical proficiency may have been passable for a scientist 30 or more years ago, but it most certainly is not acceptable today.

In particular, the recent explosion of data in almost every arena of scientific research and technology, and the growing importance of careful and statistically accurate analysis of data, places more rather than less emphasis on mathematical training. For example (to pick Wilson’s field of biology), genome sequencing technology has advanced almost beyond belief in the past 25 years. When the Human Genome Project was launched in 1990, many were skeptical that the project could complete sequencing of a single human’s genome by 2005. Yet this was completed ahead of schedule, in 2002. This project cost nearly one billion U.S. dollars. Today, this same feat can be done for as little as $1000 in a few hours or days. As a result, DNA sequencing is being extensively employed in virtually every corner of biology, including evolution and paleontology, and is also well on its way to becoming a staple of medical practice.

Other fields experiencing an explosion of data (and a corresponding explosion in demand of mathematically trained analysts) include astronomy, chemistry, computer science, cosmology, energy, environment, finance, geology, internet technology, machine learning, medicine, mobile technology, physics, robotics, social media and more.

So it is time to put these arguments against mathematical education to bed. They are wrong. Let’s join with educators in finding ways to improve mathematics education, not fight against it.

[Added 05 Aug 2017:] A new MarketWatch.com report, citing a recent analysis of 26 million U.S. online job postings, has found that roughly 50% of the jobs in the top income quartile (those paying $57,000 or more) require at least some computer coding skill. As always, a fairly strong mathematical background is required for any training or employment in computer software.

Many of us have heard of the Indiana pi episode, where a bill submitted to the Indiana legislature, written by one Edward J. Goodwin, claimed to have squared the circle, yielding a value of pi = 3.2. Although the bill passed the Indiana House, it narrowly failed in the Senate and never became law, due largely to the intervention of Prof. C.A. Waldo of Purdue University, who happened to be at the Indiana legislature on other business. The story is always good for a laugh to lighten up a dull mathematics lecture.

It is worth pointing out that Goodwin’s erroneous value was ruled out by mathematicians dating back to Archimedes, who showed that 223/71 < pi < 22/7, and by the third-century Chinese mathematician Liu Hui and the fifth-century Indian mathematician Aryabhata, both of whom found pi to at least four-digit accuracy. In the 1600s, Isaac Newton calculated pi to 15 digits, and since then numerous mathematicians have calculated pi to ever-greater accuracy. The most recent calculation of pi, by Peter Trueb, produced over 22 *trillion* decimal digits, carefully double-checked by an independent calculation.

The question of whether pi could be written as an algebraic formula or as the root of some algebraic equation with integer coefficients was finally settled by Carl Louis Ferdinand von Lindemann, who in 1882 proved that pi is transcendental. That was 135 years ago, 15 years prior to Goodwin’s claims!

Aren’t we glad we live in the 21st century, with iPhones, Teslas, CRISPR gene-editing technology, and supercomputers that can analyze the most complex physical, biological and environmental phenomena, and where our extensive international system of peer-reviewed journals produces an ever-growing body of reliable scientific knowledge? Surely incidents such as the Indiana pi episode are well behind us?

Not so fast! Consider the following papers, each of which was published within the past five years in what claim to be reputable, peer-reviewed journals:

Papers asserting that pi = 17 – 8 sqrt(3) = 3.1435935394…:

- Paper A1, in the IOSR Journal of Mathematics.
- Paper A2, in the International Journal of Mathematics and Statistics Invention.
- Paper A3, in the International Journal of Engineering Research and Applications.

Papers asserting that pi = (14 – sqrt (2))/4 = 3.1464466094…:

- Paper B1, in the IOSR Journal of Mathematics.
- Paper B2, also in the IOSR Journal of Mathematics.
- Paper B3, again in the IOSR Journal of Mathematics.
- Paper B4, in the International Journal of Mathematics and Statistics Invention.
- Paper B5, again in the International Journal of Mathematics and Statistics Invention.
- Paper B6, in the International Journal of Engineering Inventions.
- Paper B7, in the International Journal of Latest Trends in Engineering and Technology.
- Paper B8, in the IOSR Journal of Engineering.

This listing is by no means exhaustive — numerous additional items from peer-reviewed journals could be listed. Some additional variant values of pi (which thankfully have not yet appeared in peer-reviewed venues) include a claim that pi = 4 / sqrt(phi) = 3.1446055110…, where phi is the golden ratio = 1.6180339887…, and a separate claim that pi = 2 * sqrt (2 * (sqrt(5) – 1)) = 3.1446055110…

Along this line, the present author wonders whether the above authors have mobile phones. These phones contain the numerical value of pi (or values computed based on pi), in binary, typically to 7-digit accuracy, as part of their digital signal processing facility, and certainly would not work properly with a different value of pi. The same can be said about the GPS facility in most mobile phones, which relies critically on equations involving general and special relativity. For that matter, the electronics of mobile phones are engineered based on principles of quantum mechanics, some of which involve pi. If these authors truly believe pi to be in error, they should not use their phones (or any other high-tech device).

Before continuing, it is worth asking how one might justify the value of pi to a lay reader who is not a mathematician. Arguably the simplest and most direct method is Archimedes’ method, which computes the perimeters of circumscribed and inscribed polygons, beginning with a hexagon and then doubling the number of sides with each iteration. The scheme may be presented in our modern notation as follows: Set a1 = 2 * sqrt(3) and b1 = 3. Then iterate

a2 = 2 * a1 * b1 / (a1 + b1); b2 = sqrt (a2 * b1); a1 = a2; b1 = b2

At the end of each step, a1 is the perimeter of the circumscribed polygon, and b1 is the perimeter of the inscribed polygon, so that a1 > pi > b1. Successive values for 10 iterations are as follows:

0: 3.4641016151 > pi > 3.0000000000

1: 3.2153903091 > pi > 3.1058285412

2: 3.1596599420 > pi > 3.1326286132

3: 3.1460862151 > pi > 3.1393502030

4: 3.1427145996 > pi > 3.1410319508

5: 3.1418730499 > pi > 3.1414524722

6: 3.1416627470 > pi > 3.1415576079

7: 3.1416101766 > pi > 3.1415838921

8: 3.1415970343 > pi > 3.1415904632

9: 3.1415937487 > pi > 3.1415921059

10: 3.1415929273 > pi > 3.1415925166

Note that the two proposed values of pi mentioned in the papers above, namely 3.1464466094 and 3.1435935394, are excluded even by iteration 4. A similar calculation with areas of circumscribed and inscribed polygons, which is an even more direct and compelling demonstration, yields a similar result.
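The iteration just described is easy to check numerically. Here is a sketch in double precision (the function name is ours):

```python
import math

def archimedes_bounds(iterations):
    """Perimeter bounds on pi from circumscribed (a) and inscribed (b)
    polygons, starting from a hexagon and doubling the side count each step."""
    a, b = 2 * math.sqrt(3), 3.0    # hexagon perimeters (unit diameter)
    bounds = [(a, b)]
    for _ in range(iterations):
        a = 2 * a * b / (a + b)     # harmonic mean
        b = math.sqrt(a * b)        # geometric mean (uses the updated a)
        bounds.append((a, b))
    return bounds

for i, (a, b) in enumerate(archimedes_bounds(10)):
    print(f"{i}: {a:.10f} > pi > {b:.10f}")
```

Running this reproduces the table above, and in particular shows the upper bound dropping below 3.1436 by iteration 4.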

In recent years mathematicians have discovered much more rapidly convergent schemes to compute pi. With the Borwein quartic iteration for pi, for example, each iteration approximately quadruples the number of correct digits. Just three iterations of this scheme yield

3.14159265358979323846264338327950288419716939937510582097494459230781640628620899862803482534211706798214808651328230664709384460955058223172535940812848111745028410270193,

which agrees with the classical value of pi to 171 digits (i.e. to the precision shown).
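As a sketch, the quartic iteration can be reproduced with Python's decimal module, using the standard Borwein initial values y0 = sqrt(2) - 1 and a0 = 6 - 4 sqrt(2) (the working precision of 210 digits is an arbitrary choice comfortably above the 171 digits delivered by three iterations):

```python
from decimal import Decimal, getcontext

getcontext().prec = 210                  # working precision in digits
one = Decimal(1)
s2 = Decimal(2).sqrt()
y = s2 - 1                               # y0 = sqrt(2) - 1
a = 6 - 4 * s2                           # a0 = 6 - 4 sqrt(2); a_k -> 1/pi
for k in range(3):
    t = (one - y ** 4).sqrt().sqrt()     # fourth root of 1 - y^4
    y = (one - t) / (one + t)
    a = a * (one + y) ** 4 - Decimal(2) ** (2 * k + 3) * y * (one + y + y * y)
pi_approx = one / a
print(pi_approx)
```

The reciprocal of a after three iterations matches the 171-digit value quoted above, a vivid contrast with the linear convergence of Archimedes’ scheme.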

These and numerous other formulas for pi are listed in a collection of pi formulas assembled by the present author.

Peer review is the bedrock of modern science. Without rigorous peer review, by well-qualified reviewers, modern mathematics and science could not exist. Reviewers typically rate a submission on criteria such as:

1. Relevance to the journal or conference’s charter.
2. Clarity of exposition.
3. Objectivity of style.
4. Acknowledgement of prior work.
5. Freedom from plagiarism.
6. Theoretical background.
7. Validity of reasoning.
8. Experimental procedures and data analysis.
9. Statistical methods.
10. Conclusions.
11. Originality and importance.

Needless to say, the papers listed above should never have been approved for publication, since such material immediately violates item 7, not to mention items 3, 4, 6 and others. Keep in mind that no editor or reviewer with even an undergraduate degree in mathematics could possibly fail to notice the claim that the traditional value of pi is incorrect. Indeed, it is hard to imagine a comparable claim in other fields: a claim that Newton’s gravitational constant is incorrect? That atoms and molecules do not really exist? That evolution never happened? That the earth is only a few thousand years old?

At the very least, even to an editor without advanced mathematical training, the assertion that the traditional value of pi is incorrect would certainly have to be considered an “extraordinary claim,” which, as Carl Sagan once reminded us, requires “extraordinary evidence.” And it is quite clear that none of the above papers have offered compelling arguments, presented in highly professional and rigorous mathematical language, to justify such a claim. Thus these manuscripts should have either been rejected outright, or else referred to well-qualified mathematicians for rigorous review.

Also, the fervor with which some of these authors press their claims should raise a red flag. There is simply no place in modern mathematics and science for fervor in presenting research work (see item 3 in the list of peer review standards above), since any good scholar should be prepared to discard his or her pet theory once it has been clearly refuted by more careful reasoning or experimentation. An unwillingness to do so is part of the explanation for the persistence of young-earth creationism, for instance.

So how could such egregious errors of manuscript review have occurred? The present author is regrettably forced to “follow the money” (as the shadowy informant Deep Throat recommended in the movie All the President’s Men). Indeed, all of the journals listed above are on Beall’s list of pay-to-publish journals. Many of these journals have acquired a reputation for loose standards of publication, offering only a superficial review in return for charging authors a fee to have their papers published on the journal’s website.

Obviously the mathematical community, and in fact the entire scientific community, needs to tighten standards for peer review and to oppose any form of “peer-reviewed” publication that involves only a perfunctory review.

Along this line, some say that we should simply ignore papers that claim incorrect values of pi, or even all articles in pay-to-publish journals, in the same way that mathematicians typically ignore email messages from writers who claim to have proven the Riemann hypothesis, or that computer scientists typically ignore writers claiming to have proven that P = NP, or that physicists typically ignore writers claiming to have devised a “theory of everything.” But in that case many legitimate papers would be excluded. Indeed, it is a grave disservice to the quality papers published in these journals for the editors’ loose standards to allow poor quality and clearly erroneous manuscripts to also appear.

In any event, there is a real danger that as a growing number of papers with erroneous or questionable results are published, other papers may cite them, thus starting a food chain of scholarship that is, at its base, mistaken. Such errors may be rooted out only years later, after legitimate mathematicians and scientists have cited and applied these results and then labored in vain to understand the resulting paradoxical conclusions.

So what will the future bring? Increasing confusion, resulting from growing numbers of questionable and false published results, many in presumably peer-reviewed sources? We all have a stake in this battle.

French mathematician completes proof of tessellation conjecture

The honor goes to Michael Rao of the Ecole Normale Superieure de Lyon in France. He has produced a computer-assisted proof that completes the inventory of convex pentagonal shapes that tile the plane, the last remaining holdout. He identified 371 scenarios for how corners of pentagons might fit together, and then checked each scenario by means of an algorithm. In the end, his computer program determined that the 15 known families of pentagonal tilings form a complete set.

A team of researchers led by Casey Mann of the University of Washington, Bothell had been working on a similar effort, and conceded that Rao had beaten them to the finish.

Rao’s proof must still be subjected to peer review, but Thomas Hales of the University of Pittsburgh, who recently proved the Kepler conjecture (that the familiar supermarket scheme for stacking oranges is the optimal method) by means of a computer-assisted proof, has independently reconstructed much of Rao’s work, and so researchers are relatively confident that the proof will hold up.

Additional details about Rao’s proof and the tessellation problem can be found in a very nice Quanta Magazine article by Natalie Wolchover.
