Now a new computer program, called “AlphaGo Zero,” which taught itself to play from scratch without any human input, has defeated the previous program 100 games to zero.

What does this mean? First, some background:

Many are familiar with the 1997 defeat of Garry Kasparov, then the world’s reigning chess champion, by IBM’s “Deep Blue” computer. Commenting on his experience, Kasparov later reflected, “It was my luck (perhaps my bad luck) to be the world chess champion during the critical years in which computers challenged, then surpassed, human chess players.”

Deep Blue employed some new techniques, but for the most part it simply applied enormous computing power, so that it could store many openings, look ahead many moves, and — except when the programmers erred — never make mistakes.

An even more impressive performance was the 2011 defeat of two champion contestants on the American quiz show Jeopardy! by an IBM-developed computer system named “Watson.” Watson’s achievement was significantly more impressive than that of Deep Blue and other board-game-playing programs, because it required natural language understanding, namely the “understanding,” in some sense, of ordinary (often tricky) English text.

For example, in “Final Jeopardy” culminating the Jeopardy! match, in the category “19th century novelists,” the clue was “William Wilkinson’s ‘An Account of the Principalities of Wallachia and Moldavia’ inspired this author’s most famous novel.” Watson correctly responded “Who is Bram Stoker?” [the author of Dracula], thus sealing the victory. Legendary Jeopardy! champ Ken Jennings conceded by writing on his tablet, “I for one welcome our new computer overlords.”

In the years since the Jeopardy! demonstration, IBM has deployed its Watson AI technology in the health care field. In a recent test of its cancer-diagnosing facility, Watson recommended treatment plans that matched oncologists’ recommendations in 99 percent of cases, and offered options that doctors had missed in 30 percent of cases.

In the ancient Chinese game of Go, players place black and white stones on a 19×19 grid. The game is notoriously complicated, with strategies that can only be described in vague, subjective terms. For these reasons, in spite of advances such as the Deep Blue chess-playing program and even the Watson Jeopardy!-playing program, many observers did not expect Go-playing computer programs to be able to beat the best human players for many years.

Thus it came as a considerable surprise when, in March 2016, a computer program named AlphaGo, developed by researchers at DeepMind, a subsidiary of Alphabet (Google’s parent company), defeated Lee Se-dol, a South Korean Go master, 4-1 in a five-game tournament. Not resting on its laurels, the DeepMind researchers further enhanced the program, which then, on 23 May 2017, defeated Ke Jie, a 19-year-old Chinese Go master thought to be the world’s best human Go player.

In developing the program that defeated Lee and Ke, DeepMind researchers fed their program 100,000 top amateur games and “taught” it to imitate what it observed. Then they had the program play against itself and learn from the results, slowly increasing its skill.

In the latest development, the new program, called AlphaGo Zero, bypassed the first step. The DeepMind researchers merely programmed the rules of Go, together with a simple reward function based on games won, and had the program play against itself. Initially, the program merely scattered stones seemingly at random across the board. But it quickly got better at evaluating board positions, and substantially increased its level of skill.

Interestingly, along the way the program rediscovered many basic elements of Go strategies used by human players, including anticipating its opponent’s probable next moves. But unshackled from the experience of humans, it then developed new complex strategies never before seen in human Go games.

After just three days of training and 4.9 million training games (with the program playing against itself), the AlphaGo Zero program had advanced to the point that it defeated the earlier version of the program 100 games to zero.

Skill at Go (and several other games) is quantified by the Elo rating, which is computed from a player’s record in past games. Lee’s rating is 3526, while Ke’s rating is 3661. After 40 days of training, AlphaGo Zero’s Elo rating was over 5000. Thus AlphaGo Zero was as far ahead of Ke as Ke is ahead of a good amateur player.
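To make the comparison concrete: under the Elo model, a rating gap maps directly to an expected score (win probability, with draws counted as half). Here is a minimal sketch in Python; the function name is ours, and the 400-point logistic scale is the standard Elo convention:

```python
def elo_expected_score(r_a, r_b):
    """Expected score (win probability, draws counted as half) of a
    player rated r_a against one rated r_b, under the standard Elo
    logistic model: a 400-point edge means roughly 10-to-1 odds."""
    return 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))

# AlphaGo Zero (5000+) vs. Ke Jie (3661): expected score exceeds 0.999
print(elo_expected_score(5000, 3661))
```

A gap of about 1,339 points yields an expected score above 0.999, i.e., near-certain victory, consistent with the 100-0 result against the earlier program.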

Additional details are available in a Scientific American article and in a Nature article.

So where is all this heading? A recent Time article features an interview with futurist Ray Kurzweil, who predicts an era, roughly in 2045, when machine intelligence will meet, then transcend human intelligence. Such future intelligent systems will then design even more powerful technology, resulting in a dizzying advance that we can only dimly foresee at the present time. Kurzweil outlines this vision in his book The Singularity Is Near.

Futurists such as Kurzweil certainly have their skeptics and detractors. Sun Microsystems co-founder Bill Joy is concerned that humans could be relegated to minor players in the future, and that out-of-control, nanotech-produced “grey goo” could destroy life on our fragile planet. But even setting aside such scenarios, there is considerable concern about the societal, legal, financial and ethical challenges of these technologies, as exhibited by the increasingly strident social backlash against technology, science and “elites” that we see today.

As a single example, the technology of self-driving vehicles, long thought to be the stuff of science fiction, has dramatically advanced in the past two years. Prototype vehicles are already plying the streets of major U.S. and European cities. Some observers now estimate that self-driving vehicles could replace 1.7 million truckers in the next decade. Even drivers of delivery vehicles could see their jobs replaced, such as by Amazon drones.

But one way or the other, intelligent computers are coming. Society must find a way to accommodate this technology, and to deal respectfully with the many people whose lives will be affected.

Welcome to the Brave New World!

Devlin’s books are:

- The Man of Numbers: Fibonacci’s Arithmetic Revolution
- Finding Fibonacci: The Quest to Rediscover the Forgotten Mathematical Genius Who Changed the World

Leonardo was hardly the first to discover decimal arithmetic. That honor goes to still-unknown mathematicians in India, at least by the year 300 CE and most likely earlier. One key source on the Indian origin of decimal arithmetic is the Bakhshali manuscript, an ancient mathematical treatise found in 1881 in the village of Bakhshali, Pakistan. The document presents numerous sophisticated mathematical methods, all illustrated with extensive decimal arithmetic calculations. Until recently, the consensus of scholars who had studied the manuscript was that it was written in either the 7th or the 12th century, but recent radiocarbon dating tests conducted by the Bodleian Library in Oxford, where the document has been kept, show that at least part of it dates to as early as 300 CE. This confirms that mathematicians in India were familiar with decimal arithmetic at least by this date and probably earlier. See our previous blog for details.

The Indian system was further developed by Islamic scholars in the Arab world, in the 9th, 10th, 11th and 12th centuries. One of these was Muḥammad ibn Mūsā al-Khwārizmī (whose surname is the origin of the English word “algorithm”), 780-850 CE. He developed sophisticated techniques to solve equations, and is thought to be the founder of algebra. Another prominent Eastern mathematician during this period was Omar Khayyam, 1048-1131 CE, today better known for his poetry.

Leonardo Pisano was born in roughly the year 1175 in Pisa, Italy, the son of a wealthy merchant. His father directed a trading post in Bugia (now Bejaia) in Algeria, and took his son with him on at least one visit. It was in Bugia where Leonardo learned about Hindu-Arabic decimal arithmetic, having observed first-hand how the system was being used by traders and merchants there.

When he returned to Pisa, Leonardo vowed to bring these mathematical tools to a wider European audience. So in 1202 he wrote the book *Liber Abbaci* (“Book of Calculation”), a 600-page Latin treatise packed with hundreds of detailed problems and solutions, and then promoted it to the Italian scholarly community. He subsequently wrote additional works, including *Practica Geometriae* (a compendium of applications in practical geometry) and *Liber Quadratorum* (a compendium of techniques for solving Diophantine equations).

Of particular interest to modern readers was Leonardo’s treatment of topics in business and finance. Among other things, Leonardo introduced the technique of what we now call “present value analysis.” Additional details are given in Devlin’s books.

Sadly, not much is known about Leonardo’s personal life beyond these few facts. The last mention of his name during his lifetime was a note dated 1240 in the records of the Republic of Pisa, which recognized Leonardo for the services that he had given to the city. Thereafter he was largely forgotten for several centuries. From 1240 until the 19th century, the only mention of his name was in *Summa de arithmetica geometria proportioni et proportionalità* (“All That Is Known About Arithmetic, Geometry, Proportions, and Proportionality”), dated 1494, in which the author Luca Pacioli concluded by writing, “Since we follow for the most part Leonardo Pisano, I intend to clarify now that any enunciation mentioned with the name of the author is to be attributed to Leonardo.” In 1838, the French historian Guillaume Libri read this note and vowed to learn more about this Leonardo Pisano, and it is largely through Libri that the modern world has learned about him.

One puzzle has long remained, however: none of the hundreds of books and tutorials on decimal arithmetic that proliferated in the century or two after 1202 followed Leonardo’s *Liber Abbaci* very closely. This fact has led Keith Devlin and other scholars to conclude that Leonardo must have written some other book, a simplified version of *Liber Abbaci* written in vernacular Italian, that was the source for these subsequent works.

In fact, Leonardo himself mentioned an additional work, now lost, named *Liber minors guise* or *Libro di merchaanti detto diminor guisa* (“Book in a lesser manner or book for merchants”). But no such manuscript has ever been found.

A breakthrough occurred in 2003, when Italian scholar Raffaella Franci published her analysis of a remarkable manuscript she found in the Biblioteca Riccardiana in Florence. The manuscript is anonymous, but internal details suggest that it was written around 1290. Its author began by declaring, “This is the book of abacus according to the opinion of master Leonardo of the house of sons of Bonacie from Pisa.” Roughly three-quarters of the problems presented in the book are Italian translations of problems from Chapters 8 through 11 of *Liber Abbaci*. Leonardo’s famous rabbit problem, which leads to the Fibonacci sequence, is included here, although recast in terms of pigeons.

The author of this manuscript did not appear to be a highly skilled mathematician, given some errors and other problems. It appears that for the most part he merely copied the entire book from some other work with at most minor changes. From her analysis, Franci concluded that this other work must have been Leonardo’s lost *Libro di merchaanti detto diminor guisa*. In other words, the anonymous manuscript is very likely a copy (but not a very good copy) of Leonardo’s original simplified book.

In his book *Finding Fibonacci*, Devlin recounted his personal decades-long search for Leonardo. Eventually Devlin was rewarded by personally handling two of the handful of existing original copies of *Liber Abbaci*, as well as the manuscript, mentioned above, that is now thought to be copied from Leonardo’s simplified work for merchants and traders. Devlin describes his experience at finally seeing the simplified manuscript in person in these terms: “See Florence and die, the saying goes. Well, I had just seen something that, for me, had an impact far exceeding anything to be found on the streets outside.”

There can be no doubt that Leonardo’s writings were pivotal in the explosion of intercultural exchange, trade and scientific advancement that occurred in the 13th, 14th and 15th centuries, the period we now call the Renaissance. We can only wonder how history would have changed if Fibonacci had lived earlier.

Suppose, for instance, that Fibonacci (and the Indian mathematicians that preceded him) had lived 1000 years earlier. Would the resulting infusion of decimal computation have rejuvenated European trade, science and technology, possibly reversing its decline and fall during the dark ages?

Suppose further that Fibonacci had been contemporary with Archimedes. What might have happened if Archimedes’ mathematical brilliance (he grasped and applied the basics of integral calculus 1800 years before Newton and Leibniz) had been combined with Hindu-Arabic decimal arithmetic? Our modern technological age might have been accelerated by nearly 2000 years. History would have been different, to say the least.

Here are the titles and abstracts of these talks, plus **URLs for the complete PDF viewgraph files**:

1. What is experimental mathematics? (15 minutes)

Abstract: This overview briefly summarizes what is meant by “experimental mathematics”, as pioneered in large part by the late Jonathan Borwein. We also explain why experimental mathematics offers a unique opportunity to involve a much broader community in the process of mathematical discovery and proof — high school students, undergraduate students, computer scientists, statisticians and data scientists. It also presents opportunities for outreach to the public in a way that traditional mathematics has not.

2. Pi and normality: Are the digits of pi “random”? (50 minutes)

Abstract: In this talk we review the history of pi, including recently discovered formulas such as the Borwein quartic formula (each iteration of which roughly quadruples the number of correct digits). We then describe the Bailey-Borwein-Plouffe (BBP) formula (which permits one to directly calculate base-16 or binary digits of pi beginning at an arbitrary starting point), which was discovered by a computer program, arguably the first major success of the experimental paradigm in modern mathematics. We then explain why the existence of BBP-type formulas for pi and other mathematical constants has an interesting implication for the age-old question of whether and why the digits of pi and other constants are “random” — i.e., the property that every m-long string of base-b digits appears, in the limit, with frequency 1/b^m. By extending these techniques, and by using a “hot spot lemma” proved using ergodic theory methods, we are able to prove normality for a large class of specific explicit constants (sadly not yet including pi), and also to present specific examples of why normality in one number base does not necessarily imply normality in other bases.
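To illustrate the digit-extraction property, here is a rough double-precision sketch of the BBP scheme in Python (the function name is ours): to get hex digits of pi starting at position d, one computes the fractional part of 16^d * pi using modular exponentiation, so the earlier digits are never computed. Ordinary floating-point arithmetic limits each call to a handful of reliable digits; production implementations use more careful error control.

```python
def pi_hex_digits(d, num_digits=8):
    """Hex digits of pi starting at position d+1 after the hexadecimal
    point, computed from the BBP formula without generating any earlier
    digits.  Double precision limits each call to ~8 reliable digits."""
    def series(j):
        # Fractional part of sum_{k>=0} 16^(d-k) / (8k+j)
        s = 0.0
        for k in range(d + 1):                      # head: modular exponentiation
            s = (s + pow(16, d - k, 8 * k + j) / (8 * k + j)) % 1.0
        k, term = d + 1, 1.0
        while term > 1e-17:                         # tail: terms shrink by 1/16
            term = 16.0 ** (d - k) / (8 * k + j)
            s = (s + term) % 1.0
            k += 1
        return s

    x = (4 * series(1) - 2 * series(4) - series(5) - series(6)) % 1.0
    digits = ""
    for _ in range(num_digits):
        x = 16 * (x % 1.0)
        digits += "0123456789ABCDEF"[int(x)]
    return digits

print(pi_hex_digits(0))  # pi = 3.243F6A88... in hexadecimal
```

The four series correspond to the four terms of the BBP formula, pi = sum over k of 16^(-k) * (4/(8k+1) - 2/(8k+4) - 1/(8k+5) - 1/(8k+6)); multiplying by 16^d and reducing mod 1 term by term is what makes the starting position arbitrary.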

3. High-precision arithmetic and PSLQ (30 minutes)

Abstract: This talk describes the mathematics and computational techniques employed to compute with numbers of very high numeric precision — typically thousands or millions of digits. One key technique is the usage of fast Fourier transforms to accelerate multiplication, typically by a factor of many thousands. Other algorithms permit one to evaluate the common transcendental functions (e.g., cos, sin, exp, log, etc.) to high precision. The talk then discusses the PSLQ algorithm, which is one of the key tools of experimental mathematics, and gives a variety of examples of this technique in use.

4. Experimental mathematics and integration (50 minutes)

Abstract: One of the most common applications of the experimental methodology in mathematics is to computationally evaluate a definite integral to high precision and then use the PSLQ algorithm to recognize its value in terms of well-known mathematical constants and formulas. The key challenge here is to compute integrals (finite or infinite interval; real line or multidimensional) to very high precision — typically hundreds or thousands of digits. Fortunately, some rather effective algorithms, notably the tanh-sinh scheme, are known for this purpose. The talk then presents several examples of this methodology in action, including the evaluation of Ising integrals and box integrals. We also present some examples showing how these methods can fail unless performed carefully.
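As a rough illustration of the tanh-sinh idea (a double-precision sketch only; real experimental-math work uses hundreds or thousands of digits, and the function name here is ours), the change of variable x = tanh((pi/2) sinh t) turns the simple trapezoidal rule into a scheme that converges extremely fast, even when the integrand misbehaves at the endpoints:

```python
import math

def tanh_sinh(f, h=0.05, tmax=6.0):
    """Integrate f over (-1, 1) with the tanh-sinh substitution
    x = tanh((pi/2) sinh t), applying the trapezoidal rule in t.
    The weights decay doubly exponentially, so endpoint behavior
    of f is strongly damped."""
    total = (math.pi / 2) * f(0.0)   # k = 0 node: x = 0, weight pi/2
    k = 1
    while k * h <= tmax:
        t = k * h
        u = (math.pi / 2) * math.sinh(t)
        sech = 1.0 / math.cosh(u)    # underflows harmlessly to 0.0 for large u
        w = (math.pi / 2) * math.cosh(t) * sech * sech
        if w == 0.0:
            break
        x = math.tanh(u)
        total += w * (f(x) + f(-x))
        k += 1
    return h * total

# Integrand whose derivative is singular at both endpoints:
print(tanh_sinh(lambda x: math.sqrt(1.0 - x * x)))  # close to pi/2
```

Even this crude version recovers the exact value pi/2 to near machine precision; the high-precision variants used in practice simply shrink h and carry more digits.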

5. Ancient Indian mathematics (40 minutes)

Abstract: It has been commonly thought that our modern system of positional decimal arithmetic with zero arose in the 13th century, with the writings of Fibonacci. In fact, it arose at least 1000 years earlier, possibly before the start of the Common Era. One of the most interesting early artifacts exhibiting decimal arithmetic in use is the Bakhshali manuscript, an ancient Indian mathematical treatise that was discovered in 1881 near Peshawar (then in India, but now in Pakistan). In the early 20th century a British scholar assigned the Bakhshali manuscript to the 12th century, because he was convinced that it was derived from Greek mathematics, but others have argued that it was several centuries older. In September 2017, the Bodleian Library in Oxford, which houses the manuscript, announced the results of radiocarbon tests which show that at least part of the Bakhshali manuscript dates back to 300 CE or so. This talk describes the Bakhshali manuscript in detail, including examples of solutions of linear equations, second-degree Diophantine equations, arithmetic progressions, and iterative approximations of square roots. The talk then mentions some analysis, by the present author and the late Jonathan Borwein, of the square root methods in the Bakhshali manuscript and other ancient Indian documents.

6. Computational and experimental evaluation of large Poisson polynomials (50 minutes)

Abstract: In some earlier studies of lattice sums arising from the Poisson equation of mathematical physics, it was proven that

Sum (over m,n odd) of cos(m*pi*x) * cos(n*pi*y) / (m^2 + n^2)

is always 1/pi * log A, where A is an algebraic number. By means of some very large computations with the PSLQ algorithm, polynomials associated with A were computed for numerous rational arguments x and y. Based on early results, Jason Kimberley of the University of Newcastle, Australia, conjectured a number-theoretic formula for the degree of A in the case x = y = 1/s for some integer s. In a subsequent study, co-authored with Jonathan Borwein, Jason Kimberley and Watson Ladd, the Poisson polynomial problem was addressed with significantly more capable computational tools. As a result of this improved capability, we confirmed that Kimberley’s formula holds for most integers s up to 52, and also for s = 60 and s = 64. As far as we are aware, these computations, which employed up to 64,000-digit precision and produced polynomials with degrees up to 512 and integer coefficients up to 10^229, constitute the largest successful integer relation computations performed to date. Finally, by applying some advanced algebraic techniques, we were able to prove Kimberley’s conjecture and also to confirm that when s is even, the polynomial is palindromic.

[Note: Due to time constraints, talk #5 above was not presented. However, the viewgraph file for this talk is available at the link above.]

The Bakhshali manuscript is an ancient mathematical treatise that was found in 1881 in the village of Bakhshali, approximately 80 kilometers northeast of Peshawar (then in India, now in Pakistan). Among the topics covered in this document, at least in the fragments that have been recovered, are solutions of systems of linear equations, indeterminate (Diophantine) equations of the second degree, arithmetic progressions of various types, and rational approximations of square roots (more on this below).

The manuscript features an extensive usage of decimal arithmetic — the same full-fledged positional decimal arithmetic with zero system that we use today (although the symbols for the digits are a bit different).

The manuscript appears to be a copy of an even earlier work. As Japanese scholar Takao Hayashi has noted, the manuscript includes the statement “sutra bhrantim asti” (“there is a corruption in the numbering of this sutra”), indicating that the work is a commentary on an earlier work.

Ever since its discovery in 1881, scholars have debated its age. British scholar G. R. Kaye assigned the manuscript to the 12th century, in part because he believed that its mathematical content was derivative of Greek sources. In contrast, Rudolf Hoernle assigned the underlying manuscript to the “3rd or 4th century CE.” Similarly, Bibhutibhusan Datta concluded that the older document was dated “towards the beginning of the Christian era.” Gurjar placed it between the second century BCE and the second century CE. In a more recent analysis, Hayashi assigned the commentary to the seventh century, with the underlying original not much older. (See this paper for references.)

Recently the Bodleian Library in Oxford, where the Bakhshali manuscript has been housed for decades, commissioned a radiocarbon dating study of the manuscript. The test results, which were announced on 14 September 2017, are quite surprising.

These tests found that the samples examined date from three different periods: one from 885-993 CE, one from 680-779 CE and a third from 224-383 CE. The last of these dates means that at least some of the manuscript is hundreds of years older than Hayashi’s consensus date of the seventh century. Indeed, the Bakhshali manuscript’s numerous usages of zero (represented by a centered dot) make it the oldest known artifact to employ a symbol for zero.

One particularly intriguing item in the Bakhshali manuscript is the following algorithm for computing square roots:

In the case of a number whose square root is to be found, divide it by the approximate root [the root of the nearest square number]; multiply the denominator of the resulting [ratio of the remainder to the divisor] by two; square it [the fraction just obtained]; halve it; divide it by the composite fraction [the first approximation]; subtract [from the composite fraction]; [the result is] the refined root. [Translation due to M. N. Channabasappa]

In modern notation, this algorithm is as follows. To obtain the square root of a number q, start with an approximation x_{0} and then calculate, for n >= 0,

a_{n} = (q - x_{n}^{2}) / (2 x_{n})

x_{n+1} = x_{n} + a_{n} - a_{n}^{2} / (2 (x_{n} + a_{n}))

In the examples presented in the Bakhshali manuscript, this formula is used to obtain rational approximations to square roots only for integer arguments q and integer starting values x_{0}, and it is applied only once in each case (even though the result after one iteration is described as the “refined root,” possibly suggesting that it could be repeated). But from a modern perspective, the scheme clearly can be repeated, and in fact it converges very rapidly to sqrt(q), as we shall see below.
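The iteration above is easy to experiment with. Here is a sketch in Python (the function name is ours); exact rational arithmetic via Fraction mirrors the exact computations in the manuscript:

```python
from fractions import Fraction
import math

def bakhshali_sqrt(q, x0, iterations=1):
    """Bakhshali refinement of an approximation x0 to sqrt(q):
        a = (q - x^2) / (2x);   x' = x + a - a^2 / (2 (x + a)).
    With exact rationals, each pass roughly quadruples the number
    of correct digits (quartic convergence)."""
    x = Fraction(x0)
    for _ in range(iterations):
        a = (Fraction(q) - x * x) / (2 * x)
        x = x + a - a * a / (2 * (x + a))
    return x

# The manuscript's own example: one pass at sqrt(336009), starting from 579
r = bakhshali_sqrt(336009, 579)
print(r)         # 12516007433/21591875, the reduced form of 450576267588/777307500
print(float(r))  # agrees with sqrt(336009) to about 12 significant digits
```

Repeated passes (the iterations parameter, our extrapolation of the manuscript's single application) quickly exhaust double precision: three passes from x0 = 1 already give sqrt(2) to the limit of a Python float.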

Several explicit applications of this scheme are presented in the Bakhshali manuscript. One example is to find an accurate rational approximation to the solution of the quadratic equation 3 x^{2} / 4 + 3 x / 4 = 7000. The manuscript notes that x = (sqrt(336009) - 3) / 6, and then calculates an accurate value for sqrt(336009), starting with the approximation 579. The result obtained is

579 + 515225088 / 777307500 = 450576267588 / 777307500

This is 579.66283303325903841…, which agrees with sqrt(336009) = 579.66283303313487498… to 12-significant-digit accuracy. From a modern perspective, this happens because the Bakhshali square root algorithm is *quartically convergent* — each iteration approximately quadruples the number of correct digits in the result, provided that either exact rational arithmetic or sufficiently high precision floating-point arithmetic is used.

For additional details see the paper Ancient Indian square roots: An exercise in forensic paleo-mathematics.

Discoveries such as these underscore the regrettable legacy of a Eurocentric bias in traditional studies on the history of mathematics and science. Western scholars such as G. R. Kaye (mentioned above) quickly convinced themselves that artifacts such as the Bakhshali manuscript, which clearly contain sophisticated mathematical work, must have been derivative of western sources, e.g., Greek mathematics, and were unwilling to accept that groundbreaking work could have arisen elsewhere.

Most likely the redating of the Bakhshali manuscript is just the first step in rectifying these errors and granting full recognition to early mathematical and scientific work in India, China and the Middle East. It’s about time.

Here are the details of the talk, including the Abstract:

Conant Prize lecture

Eloy Ortiz Oakley, the Chancellor of the California Community College system, recently recommended that intermediate algebra no longer be required to earn an associate degree, except for students majoring in some field of mathematics, science or engineering (see also this Physics Today report):

College-level algebra is probably the greatest barrier for students — particularly first-generation students, students of color — obtaining a credential. … [I]f we know we’re disadvantaging large swaths of students who we need in the workforce, we have to question why. And is algebra really the only means we have to determine whether a student is going to be successful in their life?

Another Los Angeles Times report describes a “growing number” of educators who have been challenging the “gold standard” of mathematics education in the California community college system, which in 2009 raised its elementary algebra minimum standard. The article asks:

How necessary is intermediate algebra, a high school-level course on factoring trinomials, graphing exponential functions and memorizing formulas that most non-math or science students will rarely use in everyday life or for the rest of college?

Along this line, is it realistic to train students in low-income areas to be proficient in mathematics?

Recently this issue was discussed in a National Public Radio segment. It mentioned Bob Moses, a black civil rights activist, who started the Algebra Project about 30 years ago. His goal was to take students (mostly black) who score in the bottom tier on state mathematics tests, then double up on the subject for four years, preparing them to do college-level mathematics by the time they graduate from high school. Moses says that “this newfound competence is more than just empowering. It’s how these kids can avoid being second-class citizens when they finish high school, destined for low-wage, low-skill work on the second tier of an Information Age economy.”

So does mathematics training really pay off? Is it worth all the effort, time and trouble, both for students and for educators? In particular, does mathematics training pay off for blacks and other low-income minorities? A new report published by the National Bureau of Economic Research provides some answers (see also this synopsis).

In this study, Harvard scholar Joshua Goodman examined students whose high schools back in the 1980s changed their graduation requirements to require more mathematics. He found that 15 years after graduation, those African-American high school graduates who went to school when these changes were enacted earned on average 10% extra for every year of mathematics coursework.

Goodman noted that these students didn’t necessarily become rocket scientists, because the coursework was not at a particularly high level, but their familiarity with basic algebra and mathematics concepts allowed them to pursue and do well in jobs that required some level of quantitative and/or computational skill.

Other studies say basically the same thing. A 2014 study by Harvard scholars Shawn Cole, Anna Paulson and Gauri Kartini Shastry found that familiarity with mathematics helps in other aspects of life — those who finish more mathematics courses are less likely to experience foreclosure or become delinquent on credit card accounts.

The recent survey data from Glassdoor confirm that mathematics training is indispensable for high-paying careers. In their 2017 listing of the 25 highest-paying jobs in the U.S., 19 involve mathematical proficiency (according to a count by the present author). These jobs range from nuclear engineer and corporate controller to software engineering manager and data architect (a new and rapidly expanding occupational category).

One can argue about how much mathematics is required in various occupations, and about what percentage of the future economy will require strong mathematical proficiency.

But for anyone who has any aspiration to pursue a career in science or technology, mathematics is a must. As the present author and the late Jonathan Borwein argued in response to a claim by the eminent biologist E.O. Wilson, limited mathematical proficiency may have been passable for a scientist 30 or more years ago, but it most certainly is not acceptable today.

In particular, the recent explosion of data in almost every arena of scientific research and technology, and the growing importance of careful and statistically accurate analysis of data, places more rather than less emphasis on mathematical training. For example (to pick Wilson’s field of biology), genome sequencing technology has advanced almost beyond belief in the past 25 years. When the Human Genome Project was launched in 1990, many were skeptical that the project could complete the sequencing of a single human genome by 2005. Yet this was completed ahead of schedule, in 2003, at a cost of nearly one billion U.S. dollars. Today, this same feat can be done for as little as $1000, in a few hours or days. As a result, DNA sequencing is being extensively employed in virtually every corner of biology, including evolution and paleontology, and is also well on its way to becoming a staple of medical practice.

Other fields experiencing an explosion of data (and a corresponding explosion in demand for mathematically trained analysts) include astronomy, chemistry, computer science, cosmology, energy, environment, finance, geology, internet technology, machine learning, medicine, mobile technology, physics, robotics, social media and more.

So it is time to put these arguments against mathematical education to bed. They are wrong. Let’s join with educators in finding ways to improve mathematics education, not fight against it.

[Added 05 Aug 2017:] A new MarketWatch.com report, citing a recent analysis of 26 million U.S. online job postings, has found that roughly 50% of the jobs in the top income quartile (those paying $57,000 or more) require at least some computer coding skill. As always, a fairly strong mathematical background is required for any training or employment in computer software.

Continue reading Pi and the collapse of peer review

Many of us have heard of the Indiana pi episode, where a bill submitted to the Indiana legislature, written by one Edward J. Goodwin, claimed to have squared the circle, yielding a value of pi = 3.2. Although the bill passed the Indiana House, it narrowly failed in the Senate and never became law, due largely to the intervention of Prof. C.A. Waldo of Purdue University, who happened to be at the Indiana legislature on other business. The story is always good for a laugh to lighten up a dull mathematics lecture.

It is worth pointing out that Goodwin’s erroneous value was ruled out by mathematicians dating back to Archimedes, who showed that 223/71 < pi < 22/7, and by the third-century Chinese mathematician Liu Hui and the fifth-century Indian mathematician Aryabhata, both of whom found pi to at least four-digit accuracy. In the 1600s, Isaac Newton calculated pi to 15 digits, and since then numerous mathematicians have calculated pi to ever-greater accuracy. The most recent calculation of pi, by Peter Trueb, produced over 22 *trillion* decimal digits, carefully double-checked by an independent calculation.

The question of whether pi could be written as an algebraic formula or as the root of some algebraic equation with integer coefficients was finally settled by Carl Louis Ferdinand von Lindemann, who in 1882 proved that pi is transcendental. That was 135 years ago, 15 years prior to Goodwin’s claims!

Aren’t we glad we live in the 21st century, with iPhones, Teslas, CRISPR gene-editing technology, supercomputers that can analyze the most complex physical, biological and environmental phenomena, and an extensive international system of peer-reviewed journals producing an ever-growing body of reliable scientific knowledge? Surely incidents such as the Indiana pi episode are well behind us?

Not so fast! Consider the following papers, each of which was published within the past five years in what claim to be reputable, peer-reviewed journals:

Papers asserting that pi = 17 – 8 sqrt(3) = 3.1435935394…:

- Paper A1, in the IOSR Journal of Mathematics.
- Paper A2, in the International Journal of Mathematics and Statistics Invention.
- Paper A3, in the International Journal of Engineering Research and Applications.

Papers asserting that pi = (14 – sqrt (2))/4 = 3.1464466094…:

- Paper B1, in the IOSR Journal of Mathematics.
- Paper B2, also in the IOSR Journal of Mathematics.
- Paper B3, again in the IOSR Journal of Mathematics.
- Paper B4, in the International Journal of Mathematics and Statistics Invention.
- Paper B5, again in the International Journal of Mathematics and Statistics Invention.
- Paper B6, in the International Journal of Engineering Inventions.
- Paper B7, in the International Journal of Latest Trends in Engineering and Technology.
- Paper B8, in the IOSR Journal of Engineering.

This listing is by no means exhaustive — numerous additional items from peer-reviewed journals could be listed. Some additional variant values of pi (which thankfully have not yet appeared in peer-reviewed venues) include a claim that pi = 4 / sqrt(phi) = 3.1446055110…, where phi is the golden ratio = 1.6180339887…, and a separate claim that pi = 2 * sqrt (2 * (sqrt(5) – 1)) = 3.1446055110…
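These claimed values are easy to evaluate numerically. The short Python sketch below (an illustration by the present author; the variable names are not from any of the papers) computes each claimed value and its deviation from the true value of pi. Note that the last two claims, though written differently, are algebraically identical:

```python
import math

phi = (1 + math.sqrt(5)) / 2  # golden ratio

# Each entry: the claimed closed form for pi, evaluated numerically.
claims = {
    "17 - 8*sqrt(3)":          17 - 8 * math.sqrt(3),
    "(14 - sqrt(2))/4":        (14 - math.sqrt(2)) / 4,
    "4/sqrt(phi)":             4 / math.sqrt(phi),
    "2*sqrt(2*(sqrt(5) - 1))": 2 * math.sqrt(2 * (math.sqrt(5) - 1)),
}

for name, value in claims.items():
    print(f"{name:26s} = {value:.10f}   (off by {value - math.pi:+.4f})")
```

Every one of these values disagrees with pi in the third decimal place, far outside any conceivable measurement uncertainty.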

Along this line, the present author wonders whether the above authors have mobile phones. These phones contain the numerical value of pi (or values computed based on pi), in binary, typically to 7-digit accuracy, as part of their digital signal processing facility, and certainly would not work properly with a different value of pi. The same can be said about the GPS facility in most mobile phones, which relies critically on equations involving general and special relativity. For that matter, the electronics of mobile phones are engineered based on principles of quantum mechanics, some of which involve pi. If these authors truly believe pi to be in error, they should not use their phones (or any other high-tech device).

Before continuing, it is worth asking how one might justify the value of pi to a lay reader who is not a mathematician. Arguably the simplest and most direct method is Archimedes’ method, which computes the perimeters of circumscribed and inscribed polygons, beginning with a hexagon and then doubling the number of sides with each iteration. The scheme may be presented in our modern notation as follows: Set a1 = 2 * sqrt(3) and b1 = 3. Then iterate

a2 = 2 * a1 * b1 / (a1 + b1); b2 = sqrt (a2 * b1); a1 = a2; b1 = b2

At the end of each step, a1 is the perimeter of the circumscribed polygon, and b1 is the perimeter of the inscribed polygon, so that a1 > pi > b1. Successive values for 10 iterations are as follows:

0: 3.4641016151 > pi > 3.0000000000

1: 3.2153903091 > pi > 3.1058285412

2: 3.1596599420 > pi > 3.1326286132

3: 3.1460862151 > pi > 3.1393502030

4: 3.1427145996 > pi > 3.1410319508

5: 3.1418730499 > pi > 3.1414524722

6: 3.1416627470 > pi > 3.1415576079

7: 3.1416101766 > pi > 3.1415838921

8: 3.1415970343 > pi > 3.1415904632

9: 3.1415937487 > pi > 3.1415921059

10: 3.1415929273 > pi > 3.1415925166

Note that the two proposed values of pi mentioned in the papers above, namely 3.1464466094 and 3.1435935394, are excluded even by iteration 4. A similar calculation with areas of circumscribed and inscribed polygons, which is an even more direct and compelling demonstration, yields a similar result.
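The table above can be reproduced with a few lines of Python, as a direct transcription of the iteration given earlier (the variable names are the present sketch’s own choice):

```python
import math

# Archimedes' scheme for a circle of unit diameter:
#   a = perimeter of the circumscribed polygon (an upper bound on pi),
#   b = perimeter of the inscribed polygon (a lower bound on pi),
# starting from hexagons and doubling the side count each iteration.
a = 2 * math.sqrt(3.0)
b = 3.0
print(f" 0: {a:.10f} > pi > {b:.10f}")
for k in range(1, 11):
    a = 2 * a * b / (a + b)  # harmonic mean: new circumscribed perimeter
    b = math.sqrt(a * b)     # geometric mean of the new a and the old b
    print(f"{k:2d}: {a:.10f} > pi > {b:2.10f}")
```

Each pass roughly doubles the accuracy of the bounds, matching the table to the ten digits shown.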

In recent years mathematicians have discovered much more rapidly convergent schemes to compute pi. With the Borwein quartic iteration for pi, for example, each iteration approximately quadruples the number of correct digits. Just three iterations yield

3.14159265358979323846264338327950288419716939937510582097494459230781640628620899862803482534211706798214808651328230664709384460955058223172535940812848111745028410270193,

which agrees with the classical value of pi to 171 digits (i.e. to the precision shown).
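For readers who would like to replicate this computation, here is a sketch using only Python’s standard-library decimal module. The starting values and update rule are the standard published form of the Borwein quartic iteration, in which a_n converges quartically to 1/pi (they are not given in the text above, so treat the constants below as the present sketch’s addition):

```python
from decimal import Decimal, getcontext

getcontext().prec = 200          # work with 200 digits of precision

def fourth_root(x):
    # x**(1/4) computed as two successive square roots
    return x.sqrt().sqrt()

# Borwein quartic iteration: a_n -> 1/pi, quartically.
y = Decimal(2).sqrt() - 1        # y_0 = sqrt(2) - 1
a = 6 - 4 * Decimal(2).sqrt()    # a_0 = 6 - 4*sqrt(2)
for n in range(3):
    t = fourth_root(1 - y ** 4)
    y = (1 - t) / (1 + t)
    a = a * (1 + y) ** 4 - Decimal(2) ** (2 * n + 3) * y * (1 + y + y ** 2)

pi_approx = 1 / a                # correct to roughly 171 digits at this point
print(str(pi_approx)[:52])
```

Raising the precision and the iteration count extends the agreement; each additional pass approximately quadruples the number of correct digits.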

These and numerous other formulas for pi are listed in a collection of pi formulas assembled by the present author.

Peer review is the bedrock of modern science. Without rigorous peer review, by well-qualified reviewers, modern mathematics and science could not exist. Reviewers typically rate a submission on criteria such as:

1. Relevance to the journal or conference’s charter.
2. Clarity of exposition.
3. Objectivity of style.
4. Acknowledgement of prior work.
5. Freedom from plagiarism.
6. Theoretical background.
7. Validity of reasoning.
8. Experimental procedures and data analysis.
9. Statistical methods.
10. Conclusions.
11. Originality and importance.

Needless to say, the papers listed above should never have been approved for publication, since such material immediately violates item 7, not to mention items 3, 4, 6 and others. Keep in mind that no editor or reviewer with even an undergraduate degree in mathematics could possibly fail to notice the claim that the traditional value of pi is incorrect. Indeed, it is hard to imagine a comparable claim in other fields: A claim that Newton’s gravitational constant is incorrect? or that atoms and molecules do not really exist? or that evolution never happened? or that the earth is only a few thousand years old?

At the very least, even to an editor without advanced mathematical training, the assertion that the traditional value of pi is incorrect would certainly have to be considered an “extraordinary claim,” which, as Carl Sagan once reminded us, requires “extraordinary evidence.” And it is quite clear that none of the above papers have offered compelling arguments, presented in highly professional and rigorous mathematical language, to justify such a claim. Thus these manuscripts should have either been rejected outright, or else referred to well-qualified mathematicians for rigorous review.

Also, the fervor with which some of these authors address their work should raise a red flag. There is simply no place in modern mathematics and science for fervor in presenting research work (see item #3 in the list of peer review standards above), since any good scholar should be prepared to discard his or her pet theory, once it has been clearly refuted by more careful reasoning or experimentation. Such problems are part of the explanation for the persistence of young-earth creationism, for instance.

So how could such egregious errors of manuscript review have occurred? The present author is regrettably forced to “follow the money” (as the shadowy informant Deep Throat in the movie All the President’s Men recommended). Indeed, all of the journals listed above are on Beall’s list of pay-to-publish journals. Many of these journals have acquired a reputation for loose standards of publication, with only a superficial review, in return for charging authors a fee for having their papers published on the journal’s website.

Obviously the mathematical community, and in fact the entire scientific community, needs to tighten standards for peer review and to oppose any form of “peer-reviewed” publication that involves only a perfunctory review.

Along this line, some say that we should simply ignore papers that claim incorrect values of pi, or even all articles in pay-to-publish journals, in the same way that mathematicians typically ignore email messages from writers who claim to have proven the Riemann hypothesis, or that computer scientists typically ignore writers claiming to have proven that P = NP, or that physicists typically ignore writers claiming to have devised a “theory of everything.” But in that case many legitimate papers would be excluded. Indeed, it is a grave disservice to the quality papers published in these journals for the editors’ loose standards to allow poor quality and clearly erroneous manuscripts to also appear.

In any event, there is a real danger that as a growing number of papers are published with erroneous or questionable results, other papers may cite them, thus starting a food chain of scholarship that is, at its base, mistaken. Such errors may only be rooted out years after legitimate mathematicians and scientists have cited and applied their results, and then labored in vain to understand paradoxical conclusions.

So what will the future bring? Increasing confusion, resulting from growing numbers of questionable and false published results, many in presumably peer-reviewed sources? We all have a stake in this battle.

Continue reading French mathematician completes proof of tessellation conjecture

The honor goes to Michael Rao of the Ecole Normale Superieure de Lyon in France. He has completed a computer-assisted proof that finishes the inventory of pentagonal shapes that tile the plane, the last remaining holdout. He identified 371 scenarios for how corners of pentagons might fit together, and then checked each scenario by means of an algorithm. In the end, his computer program determined that the 15 known families of pentagonal tilings form a complete set.

A team of researchers led by Casey Mann of the University of Washington, Bothell had been working on a similar effort, and conceded that Rao had beaten them to the finish.

Rao’s effort must still be subjected to peer review, but Thomas Hales of the University of Pittsburgh, who recently proved the Kepler conjecture (that the supermarket scheme for stacking oranges is the optimal method) by means of a computer-assisted algorithm, has independently reconstructed much of Rao’s proof, and so researchers are relatively sure that Rao’s proof will hold up.

Additional details about Rao’s proof and the tessellation problem can be found in a very nice Quanta Magazine article by Natalie Wolchover.

Continue reading Are Hollywood stars qualified to comment on science?

Nowadays it is not at all unusual for Hollywood stars to lend their public celebrity status to endorse or promote some cause. For example, Angelina Jolie has lent her name and support to international efforts dealing with the refugee crisis. Sean Penn personally assisted efforts to deal with the Haiti earthquake crisis.

What’s more, some Hollywood stars and celebrities have bona fide scientific credentials and achievements. Perhaps the most notable example is Hedy Lamarr, an Austrian-American actress who starred in movies such as the 1938 film Algiers, directed by John Cromwell, and the 1949 film Samson and Delilah, directed by Cecil B. DeMille. She and her composer friend George Antheil were credited with inventing the first frequency-hopping radio device, whose signal is resistant to tracking and jamming. It was technologically difficult to produce the device at the time, but updated versions were later deployed by the U.S. Navy. In 2014, Lamarr and Antheil were posthumously inducted into the U.S. National Inventors Hall of Fame.

Some contemporary Hollywood figures with scientific credentials include actress Mayim Bialik, who received a PhD in neuroscience from UCLA, actor-director Ben Miller, who studied for a PhD in solid state physics from Cambridge, and singer-songwriter Brian May, who received a PhD in astrophysics from Imperial College London.

Several Hollywood figures have lent their support to various scientific causes, notably global warming. Perhaps the best example here is Leonardo DiCaprio, who played a role in the documentary The 11th Hour, and, at the 2007 Oscar ceremony, appeared with former U.S. Vice President Al Gore to announce new environmental policies for the Oscar awards.

Others who have been outspoken on global warming include Bjork, Emma Watson, Pharrell Williams, Emma Thompson, Akon and Arnold Schwarzenegger.

Unfortunately, in many cases Hollywood figures are clearly out of their league, and have promoted causes or made declarations that can only be described as pseudoscience. Here are some notable examples:

- Oprah Winfrey: Oprah Winfrey is widely regarded as one of the most influential women in the world; until recently, when she finally ended her long-running TV show, she had 40 million regular viewers. She regularly features guests who promote highly questionable “alternative” health therapies, ranging from thyroid “remedies” to $30,000 “Thermage” machines, which the promoters claim use radio waves to smooth wrinkles and tighten skin. Among her many guests, she featured Jenny McCarthy, who claimed that MMR vaccination caused her son’s autism (more on this in the next item), and thus lent considerable impetus to the anti-vaccination movement.
- Jenny McCarthy: Ms. McCarthy publicly blamed her child’s autism on his MMR vaccination and has played a leading role in the anti-vaccination movement. This is in spite of the fact that the one (and only) study claiming a link was later thoroughly debunked, and numerous other in-depth studies have found no link whatsoever. Partly as a result of McCarthy’s activism, in 2015 the U.S. suffered its worst measles outbreak in 20 years. Similar outbreaks have been reported in Europe.
- Suzanne Somers: Ms. Somers promotes numerous highly questionable health practices. She suggests daily injections of estrogen for women (despite well-known health risks); taking 60 vitamins and supplements per day; and wearing “nanotechnology patches” to help sleep, lose weight and promote “overall detoxification.”
- Ben Stein: Filmmaker Ben Stein produced the movie Expelled: No Intelligence Allowed, which dismissed evolution as a myth, alleged that countering voices have been persecuted, and even argued that Darwin’s theory paved the path to the Holocaust. Clips from scientists were shown out of context, and a very one-sided view of several other issues and events was presented.
- Gwyneth Paltrow: Ms. Paltrow has a long history of advocating numerous highly questionable health products, often promoted through her Goop brand. Her latest item is skin stickers, which promise to rebalance the energy frequency in our bodies. Goop also claimed that these stickers employed carbon fiber materials used in NASA space suits. Needless to say, “rebalancing the energy frequency in our bodies” is utter scientific nonsense, and even the claim about NASA is false; NASA quickly denied it. Paltrow has also campaigned against genetically modified foods, in spite of the fact that a recent in-depth report by the National Academy of Sciences found “no differences that would implicate a higher risk to human health from eating GE foods than from eating their non-GE counterparts.”

All of this raises the question of why the public places so much trust in Hollywood figures. Surely it is no secret that hardly any of these people are qualified to comment on scientific matters. Part of the reason, sadly, is the overall scientific illiteracy of the public.

But even here, scientists must share part of the blame. For far too long, researchers have focused exclusively on their studies, avoiding public interaction and involvement. The events of recent years should make it very clear that this approach is not working. Instead, scientists, mathematicians and others in technical fields must engage in dialogue with the public, writing articles and books targeted to lay readers, and also seeking opportunities to engage with persons of other disciplines, including the arts and humanities.

After all, we live in a worldwide society that is more dependent on science and technology than ever before. Thus it behooves everyone to become more knowledgeable about science and its implications for society, and for scientific researchers to better share their world with the public, not just research findings but also the excitement, wonder and awe of the research enterprise. We have only our ignorance to lose.

Continue reading Carlo Rovelli’s “Reality Is Not What It Seems”

Back in 1959, the influential British scholar C. P. Snow gave a lecture entitled The two cultures and the scientific revolution. In this discourse Snow warned of a widening divide between the scientific world on one hand and the humanities on the other: “This polarization is a sheer loss to us all.” Snow wrote,

A good many times I have been present at gatherings of people who, by the standards of the traditional culture, are thought highly educated and who have with considerable gusto been expressing their incredulity at the illiteracy of scientists. Once or twice I have been provoked and have asked the company how many of them could describe the Second Law of Thermodynamics. The response was cold: it was also negative. Yet I was asking something which is about the scientific equivalent of: “Have you read a work of Shakespeare’s?”

I now believe that if I had asked an even simpler question — such as, What do you mean by mass, or acceleration, which is the scientific equivalent of saying, “Can you read?” — not more than one in ten of the highly educated would have felt that I was speaking the same language. So the great edifice of modern physics goes up, and the majority of the cleverest people in the western world have about as much insight into it as their neolithic ancestors would have had.

So what can be done to bridge this unfortunate and destructive divide? One hopeful sign of progress is that more and more accomplished scientists and mathematicians are taking up the challenge to communicate the excitement of their field to the wider public.

Many of us remember Carl Sagan’s Cosmos TV series, which introduced modern science in general and planetary science in particular to a wide audience in a very appealing format, first broadcast on the U.S. Public Broadcasting Service in 1980. More recently, Neil deGrasse Tyson narrated a new version of Cosmos, which has been similarly successful.

In the general arena of mathematics, physics, cosmology and astronomy, perhaps the most successful recent expositions are Brian Greene’s books The Elegant Universe and The Fabric of the Cosmos, which again were developed into a relatively successful TV series that reached millions. Others with a background in this general arena, who have written successfully for the larger public, include John Barrow, Paul Davies, Alan Guth, Lee Smolin, Leonard Susskind, Lisa Randall and Max Tegmark.

The latest entry in this genre is Carlo Rovelli’s Reality Is Not What It Seems: The Journey to Quantum Gravity. In this book, Rovelli attempts to lay the philosophical and historical foundations for recent research in physics in general, and loop quantum gravity in particular.

Rovelli starts by telling of the ancient Greek scholar Leucippus and his disciple Democritus, who was later described by the Roman scholar Seneca as “the most subtle of the Ancients.” Democritus, who lived about 450 BCE, was one of the first to argue that there had to be “atoms” that comprise all material things. Democritus observed that matter could not be continuous and infinitely divisible, because (as Aristotle later reported the argument) no matter how many of these presumably infinitely small pieces were woven together they would still have no extension.

With the development of modern chemistry in the 18th and 19th century, most scientists were convinced that atoms had to be real, but some still demurred, citing the lack of definitive evidence. In 1897, for example, Ernst Mach declared, “I do not believe that atoms exist!” The first definitive proof of the “atomic hypothesis” was provided by an obscure, rebellious 25-year-old working at the Swiss patent office, namely Albert Einstein. Einstein developed a theory to explain Brownian motion, and was able to calculate the size of atoms and molecules for the first time.

Rovelli then recounts how physicists fretted in the late 19th century over a nagging discrepancy between Newton’s laws of motion and the laws of electromagnetic fields, as discovered by Faraday and mathematized by Maxwell. Maxwell’s theory led to the derivation of the speed of electromagnetic waves, but with respect to what? Again it was Einstein, who in 1905 showed that by abandoning the notion of absolute time, the two theories could be brought into agreement (except for gravitation, which had to wait for his general theory of relativity in 1915).

In another reference to ancient philosophy and literature, Rovelli points out that Dante, in his *Paradiso*, appears to have comprehended the basic notion that the space around us is a 3-sphere, as Einstein later deduced.

Rovelli moves on to more modern physics, including the fundamentally discrete nature of all things, e.g., Einstein’s finding that light consists of discrete quanta, and the development of quantum mechanics in the 20th century by Niels Bohr, Werner Heisenberg, Paul Dirac and others. Rovelli then turns his attention to the lingering problem of how to reconcile the laws of quantum mechanics, which govern the very small with remarkable precision, with those of general relativity, which govern the large-scale structure of space-time.

Here Rovelli again hearkens back to Democritus and argues that the fabric of space-time must itself be granular, and in fact given by a networked grid — a “spin network.” These are the “atoms” of space and time, and in fact it follows that our perception of time flowing uniformly forward is but an illusion at the macro scale. Rovelli also points out how loop quantum gravity also suggests that the Big Bang might be a misnomer — our universe may have arisen in a “Big Bounce” from an earlier universe.

Rovelli finally addresses the question of empirical confirmation. He acknowledges that definitive tests of loop quantum gravity are still lacking, but he points out that supersymmetry, which is thought to be an underpinning of string theory (the other, better known theory of quantum gravity), has suffered severe setbacks, because not one of the hypothesized supersymmetric particles has appeared in the latest experiments at the Large Hadron Collider.

All of this is described in a very lucid manner — Rovelli and his translators clearly have a remarkable talent for this type of exposition. And the exposition is accompanied rather effectively by numerous graphics that illustrate the increasingly subtle concepts being presented.

In spite of the maxim that every equation halves a book’s sales, Rovelli does not hold back: he inserts, here and there, the real equations of the theories he is discussing, as much for their beauty as anything else. These include the equations of general relativity and of loop quantum gravity, confirming the observation that if a theory can’t be summarized by equations that fit on a T-shirt, then something must be wrong.

Rovelli’s book was reviewed by physicist Lisa Randall in the New York Times. Randall complimented Rovelli on his attempt to bring recent research in physics to a broader audience, but she faulted him on certain details. For example, she noted that Rovelli had given the ratio of the size of the universe (the largest dimension) to the Planck scale (the smallest dimension) as 10^{120}, whereas the actual ratio is 10^{60}. In a response, Rovelli acknowledged the error, although he pointed out that in loop quantum gravity it is most natural to compare areas, where the ratio is indeed 10^{120}.

Secondly, Randall criticized Rovelli for presenting a theory (in the context of the Big Bounce) that “isn’t sufficiently well developed to do the necessary calculations to establish such a claim.” That is a valid criticism, but oddly this same general criticism could be leveled at Randall’s own recent book Dark Matter and the Dinosaurs.

Perhaps the most significant criticism is Randall’s observation that perhaps Rovelli has tried too hard to connect modern physics to the writings of the ancients: “Ideas about relativity or gravity in ancient times weren’t the same as Einstein’s theory.” The present blogger has to agree with this general assessment — it is easy to over-romanticize the past.

In spite of these flaws, Rovelli has written a marvelous book, definitely one to place on your stack (or your iPad or Kindle) for summer reading. The present blogger looks forward to additional works by this talented writer.
