Does mathematical training pay off in the long run?

Eloy Ortiz Oakley, the Chancellor of the California Community College system, recently recommended that intermediate algebra no longer be required to earn an associate degree, except for students majoring in a field of mathematics, science or engineering (see also this Physics Today report):

College-level algebra is probably the greatest barrier for students — particularly first-generation students, students of color — obtaining a credential. … [I]f we know we’re disadvantaging large swaths of students who we need in the workforce, we have to question why. And is algebra really the only means we have to determine whether a student is going to be successful in their life?

Another Los Angeles Times report describes a “growing number” of educators who have been challenging the “gold standard” of mathematics education in the California community college system, which in 2009 raised its elementary algebra minimum standard. The article asks

How necessary is intermediate algebra, a high school-level course on factoring trinomials, graphing exponential functions and memorizing formulas that most non-math or science students will rarely use in everyday life or for the rest of college?

Along this line, is it realistic to train students in low-income areas to be proficient in mathematics?

Recently this issue was discussed in a National Public Radio segment. It mentioned Bob Moses, a black civil rights activist who started the Algebra Project about 30 years ago. His goal was to take students (mostly black) who score in the bottom tier on state mathematics tests and have them double up on the subject for four years, preparing them to do college-level mathematics by the time they graduate from high school. Moses says that “this newfound competence is more than just empowering. It’s how these kids can avoid being second-class citizens when they finish high school, destined for low-wage, low-skill work on the second tier of an Information Age economy.”

So does mathematics training really pay off? Is it worth all the effort, time and trouble, both for students and for educators? In particular, does mathematics training pay off for blacks and other low-income minorities? A new report published by the National Bureau of Economic Research provides some answers (see also this synopsis).

In this study, Harvard scholar Joshua Goodman examined students whose high schools, back in the 1980s, changed their graduation requirements to require more mathematics. He found that, 15 years after graduation, African-American graduates who attended school after these changes were enacted earned on average 10% more for each additional year of mathematics coursework.

Goodman noted that these students didn’t necessarily become rocket scientists, because the coursework was not at a particularly high level, but their familiarity with basic algebra and mathematics concepts allowed them to pursue and do well in jobs that required some level of quantitative and/or computational skill.

Other studies say basically the same thing. A 2014 study by Harvard scholars Shawn Cole, Anna Paulson and Gauri Kartini Shastry found that familiarity with mathematics helps in other aspects of life — those who finish more mathematics courses are less likely to experience foreclosure or become delinquent on credit card accounts.

The recent survey data from Glassdoor confirm that mathematics training is indispensable for high-paying careers. Of the 25 jobs in Glassdoor's 2017 listing of the highest-paying jobs in the U.S., 19 involve mathematical proficiency (according to a count by the present author). These jobs range from nuclear engineer and corporate controller to software engineering manager and data architect (a new and rapidly expanding occupational category).

One can argue about how much mathematics is required in various occupations, and what percentage of the future economy will require strong mathematical proficiency.

But for anyone who has any aspiration to pursue a career in science or technology, mathematics is a must. As the present author and the late Jonathan Borwein argued in response to a claim by the eminent biologist E.O. Wilson, limited mathematical proficiency may have been passable for a scientist 30 or more years ago, but it most certainly is not acceptable today.

In particular, the recent explosion of data in almost every arena of scientific research and technology, and the growing importance of careful and statistically sound analysis of that data, places more rather than less emphasis on mathematical training. For example (to pick Wilson’s field of biology), genome sequencing technology has advanced almost beyond belief in the past 25 years. When the Human Genome Project was launched in 1990, many were skeptical that the project could complete the sequencing of a single human genome by 2005. Yet the project was declared complete ahead of schedule, in 2003, at a cost of nearly one billion U.S. dollars. Today the same feat can be done for as little as $1,000, in a few hours or days. As a result, DNA sequencing is being extensively employed in virtually every corner of biology, including evolution and paleontology, and is well on its way to becoming a staple of medical practice.

Other fields experiencing an explosion of data (and a corresponding explosion in demand for mathematically trained analysts) include astronomy, chemistry, computer science, cosmology, energy, environment, finance, geology, internet technology, machine learning, medicine, mobile technology, physics, robotics, social media and more.

So it is time to put these arguments against mathematical education to bed. They are wrong. Let’s join with educators in finding ways to improve mathematics education, not fight against it.

[Added 05 Aug 2017:] A new MarketWatch.com report, citing a recent analysis of 26 million U.S. online job postings, has found that roughly 50% of the jobs in the top income quartile (those paying $57,000 or more) require at least some computer coding skill. As always, a fairly strong mathematical background is required for any training or employment in computer software.

Pi and the collapse of peer review

Many of us have heard of the Indiana pi episode, where a bill submitted to the Indiana legislature, written by one Edward J. Goodwin, claimed to have squared the circle, yielding a value of pi = 3.2. Although the bill passed the Indiana House, it narrowly failed in the Senate and never became law, due largely to the intervention of Prof. C.A. Waldo of Purdue University, who happened to be at the Indiana legislature on other business. The story is always good for a laugh to lighten up a dull mathematics lecture.

It is worth pointing out that Goodwin’s erroneous value was ruled out by mathematicians going back to Archimedes, who showed that 223/71 < pi < 22/7, and by the third-century Chinese mathematician Liu Hui and the fifth-century Indian mathematician Aryabhata, both of whom computed pi to at least four-digit accuracy. In the 1600s, Isaac Newton calculated pi to 15 digits, and since then numerous mathematicians have calculated pi to ever-greater accuracy. The most recent calculation, by Peter Trueb, produced over 22 *trillion* decimal digits, carefully double-checked by an independent calculation.

The question of whether pi could be written as an algebraic formula or as the root of some algebraic equation with integer coefficients was finally settled by Carl Louis Ferdinand von Lindemann, who in 1882 proved that pi is transcendental. That was 135 years ago, 15 years prior to Goodwin’s claims!

Aren’t we glad we live in the 21st century, with iPhones, Teslas, CRISPR gene-editing technology, and supercomputers that can analyze the most complex physical, biological and environmental phenomena, and where our extensive international system of peer-reviewed journals produces an ever-growing body of reliable scientific knowledge? Surely incidents such as the Indiana pi episode are well behind us?

Not so fast! Consider the following papers, each of which was published within the past five years in what claim to be reputable, peer-reviewed journals:

Papers asserting that pi = 17 – 8 sqrt(3) = 3.1435935394…:

- Paper A1, in the IOSR Journal of Mathematics.
- Paper A2, in the International Journal of Mathematics and Statistics Invention.
- Paper A3, in the International Journal of Engineering Research and Applications.

Papers asserting that pi = (14 – sqrt (2))/4 = 3.1464466094…:

- Paper B1, in the IOSR Journal of Mathematics.
- Paper B2, also in the IOSR Journal of Mathematics.
- Paper B3, again in the IOSR Journal of Mathematics.
- Paper B4, in the International Journal of Mathematics and Statistics Invention.
- Paper B5, again in the International Journal of Mathematics and Statistics Invention.
- Paper B6, in the International Journal of Engineering Inventions.
- Paper B7, in the International Journal of Latest Trends in Engineering and Technology.
- Paper B8, in the IOSR Journal of Engineering.

This listing is by no means exhaustive — numerous additional items from peer-reviewed journals could be listed. Some additional variant values of pi (which thankfully have not yet appeared in peer-reviewed venues) include a claim that pi = 4 / sqrt(phi) = 3.1446055110…, where phi is the golden ratio = 1.6180339887…, and a separate claim that pi = 2 * sqrt (2 * (sqrt(5) – 1)) = 3.1446055110…
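These claimed values are easy to rule out numerically. A quick sketch in Python (standard math module only) evaluates all four expressions:

```python
import math

# Evaluate the variant "values" of pi claimed in the papers above,
# together with the two golden-ratio-based claims.
phi = (1 + math.sqrt(5)) / 2          # golden ratio = 1.6180339887...
candidates = {
    "17 - 8*sqrt(3)":          17 - 8 * math.sqrt(3),
    "(14 - sqrt(2))/4":        (14 - math.sqrt(2)) / 4,
    "4/sqrt(phi)":             4 / math.sqrt(phi),
    "2*sqrt(2*(sqrt(5)-1))":   2 * math.sqrt(2 * (math.sqrt(5) - 1)),
}
for name, value in candidates.items():
    # Each claimed value misses pi already in the third decimal place.
    print(f"{name:24s} = {value:.10f}   (error {value - math.pi:+.4f})")
```

Each claimed value differs from the true value of pi in the third decimal place, far outside any modern (or even ancient) uncertainty.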

Along this line, the present author wonders whether the above authors have mobile phones. These phones contain the numerical value of pi (or values computed from pi), in binary, typically to 7-digit accuracy, as part of their digital signal processing facility, and certainly would not work properly with a different value of pi. The same can be said of the GPS facility in most mobile phones, which relies critically on equations involving general and special relativity. For that matter, the electronics of mobile phones are engineered based on principles of quantum mechanics, some of which involve pi. If these authors truly believe pi to be in error, they should not use their phones (or any other high-tech device).

Before continuing, it is worth asking how one might justify the value of pi to a lay reader who is not a mathematician. Arguably the simplest and most direct method is Archimedes’ method, which computes the perimeters of circumscribed and inscribed polygons, beginning with a hexagon and then doubling the number of sides with each iteration. The scheme may be presented in our modern notation as follows: Set a1 = 2 * sqrt(3) and b1 = 3. Then iterate

a2 = 2 * a1 * b1 / (a1 + b1); b2 = sqrt (a2 * b1); a1 = a2; b1 = b2

At the end of each step, a1 is the perimeter of the circumscribed polygon, and b1 is the perimeter of the inscribed polygon, so that a1 > pi > b1. Successive values for 10 iterations are as follows:

0: 3.4641016151 > pi > 3.0000000000

1: 3.2153903091 > pi > 3.1058285412

2: 3.1596599420 > pi > 3.1326286132

3: 3.1460862151 > pi > 3.1393502030

4: 3.1427145996 > pi > 3.1410319508

5: 3.1418730499 > pi > 3.1414524722

6: 3.1416627470 > pi > 3.1415576079

7: 3.1416101766 > pi > 3.1415838921

8: 3.1415970343 > pi > 3.1415904632

9: 3.1415937487 > pi > 3.1415921059

10: 3.1415929273 > pi > 3.1415925166

Note that the two proposed values of pi mentioned in the papers above, namely 3.1464466094 and 3.1435935394, are excluded even by iteration 4. A similar calculation with areas of circumscribed and inscribed polygons, which is an even more direct and compelling demonstration, yields a similar result.
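A minimal sketch of Archimedes' scheme in Python (using only the standard math module) reproduces the table of bounds above:

```python
import math

# Archimedes' iteration: a is the perimeter of the circumscribed polygon,
# b the perimeter of the inscribed polygon; the number of sides doubles
# at each step, and a > pi > b throughout.
a, b = 2 * math.sqrt(3), 3.0          # starting hexagon: a1 = 2*sqrt(3), b1 = 3
print(f" 0: {a:.10f} > pi > {b:.10f}")
for k in range(1, 11):
    a = 2 * a * b / (a + b)           # harmonic mean (new circumscribed perimeter)
    b = math.sqrt(a * b)              # geometric mean (new inscribed perimeter)
    print(f"{k:2d}: {a:.10f} > pi > {b:.10f}")
```

After ten iterations the bounds 3.1415929273 > pi > 3.1415925166 comfortably exclude both of the claimed values.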

In recent years mathematicians have discovered much more rapidly convergent schemes to compute pi. With the Borwein quartic iteration for pi, for example, each iteration approximately quadruples the number of correct digits. Just three iterations yield

3.14159265358979323846264338327950288419716939937510582097494459230781640628620899862803482534211706798214808651328230664709384460955058223172535940812848111745028410270193,

which agrees with the classical value of pi to 171 digits (i.e. to the precision shown).
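As an illustration (a sketch, not the Borweins' original presentation), the quartic iteration can be carried out with Python's standard decimal module; a fourth root is computed as two successive square roots, since Decimal supplies only sqrt. The recursion below (y0 = sqrt(2) - 1, a0 = 6 - 4 sqrt(2), with a_k converging to 1/pi) is the standard formulation:

```python
from decimal import Decimal, getcontext

getcontext().prec = 200  # work with 200 significant digits

def borwein_quartic(iterations=3):
    """Borwein quartic iteration: returns an approximation to pi whose
    number of correct digits roughly quadruples with each iteration."""
    two = Decimal(2)
    y = two.sqrt() - 1           # y0 = sqrt(2) - 1
    a = 6 - 4 * two.sqrt()       # a0 = 6 - 4*sqrt(2)
    for k in range(iterations):
        t = (1 - y**4).sqrt().sqrt()   # (1 - y^4)^(1/4) via two square roots
        y = (1 - t) / (1 + t)
        a = a * (1 + y)**4 - two**(2*k + 3) * y * (1 + y + y*y)
    return 1 / a                 # a_k -> 1/pi, so invert

print(str(borwein_quartic(3))[:60])
```

Three iterations reproduce the 171-digit value quoted above (up to the 200-digit working precision).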

These and numerous other formulas for pi are listed in a collection of pi formulas assembled by the present author.

Peer review is the bedrock of modern science. Without rigorous peer review, by well-qualified reviewers, modern mathematics and science could not exist. Reviewers typically rate a submission on criteria such as:

1. Relevance to the journal or conference’s charter.
2. Clarity of exposition.
3. Objectivity of style.
4. Acknowledgement of prior work.
5. Freedom from plagiarism.
6. Theoretical background.
7. Validity of reasoning.
8. Experimental procedures and data analysis.
9. Statistical methods.
10. Conclusions.
11. Originality and importance.

Needless to say, the papers listed above should never have been approved for publication, since such material immediately violates item 7, not to mention items 3, 4, 6 and others. Keep in mind that no editor or reviewer with even an undergraduate degree in mathematics could possibly fail to notice the claim that the traditional value of pi is incorrect. Indeed, it is hard to imagine a comparable claim in other fields: A claim that Newton’s gravitational constant is incorrect? or that atoms and molecules do not really exist? or that evolution never happened? or that the earth is only a few thousand years old?

At the very least, even to an editor without advanced mathematical training, the assertion that the traditional value of pi is incorrect would certainly have to be considered an “extraordinary claim,” which, as Carl Sagan once reminded us, requires “extraordinary evidence.” And it is quite clear that none of the above papers have offered compelling arguments, presented in highly professional and rigorous mathematical language, to justify such a claim. Thus these manuscripts should have either been rejected outright, or else referred to well-qualified mathematicians for rigorous review.

Also, the fervor with which some of these authors address their work should raise a red flag. There is simply no place in modern mathematics and science for fervor in presenting research work (see item #3 in the list of peer review standards above), since any good scholar should be prepared to discard his or her pet theory, once it has been clearly refuted by more careful reasoning or experimentation. Such problems are part of the explanation for the persistence of young-earth creationism, for instance.

So how could such egregious errors of manuscript review have occurred? The present author is regrettably forced to “follow the money” (as the shadowy informant Deep Throat in the movie All the President’s Men recommended). Indeed, all of the journals listed above appear on Beall’s list of pay-to-publish journals. Many of these journals have acquired a reputation for loose standards of publication, providing only a superficial review in return for charging authors a fee to have their papers published on the journal’s website.

Obviously the mathematical community, and in fact the entire scientific community, needs to tighten standards for peer review and to oppose any form of “peer-reviewed” publication that involves only a perfunctory review.

Along this line, some say that we should simply ignore papers that claim incorrect values of pi, or even all articles in pay-to-publish journals, in the same way that mathematicians typically ignore email messages from writers who claim to have proven the Riemann hypothesis, or that computer scientists typically ignore writers claiming to have proven that P = NP, or that physicists typically ignore writers claiming to have devised a “theory of everything.” But in that case many legitimate papers would be excluded. Indeed, it is a grave disservice to the quality papers published in these journals for the editors’ loose standards to allow poor quality and clearly erroneous manuscripts to also appear.

In any event, there is a real danger that as a growing number of papers are published with erroneous or questionable results, other papers may cite them, thus starting a food chain of scholarship that is, at its base, mistaken. Such errors may only be rooted out years after legitimate mathematicians and scientists have cited and applied their results, and then labored in vain to understand paradoxical conclusions.

So what will the future bring? Increasing confusion, resulting from growing numbers of questionable and false published results, many in presumably peer-reviewed sources? We all have a stake in this battle.

French mathematician completes proof of tessellation conjecture

The honor goes to Michael Rao of the Ecole Normale Superieure de Lyon in France. He has completed a computer-assisted proof that finishes the inventory of pentagonal shapes that tile the plane, the last remaining holdout. He identified 371 scenarios for how corners of pentagons might fit together, and then checked each scenario by means of an algorithm. In the end, his computer program determined that the 15 known families of pentagonal tilings form a complete set.

A team of researchers led by Casey Mann of the University of Washington, Bothell, had been working on a similar effort, and conceded that Rao had beaten them to the finish.

Rao’s effort must still be subjected to peer review, but Thomas Hales of the University of Pittsburgh, who recently proved the Kepler conjecture (that the supermarket scheme for stacking oranges is the optimal method) by means of a computer-assisted algorithm, has independently reconstructed much of Rao’s proof, and so researchers are relatively sure that Rao’s proof will hold up.

Additional details about Rao’s proof and the tessellation problem can be found in a very nice Quanta Magazine article by Natalie Wolchover.

Are Hollywood stars qualified to comment on science?

Nowadays it is not at all unusual for Hollywood stars to lend their public celebrity status to endorse or promote some cause. For example, Angelina Jolie has lent her name and support to international efforts dealing with the refugee crisis. Sean Penn personally assisted efforts to deal with the Haiti earthquake crisis.

What’s more, some Hollywood stars and celebrities have bona fide scientific credentials and achievements. Perhaps the most notable example is Hedy Lamarr, an Austrian-American actress who starred in movies such as the 1938 film Algiers, directed by John Cromwell, and the 1949 film Samson and Delilah, directed by Cecil B. DeMille. She and her musician friend George Antheil were credited with inventing the first radio device with a frequency-hopping signal that cannot be tracked or jammed. It was technologically difficult to produce the item at the time, but updated versions were later deployed by the U.S. Navy. In 1997, Lamarr and Antheil were posthumously inducted into the U.S. National Inventors Hall of Fame.

Some contemporary Hollywood figures with scientific credentials include actress Mayim Bialik, who received a PhD in neuroscience from UCLA, actor-director Ben Miller, who studied for a PhD in solid state physics at Cambridge, and singer-songwriter Brian May, who received a PhD in astrophysics from Imperial College London.

Several Hollywood figures have lent their support to various scientific causes, notably global warming. Perhaps the best example here is Leonardo DiCaprio, who appeared in the documentary The 11th Hour and, at the 2007 Oscar ceremony, joined former U.S. Vice President Al Gore to announce new environmental policies for the Oscar awards.

Others who have been outspoken on global warming include Bjork, Emma Watson, Pharrell Williams, Emma Thompson, Akon and Arnold Schwarzenegger.

Unfortunately, in many cases Hollywood figures are clearly out of their league, and have promoted causes or made declarations that can only be described as pseudoscience. Here are some notable examples:

- Oprah Winfrey: Oprah Winfrey is widely regarded as one of the most influential women in the world; until recently, when she finally ended her weekly TV show, she had 40 million regular viewers. She regularly featured guests who promote highly questionable “alternative” health therapies, ranging from thyroid “remedies” to US$30,000 “Thermage” machines, which the promoters claim use radio waves to smooth wrinkles and tighten skin. Among her many guests was Jenny McCarthy, who claimed that MMR vaccination caused her son’s autism (more on this in the next item), and thus lent considerable impetus to the anti-vaccination movement.
- Jenny McCarthy: Ms. McCarthy publicly blamed her child’s autism on his MMR vaccination and has played a leading role in the anti-vaccination movement. This is in spite of the fact that the one (and only) study claiming a link was later thoroughly debunked, and numerous other in-depth studies have found no link whatsoever. Partly as a result of McCarthy’s activism, in 2015 the U.S. suffered its worst measles outbreak in 20 years. Similar outbreaks have been reported in Europe.
- Suzanne Somers: Ms. Somers promotes numerous highly questionable health practices. She suggests daily injections of estrogen for women (despite well-known health risks), taking 60 vitamins and supplements per day, and wearing “nanotechnology patches” to help sleep, lose weight and promote “overall detoxification.”
- Ben Stein: Filmmaker Ben Stein produced the movie Expelled: No Intelligence Allowed, which dismissed evolution as a myth, alleged that countering voices have been persecuted, and even argued that Darwin’s theory paved the path to the Holocaust. Clips from scientists were shown out of context, and a very one-sided view of several other issues and events was presented.
- Gwyneth Paltrow: Ms. Paltrow has a long history of advocating numerous highly questionable health products, often promoted through her Goop brand. Her latest item is skin stickers, which promise to “rebalance the energy frequency in our bodies.” Goop also claimed that these stickers employ carbon fiber materials used in NASA space suits. Needless to say, “rebalancing the energy frequency in our bodies” is utter scientific nonsense, and even the claim about NASA is false, quickly denied by the agency. Paltrow has also campaigned against genetically modified foods, in spite of the fact that a recent in-depth report by the National Academy of Sciences found “no differences that would implicate a higher risk to human health from eating GE foods than from eating their non-GE counterparts.”

All of this raises the question of why the public places so much trust in Hollywood figures. Surely it is no secret that hardly any of these people are qualified to comment on scientific matters. Part of the reason, sadly, is the overall scientific illiteracy of the public.

But even here, scientists must share part of the blame. For far too long, researchers have focused exclusively on their studies, avoiding public interaction and involvement. The events of recent years should make it very clear that this approach is not working. Instead, scientists, mathematicians and others in technical fields must engage in dialogue with the public, writing articles and books targeted to lay readers, and also seeking opportunities to engage with persons of other disciplines, including the arts and humanities.

After all, we live in a worldwide society that is more dependent on science and technology than ever before. Thus it behooves everyone to become more knowledgeable about science and its implications for society, and for scientific researchers to better share their world with the public, not just research findings but also the excitement, wonder and awe of the research enterprise. We have only our ignorance to lose.

Carlo Rovelli’s “Reality Is Not What It Seems”

Back in 1959, the influential British scholar C. P. Snow gave a lecture entitled The Two Cultures and the Scientific Revolution. In this discourse Snow warned of a widening divide between the scientific world on the one hand and the humanities on the other: “This polarization is a sheer loss to us all.” Snow wrote:

A good many times I have been present at gatherings of people who, by the standards of the traditional culture, are thought highly educated and who have with considerable gusto been expressing their incredulity at the illiteracy of scientists. Once or twice I have been provoked and have asked the company how many of them could describe the Second Law of Thermodynamics. The response was cold: it was also negative. Yet I was asking something which is about the scientific equivalent of: “Have you read a work of Shakespeare’s?”

I now believe that if I had asked an even simpler question — such as, What do you mean by mass, or acceleration, which is the scientific equivalent of saying, “Can you read?” — not more than one in ten of the highly educated would have felt that I was speaking the same language. So the great edifice of modern physics goes up, and the majority of the cleverest people in the western world have about as much insight into it as their neolithic ancestors would have had.

So what can be done to bridge this unfortunate and destructive divide? One hopeful sign of progress is that more and more accomplished scientists and mathematicians are taking up the challenge to communicate the excitement of their field to the wider public.

Many of us remember Carl Sagan’s Cosmos TV series, which introduced modern science in general and planetary science in particular to a wide audience in a very appealing format, first broadcast on the U.S. Public Broadcasting Service in 1980. More recently, Neil deGrasse Tyson narrated a new version of Cosmos, which has been similarly successful.

In the general arena of mathematics, physics, cosmology and astronomy, perhaps the most successful recent expositions are Brian Greene’s books The Elegant Universe and The Fabric of the Cosmos, which again were developed into a relatively successful TV series that reached millions. Others with a background in this general arena, who have written successfully for the larger public, include John Barrow, Paul Davies, Alan Guth, Lee Smolin, Leonard Susskind, Lisa Randall and Max Tegmark.

The latest entry in this genre is Carlo Rovelli’s Reality Is Not What It Seems: The Journey to Quantum Gravity. In this book, Rovelli attempts to lay the philosophical and historical foundations for recent research in physics in general, and loop quantum gravity in particular.

Rovelli starts by telling of the ancient Greek scholar Leucippus and his disciple Democritus, who was later described by the Roman scholar Seneca as “the most subtle of the Ancients.” Democritus, who lived about 450 BCE, was one of the first to argue that there had to be “atoms” that comprise all material things. Democritus observed that matter could not be continuous and infinitely divisible, because (as Aristotle later reported the argument) no matter how many of these presumably infinitely small pieces were woven together they would still have no extension.

With the development of modern chemistry in the 18th and 19th century, most scientists were convinced that atoms had to be real, but some still demurred, citing the lack of definitive evidence. In 1897, for example, Ernst Mach declared, “I do not believe that atoms exist!” The first definitive proof of the “atomic hypothesis” was provided by an obscure, rebellious 25-year-old working at the Swiss patent office, namely Albert Einstein. Einstein developed a theory to explain Brownian motion, and was able to calculate the size of atoms and molecules for the first time.

Rovelli then recounts how physicists fretted in the late 19th century over a nagging discrepancy between Newton’s laws of motion and the laws of electromagnetic fields, as discovered by Faraday and mathematized by Maxwell. Maxwell’s theory led to the derivation of the speed of electromagnetic waves, but with respect to what? Again it was Einstein, who in 1905 showed that by abandoning the notion of absolute time, the two theories could be brought into agreement (except for gravitation, which had to wait for his general theory of relativity in 1915).

In another reference to ancient philosophy and literature, Rovelli points out that Dante, in his *Paradiso*, appears to have comprehended the basic notion that the space around us is a 3-sphere, a geometry later deduced by Einstein.

Rovelli moves on to more modern physics, including the fundamentally discrete nature of all things, e.g., Einstein’s finding that light consists of discrete quanta, and the development of quantum mechanics in the 20th century by Niels Bohr, Werner Heisenberg, Paul Dirac and others. Rovelli then turns his attention to the lingering problem of how to reconcile the laws of quantum mechanics, which govern the very small with remarkable precision, with those of general relativity, which govern the large-scale structure of space-time.

Here Rovelli again hearkens back to Democritus and argues that the fabric of space-time must itself be granular, and in fact given by a networked grid — a “spin network.” These are the “atoms” of space and time, and it follows that our perception of time flowing uniformly forward is but an illusion at the macro scale. Rovelli also points out that loop quantum gravity suggests that the Big Bang might be a misnomer — our universe may have arisen in a “Big Bounce” from an earlier universe.

Rovelli finally addresses the question of empirical confirmation. He acknowledges that definitive tests of loop quantum gravity are still lacking, but he points out that supersymmetry, which is thought to be an underpinning of string theory (the other, better known theory of quantum gravity), has suffered severe setbacks, because not one of the hypothesized supersymmetric particles has appeared in the latest experiments at the Large Hadron Collider.

All of this is described in a very lucid manner — Rovelli and his translators clearly have a remarkable talent for this type of exposition. And their exposition is accompanied rather effectively with numerous graphics to illustrate the increasingly subtle concepts that are presented.

In spite of the maxim that every equation will halve book sales, Rovelli does not shy away from inserting, here and there, the real equations of the theories he is discussing, as much for their beauty as anything else. These include the equations of general relativity and also the equations of loop quantum gravity, confirming the observation that if a theory can’t be summarized by equations that fit on a T-shirt, then something must be wrong.

Rovelli’s book was reviewed by physicist Lisa Randall in the New York Times. Randall complimented Rovelli on his attempt to bring recent research in physics to a broader audience, but she faulted him on certain details. For example, she noted that Rovelli had given the ratio of the size of the universe (the largest dimension) to the Planck scale (the smallest dimension) as 10^{120}, whereas the actual ratio is 10^{60}. In a response, Rovelli acknowledges the error, although he points out that in loop quantum gravity it is most natural to compare areas, where the ratio is 10^{120}.
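Randall’s correction is easy to check with a back-of-the-envelope calculation. The figures below (the diameter of the observable universe and the Planck length) are rough assumed values chosen for illustration, not numbers taken from either book:

```python
import math

universe_m = 8.8e26   # diameter of the observable universe, metres (rough value)
planck_m = 1.6e-35    # Planck length, metres (rough value)

length_ratio = universe_m / planck_m   # largest length over smallest length
area_ratio = length_ratio ** 2         # the corresponding ratio of areas

print(round(math.log10(length_ratio)))  # exponent of the length ratio
print(round(math.log10(area_ratio)))    # exponent of the area ratio
```

With these rough values the exponents come out as 62 and 123, in line with the 10^{60} (lengths) and 10^{120} (areas) quoted above, which is all an order-of-magnitude sanity check can ask for.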

Secondly, Randall criticized Rovelli for presenting a theory (in the context of the Big Bounce) that “isn’t sufficiently well developed to do the necessary calculations to establish such a claim.” That is a valid criticism, but oddly this same general criticism could be leveled at Randall’s own recent book Dark Matter and the Dinosaurs.

Perhaps the most significant criticism is Randall’s observation that perhaps Rovelli has tried too hard to connect modern physics to the writings of the ancients: “Ideas about relativity or gravity in ancient times weren’t the same as Einstein’s theory.” The present blogger has to agree with this general assessment — it is easy to over-romanticize the past.

In spite of these flaws, Rovelli has written a marvelous book, definitely one to place on your stack (or your iPad or Kindle) for summer reading. The present blogger looks forward to additional works by this talented writer.

]]>Jonathan Borwein Commemorative Conference

The conference will focus on the five areas of Jonathan Borwein’s research:

- Applied analysis, optimisation and convex functions. Chairs: Regina Burachik and Guolin Li.
- Education. Chairs: Judy-anne Osborn and Naomi Borwein.
- Experimental mathematics and visualization. Chair: David H. Bailey.
- Financial mathematics. Chair: Qiji (Jim) Zhu.
- Number theory, special functions and pi. Chair: Richard Brent.

A total of 36 speakers will give presentations.

The meeting will be held at Noah’s on the Beach in Newcastle, New South Wales, Australia, which is not far from the University of Newcastle, where Professor Borwein spent the last eight years of his career.

For additional information, see the JBCC Conference website.

]]>Is the universe fine-tuned for intelligent life?

The book under review, A Fortunate Universe: Life in a Finely Tuned Cosmos by Geraint Lewis and Luke Barnes, presents a comprehensive analysis of the issue, delving into nuclear physics, astrophysics, cosmology, biology and philosophy. It is entertainingly written, yet does not compromise on detail. The authors mercifully relegate some of the more technical material to footnotes, but even the footnotes are remarkably useful and well documented. The book is arguably the best treatment of the topic since the monumental Anthropic Cosmological Principle by Barrow and Tipler.

For several decades, researchers have puzzled over deeply perplexing indications, many of them highly mathematical in nature, that the universe seems inexplicably well-tuned to facilitate the evolution of complex molecular structures and sentient creatures.

Numerous such “cosmic coincidences” are presented and discussed in detail by Lewis and Barnes; two of them, involving carbon chemistry and the cosmological constant, come up below.

By the way, although one can imagine living organisms based on other elements, carbon is by far the most suitable element for the construction of complex molecules, as required for any conceivable form of living or sentient beings (pg. 268). In any event, nuclear chemistry precludes any heavier elements (i.e., elements beyond hydrogen, helium, lithium and beryllium) if carbon cannot form.

Numerous “explanations” have been proposed over the years for these difficulties. One of the more widely accepted is the multiverse, combined with the anthropic principle. The theory of inflation, mentioned above, suggests that our universe is merely one pocket that separated from many others in the very early universe. Similarly, string theory suggests that our universe is merely one speck in an enormous landscape of possible universes, by one count 10^{500} in number, each corresponding to a different Calabi-Yau manifold.

Thus, the thinking goes, we should not be surprised that we find ourselves in a universe that has somehow beaten the one-in-10^{120} odds to be life-friendly (to pick just the cosmological constant paradox), because it had to happen somewhere, and, besides, if our universe were not life-friendly, then we would not be here to talk about it. In other words, these researchers propose that the multiverse (or the “cosmic landscape”) actually exists in some sense, but acknowledge that the vast, vast majority of these universes are utterly sterile — either very short-lived or else completely devoid of atoms or other structures, much less sentient living organisms like us contemplating the meaning of their existence.

However, many researchers (Lee Smolin, George Ellis and Joseph Silk, to name just three) remain extremely uncomfortable with hypothesizing a multiverse and invoking the anthropic principle. For one thing, it sounds too much like a tautology with no real substance. More importantly, proposing a staggeringly large number of unseen universes, all to explain the cosmic coincidences, is a flagrant violation of Occam’s razor (“Entities must not be multiplied beyond necessity”). Isn’t there a better explanation than this?!

Lewis and Barnes explore these issues in substantial detail. In this regard, they follow a long line of very eminent researchers who have puzzled over these same problems in published books and articles (some dating back to the 1970s): John Barrow, Bernard Carr, Sean Carroll, Paul Davies, David Deutsch, Brian Greene, Alan Guth, Edward Harrison, Stephen Hawking, Andrei Linde, Roger Penrose, John Polkinghorne, Martin Rees, Lee Smolin, Leonard Susskind, Max Tegmark, Frank Tipler, Alexander Vilenkin, Steven Weinberg, Frank Wilczek, among others.

Another useful reference is Luke Barnes’s 2011 paper, which summarizes many of these issues.

In the end, the Lewis-Barnes book does not offer any firm answers — only more questions. The one thing that is certain, though, is that our knowledge of the basic underlying mathematical laws governing the universe is incomplete. If examination of these paradoxes eventually leads to a greater understanding of these laws, it will have all been worthwhile.

]]>Yves Meyer wins the Abel Prize for wavelet work

On 21 March 2017 the Norwegian Academy of Science and Letters announced that the 2017 Abel Prize for mathematics, thought by many to be on a par with the Nobel Prize, has been awarded to Yves Meyer for his groundbreaking work on wavelets.

Many of the leading awards made in the field of mathematics are for highly abstract theoretical work. But wavelet theory is certainly in the area of applied mathematics, as it is now used in many different real-world arenas. Applications include data compression, acoustic noise reduction, biomedical imaging, digital movie projection, economics, image correction of Hubble space telescope images, and the recent detection, by the LIGO team, of gravitational waves created in the wake of the collision of two black holes.

The Abel Prize includes a cash award of six million Norwegian kroner, approximately 675,000 Euros or 715,000 USD.

Scientists have for many years used Fourier analysis, first developed in the early 19th century by Joseph Fourier, to analyze signals and other periodic phenomena. Beginning in the mid-1960s, with the development of the fast Fourier transform (FFT), the discrete Fourier transform has been employed very extensively in science and engineering. Mobile phones, for example, employ the FFT in encoding and decoding signals sent to and from cell towers.

The FFT is also heavily used in scientific computation, because it permits one to economically perform a convolution operation. As a single example, for sufficiently high numeric precision the most efficient algorithm for multiplying two very high-precision numbers is to treat the multiplication as a linear convolution, which can be evaluated very rapidly using an FFT.
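As a sketch of this idea, here is a toy implementation of FFT-based multiplication in Python. It is illustrative only, and numerically safe just for modestly sized inputs, since the digit products are recovered by rounding floating-point FFT results; real high-precision codes use larger bases, balanced digit representations and error checks:

```python
import numpy as np

def fft_multiply(a: int, b: int) -> int:
    """Multiply two non-negative integers via an FFT-based linear convolution."""
    da = [int(d) for d in str(a)][::-1]   # decimal digits, least-significant first
    db = [int(d) for d in str(b)][::-1]
    n = 1
    while n < len(da) + len(db):          # pad so the linear convolution fits
        n *= 2
    spectrum = np.fft.fft(da, n) * np.fft.fft(db, n)   # pointwise product
    coeffs = np.rint(np.fft.ifft(spectrum).real)       # convolution of digit lists
    # Interpret the convolution coefficients in base 10; multiplying each
    # coefficient by its power of ten also propagates the carries.
    return sum(int(c) * 10**i for i, c in enumerate(coeffs))
```

For example, `fft_multiply(123456789, 987654321)` returns the same value as Python’s built-in `*` operator, but the work per digit grows only logarithmically with the number of digits.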

However, discrete Fourier transforms, and the corresponding FFT algorithms, have their limitations. Fourier analysis is fine for analyzing periodic behavior across an entire dataset, but in the real world periodic behavior is often a feature of only a small part of a dataset (a sparse dataset, for instance).

For many applications of this type, wavelets are superior. Wavelets are wave-like oscillations with amplitudes that start at zero, then increase, then decrease back to zero. Practical applications are facilitated by the development of fast computational algorithms, analogous to the FFT, that are suitable for large-scale computation as well as mobile applications, such as speech recognition and image analysis.
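To make the idea concrete, here is a minimal single-level Haar transform, the simplest example of a wavelet transform. This is a plain-Python sketch for illustration; production code would use a library such as PyWavelets:

```python
import math

def haar_step(x):
    """One level of the Haar wavelet transform (len(x) must be even).

    Each pair of samples is replaced by a scaled average (coarse trend)
    and a scaled difference (local detail), giving the localization in
    both position and scale that plain Fourier analysis lacks.
    """
    s = math.sqrt(2.0)
    averages = [(x[i] + x[i + 1]) / s for i in range(0, len(x), 2)]
    details = [(x[i] - x[i + 1]) / s for i in range(0, len(x), 2)]
    return averages, details

def haar_inverse(averages, details):
    """Exact inverse of haar_step (up to floating-point round-off)."""
    s = math.sqrt(2.0)
    x = []
    for a, d in zip(averages, details):
        x.extend([(a + d) / s, (a - d) / s])
    return x
```

Applying `haar_step` recursively to the averages gives the full multilevel transform; a smooth signal yields many near-zero detail coefficients, which is what makes wavelet-based compression and denoising work.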

There is an extensive literature on wavelets. A rather good introduction to the topic is available in the Wikipedia page on wavelets. A good source for a more detailed treatment is Ten Lectures on Wavelets by Ingrid Daubechies. Additional information on Meyer’s career and research on wavelets is available in a Scientific American article and also in a Quanta Magazine article.

]]>Exoplanets, 4 billion-year-old life, Fermi’s paradox and zero-one laws

These discoveries were made by carefully analyzing the transits of these planets in front of their host star; conveniently, the planets orbit in a plane that lies in a direct line between the star and earth. What’s more, by careful data analysis the astronomers were even able to deduce the sizes and masses of these planets, confirming that all seven are roughly the size of earth. While all seven are possible harbors for life, three seem particularly plausible, namely e, f and g in the figure.

Some researchers dispute these findings. But earlier studies have already confirmed the existence of life 3.5 billion years ago. So either way it is clear that life formed on earth almost “immediately” after its surface solidified, or, in other words, within a few million years (an eye-blink in cosmic time) after the earliest epoch that life possibly could have arisen.

If life formed so quickly here on earth, surely it has also formed on many of the estimated 100 billion other planets surrounding stars in the Milky Way. And if life has arisen on numerous other planets, surely, after several billion years of evolution, at least some of these planets are now homes to full-fledged technological civilizations, at least as advanced as ours; in fact, almost certainly they are far more advanced, since it is highly unlikely that after billions of years they are exactly at our level. With their advanced technology, surely they have been able not only to witness the rise of life and civilization on our planet, but also to contact us or otherwise disclose their existence, deliberately or inadvertently. Yet decades of determined high-tech searches by the SETI project and other groups have come up empty-handed. WHERE IS EVERYBODY?

Fermi’s paradox, as this conundrum is known, has been analyzed in great detail by many writers since 1950 when Fermi first posed it. Astronomer Frank Drake, for example, proposed his now-famous Drake Equation:

N = R^{*} f_{p} n_{e} f_{l} f_{i} f_{c} L

which estimates the number of technological civilizations in the galaxy. For additional information on the scientific debate, see our previous blogs (Blog A and Blog B), or books by Stephen Webb (2002), John Gribbin (2011) and Paul Davies (2011).
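Since the Drake equation is just a product of factors, it is trivial to experiment with. The parameter values below are purely illustrative assumptions; published estimates for several of the factors vary by many orders of magnitude:

```python
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """N = R* fp ne fl fi fc L: estimated number of civilizations in the
    galaxy whose electromagnetic emissions are currently detectable."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# One optimistic, purely illustrative set of values: 1.5 stars formed per
# year, all with planets, 0.2 habitable planets per star, life always
# arises, 10% of it becomes intelligent, 10% of that develops detectable
# technology, and civilizations remain detectable for 10,000 years.
N = drake(R_star=1.5, f_p=1.0, n_e=0.2, f_l=1.0, f_i=0.1, f_c=0.1, L=1.0e4)
print(round(N))   # about 30 civilizations under these assumptions
```

Shrinking just one pessimistic factor (say, f_l) by a few orders of magnitude drops N below one, which is exactly why the equation frames the debate without settling it.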

The bottom line is that there are no good answers. Here is a quick summary of some of the proposed explanations and common rejoinders:

1. *They are under strict orders not to disclose their existence.* Rejoinder: This explanation (often termed the “zookeeper’s hypothesis”) falls prey to the inescapable fact that it takes just one small group in just one extraterrestrial civilization to dissent and break the pact of silence. It seems utterly impossible that a ban of this sort could be imposed, without a single exception over millions of years, on a vast galactic society of advanced civilizations, each with billions of individuals (if earth society is any guide), dispersed over many star systems and planets.

2. *They exist, but are too far away.* Rejoinder: Such arguments ignore the potential of rapidly advancing technology. For example, once a civilization is sufficiently advanced, it could send “von Neumann probes” to distant stars, which could scout out suitable planets, land, and then construct additional copies of themselves, using the latest software beamed from the home planet. In one recent analysis, researchers found that 99% of all star systems in the Milky Way could be explored in only about five million years, an eye-blink in the multi-billion-year age of the galaxy. Already, scientists are planning to send fleets of nanocraft to visit nearby stars such as Alpha Centauri, and astronomers hope soon to be able to detect signatures of life on nearby exoplanets. So given that alien civilizations almost certainly have far more advanced technology than we do, why hasn’t the earth been visited and/or contacted, many times over?

3. *They exist, but have lost interest in interstellar communication and/or exploration.* Rejoinder: Given that Darwinian evolution, which is widely believed to be the mechanism guiding the development of biology everywhere in the universe, strongly favors organisms that explore and expand their dominion, it is hardly credible that *each and every individual*, in *each and every distant civilization*, forever lacks interest in space exploration, or (as in item #1 above) that a galactic society is 100% effective, over many millions of years and over many billions of individuals in numerous civilizations, in enforcing a ban against those who wish to explore the galaxy or communicate with emerging societies such as ours.

4. *They are not interested in making contact with such a primitive species as us.* Rejoinder: As with #1 and #3, while the majority of individuals and civilizations might not be interested in a species such as us, surely among this vast society at least some individuals and some civilizations are interested. By analogy, while most humans and even most scientists are not at all interested in ants, a few researchers are very interested, and they study ant species all over the world in great detail. Also, we are now learning how to communicate with many species on earth, great and small.

5. *They are calling, but we do not yet recognize the signal.* Rejoinder: While most agree that the SETI project still has much searching to do, this explanation does not apply to signals sent with the express purpose of communicating with a newly technological society such as ours, in a form that we could easily recognize (e.g., microwave or light signals). And as with items #1, #3 and #4, it is hard to see how multiple galactic societies could forever enforce, without any exceptions, a global ban on such targeted communications. Why are they making it so hard for us to find them?

6. *Civilizations like us invariably self-destruct.* Rejoinder: From human experience, we have survived 200 years of technological adolescence and have not yet destroyed ourselves in a nuclear or biological apocalypse. In any event, within a decade or two human civilization will spread to the Moon and to Mars, and then its long-term existence will be largely impervious to calamities on earth.

7. *They visited earth and planted DNA.* Rejoinder: Although the notion that life began elsewhere (i.e., “directed panspermia”) has been proposed by some scientists, detailed analyses of DNA have found no evidence of anything artificial, and, what’s more, this hypothesis does not solve the problem of the origin of life — it just pushes it to some other star system.

8. *WE ARE ALONE, at least within the Milky Way galaxy and possibly beyond.* Rejoinder: This hypothesis flies in the face of the “principle of mediocrity,” namely the presumption, dominant since the time of Copernicus, that there is nothing special about earth or human society. More importantly, it also flies in the face of virtually all of the recent discoveries in this general arena — extrasolar planets, ancient life, molecular biogenesis and more — which suggest that life is pervasive in the universe.

Those who have studied probability theory will recall various “zero-one” laws. Colloquially speaking, if an event has nonzero probability, then eventually, under independent repetitions, it certainly will occur (i.e., it will occur with probability one). Conversely, if an event has zero probability, then no matter how many independent repetitions are performed, it will never occur (i.e., it will occur with probability zero). There is no logical probability other than zero or one.

Other specific examples of zero-one laws include the Borel-Cantelli lemmas and the Hewitt-Savage law. The first Borel-Cantelli lemma says (under appropriate conditions) that if the sum of the probabilities of a sequence of events is finite, then the probability that infinitely many of them occur is zero. The second Borel-Cantelli lemma says (again, under appropriate conditions) that if the sum of the probabilities diverges and the events are independent, then the probability that infinitely many of them occur is one.
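The two Borel-Cantelli regimes can be illustrated (though of course not proved) with a quick Monte Carlo experiment. The probability sequences below are standard textbook examples, chosen here purely for illustration:

```python
import random

random.seed(1)  # fixed seed so the experiment is repeatable

def occurrences(prob_of_nth, n_events):
    """Count how many of n_events independent events occur, where the
    n-th event has probability prob_of_nth(n)."""
    return sum(1 for n in range(1, n_events + 1)
               if random.random() < prob_of_nth(n))

# Convergent sum (sum of 1/n^2 is finite): only finitely many events occur.
few = occurrences(lambda n: 1.0 / n**2, 100_000)
# Divergent sum (sum of 1/n diverges): infinitely many occur in the limit.
many = occurrences(lambda n: 1.0 / n, 100_000)

print(few, many)
```

With the convergent series the expected total count stays near 1.6 no matter how far the experiment runs, while with the divergent series the count grows like log n without bound: a discrete shadow of the probability-zero versus probability-one dichotomy.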

Zero-one laws seem to be analogous to Fermi’s paradox. If, as seems reasonable, there are many alien civilizations (or even just one), each with billions of individuals (as with the human species), having arisen from Darwinian evolution and thus imbued with a drive to expand and explore, then it seems exceedingly improbable that not a single individual or group of individuals from even one of these civilizations has visited the earth, attempted to make contact, or has even merely disclosed their own existence, deliberately or inadvertently. In other words, with billions of possibilities, the probability should be unity that one or more would contact us or at least permit their existence to be disclosed to a civilization such as ours.

The other possibility is, of course, rather disquieting: For reasons we evidently cannot yet fathom, we represent the end of a long string of exceedingly unlikely events (possibilities include the origin of life, the origin of complex multicellular life, the origin of intelligent life, the avoidance of various planetary catastrophes, the maintenance of a hospitable environment over billions of years, etc.), whose probability of simultaneously occurring is virtually zero. For example, there may be some great filter (e.g., a gamma-ray burst or the like) that invariably ends societies such as ours before they advance far enough to venture to the stars or colonize the galaxy, and we somehow have miraculously avoided this common fate so far. Either way, we are alone — the first and only technological civilization in the Milky Way, if not beyond.

So which is it, probability one or probability zero? Is intelligent life pervasive in the universe, or is humanity a highly improbable freak of nature?

What is the answer to Fermi’s paradox? The present author certainly does not know. But it is clear that the research topics behind Fermi’s paradox (exoplanets, origin of life, SETI, etc.) touch on some of the most significant scientific questions that our species has ever addressed.

I, for one, hope that I live to see the day that this question is finally resolved.

]]>Reproducibility: Principles, Problems, Practices, and Prospects

This volume consists of 27 chapters, grouped into six sections, which collectively address questions of reproducibility in a broad range of scientific disciplines, including medicine, the physical sciences, the life sciences, the social sciences and even literature studies. Statistical issues that arise in reproducibility studies are also discussed.

The book arose out of a growing consensus that reproducibility difficulties plague a surprisingly wide range of scientific disciplines. Here are just a few recent cases that have attracted widespread publicity:

- In 2017 the Reproducibility Project was able to replicate only two of five key studies in cancer research.
- In 2015, in a separate study by the Reproducibility Project, only 39 of 100 psychology studies could be replicated.
- In 2015, a study by the U.S. Federal Reserve was able to reproduce only 29 of 67 economics studies.
- In 2014, backtest overfitting emerged as a major problem in computational finance.
- In 2012, Amgen researchers reported that they were able to reproduce fewer than 10 of 53 cancer studies.

Our particular article for this volume, “Facilitating reproducibility in scientific computing: Principles and practice” (preprint available here), addresses reproducibility in the context of mathematical and scientific computing. We observe that, far from being immune to these difficulties, the mathematical and scientific computing field has some serious problems of its own:

- Very few published computational studies include full details of the algorithms, computer equipment, code and data used.
- The field is, if anything, quite laggard in developing an ethic of reproducibility (for example, carefully documenting all numerical experiments).
- In many cases, even the original authors of a computational study can no longer reproduce their published results — the actual codes used are no longer available or have been changed, etc.
- Few studies save their code and data on permanent, publicly accessible data repositories.
- Many studies employ questionable statistical methods or metrics.
- Many studies employ questionable methods to report performance.
- Numerical reproducibility has emerged as a significant issue, particularly in very large-scale computations that greatly magnify sensitivity to numerical round-off error.

Some may be disturbed by these developments, but the present authors actually regard efforts such as the publication of this volume as a good sign. We see it as evidence that many fields, from mathematical computing to cancer research and climate modeling, are recognizing the need for improvement and are taking positive steps to correct these problems.

As a single example, recently the American Association for the Advancement of Science adopted new guidelines for papers submitted to its flagship journals, including *Science*. Under these new guidelines, authors are required to provide more background information on their work, and papers are subjected to a review of their statistical methods in addition to the discipline-specific review.

More than anything else, though, such changes may well promote a fundamental culture change in the way science is done. That will be progress indeed.

]]>