Is the universe fine-tuned for intelligent life?

The book presents a comprehensive analysis of the issue, delving into nuclear physics, astrophysics, cosmology, biology and philosophy. It is entertainingly written, yet does not compromise in detail. The authors mercifully relegate some of the more technical material to footnotes, but even the footnotes are remarkably useful and well documented. The book is arguably the best treatment of the topic since the monumental *Anthropic Cosmological Principle* by Barrow and Tipler.

For several decades, researchers have puzzled over deeply perplexing indications, many of them highly mathematical in nature, that the universe seems inexplicably well-tuned to facilitate the evolution of complex molecular structures and sentient creatures.

Some of these “cosmic coincidences” include the following (these and numerous others are presented and discussed in detail by Lewis and Barnes):

By the way, although one can imagine living organisms based on other elements, carbon is by far the most suitable element for the construction of complex molecules, as required for any conceivable form of living or sentient beings (pg. 268). In any event, stellar nucleosynthesis precludes the formation of any heavier elements (i.e., elements beyond hydrogen, helium, lithium and beryllium) if carbon cannot form.

Numerous “explanations” have been proposed over the years for these difficulties. One of the more widely accepted is the multiverse, combined with the anthropic principle. The theory of inflation, mentioned above, suggests that our universe is merely one pocket that separated from many others in the very early universe. Similarly, string theory suggests that our universe is merely one speck in an enormous landscape of possible universes, by one count 10^{500} in number, each corresponding to a different Calabi-Yau manifold.

Thus, the thinking goes, we should not be surprised that we find ourselves in a universe that has somehow beaten the one-in-10^{120} odds to be life-friendly (to pick just the cosmological constant paradox), because it had to happen somewhere, and, besides, if our universe were not life-friendly, then we would not be here to talk about it. In other words, these researchers propose that the multiverse (or the “cosmic landscape”) actually exists in some sense, but acknowledge that the vast, vast majority of these universes are utterly sterile — either very short-lived or else completely devoid of atoms or other structures, much less sentient living organisms like us contemplating the meaning of their existence.

However, many researchers (Lee Smolin, George Ellis and Joseph Silk, to name just three) remain extremely uncomfortable with hypothesizing a multiverse and invoking the anthropic principle. For one thing, it sounds too much like a tautology with no real substance. More importantly, proposing a staggeringly large number of unseen universes, all to explain the cosmic coincidences, is a flagrant violation of Occam’s razor (“Entities must not be multiplied beyond necessity”). Isn’t there a better explanation?

Lewis and Barnes explore these issues in substantial detail. In this regard, they follow a long line of very eminent researchers who have puzzled over these same problems in published books and articles (some dating back to the 1970s): John Barrow, Bernard Carr, Sean Carroll, Paul Davies, David Deutsch, Brian Greene, Alan Guth, Edward Harrison, Stephen Hawking, Andrei Linde, Roger Penrose, John Polkinghorne, Martin Rees, Lee Smolin, Leonard Susskind, Max Tegmark, Frank Tipler, Alexander Vilenkin, Steven Weinberg, Frank Wilczek, among others.

Another useful reference is Luke Barnes’s 2011 paper, which summarizes many of these issues.

In the end, the Lewis-Barnes book does not offer any firm answers — only more questions. The one thing that is certain, though, is that our knowledge of the basic underlying mathematical laws governing the universe is incomplete. If examination of these paradoxes eventually leads to a greater understanding of these laws, it will have all been worthwhile.

Yves Meyer wins the Abel Prize for wavelet work

On 21 March 2017 the Norwegian Academy of Science and Letters announced that the 2017 Abel Prize for mathematics, thought by many to be on a par with the Nobel Prize, has been awarded to Yves Meyer for his groundbreaking work on wavelets.

Many of the leading awards made in the field of mathematics are for highly abstract theoretical work. But wavelet theory is certainly in the area of applied mathematics, as it is now used in many different real-world arenas. Applications include data compression, acoustic noise reduction, biomedical imaging, digital movie projection, economics, image correction of Hubble space telescope images, and the recent detection, by the LIGO team, of gravitational waves created in the wake of the collision of two black holes.

The Abel Prize includes a cash award of six million Norwegian kroner, or approximately 675,000 euros or 715,000 USD.

Scientists have for many years used Fourier analysis, which was first developed in the 19th century by Joseph Fourier, to analyze signals and other periodic phenomena. Beginning in the 1950s, with the development of the fast Fourier transform (FFT), the discrete Fourier transform has been employed very extensively in science and engineering. Mobile phones employ the FFT in encoding and decoding signals sent to and from cell towers.

The FFT is also heavily used in scientific computation, because it permits one to economically perform a convolution operation. As a single example, for sufficiently high numeric precision the most efficient algorithm for multiplying two very high-precision numbers is to treat the multiplication as a linear convolution, which can be evaluated very rapidly using an FFT.
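
The convolution idea behind fast high-precision multiplication can be sketched in a few lines. The following is a minimal illustration only (the function name and the digit-list format are my own, and production libraries use more careful transforms with explicit round-off error control):

```python
import numpy as np

def fft_multiply(a_digits, b_digits, base=10):
    """Multiply two nonnegative integers given as digit lists
    (least-significant digit first) via an FFT-based linear convolution."""
    n = 1
    while n < len(a_digits) + len(b_digits):
        n *= 2  # pad to a power of two so the cyclic convolution
                # contains the full linear convolution without wraparound
    fa = np.fft.rfft(a_digits, n)
    fb = np.fft.rfft(b_digits, n)
    # Pointwise product in frequency space equals convolution of digit sequences
    conv = np.rint(np.fft.irfft(fa * fb, n)).astype(np.int64)
    # Release carries to recover single base-10 digits
    digits, carry = [], 0
    for c in conv:
        carry += int(c)
        digits.append(carry % base)
        carry //= base
    while len(digits) > 1 and digits[-1] == 0:
        digits.pop()  # strip leading zeros (digits stored least-significant first)
    return digits
```

Each FFT costs O(n log n) operations, versus O(n^2) for schoolbook digit-by-digit multiplication, which is why this approach wins for sufficiently high precision.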

However, discrete Fourier transforms, and the corresponding FFT algorithms, have their limitations. Fourier analysis is well suited to analyzing periodic behavior across an entire dataset, but in real-world data periodic behavior is often confined to a small portion of the dataset, or localized in time.

For many applications of this type, wavelets are superior. Wavelets are wave-like oscillations with amplitudes that start at zero, then increase, then decrease back to zero. Practical applications are facilitated by the development of fast computational algorithms, analogous to the FFT, that are suitable for large-scale computation as well as mobile applications, such as speech recognition and image analysis.
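
To make the idea concrete, here is a minimal sketch of one level of the Haar transform, the simplest wavelet (the function names are my own; practical codes use more sophisticated wavelets and fast multi-level algorithms):

```python
import numpy as np

def haar_step(x):
    """One level of the Haar wavelet transform (orthonormal convention):
    pairwise scaled averages (coarse trend) and differences (local detail)."""
    x = np.asarray(x, dtype=float)
    avg = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    det = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return avg, det

def inverse_haar_step(avg, det):
    """Exactly reconstruct the signal from one level of Haar coefficients."""
    x = np.empty(2 * len(avg))
    x[0::2] = (avg + det) / np.sqrt(2.0)
    x[1::2] = (avg - det) / np.sqrt(2.0)
    return x
```

Because the detail coefficients of a smooth signal are nearly zero, discarding or coarsely quantizing them yields compression, while a localized feature perturbs only a few coefficients. This locality is exactly what plain Fourier analysis lacks.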

There is an extensive literature on wavelets. A rather good introduction to the topic is available in the Wikipedia page on wavelets. A good source for a more detailed treatment is *Ten Lectures on Wavelets* by Ingrid Daubechies. Additional information on Meyer’s career and research on wavelets is available in a Scientific American article and also in a Quanta Magazine article.

Exoplanets, 4 billion-year-old life, Fermi’s paradox and zero-one laws

These discoveries were made by carefully analyzing the transit of these planets in front of the host star; conveniently, the planets orbit in a plane that lies nearly edge-on to our line of sight from earth. What’s more, by careful data analysis the astronomers were even able to deduce the size and mass of these planets, confirming that all seven are roughly the size of earth. While all seven are possible harbors for life, three seem particularly plausible, namely e, f and g in the figure.

Some researchers dispute these findings. But earlier studies have already confirmed the existence of life 3.5 billion years ago. So either way it is clear that life formed on earth almost “immediately” after its surface solidified, or, in other words, within a few million years (an eye-blink in cosmic time) after the earliest epoch that life possibly could have arisen.

If life formed so quickly here on earth, surely it has also formed on many of the estimated 100 billion other planets surrounding stars in the Milky Way. And if life has arisen on numerous other planets, surely, after several billion years of evolution, at least some of these planets are now home to full-fledged technological civilizations, at least as advanced as ours; in fact, almost certainly they are far more advanced, since it is highly unlikely that after billions of years they would be exactly at our level. With their advanced technology, surely they have been able not only to witness the rise of life and civilization on our planet, but also to contact us or otherwise disclose their existence, deliberately or inadvertently. Yet decades of determined high-tech searches by the SETI project and other groups have come up empty-handed. WHERE IS EVERYBODY?

Fermi’s paradox, as this conundrum is known, has been analyzed in great detail by many writers since 1950 when Fermi first posed it. Astronomer Frank Drake, for example, proposed his now-famous Drake Equation:

N = R^{*} f_{p} n_{e} f_{l} f_{i} f_{c} L

which estimates the number of technological civilizations in the galaxy. For additional information on the scientific debate, see our previous blogs (Blog A and Blog B), or books by Stephen Webb (2002), John Gribbin (2011) and Paul Davies (2011).
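
Because the Drake equation is just a product of factors, it is easy to experiment with. In the sketch below every parameter value is a hypothetical assumption chosen purely for illustration, not a measurement:

```python
# Drake equation: N = R* x fp x ne x fl x fi x fc x L.
# All parameter values below are illustrative assumptions, not data.
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Estimated number of communicating civilizations in the galaxy."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

N = drake(
    R_star=1.0,   # star formation rate (stars/year) -- assumed
    f_p=0.5,      # fraction of stars with planets -- assumed
    n_e=2,        # habitable planets per such system -- assumed
    f_l=1.0,      # fraction of those on which life arises -- assumed
    f_i=0.01,     # fraction that develop intelligence -- assumed
    f_c=0.01,     # fraction that develop detectable technology -- assumed
    L=10_000,     # lifetime of the communicating phase (years) -- assumed
)
print(N)  # 1.0 with these (purely illustrative) inputs
```

The wide disagreement over plausible values for the later factors is precisely why published estimates of N range from far below one to many millions.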

The bottom line is that there are no good answers. Here is a quick summary of some of the proposed explanations and common rejoinders:

1. *They are under strict orders not to disclose their existence*. Rejoinder: This explanation (often termed the “zookeeper’s hypothesis”) falls prey to the inescapable fact that it just takes one small group in just one extraterrestrial civilization to dissent and break the pact of silence. It seems utterly impossible that a ban of this sort could be imposed, without a single exception over millions of years, on a vast galactic society of advanced civilizations, each with billions of individuals (if earth society is any guide), dispersed over many star systems and planets.
2. *They exist, but are too far away*. Rejoinder: Such arguments ignore the potential of rapidly advancing technology. For example, once a civilization is sufficiently advanced, it could send “von Neumann probes” to distant stars, which could scout out suitable planets, land, and then construct additional copies of themselves, using the latest software beamed from the home planet. In one recent analysis, researchers found that 99% of all star systems in the Milky Way could be explored in only about five million years, which is an eye-blink in the multi-billion-year age of the Milky Way. Already, scientists are planning to send fleets of nanocraft to visit nearby stars such as Alpha Centauri. And astronomers hope to soon be able to detect signatures of life on nearby exoplanets. So given that alien civilizations almost certainly have far more advanced technology than we do, why hasn’t the earth been visited and/or contacted, many times over?
3. *They exist, but have lost interest in interstellar communication and/or exploration*. Rejoinder: Given that Darwinian evolution, which is widely believed to be the mechanism guiding the development of biology everywhere in the universe, strongly favors organisms that explore and expand their dominion, it is hardly credible that *each and every individual*, in *each and every distant civilization*, forever lacks interest in space exploration, or (as in item #1 above) that a galactic society is 100% effective, over many millions of years and over many billions of individuals in numerous civilizations, in enforcing a ban against those who wish to explore the galaxy or communicate with emerging societies such as ours.
4. *They are not interested in making contact with such a primitive species as us*. Rejoinder: As with #1 and #3, while the majority of individuals and civilizations might not be interested in a species such as us, surely among this vast society at least some individuals and some civilizations are interested. By analogy, while most humans and even most scientists are not at all interested in ants, a few researchers are very interested, and they study ant species all over the world in great detail. Also, we are now learning how to communicate with many species on earth, great and small.
5. *They are calling, but we do not yet recognize the signal*. Rejoinder: While most agree that the SETI project still has much searching to do, this explanation doesn’t apply to signals that are sent with the express purpose of communicating to a newly technological society such as us, in a form that we could easily recognize (e.g., by microwave or light signals). And as with items #1, #3 and #4, it is hard to see how multiple galactic societies could forever enforce, without any exceptions, a global ban on such targeted communications. Why are they making it so hard for us to find them?
6. *Civilizations like us invariably self-destruct*. Rejoinder: Humanity has so far survived 200 years of technological adolescence without destroying itself in a nuclear or biological apocalypse. In any event, within a decade or two human civilization will spread to the Moon and to Mars, and then its long-term existence will be largely impervious to calamities on earth.
7. *They visited earth and planted DNA*. Rejoinder: Although the notion that life began elsewhere (i.e., “directed panspermia”) has been proposed by some scientists, detailed analyses of DNA have found no evidence of anything artificial, and, what’s more, this does not solve the problem of the origin of life — it just pushes it to some other star system.
8. *WE ARE ALONE, at least within the Milky Way galaxy and possibly beyond*. Rejoinder: This hypothesis flies in the face of the “principle of mediocrity,” namely the presumption, dominant since the time of Copernicus, that there is nothing special about earth or human society. More importantly, it also flies in the face of virtually all of the recent discoveries in this general arena — extrasolar planets, ancient life, molecular biogenesis and more — which suggest that life is pervasive in the universe.

Those who have studied probability theory will recall various “zero-one” laws. Colloquially speaking, if an event has nonzero probability, then under endlessly repeated independent trials it will eventually occur (i.e., it occurs with probability one). Conversely, if an event has zero probability, then no matter how many independent repetitions are performed, it will never occur (i.e., it occurs with probability zero). For events of this kind there is no middle ground: the limiting probability is either zero or one.

Other specific examples of zero-one laws include the Borel-Cantelli lemmas and the Hewitt-Savage law. The first Borel-Cantelli lemma says (under appropriate conditions) that if the sum of the probabilities of a sequence of events is finite, then the probability that infinitely many of them occur is zero. The second Borel-Cantelli lemma says (again, under appropriate conditions) that if the sum of the probabilities diverges and the events are independent, then the probability that infinitely many of them occur is one.
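
A small simulation illustrates the contrast between the two Borel-Cantelli regimes. This is only an illustrative sketch (the function, seed and probability sequences are my own choices): events with probabilities 1/n^2 (convergent sum) should occur only a handful of times, while independent events with probabilities 1/n (divergent sum) keep occurring as the number of trials grows:

```python
import random

def count_occurrences(p, trials, seed=12345):
    # Simulate independent events A_1, A_2, ..., where P[A_n] = p(n),
    # and count how many occur among the first `trials` of them.
    rng = random.Random(seed)
    return sum(rng.random() < p(n) for n in range(1, trials + 1))

# Convergent sum (1/n^2): Borel-Cantelli predicts only finitely many occurrences.
few = count_occurrences(lambda n: 1.0 / n**2, 100_000)

# Divergent sum (1/n), independent events: infinitely many occur, probability one.
many = count_occurrences(lambda n: 1.0 / n, 100_000)
```

With the divergent sequence the expected count grows like the harmonic series (roughly ln of the number of trials), so it keeps climbing without bound; with the convergent sequence the count settles near the finite sum of the probabilities.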

Zero-one laws seem to be analogous to Fermi’s paradox. If, as seems reasonable, there are many alien civilizations (or even just one), each with billions of individuals (as with the human species), having arisen from Darwinian evolution and thus imbued with a drive to expand and explore, then it seems exceedingly improbable that not a single individual or group of individuals from even one of these civilizations has visited the earth, attempted to make contact, or has even merely disclosed their own existence, deliberately or inadvertently. In other words, with billions of possibilities, the probability should be unity that one or more would contact us or at least permit their existence to be disclosed to a civilization such as ours.

The other possibility is, of course, rather disquieting: For reasons we evidently cannot yet fathom, we represent the end of a long string of exceedingly unlikely events (possibilities include the origin of life, the origin of complex multicellular life, the origin of intelligent life, the avoidance of various planetary catastrophes, the maintenance of a hospitable environment over billions of years, etc.), whose probability of simultaneously occurring is virtually zero. For example, there may be some “great filter” (e.g., a gamma-ray burst or the like) that invariably ends societies such as ours before they reach an advanced technological level, venture to the stars or colonize the galaxy, and we somehow have miraculously avoided this common fate so far. Either way, we are alone — the first and only technological civilization in the Milky Way, if not beyond.

So which is it, probability one or probability zero? Is intelligent life pervasive in the universe, or is humanity a highly improbable freak of nature?

What is the answer to Fermi’s paradox? The present author certainly does not know. But it is clear that the research topics behind Fermi’s paradox (exoplanets, origin of life, SETI, etc.) are among the most significant scientific questions that our species has addressed.

I, for one, hope that I live to see the day that this question is finally resolved.

Reproducibility: Principles, Problems, Practices, and Prospects

This volume consists of 27 chapters, grouped into six sections, which collectively address questions of reproducibility in a broad range of scientific disciplines, spanning medicine, the physical sciences, the life sciences and the social sciences, and even including an article on reproducibility in literature studies. Statistical issues that arise in reproducibility studies are also discussed.

The book arose out of a growing consensus that reproducibility difficulties plague a surprisingly wide range of scientific disciplines. Here are just a few recent cases that have attracted widespread publicity:

- In 2017 the Reproducibility Project was able to replicate only two of five key studies in cancer research.
- In 2015, in a separate study by the Reproducibility Project, only 39 of 100 psychology studies could be replicated.
- In 2015, a study by the U.S. Federal Reserve was able to reproduce only 29 of 67 economics studies.
- In 2014, backtest overfitting emerged as a major problem in computational finance.
- In 2012, Amgen researchers reported that they were able to reproduce fewer than 10 of 53 cancer studies.

Our particular article for this volume, “Facilitating reproducibility in scientific computing: Principles and practice” (preprint available here), addresses reproducibility in the context of mathematical and scientific computing. We observe that far from being immune from these difficulties, in fact the mathematical and scientific computing field has some serious problems:

- Very few published computational studies include full details of the algorithms, computer equipment, code and data used.
- The field is, if anything, quite laggard in developing an ethic of reproducibility, such as the need to carefully document all numerical experiments.
- In many cases, even the original authors of a computational study can no longer reproduce their published results — the actual codes used are no longer available or have been changed, etc.
- Few studies save their code and data on permanent, publicly accessible data repositories.
- Many studies employ questionable statistical methods or metrics.
- Many studies employ questionable methods to report performance.
- Numerical reproducibility has emerged as a significant issue, particularly in very large-scale computations that greatly magnify sensitivity to numerical round-off error.
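
As a concrete illustration of the numerical reproducibility issue, floating-point addition is not associative, so merely changing the order of a sum (as a different parallel decomposition of the same computation will do) can change the bits of the result:

```python
# Evaluating the same three-term sum in two orders gives different answers,
# because 1.0 is smaller than one unit in the last place of 1.0e16.
vals = [1.0e16, 1.0, -1.0e16]

left_to_right = (vals[0] + vals[1]) + vals[2]       # the 1.0 is absorbed and lost
large_terms_first = (vals[0] + vals[2]) + vals[1]   # the large terms cancel first

print(left_to_right)      # 0.0
print(large_terms_first)  # 1.0
```

At the scale of billions of operations on a parallel machine, where reduction order varies from run to run, such order-dependence makes bitwise reproducibility genuinely difficult to achieve.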

Some may be disturbed at some of these developments, but the present authors actually regard efforts such as the publication of this volume to be a good sign. We see it as evidence that many fields, from mathematical computing to cancer research and climate modeling, are recognizing the need for improvement and are taking positive steps to correct these problems.

As a single example, recently the American Association for the Advancement of Science adopted new guidelines for papers submitted to its flagship journals, including *Science*. Under these new guidelines, authors are required to provide more background information on their work, and papers are subjected to a review of their statistical methods in addition to the discipline-specific review.

More than anything else, though, such changes may well promote a fundamental culture change in the way science is done. That will be progress indeed.

Space, Time and the Limits of Human Understanding

Here is a sample of some of the many chapters that may be of interest to readers of this blog (chapter numbers are shown in brackets):

- [1] Francesca Biagioli, “Space as a source and as an object of knowledge”
- [9] Nicholas Maxwell, “Relativity theory may not have the last word on the nature of time: Quantum theory and probabilism”
- [10] Gerard ‘t Hooft, “Nature’s bookkeeping system”
- [13] Norbert Straumann, “Hermann Weyl’s space-time geometry and its impact on theories of fundamental interactions”
- [15] Joan A. Vaccaro, “An anomaly in space and time and the origin of dynamics”
- [18] Mary Leng, “Geometry and physical space”
- [20] Paul Ernest, “Paradox? The mathematics of space-time and the limits of human understanding”
- [21] Reuben Hersh, “‘Now’ has an infinitesimal positive duration”
- [23] Julian Barbour, “The fundamental problem of dynamics”
- [24] James Isenberg, “General relativity, time and determinism”
- [31] Randall E. Auxier, “Evolutionary time and the creation of the space of life”
- [32] David H. Bailey and Jonathan M. Borwein, “A computational mathematics view of space, time and complexity”
- [35] Alexander K. Dewdney, “Gödel incompleteness and the empirical sciences”
- [Afterword] Noam Chomsky, “Science, mind and limits of understanding”

The volume is available from Springer both in hardback and also as an e-book.

My Search for Ramanujan

Ken Ono is the son of Takashi Ono, a Japanese mathematician who taught at the University of Pennsylvania. Ono’s field of research has closely paralleled the writings of the famed Indian mathematician Srinivasa Ramanujan. Among other things, Ono significantly extended Ramanujan’s work on partition congruences and mock theta functions, and, with two collaborators, developed a framework for the Rogers-Ramanujan identities, solving a long-standing open problem that had its roots in the writings of Ramanujan. Ono has been honored with a number of awards, including a Presidential Early Career Award from U.S. President Bill Clinton in 2000. Ono served as mathematical consultant for the movie *The Man Who Knew Infinity*, which was based on Kanigel’s biography of Ramanujan.

Ono’s book recounts his troubled youth, where he chafed under the stern upbringing of his Japanese “tiger parents,” who, like many other first-generation Asian-American parents, were obsessed that their children achieve at the very top of their class. First Ono gave up his violin lessons, and then he dropped out of high school altogether. Subsequently he attended college at the University of Chicago, although, as he confesses, he was more interested in frat parties and bicycle racing than in his studies. Nonetheless he managed to gain admission to the mathematics graduate program at the University of California, Los Angeles, studying under Basil Gordon. His troubled upbringing, however, continued to haunt him, and at one point of despondency he even attempted suicide. All along, he struggled with classic “impostor syndrome” doubts.

Only later, when Ono adopted Ramanujan as a role model of sorts, did he finally regain his footing, and soon established himself as a first-rate research mathematician. In the end, he reconciled with his parents, bonding with them in ways that he never could while growing up. In the process he did some very important research work, as mentioned above.

So how good is Ken Ono’s book? Let me state it very plainly: This is an outstanding book. It is lucidly and evocatively written, telling a tale that rings true to many who have trained to be professional mathematicians, but with a universality that transcends the field of mathematics. This book has few peers, either in emotional intensity or in its delightful and inspiring connection to Ramanujan. Some mathematical content is included, yet it is so disarmingly written that it can be read by persons with a broad range of backgrounds — nothing more than basic high-school mathematics is required. This book will go far to build bridges to other disciplines of science, and to the arts and humanities as well.

The pain that Ono must have experienced is vividly illustrated by a series of inset quotes representing the “voices” that Ono recalled, rooted in his stern upbringing. Example:

Ken-chan, your parents are disappointed in you. You are embarrassment. Look at that professor’s children. Unlike you, they study all of time, and they what you should be. You sloppy. You spoiled. Your mother sacrificed her life for you, so you do your part. What wrong with you? You want play all of time?

Later, when in graduate school, when he became discouraged with the classes and fierce competition of other students, similar voices echoed in his mind:

Ken-chan, of course classes hard, too hard for you. What you expect? You don’t belong here. All that time you spend on bike, these students prepare for graduate school. You not good enough to be mathematician.

In the end, Ono’s work poring over the writings of Ramanujan, and working to fill in the many gaps and unproven assertions in those papers, proved immensely fulfilling and uplifting to him — a truly spiritual experience. As he described it,

I have read most of Ramanujan’s papers multiple times. I have read virtually everything ever written about him, and I have read and reread his letters and notebooks many times. … The deeper I dig, the more in awe I am of Ramanujan. … How was it possible for an untrained youth ignorant of modern mathematics to produce those wonderful formulas? Reading Ramanujan’s writings has become a spiritual experience for me.

His experience is not unlike that of Carl Sagan, who wrote that “science invariably elicits a sense of reverence and awe,” and Albert Einstein, who wrote “Only one who has devoted his life to [scientific] ends can have a vivid realization of what has inspired these men and given them the strength to remain true to their purpose in spite of countless failures. It is cosmic religious feeling that gives a man such strength.”

I just wish other mathematicians and scientists could write about their lives and work so effectively. Maybe more of us should try!

Enhancing reproducibility in mathematical and scientific computing

In this article we argue that the field of mathematical and scientific computing lags behind other fields in establishing a culture and tools to ensure reproducibility. All too often, the authors of computations, even those that are published in peer-reviewed conferences and journals, have not fully documented their algorithms, code, input data and output, nor have they made this code and data available on a public repository. Further, in all too many cases even the authors themselves no longer have the source code and other data that was used for their runs. Thus it is increasingly difficult for other researchers (or even the same researchers) to reproduce published work.

We list a set of seven recommendations to address this:

- Share data, software, workflows, and details of the computational environment that generate published findings in open trusted repositories.
- Persistent links should appear in the published article and include a permanent identifier for data, code, and digital artifacts upon which the results depend.
- To enable credit for shared digital scholarly objects, citation should be standard practice.
- To facilitate reuse, adequately document digital scholarly artifacts.
- Use Open Licensing when publishing digital scholarly objects.
- Journals should conduct a reproducibility check as part of the publication process and should enact the TOP standards at level 2 or 3.
- To better enable reproducibility across the scientific enterprise, funding agencies should instigate new research programs and pilot studies.

Full details are available in the Science article.

Breakthrough Foundation announces 2017 prizes in math, physics and life sciences

The Breakthrough Prize in mathematics (USD$3 million) was awarded to Jean Bourgain of the Institute for Advanced Study in Princeton, New Jersey. Bourgain’s work touches on a wide range of topics, including the geometry of Banach spaces, high-dimensional convexity, harmonic analysis, ergodic theory and nonlinear partial differential equations (with applications to mathematical physics). In recent years he has published on average 10 papers per year. He previously received the Fields Medal, one of the highest honors in mathematics.

One particularly interesting recent result of his is the “L2 decoupling theorem,” published in collaboration with Bourgain’s colleague Ciprian Demeter. Among other things, their result allows for the accurate estimation of various exponential integrals and exponential sums. An introduction to the L2 decoupling theorem is available in an arXiv paper by Bourgain and Demeter.

The 2017 Breakthrough Prizes in Fundamental Physics (USD$3 million each) were awarded to Joseph Polchinski of the University of California, Santa Barbara, and Andrew Strominger and Cumrun Vafa, both of Harvard University. In 1995, Polchinski showed that string theory contains objects of two or more dimensions, called “branes.” In 1996 Strominger and Vafa used string theory to calculate the entropy (information content) of a black hole, confirming some predictions by Stephen Hawking that black holes leak radiation and ultimately explode.

Also honored in this year’s Breakthrough Prize festivities were the recipients of a previously announced Special Prize in Fundamental Physics. This was awarded to Ronald Drever and Kip Thorne of the California Institute of Technology, and Rainer Weiss of the Massachusetts Institute of Technology, for their pioneering work in the conception, design and execution of the Laser Interferometer Gravitational-Wave Observatory (LIGO) experiment, which on 11 February 2016 announced the detection of gravitational waves emitted from the collision of two black holes. The three physicists share a USD$1 million award, and the 1,012 members of the LIGO team share an additional USD$2 million.

Five Breakthrough Prizes (USD $3 million each) were awarded in the life sciences, in each case to a researcher in the field of molecular biology:

- Stephen J. Elledge of Harvard, for his work in explaining how cells sense and then respond to DNA damage, which may have implications for cancer research.
- Harry F. Noller of U.C. Santa Cruz, who helped decipher the structure and function of ribosomes and RNA.
- Roeland Nusse of Stanford and the Howard Hughes Medical Institute, who discovered the first Wnt gene, which plays an important role in embryo, stem cell and bone development, and also in the progression of cancer.
- Huda Zoghbi of Baylor College of Medicine and the Howard Hughes Medical Institute, who discovered that a mutation to the SCA1 gene can result in a serious neurodegenerative disorder. She also helped uncover the cause of Rett syndrome, which affects young girls.
- Yoshinori Ohsumi of the Tokyo Institute of Technology, who helped uncover how cells recycle themselves.

In addition to the USD $3 million prizes mentioned above, the Breakthrough Foundation also awarded six USD $100,000 New Horizon prizes to young researchers. The three physics prizes were awarded to Asimina Arvanitaki of the Perimeter Institute for Theoretical Physics in Ontario, Canada, Peter Graham of Stanford and Surjeet Rajendran of the University of California, Berkeley (who split one prize); Simone Giombi of Princeton University and Xi Yin of Harvard University (who split another prize); and Frans Pretorius of Princeton University.

The three mathematics New Horizon prizes were awarded to Mohammed Abouzaid of Columbia University; Hugo Duminil-Copin of the University of Geneva in Switzerland; and Benjamin Elias of the University of Oregon and Geordie Williamson of Kyoto University (who split one prize).

Additional details on this year’s Breakthrough Prizes are available at the Breakthrough Foundation site, and in an article in the New York Times.

Congratulations to all of these fine recipients!

Asian tigers roar in the latest TIMSS math-science rankings

The Trends in International Mathematics and Science Study (TIMSS) is an international test to compare the achievement of fourth and eighth grade students in mathematics and science. It has been administered every four years since 1995, thus providing a 20-year period for study of educational trends around the world.

In November 2016, results for the latest test (taken in 2015) became available. Below is a condensed summary of the test results for 8th grade students, ranked by average math score. For full details, see this report, available from the U.S. National Center for Education Statistics.

| Educational system | Average math | Average science |
|---|---|---|
| Singapore | 621 | 597 |
| Republic of Korea | 606 | 556 |
| Chinese Taipei | 599 | 569 |
| Hong Kong SAR | 594 | 546 |
| Japan | 586 | 571 |
| Russian Federation | 538 | 544 |
| Canada | 527 | 526 |
| Ireland | 523 | 530 |
| United States | 518 | 530 |
| England | 518 | 537 |
| Hungary | 514 | 527 |
| Norway | 512 | 509 |
| Australia | 505 | 512 |
| Sweden | 501 | 522 |
| Italy | 494 | 499 |
| New Zealand | 493 | 513 |
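For readers who want to slice the rankings themselves, here is a minimal Python sketch with the top-ten scores transcribed from the table above (an illustrative snippet, not part of the TIMSS report):

```python
# TIMSS 2015 8th-grade averages, transcribed as {system: (math, science)}
scores = {
    "Singapore": (621, 597),
    "Republic of Korea": (606, 556),
    "Chinese Taipei": (599, 569),
    "Hong Kong SAR": (594, 546),
    "Japan": (586, 571),
    "Russian Federation": (538, 544),
    "Canada": (527, 526),
    "Ireland": (523, 530),
    "United States": (518, 530),
    "England": (518, 537),
}

# Rank systems by average math score, highest first
by_math = sorted(scores.items(), key=lambda kv: kv[1][0], reverse=True)

print(by_math[0][0])  # top-ranked system: Singapore
print(scores["Singapore"][0] - scores["United States"][0])  # math gap: 103 points
```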

Once again, much to the consternation of educational leaders in the United States, American performance in mathematics and science is only so-so, and progress has been modest at best. From 2011 to 2015, the average math score increased from 509 to 518, while the average science score increased from 525 to 530, increases that, while certainly welcome, are barely statistically significant. This is in spite of decades of hand-wringing and political battles.

Matt Larson, president of the U.S. National Council of Teachers of Mathematics, said that while the slight progress is heartening, “Certainly we have much more work to do and achievement is not as high as we would like to have it.”

Needless to say, this mediocre performance is hardly in keeping with a nation that, arguably more than any other on Earth, has hitched itself to the star of modern science and technology. High-tech manufacturing and other knowledge-intensive services now account for 40% of the U.S. gross domestic product, and directly employ approximately seven million persons (and indirectly employ many more). Tech giants such as Apple, Microsoft, Google, Facebook, IBM and Tesla are household names worldwide. The U.S. scientific research establishment, an extensive network of government laboratories, universities and industrial research organizations, is second to none.

Yet increasingly U.S. leadership in both science and technology is being challenged by the Asian tigers, as well as by ambitious competitors in Europe, the Middle East, Australia and elsewhere. China now ranks second in worldwide research and development, accounting for 20% of global R&D spending, compared with 27% for the U.S. Even more startling is the fact that between 2003 and 2013, China increased its R&D investments by an average of 19.5% *per year*.
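A back-of-the-envelope calculation (my arithmetic, not a figure from the source) shows just how dramatic that compounding is:

```python
# 19.5% annual growth sustained over the decade 2003-2013
growth_factor = 1.195 ** 10

# Cumulative effect: nearly a sixfold increase in annual R&D investment
print(round(growth_factor, 2))  # prints 5.94
```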

Many other first-world nations are similarly disappointed, and are vowing to try harder. Nick Gibb, U.K. school standards minister, responded, “We know there is more to do to narrow the attainment gap and that’s why this year alone we have invested £2.5 billion through the Pupil Premium to tackle education inequality.”

The latest Australian scores show essentially no improvement in student achievement since 1995. Sue Thompson, director of the Australian Council for Educational Research, lamented the fact that Australian schools continue to fall behind, and that disparities between schools continue to exacerbate the score shortfall among disadvantaged students.

So what are school systems such as those of Singapore and Finland doing right? To begin with, teachers are better trained. According to the National Center on Education and the Economy, Singapore’s teachers are recruited from the top tier of the graduating class. On average, only one out of eight applicants to teacher education programs is accepted, and only after a relatively grueling application process. For example, prospective teachers must have taken Singapore’s A-level exams, the most challenging of its national exams. Teacher salaries are competitive with those of other college-educated professions.

Although 8th grade TIMSS scores are not available for Finland, that nation ranks seventh worldwide in 4th grade scores, behind only the Asian tigers and Russia, and ahead of every other nation in Western Europe. Again, the Finnish educational system is highly selective: it is often more difficult to be accepted into a teacher education program than into law or medicine. What’s more, since 1970 all teachers have been required to hold at least a master’s degree. Teachers typically spend four hours per day in the classroom, and two hours per week on professional development.

Anyone saying that they “know” how to fix the educational system, either in the U.S. or in Europe or other nations, is selling bogus goods. Many different “experiments” have been tried, often with disappointing results. The measures that do seem to bear some fruit, such as more rigorous teacher training and more focused efforts for low-income students, all require significant structural changes and major long-term investment. Yet to fail to try any of these changes or to refuse to make significant new investment in the educational enterprise is tantamount to saying that we are giving up, that the future educational achievement of our children is not worth it.

This is unacceptable in an era of accelerating progress in science and technology. Consider just for a moment the changes to our way of life and economy over the past 20 years:

- The Internet, including basic functions such as email and browsing, did not become widely available until the mid-to-late 1990s.
- Facebook did not appear until 2004.
- Smartphones did not appear until 2007.
- Hardware advances continue apace. The 2016 iPhone and Android devices are faster and have more memory than the world’s most powerful supercomputer in 1990.
- In 2011, IBM’s Watson, an artificial intelligence-based computer system, defeated Jeopardy! champions Ken Jennings and Brad Rutter. Now similar AI-based technology is being deployed into every sector of the economy.
- Self-driving cars and trucks, which are now being commercially deployed, were considered futuristic fantasies as recently as ten years ago. The success of firms such as Tesla and Uber is but a foretaste of the changes to come in the transportation arena.

Are you a bit breathless from all of these changes? All indications are that the pace of change will only accelerate in the future. Indeed, we are facing a future in which a large fraction of the world economy is based on mathematics, computing and science, and in which every person must be familiar with these subjects to be a functioning, contributing member of society. We owe it to the next generation to provide the best education possible.

[Added 7 Dec 2016: Additional background, with data from PISA (another international educational test) is available in an Economist article.]

Bailey, Borwein, Mattingly and Wightwick to receive the Levi L. Conant Prize from AMS

This year’s prize was awarded for the recipients’ 2013 article The Computation of Previously Inaccessible Digits of Pi^2 and Catalan’s Constant, which appeared in the August 2013 issue of the Notices of the American Mathematical Society. The AMS summarizes the article as follows:

The article opens with a historical journey, from Archimedes to the computer age, with many interesting anecdotes along the way. It then goes on to discuss the remarkable “BBP” formula, discovered by Bailey together with Peter Borwein and Simon Plouffe. The formula allows one to calculate binary or hexadecimal digits of Pi beginning with the nth digit without first calculating any of the preceding n – 1 digits. The article leads readers through not only an elementary proof of the BBP formula but also the unconventional search that originally led to this formula as well as similar formulas for Catalan’s constant and Pi^2. The article also provides intriguing insights into the age-old question of whether the digits of Pi are truly randomly distributed.
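To make the BBP idea concrete, here is a minimal Python sketch of hexadecimal digit extraction for Pi (my own illustrative implementation, not the authors’ code; it uses double-precision floats and is reliable only for modest starting positions):

```python
def pi_hex_digits(n, ndigits=8):
    """Return hex digits of Pi starting just after fractional position n
    (n = 0 gives the first digits after the hexadecimal point)."""
    def series(j):
        # Fractional part of sum over k of 16^(n-k) / (8k + j)
        s = 0.0
        for k in range(n):
            # Left sum: modular exponentiation keeps the terms small,
            # which is what makes digit extraction cheap
            s = (s + pow(16, n - k, 8 * k + j) / (8 * k + j)) % 1.0
        k = n
        while True:
            # Right (tail) sum: terms shrink by a factor of 16 each step
            term = 16.0 ** (n - k) / (8 * k + j)
            if term < 1e-17:
                break
            s = (s + term) % 1.0
            k += 1
        return s

    # BBP formula: Pi = sum 16^-k (4/(8k+1) - 2/(8k+4) - 1/(8k+5) - 1/(8k+6))
    frac = (4 * series(1) - 2 * series(4) - series(5) - series(6)) % 1.0

    digits = ""
    for _ in range(ndigits):
        frac *= 16
        d = int(frac)
        digits += "0123456789ABCDEF"[d]
        frac -= d
    return digits

print(pi_hex_digits(0))  # prints 243F6A88, since Pi = 3.243F6A88... in hex
```

Note that computing the digits beginning at position n costs only O(n log n) arithmetic and essentially no memory, which is why positions far beyond the reach of conventional full-precision computation become accessible.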

The Conant Prize recognizes the best expository paper published in either the Notices of the AMS or the Bulletin of the AMS in the preceding five years. This year’s prize will be awarded Thursday, January 5, 2017, at the Joint Mathematics Meetings in Atlanta, Georgia. For additional details, see the announcement on the AMS website.
