A perfect number is a positive integer whose divisors (not including itself) add up to the integer. The smallest perfect number is $6$, since $6 = 1 + 2 + 3$. The next is $28 = 1 + 2 + 4 + 7 + 14$, followed by $496 = 1 + 2 + 4 + 8 + 16 + 31 + 62 + 124 + 248$ and $8128 = 1 + 2 + 4 + 8 + 16 + 32 + 64 + 127 + 254 + 508 + 1016 + 2032 + 4064$.

The notion of a perfect number is at least 2300 years old. In approximately 300 BCE, Euclid showed (using modern notation) that if $2^p - 1$ is a prime number (which implies, by the way, that $p$ itself is prime), then $2^{p-1} (2^p - 1)$ is a perfect number. For example, when $p = 5$, we have $2^5 - 1 = 31$, which is prime, and $2^4 (2^5 - 1) = 16 \cdot 31 = 496$ is perfect. In approximately 100 CE, the Greek mathematician Nicomachus noted that 8128 is a perfect number, and stated, without proof, that every perfect number is of the form $2^{n-1} (2^n - 1)$ (omitting the necessary condition that $2^n - 1$ be prime). In the first century CE, the Hebrew theologian Philo of Alexandria asserted that Earth was created in $6$ days and the moon orbits Earth in $28$ days, since $6$ and $28$ are perfect. In the early fifth century CE, the Christian theologian Augustine of Hippo repeated the assertion that God created Earth in $6$ days because $6$ is the smallest perfect number.

In the 12th century, the Egyptian mathematician Ismail ibn Fallūs calculated the 5th, 6th and 7th perfect numbers $(33550336, 8589869056$ and $137438691328$), plus some additional ones that are incorrect. The first known mention of the 5th perfect number in European history is in a manuscript written by an unknown writer between 1456 and 1461. The 6th and 7th were identified by the Italian mathematician Pietro Cataldi in 1588; he also proved that every perfect number obtained using Euclid’s formula ends in $6$ or $8$.

In the 18th century, Euler proved that Euclid’s formula $2^{p-1} (2^p - 1)$, for prime $2^p - 1$, yields all even perfect numbers. He also introduced the $\sigma (N)$ notation (also known as the divisor function) for the sum of the divisors of $N$, including $N$ itself. Thus we may write his result as follows: If $N$ is a positive even integer, then $\sigma(N) = 2N$ (i.e., $N$ is perfect) if and only if $N = 2^{p-1} (2^p - 1)$ for some prime of the form $2^p - 1$. This result is now known as the Euclid-Euler theorem. For additional history and background on perfect numbers, see this Wikipedia article.
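To make the Euclid-Euler recipe concrete, here is a minimal Python sketch (my illustration, not from the original sources; the function names are my own) that enumerates the first few even perfect numbers:

```python
# Sketch: enumerate even perfect numbers via the Euclid-Euler theorem.
# A simple trial-division primality test suffices for small Mersenne
# candidates 2^p - 1.

def is_prime(n):
    """Trial-division primality test; fine for small n."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def even_perfect_numbers(max_p):
    """Yield 2^(p-1) * (2^p - 1) for each prime p <= max_p with 2^p - 1 prime."""
    for p in range(2, max_p + 1):
        if is_prime(p) and is_prime(2**p - 1):
            yield 2**(p - 1) * (2**p - 1)

print(list(even_perfect_numbers(13)))  # [6, 28, 496, 8128, 33550336]
```

Note that $p = 11$ produces nothing, since $2^{11} - 1 = 2047 = 23 \cdot 89$ is composite: primality of $p$ is necessary but not sufficient for $2^p - 1$ to be prime.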

The Euclid-Euler theorem pretty well wrapped up the case for even perfect numbers. But what about odd perfect numbers (OPNs)? Do any such integers exist? This question has remained stubbornly unanswered for centuries. Numerous restrictions are known on the properties of any possible OPN, with more restrictions being proved with every passing year. Present-day mathematicians echo the sentiment of James Sylvester, who wrote

a prolonged meditation on the subject has satisfied me that the existence of any one such [odd perfect number] — its escape, so to say, from the complex web of conditions which hem it in on all sides — would be little short of a miracle.

Here are some of the known restrictions on any possible OPN $N$, listed in roughly chronological order:

- $N$ must not be divisible by 105 (Sylvester, 1888).
- If $N$ is not divisible by $3, 5$ or $7$, it must have at least $27$ prime factors (Norton, 1960).
- $N$ must have at least seven distinct prime factors (Pomerance, 1974).
- $N$ must have at least $75$ prime factors and at least $9$ distinct prime factors (Hare, 2005 and Nielsen, 2006). This was later extended to $101$ prime factors (Ochem and Rao, 2012).
- $N$ must be congruent to $1 \pmod{12}$, $117 \pmod{468}$, or $81 \pmod{324}$ (Roberts, 2008).
- The largest prime factor $p$ of $N$ must satisfy $p > 10^8$ (Goto and Ohno, 2008).
- $N \gt 10^{1500}$ (Ochem and Rao, 2012).

(See this MathWorld article for references to these and other items.)

Recently mathematicians have taken a slightly different tack on the problem — searching for spoof odd perfect numbers, namely integers that resemble odd perfect numbers but don’t quite meet all the requirements. There is historical precedent here: in 1638 René Descartes observed that $198585576189 = 3^2 \cdot 7^2 \cdot 11^2 \cdot 13^2 \cdot 22021$ almost satisfies the perfect-number condition $\sigma(N) = 2N$.

To see this, first note two properties of the sigma function: (a) $\sigma(pq) = \sigma(p)\sigma(q)$, if $p$ and $q$ are relatively prime; and (b) $\sigma(p^m) = 1 + p + p^2 + \cdots + p^m$ if $p$ is prime and $m \gt 0$. With regard to Descartes’ number, one might be tempted to write $$\hat{\sigma}(198585576189) = \sigma(3^2) \sigma(7^2) \sigma(11^2) \sigma(13^2) \sigma(22021)$$ $$= (1 + 3 + 3^2)(1 + 7 + 7^2) (1 + 11 + 11^2) (1 + 13 + 13^2) (1 + 22021) = 397171152378 = 2 \cdot 198585576189.$$ However, this calculation is incorrect, since $22021$ is not prime; in fact $\sigma(22021) = 23622$, not $22022$ as in the above. Thus the above calculation, which appears at first glance to certify that $198585576189$ is an OPN, is invalid. The correct sigma value for Descartes’ number is $\sigma(198585576189) = 426027470778$.
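A small Python sketch (my own illustration; the function names are assumptions, not from the article) makes the failure of the spoof computation explicit: `spoof_sigma` wrongly applies property (b) to every base as if it were prime, while `sigma` computes the true divisor sum.

```python
# Sketch: verify the "spoof" computation for Descartes' number.

def sigma(n):
    """True sum of divisors of n, by a naive O(sqrt n) loop."""
    total, i = 0, 1
    while i * i <= n:
        if n % i == 0:
            total += i
            if i != n // i:
                total += n // i
        i += 1
    return total

def spoof_sigma(bases_exps):
    """Apply sigma(p^m) = 1 + p + ... + p^m as if every base were prime."""
    result = 1
    for p, m in bases_exps:
        result *= sum(p**k for k in range(m + 1))
    return result

D = 3**2 * 7**2 * 11**2 * 13**2 * 22021
print(spoof_sigma([(3, 2), (7, 2), (11, 2), (13, 2), (22021, 1)]) == 2 * D)  # True
print(sigma(22021))  # 23622, not 22022: 22021 = 19^2 * 61 is composite
```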

In 1999, John Voight of Dartmouth College discovered a different variety of spoof odd perfect number: $-22017975903 = 3^4 \cdot 7^2 \cdot 11^2 \cdot 19^2 \cdot (-127)$. Here again one might be tempted to write $$\hat{\sigma}(-22017975903) = \sigma(3^4) \sigma(7^2) \sigma(11^2) \sigma(19^2) \sigma(-127)$$ $$= (1 + 3 + 3^2 + 3^3 + 3^4) (1 + 7 + 7^2) (1 + 11 + 11^2) (1 + 19 + 19^2) (1 + (-127)) = -44035951806 = 2 \cdot (-22017975903).$$ But again, this calculation is invalid, in this case because the integer in question is negative, and $\sigma(-127) = -128$, not $-126$ as used above. But in both cases, these “spoofs” are definitely interesting, and possibly might inspire a line of attack to certify that OPNs simply cannot exist.

The most recent development here is due to a team led by Pace Nielsen and Paul Jenkins of Brigham Young University, subsequently joined by Michael Griffin and Nick Andersen, who embarked on a computer search for additional spoof odd perfect numbers. After roughly 60 CPU-years of computing, the team found 21 spoofs with six or fewer bases. Here the team relaxed the criteria in several ways, allowing non-prime bases (as with Descartes’ example), negative bases (as with Voight’s), and also spoofs whose bases share the same prime factors (by the rules, a given prime may appear only once in the factorization list).

Why the interest in spoofs? According to Nielsen,

Any behavior of the larger set has to hold for the smaller subset. … So if we find any behaviors of spoofs that do not apply to the more restricted class, we can automatically rule out the possibility of an OPN.

So far, they have discovered some interesting facts about the spoofs, but none of these properties would preclude the existence of real odd perfect numbers. For example, the team has found that every one of the 21 spoofs they have uncovered, with the exception of Descartes’ example, has at least one negative base. If it could be proved that every spoof must have either a negative base or a non-prime base, it would follow that no odd perfect numbers exist, since by definition the bases of an odd perfect number must be positive primes.

So the search continues. Will this approach work? Voight, for instance, is not sure that, even with the BYU team’s results, we are close to a final attack on the problem. Paul Pollack of the University of Georgia adds,

It would be great if we could stare at the list of spoofs and see some property and somehow prove there are no OPNs with that property. That would be a beautiful dream if it works, but it seems too good to be true.

For additional details and background, see this Quanta article by Steve Nadis, from which some of the above information was taken.

The modern field of artificial intelligence (AI) began in 1950 with Alan Turing’s landmark paper Computing machinery and intelligence, which outlined the principles of AI and proposed the Turing test. Although early researchers were confident that AI systems would soon be a reality, inflated promises and expectations led to disappointment in the 1970s and again in the 1980s.

A breakthrough of sorts came in the late 1990s and early 2000s with the emergence of Bayes-theorem-based methods, which quickly displaced the older methods based mostly on formal reasoning. When combined with steadily advancing computer technology, a gift of Moore’s Law, practical and effective AI systems finally began to appear.

One notable milestone in modern AI technology came in March 2016, when a computer program named “AlphaGo,” developed by researchers at DeepMind, a subsidiary of Alphabet (Google’s parent company), defeated Lee Se-dol, a South Korean Go master, 4-1 in a 5-game tournament, an achievement that many observers had not expected to occur for decades, if ever. Then in October 2017, DeepMind researchers developed from scratch a new program, called AlphaGo Zero, which was programmed only with the rules of Go and a simple reward function and then instructed to play games against itself. After just three days of training (4.9 million training games), the AlphaGo Zero program had advanced to the point that it defeated the earlier AlphaGo program 100 games to zero, and after 40 days of training, AlphaGo Zero’s performance was as far ahead of champion human players as champion human players are ahead of amateurs. See this Economist article, this Scientific American article and this Nature article.

AI and machine-learning methods are being used for more than playing Go. They are used in Apple’s Siri and Amazon’s Alexa voice recognition systems, in Facebook’s facial recognition API, in Apple’s 3-D facial recognition hardware and software, and in Tesla’s “autopilot” facility. See this earlier Math Scholar blog for additional discussion.

The present author recalls discussing the future of mathematics with Paul Cohen, who in 1963 proved that the continuum hypothesis is independent of the axioms of Zermelo-Fraenkel set theory. Cohen was convinced that the future of mathematics, and much more, lies in artificial intelligence. Reuben Hersh recalls Cohen saying specifically that at some point in the future mathematicians would be replaced by computers. So how close are we to Cohen’s vision?

In fact, computer programs that discover new mathematical identities and theorems are already a staple of the field known as experimental mathematics. Here is just a handful of the many computer-based discoveries that could be mentioned:

- A new formula for pi with the property that it permits one to rapidly calculate binary or hexadecimal digits of pi at an arbitrary starting position, without needing to calculate digits that came before.
- The surprising fact that if the Gregory series for pi is truncated to $10^n/2$ terms for some $n$, the resulting decimal value is remarkably close to the true value of pi, except for periodic errors that are related to the “tangent numbers.”
- Evaluations of Euler sums (compound infinite series) in terms of simple mathematical expressions.
- Evaluations of lattice sums from the Poisson equation of mathematical physics in terms of roots of high-degree integer polynomials.
- A new result for Mordell’s cube sum problem.

In most of the above examples, the new mathematical facts in question were found by numerical exploration on a computer, and then later proven rigorously, the old-fashioned way, by mostly human efforts. So what about computers actually proving theorems?

Actually, this is also old hat at this point in time. Perhaps the best example is Thomas Hales’ 2003 proof of the Kepler conjecture, namely the assertion that the simple scheme of stacking oranges typically seen in a supermarket has the highest possible average density of any possible arrangement, regular or irregular. Hales’ original proof met with some controversy, since it involved a computation documented by 250 pages of notes and 3 gigabytes of computer code and data. So Hales and his colleagues began entering the entire proof into a computer proof-checking program. In 2014 this process was completed and the proof was certified.

In November 2019, researchers at Google’s research center in Mountain View, California, published results for a new AI theorem-proving program. This program works with the HOL-Light theorem prover, which was used in Hales’ proof of the Kepler conjecture, and can prove, essentially unaided by humans, many basic theorems of mathematics. They have provided their tool in an open-source release, so that other mathematicians and computer scientists can experiment with it.

The Google AI system was trained on a set of 10,200 theorems that the researchers had gleaned from several sources, including many sub-theorems of Hales’ proof of the Kepler conjecture. Most of these theorems were in the area of linear algebra, real analysis and complex analysis, but the Google researchers emphasize that their approach is very broadly applicable. In the initial release, their software was able to prove 5919, or 58% of the training set. When they applied their software to a set of 3217 new theorems that it had not yet seen, it succeeded in proving 1251, or 38.9%.

Mathematicians are already envisioning how this software can be used in day-to-day research. Jeremy Avigad of Carnegie Mellon University sees it this way:

You get the maximum of precision and correctness all really spelled out, but you don’t have to do the work of filling in the details. … Maybe offloading some things that we used to do by hand frees us up for looking for new concepts and asking new questions.

For additional details see the Google authors’ technical paper, and a New Scientist article by Leah Crane.

Present-day computerized theorem provers are typically categorized as automated theorem provers (ATPs), which use computationally intensive methods to search for a proof; and interactive theorem provers, which rely on an interplay with humans — verifying the correctness of an argument and checking proofs for logical errors. Researchers in the field, however, acknowledge that both types of software are still a far cry from a completely independent computer-based mathematical reasoning system.

For one thing, these tools have received a cold reception from many present-day mathematicians, in part because they require considerable study and practice to use proficiently, and in part because of a general distaste for the notion of automating mathematical thought.

But some mathematicians are embracing these tools. Kevin Buzzard of Imperial College London, for example, has begun to focus his research on computerized theorem provers. But he acknowledges, “Computers have done amazing calculations for us, but they have never solved a hard problem on their own. … Until they do, mathematicians aren’t going to be buying into this stuff.”

Emily Riehl, of Johns Hopkins University, teaches students to write mathematical proofs using a computerized tool. She reports that these tools help students to rigorously formulate their ideas. Even for her own research, she says that “using a proof assistant has changed the way I think about writing proofs.”

Vladimir Voevodsky of Princeton University, after finding an error in one of his own published results, was an ardent advocate of using computers to check proofs, until his death in 2017. Timothy Gowers of Cambridge, who won the Fields Medal in 1998, goes even further, saying that major mathematical journals should prepare for the day when the authors of all submitted papers must first certify that their results have been verified by a computerized proof checker.

Josef Urban of the Czech Institute of Informatics, Robotics and Cybernetics believes that a combination of computerized theorem provers and machine learning tools is required to produce human-like mathematical research capabilities. In July 2020 his group reported on some new mathematical conjectures generated by a neural network that was trained on millions of theorems and proofs. The network proposed more than 50,000 new formulas, but, as they acknowledged, many of these were duplicates: “It seems that we are not yet capable of proving the more interesting conjectures.”

A research team led by Christian Szegedy of Google Research sees automated theorem provers as a subset of the field of natural language processing, and plans to capitalize on recent advances in the field to demonstrate solid mathematical reasoning. He and other researchers have proposed a “skip-tree” task scheme that exhibits surprisingly strong mathematical reasoning capabilities. Out of thousands of generated conjectures, about 13% were both provable and new, in the sense of not merely being duplicates of other theorems in the database.

For additional examples and discussion of recent research in this area, see this Quanta Magazine article by Stephen Ornes.

So where is all this heading? With regards to computer mathematics, Timothy Gowers predicts that computers will be able to outperform human mathematicians by 2099. He says that this may lead to a brief golden age, when mathematicians still dominate in original thinking and computer programs focus on technical details, “but I think it will last a very short time,” given that there is no reason that computers cannot eventually become proficient at the more creative aspects of mathematical research as well.

Futurist Ray Kurzweil predicts that at an even earlier date (roughly 2045), machine intelligence will first meet, then transcend human intelligence, leading to even more powerful technology, in a dizzying cycle that we can only dimly imagine (a singularity). Like Gowers, Kurzweil does not see any reason that creative aspects of human thinking, such as mathematical reasoning, will be immune from these developments.

Not everyone is overjoyed with these prospects. Bill Joy, for one, is concerned that in Kurzweil’s singularity, humans could be relegated to minor players, if not ultimately extinguished. However, it must be acknowledged that, even today, AI-like systems already handle many important decision-making processes, ranging from finance and investment to weather prediction, using methods that humans only dimly understand.

One implication of all this is that mathematical training, both for mathematics majors and other students, must aggressively incorporate computer technology and teach computer methods for mathematical analysis and research, at all levels of study. Topics for prospective mathematicians should include a solid background in computer science (programming, data structures and algorithms), together with statistics, numerical analysis, machine learning and symbolic computing (or at least the usage of a symbolic computing tool such as *Mathematica*, *Maple* or *Sage*).

In a similar way, university departments of engineering, physics, chemistry, finance, medicine, law and social sciences need to significantly upgrade their training in computer skills — computer programming, machine learning, statistics and graphics. Large technology firms such as Amazon, Apple, Facebook, Google and Microsoft are already aggressively luring top mathematical, computer science and machine learning talent. Other employers, in other fields, will soon be seeking the same pool of candidates.

In short, one way or the other intelligent computers are coming, and are destined to transform fields ranging from mathematical research to law and medicine. Society in general must find a way to accommodate this technology, and to deal respectfully with the many people whose lives will be affected. But not all is gloom and doom. Mathematician Steven Strogatz envisions a mixed future:

Maybe eventually our lack of insight would no longer bother us. After all, AlphaInfinity could cure all our diseases, solve all our scientific problems and make all our other intellectual trains run on time. We did pretty well without much insight for the first 300,000 years or so of our existence as Homo sapiens. And we’ll have no shortage of memory: we will recall with pride the golden era of human insight, this glorious interlude, a few thousand years long, between our uncomprehending past and our incomprehensible future.

Peter was a prolific mathematician, with over 200 publications, including several books. His research included works in classical analysis, computational number theory, Diophantine number theory and symbolic computing. Many of these papers were co-authored with his brother Jonathan Borwein, and some were co-authored with his father David Borwein (and at least one was authored by all three of the Borwein family mathematicians). More generally, many of Peter’s papers were in the realm of experimental mathematics, wherein the computer is utilized as an essential tool in the process of mathematical research.

Perhaps Peter’s best-known paper is “On the rapid computation of various polylogarithmic constants,” published in *Mathematics of Computation* in 1997. Among other things, this paper presented what is now known as the “BBP formula for pi,” together with a surprisingly simple algorithm for calculating binary digits of pi beginning at an arbitrary starting position, without needing to calculate any of the preceding digits. The central idea behind the paper was discovered by Peter, who demonstrated it for the relatively simple case of the natural logarithm of two; Simon Plouffe then used a computer program to discover the related formula for pi. Variations of the BBP formula have been used in conjunction with numerous recent record-breaking calculations of pi.

Peter and Jonathan established the Centre for Experimental and Constructive Mathematics (CECM) at Simon Fraser University (in Burnaby, British Columbia, Canada), in 1993. Later Peter helped found the Centre for Interdisciplinary Research in the Mathematical and Computational Sciences (IRMACS) at Simon Fraser, where he served as Director for many years. He also served on the editorial boards of several journals, including the *Ramanujan Quarterly* and the *Electronic Transactions on Numerical Analysis*.

Peter was a co-recipient (1993) of the Chauvenet Prize and the Merten Hasse Prize, both awarded by the Mathematical Association of America, and a co-recipient of the “Academic of the Year Award” (1996), awarded by the Confederation of University Faculty Associations of British Columbia.

Several colleagues have remarked how Peter was an inspiration for their work. Veselin Jungic, a prominent professor of mathematics at Simon Fraser, remarked in an email that Peter was “my friend, mentor, and a role model.”

The present author certainly is among those for whom Peter was an inspiration, as I have had the distinct privilege of collaborating with Peter Borwein on several occasions. This began in 1983, when I read an article in *SIAM Review*, co-authored by Peter and Jonathan, summarizing some of their recent discoveries of quadratically convergent algorithms for pi and elementary functions. After reading their article, I enthusiastically began programming some of their techniques, then called Peter and Jonathan to discuss my computational results. Thus began a productive and very enjoyable collaboration, beginning with the paper “Ramanujan, modular equations, and approximations to Pi,” published in the *American Mathematical Monthly* in 1989, and continuing through more than 100 joint papers and books that I co-authored with Peter and/or Jonathan.

Sadly, Jonathan Borwein passed away in 2016 (see also the Jonathan Borwein Memorial website). The death of Peter Borwein has now compounded an incalculable loss to the field of experimental mathematics.

Many will miss Peter, not just for his substantial mathematical achievements, but also for his humor, wit, and the astonishing grace with which he faced his condition of multiple sclerosis, which left him confined to a wheelchair, increasingly unable to pursue his research, and increasingly dependent on family and caregivers. I recall visiting Peter in January 2019 at his home in Burnaby, British Columbia. In spite of his paralysis and infirmity, I was astonished at his pleasant demeanor and ever-present humor. Would that we could all bear our misfortunes with as much strength and courage!

For additional details, see Peter Borwein’s obituary from the *Vancouver Sun and Province*.

Paul Erdős, one of the twentieth century’s most unique mathematicians, was known to travel from colleague to colleague, often arriving unannounced, and to immediately delve into some particularly intriguing research problem. See this article and this book for some additional background on this influential mathematician.

One of his more interesting conjectures is his “conjecture on arithmetic progressions,” sometimes referred to as the “Erdős-Turán conjecture.” It can be simply stated as follows: If $A$ is a set of positive integers such that $$\sum_{k \in A} \frac{1}{k} = \infty,$$ then $A$ contains arithmetic progressions of any length, or, in other words, $A$ contains subsets of the form $\{a, a+h, a+2h, a+3h, \cdots, a+(n-1)h\}$, for arbitrarily large $n$.
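For illustration, here is a short Python sketch (my own; the function name is hypothetical) that enumerates the three-term arithmetic progressions contained in a finite set, the basic objects in the conjecture:

```python
# Illustration: find all three-term arithmetic progressions {a, a+h, a+2h}
# inside a finite set of integers, the building block of the
# Erdos-Turan conjecture.

def three_term_aps(A):
    """Return all triples (a, a+h, a+2h) with h > 0 contained in the set A."""
    s = set(A)
    out = []
    for a in sorted(s):
        for b in sorted(s):
            if b > a and (2 * b - a) in s:  # b = a + h, so a + 2h = 2b - a
                out.append((a, b, 2 * b - a))
    return out

print(three_term_aps({1, 3, 5, 8, 9}))  # [(1, 3, 5), (1, 5, 9)]
```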

This conjecture was originally posed in a different form in 1936 — namely that any set of integers with positive natural density contains infinitely many three-term progressions. This was proven by Klaus Roth in 1953. In 1975, Szemerédi extended this result to arbitrarily long arithmetic progressions. Erdős posed the specific form of the conjecture given in the previous paragraph in 1976.

As of 2020, the conjecture remains unproven. Timothy Gowers of Cambridge, who received the Fields Medal in 1998, quipped, “I think many people regarded it as Erdős’ number-one problem.”

A breakthrough has just been reported: Thomas Bloom of Cambridge and Olof Sisask of Stockholm University have proved the conjecture for triples, i.e., for arithmetic progressions of length three. In particular, their result is the following: If a set $A \subseteq \{1, 2, \ldots, N\}$ contains no non-trivial three-term arithmetic progressions, then $|A| \ll N/(\log N)^{1+c}$ for some absolute constant $c > 0$. Or, stated another way, if $A$ is a set of positive integers such that $$\sum_{k \in A} \frac{1}{k} = \infty,$$ then $A$ contains three-term arithmetic progressions (infinitely many, in fact).

Several of the many mathematicians who have worked on variations of this problem over the years are impressed. Nets Katz of the California Institute of Technology says, “This result was kind of a landmark goal for a lot of years. … It’s a big deal.”

For full details, see the Bloom-Sisask paper (which has not yet been peer-reviewed) here. For an excellent overview of the problem and its history, including comments by several who have worked on the problem in the past, see this well-written Quanta Magazine article by Erica Klarreich.

The standard model of physics, namely the framework of laws at the foundation of modern physics, has reigned supreme since the 1970s, confirmed to great precision in a vast array of experimental tests. Among other things, the standard model predicted the existence of the Higgs boson, which was experimentally discovered in 2012, nearly 50 years after it was first predicted.

Yet physicists have recognized for many years that the standard model cannot be the final answer. For example, quantum theory and general relativity are known to be mathematically incompatible. String theory and Loop quantum gravity are being explored as potential frameworks to resolve this incompatibility, but neither is remotely well-developed enough to qualify as a new “theory of everything.” Other difficulties may exist as well.

But there is only so far that mathematical analysis can go in the absence of solid experimental results. As Sabine Hossenfelder has emphasized, beautiful mathematics published in a vacuum of experimental data can lead physics astray.

One significant experimental anomaly that does not appear to be going away, and which may point to a fundamental weakness in the standard model, is the discrepancy in values of the age of the universe, or, equivalently, in the Hubble constant, based on different experimental approaches.

The Hubble constant $H_0$ is a measure of the rate of expansion of the universe, and is directly connected to estimates of the age $A$ of the universe via the relation $A = 1 / H_0$. Units must be converted here, since the age of the universe is normally cited in billions of years, whereas the Hubble constant is usually given in kilometers per second per megaparsec (a megaparsec is $3.08567758128 \times 10^{19}$ km). Also, an adjustment factor is normally applied to this formula to be fully in conformance with the big bang model.
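A small Python sketch of the unit conversion (my own illustration; the rough 0.95 correction factor mentioned in the comment is an approximation, not a value from the article):

```python
# Sketch: convert a Hubble constant in km/s/Mpc to the naive age estimate
# A = 1/H0 in billions of years. The full Lambda-CDM age applies a further
# model-dependent correction (roughly a factor of 0.95 for current
# parameters), so this simple estimate runs slightly high.

KM_PER_MPC = 3.0857e19        # kilometers in one megaparsec
SECONDS_PER_YEAR = 3.1557e7   # Julian year in seconds

def hubble_time_gyr(h0):
    """1/H0 expressed in billions of years, for H0 given in km/s/Mpc."""
    age_seconds = KM_PER_MPC / h0          # (km/Mpc) / (km/s/Mpc) = seconds
    return age_seconds / SECONDS_PER_YEAR / 1e9

print(round(hubble_time_gyr(67.4), 2))   # about 14.5 for the Planck value
print(round(hubble_time_gyr(73.24), 2))  # about 13.4 for the Cepheid value
```

The gap between the two printed values is the “Hubble tension” restated in units of time: roughly a billion years of disagreement about the age of the universe.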

The trouble is, the best current experimental results give conflicting values for the Hubble constant, and thus, equivalently, for the age of the universe. See this previous Math Scholar article for an overview of the problem, as of August 2019.

One method to determine $H_0$ is based on the flat Lambda cold dark matter (Lambda-CDM) model of the universe, combined with careful measurements of the cosmic microwave background (CMB) data from the Planck satellite. The latest (2018) result from the Planck team yielded $H_0 = 67.4 \pm 0.5$, which corresponds to $13.77$ billion years for the age of the universe.

Another approach is to employ a more traditional astronomical technique, based on observations of Cepheid variable stars, combined with parallax measurements as a calibration. In 2016, a team of astronomers using the Wide Field Camera 3 (WFC3) of the Hubble Space Telescope obtained the value $H_0 = 73.24 \pm 1.74$, corresponding to $12.67$ billion years for the age of the universe.

Clearly, these two sets of values differ by significantly more than the combined error bars of the two measurements. What is going on?

In an attempt to resolve the “Hubble tension,” as this controversy is now called, several research teams, using different approaches, have launched studies hoping to resolve the issue. But rather than resolve the issue, their latest results only deepen the controversy.

In March 2019, a research team working with the Hubble Space Telescope reported that based on observations of 70 long-period Cepheid variable stars in the Large Magellanic Cloud, they were able to refine their estimate to $H_0 = 74.03 \pm 1.42$. Needless to say, this new result does not help to resolve the discrepancy of the Cepheid group’s result with the Planck team’s result — it moves in the other direction.

In July 2019, a group headed by Wendy Freedman at the University of Chicago reported results from another experimental approach, known as the “Tip of the Red Giant Branch” (TRGB). Their approach, which is analogous to but independent of the approach taken with Cepheid variable stars, is to analyze a surge in helium burning near the end of a red giant star’s lifetime. Using this scheme, they reported $H_0 = 69.8 \pm 1.7$. This is somewhat higher than the Planck team value ($67.4$), but not nearly high enough to close the gap with the Cepheid approach.

Another group, called $H_0$ Lenses in COSMOGRAIL’s Wellspring (HoLiCOW) [yes, that is the acronym], also announced results in July 2019. Their study is based on gravitational lensing of distant quasars by an intervening galaxy. When this happens, multiple time-delayed images of the quasar appear around the edges of the intervening galaxy, when viewed by earth-bound astronomers. The HoLiCOW project’s latest result is $H_0 = 73.3 \pm 1.76$, which is reasonably close to the Cepheid result, but not to the Planck result.

In February 2020, the group headed by Wendy Freedman at the University of Chicago updated their TRGB study with additional consistency checks. Their updated result was $H_0 = 69.6 \pm 1.7$, a value slightly smaller than their earlier figure, and still well short of the Cepheid value.

In July 2020, a team based at Princeton University announced a new result, based on the same Lambda-CDM model as the Planck team, but using the Atacama Cosmology Telescope (ACT) in Chile. Their result is $H_0 = 67.6 \pm 1.1$. This is within $0.3\%$ of the Planck team’s result.

A group headed by researchers at the University of Oregon also reported results in July 2020. They employed the “baryonic Tully-Fisher relation” (bTFR) as a distance estimator. Using 50 galaxies with accurate distances (from either Cepheid or TRGB measurements), they calibrated the bTFR on a large scale. After applying this calibrated bTFR model to 95 independent galaxies, they found $H_0 = 75.1 \pm 2.3$.

Needless to say, researchers are perplexed by the latest reports: the Planck team (based on the Lambda-CDM model) reports $H_0 = 67.4 \pm 0.5$; the Princeton group reports $H_0 = 67.6 \pm 1.1$; the Chicago team reports (updated) $H_0 = 69.6 \pm 1.7$; the HoLiCOW team reports $H_0 = 73.3 \pm 1.76$; the Cepheid team reports $H_0 = 74.03 \pm 1.42$; and the Oregon team reports $H_0 = 75.1 \pm 2.3$. Obviously these results cannot all simultaneously be correct. For example, the Oregon team’s figure ($75.1$) represents more than a three-sigma discrepancy from the Planck figure ($67.4$).
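Assuming independent Gaussian errors (the usual convention in these comparisons), the tension between any two of these results can be quantified as the difference in central values divided by the combined standard error; a minimal sketch:

```python
import math

def tension_sigma(x1: float, e1: float, x2: float, e2: float) -> float:
    """Discrepancy between two measurements, in units of combined standard error."""
    return abs(x1 - x2) / math.hypot(e1, e2)

print(round(tension_sigma(74.03, 1.42, 67.4, 0.5), 1))  # Cepheid vs. Planck -> 4.4
print(round(tension_sigma(75.1, 2.3, 67.4, 0.5), 1))    # Oregon bTFR vs. Planck -> 3.3
```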

See this Quanta Magazine article for an overview of the experimental results as of February 2020 (prior to the two July 2020 studies mentioned above).

While each of these teams is hard at work scrutinizing their methods and refining their results, researchers are increasingly considering the unsettling possibility that one or more of the underlying physical theories are just plain wrong, at least on the length and time scales involved.

Key among these theories is the Lambda-CDM model of big bang cosmology. Yet physicists and cosmologists are loath to discard this model, because it explains so much so well:

- The cosmic microwave background radiation and its properties.
- The large-scale structure and distribution of galaxies.
- The present-day observed abundances of the light elements (hydrogen, deuterium, helium and lithium).
- The accelerating expansion of the universe, as observed in measurements of distant galaxies and supernovas.

As Lloyd Knox, a cosmologist at the University of California, Davis, explains,

The Lambda-CDM model has been amazingly successful. … If there’s a major overhaul of the model, it’s hard to see how it wouldn’t look like a conspiracy. Somehow this ‘wrong’ model got it all right.

Various modifications to the Lambda-CDM model have been proposed, but while some of these changes partially alleviate the Hubble constant discrepancy, others make it worse. None are taken very seriously in the community at the present time.

Adam Riess, an astronomer at Johns Hopkins University in Baltimore, Maryland, is reassured that the Princeton ACT team’s result was so close to the Planck team’s result, and he hopes that additional experimental results will close the gap between the competing values. Nonetheless, he ventures, “My gut feeling is that there’s something interesting going on.”

For additional details and discussion, see this Scientific American article, this Quanta article and this Nature article.

In spite of the temptation to jump to conclusions and throw out the standard model or big bang cosmology, considerable caution is in order. After all, most anomalies are eventually resolved, usually as some defect of the experimental process or some faulty application of the theory.

A good example of an experimental defect is the 2011 announcement by Italian scientists that neutrinos emitted at CERN (near Geneva, Switzerland) had arrived at the Gran Sasso Laboratory (in Italy’s Apennine mountains) 60 nanoseconds sooner than if they had traveled at the speed of light. If upheld, this finding would have constituted a violation of Einstein’s theory of relativity. As it turned out, the experimental team subsequently discovered that the discrepancy was due to a loose fiber optic cable that had introduced an error in the timing system.

A good example of misapplication of underlying theory is the solar neutrino anomaly, namely a discrepancy between the number of observed neutrinos emanating from the interior of the sun and what had been predicted (incorrectly, as it turned out) based on the standard model. In 1998, researchers discovered that the anomaly could be resolved if neutrinos have a very small but nonzero mass; then, by a straightforward application of the standard model, the flavor of neutrinos could change en route from the sun to the earth, thus resolving the discrepancy. Takaaki Kajita and Arthur McDonald received the 2015 Nobel Prize in physics for this discovery.

In any event, sooner or later some experimental result may be found that fundamentally upsets currently accepted theoretical frameworks, either a specific framework such as Lambda-CDM big bang cosmology, or even the foundational standard model. Will the “Hubble tension” anomaly ultimately overturn these basic theories? Only time will tell.

The statistics are staggering: As of 1 June 2020, according to the Johns Hopkins University database, the U.S. had logged over 1.811 million confirmed cases of Covid-19 and over 105,000 deaths. The U.K. was next, with over 277,000 confirmed cases and over 38,000 deaths. Worldwide, over 6.3 million cases had been confirmed, with more than 376,000 deaths. If current trends continue, the U.S. death toll alone will soon exceed that of all wars in its history except for the Civil War and World War II.

The economic costs have been similarly astounding. On May 8, the U.S. Department of Labor reported that the U.S. unemployment rate had risen to 14.7% (adjusted by some economists to 19.5%), substantially higher than the peak (10.0%) of the 2008-2009 recession. Trillions of dollars (and pounds, euros, yen, renminbi and other currencies) have already been spent by governments worldwide in an attempt to prevent a major economic collapse, and even more stimulus will likely be required in the coming months. Much of the economic fallout will be long-lasting, as many businesses, large and small, particularly in retail services and travel, may never fully recover.

At the present time, national, state and local governments are grappling with the difficult decision of when to relax stay-at-home orders and reopen their economies. Clearly the tradeoff is difficult, and the risks are great: if a relaxation of restrictions happens too fast, then the city/state/nation risks a “second wave” resurgence, which may require a return to very restrictive measures and result in even more deaths and economic costs. As *New York Times* columnist Paul Krugman observes, “What good is increasing G.D.P. if it kills you?”

But costs cannot be completely ignored: How much economic devastation is a society willing to accept to save additional lives? Clearly there is no easy answer, particularly given the uncertainties in how the Covid-19 pandemic spreads, and in which measures are most effective in controlling it.

New York Governor Andrew Cuomo recently expressed his view, shared by numerous others, in these terms: “How much is a human life worth? … To me, I say the cost of a human life, a human life is priceless. Period.”

But a quick reflection shows that this view cannot be taken to its logical extreme. Suppose, for the sake of discussion, that some nation has reduced its number of new Covid-19 cases to very near zero (as New Zealand and Australia, among others, have reportedly achieved). Suppose also that the cost of shutting down the economy of such a nation for one additional month is USD$1 trillion, and that such a measure is predicted to save roughly ten additional lives, in conjunction with reasonable testing and contact tracing programs. Is it worth continuing a nearly full-scale economic shutdown to possibly save a handful of additional lives?
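A quick back-of-envelope check of this hypothetical, using the assumed figures above:

```python
# Implied cost per life saved in the hypothetical one-month shutdown above.
cost_usd = 1e12      # assumed cost of one additional month of shutdown
lives_saved = 10     # assumed additional lives saved

print(cost_usd / lives_saved)  # 100000000000.0, i.e. $100 billion per life
```

Even at this crude level, the implied $100 billion per life is four orders of magnitude above the nominal $10 million value-of-statistical-life figure discussed below.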

Clearly, in almost every major national economy, USD$1 trillion could save far more than a handful of lives if spent in other ways, such as for improved highway and transit infrastructure (which could save hundreds if not thousands of lives *each year*), or in conversion from coal or oil to cleaner forms of energy (which again could likely save thousands of additional lives *each year* in reduced air pollution and black lung disease). For that matter, even a modest boost in funding for anti-smoking programs could save many lives.

In fact, one can argue that a governmental body can and should utilize such reckonings in its planning, because to fail to do so, e.g., to make decisions on primarily political grounds, listening to various special interest groups instead of scientists, almost certainly will lead to public funds NOT being allocated in ways that save the most lives.

All of this raises a basic question: How much is society willing to spend per human life saved?

Such reckonings have a long history, as partially recounted in this article by Adam Rogers. In 1968, Thomas Schelling, an economist who later won the Nobel Memorial Prize in Economics for his work on game theory, wrote a chapter in the book Problems in Public Expenditure Analysis provocatively entitled “The Life You Save May Be Your Own.”

In this chapter, Schelling introduced the concept of the “value of statistical life” (VSL), namely how much money society is willing to pay to reduce the expected number of untimely deaths by one.

In 1981, economist Kip Viscusi of Vanderbilt University recommended using VSL to make decisions of how much additional “hazard pay” a worker should receive for doing a job with a known risk of accidental death. He reckoned that if 1 in 10,000 workers died on the job in a given year, and in return each received an extra $300 per year in wages, then VSL for such workers is roughly $3M, which in today’s dollars would be roughly $8.9M.
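Viscusi’s reckoning is a one-line inversion: the implied VSL is the wage premium divided by the annual fatality risk. A minimal sketch:

```python
def implied_vsl(annual_premium: float, fatality_risk: float) -> float:
    """VSL implied when workers accept a wage premium in exchange for
    a given annual probability of fatal injury."""
    return annual_premium / fatality_risk

print(round(implied_vsl(300, 1 / 10_000)))  # -> 3000000, i.e. $3M in 1981 dollars
```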

This is not too far from the current reckoning, used in various calculations, which is a nominal $10M. The U.S. Environmental Protection Agency (EPA), for example, has been using the figure $9.4M in cost-benefit analyses of environmental measures such as reducing auto pollution and water pollution.

There are actually numerous venues where VSL analyses can be (and are being) applied:

- *Reckonings of hazard pay*: As mentioned above, VSL analyses can and should be used to calculate hazard pay premiums for workers in dangerous jobs. The consensus of researchers such as Viscusi is that many workers do NOT receive sufficient additional compensation commensurate with their job risk (see also the table below). Along this line, the U.S. Congress has considered (but not yet passed) measures to provide additional hazard pay for essential workers at risk during the Covid-19 pandemic.
- *Liability insurance premiums*: Insurance company actuaries use VSL analyses to set liability insurance premiums for businesses and individuals as protection from wrongful death lawsuits, for instance if one of the owned vehicles is involved in a fatal accident. Again, the consensus of observers in the field is that many firms and individuals do NOT have sufficient insurance coverage commensurate with their risk.
- *Life insurance premiums*: Individuals should consider the value of their own life, in terms of what financial support would be required for one’s spouse or family in case of death. Again, many financial advisors report that their clients’ life insurance policies are often insufficient.
- *Governmental environmental and safety standards*: Here again, nominal VSL figures can and should be considered in setting environmental, health and safety standards. As noted above, the U.S. EPA currently uses the figure $9.4M.

In April 2020, a group of researchers at the University of Wyoming released a study, to appear in the *Journal of Benefit-Cost Analysis*, that analyzed the cost-effectiveness of social distancing measures, including shutdowns of large portions of the economy, currently being taken in the United States to combat Covid-19.

Here is a brief summary of their analysis: After reviewing studies in the literature, they assume that U.S. Gross Domestic Product (GDP) would decrease 2% ($6.5T) this year without social distancing and shutdowns, but that with social distancing and shutdowns the GDP will shrink by 6.2% ($13.7T). Thus the cost will be $7.2T. Next (also after reviewing published studies in the literature), they assume that social distancing and shutdown measures will save 1.24 million lives. Using a VSL figure of $10M, they conclude that the benefit will be $12.4T. Thus social distancing and shutdowns save $5.2T. Yes, that is over 5 trillion dollars net savings.
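The arithmetic of this summary can be checked directly (all figures taken from the text above; the nominal VSL of $10M is assumed):

```python
# Back-of-envelope version of the Wyoming study's reckoning (figures from the text).
VSL = 10e6                       # nominal value of statistical life, USD

gdp_loss_without = 6.5e12        # projected GDP loss without social distancing
gdp_loss_with = 13.7e12          # projected GDP loss with social distancing
lives_saved = 1.24e6             # projected lives saved by mitigation

cost = gdp_loss_with - gdp_loss_without
benefit = lives_saved * VSL
print(round(cost / 1e12, 1))              # -> 7.2 (trillion USD)
print(round(benefit / 1e12, 1))           # -> 12.4
print(round((benefit - cost) / 1e12, 1))  # -> 5.2 net savings
```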

In other words, social distancing and shutdown measures taken so far and projected in the next few months are definitely cost-effective. As Kip Viscusi (the Vanderbilt University economist mentioned above) observes, “Unless you have a really catastrophic outcome, the health benefits of social distancing swamp the costs.”

VSL considerations raise the question of whether workers in relatively hazardous job categories are truly being sufficiently compensated for the risk they assume.

To that end, here is a table of some high-risk U.S. occupations (taken from a 2019 USA Today press report), together with median annual wages and yearly fatal injuries per 100,000 (2017 data). Appended to this table, in the column “VSL premium,” is a calculation of what the wage premium should be for this class of worker, based on a reckoning of VSL = $10,000,000.

| Occupation | Median annual wage | Fatalities/100,000 | VSL premium |
|---|---|---|---|
| Fishers and related fishing workers | $28,310 | 100.0 | $10,000 |
| Logging workers | $38,840 | 87.3 | $8,730 |
| Aircraft pilots and flight engineers | $111,930 | 51.3 | $5,130 |
| Roofers | $38,970 | 45.2 | $4,520 |
| Refuse and recyclable material collectors | $36,160 | 34.9 | $3,490 |
| Structural iron and steel workers | $52,610 | 33.3 | $3,330 |
| Driver/sales workers and truck drivers | $37,610 | 26.9 | $2,690 |
| Farmers, ranchers and other agricultural managers | $69,620 | 24.0 | $2,400 |
| First-line supervisors of landscaping, lawn service and groundskeeping workers | $47,030 | 21.0 | $2,100 |
| Electrical power-line installers and repairers | $69,380 | 18.6 | $1,860 |
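The “VSL premium” column is just the fatality rate converted into an expected annual fatality cost; a minimal sketch of that reckoning:

```python
VSL = 10_000_000  # nominal value of statistical life, USD

def vsl_premium(fatalities_per_100k: float) -> float:
    """Annual wage premium implied by a job's fatality rate."""
    return fatalities_per_100k / 100_000 * VSL

print(round(vsl_premium(100.0)))  # fishers -> 10000
print(round(vsl_premium(87.3)))   # logging workers -> 8730
```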

Needless to say, these data indicate that many occupations are not being paid an equitable wage, based on their risk of fatality. Note, for example, that if one were to subtract the appropriate VSL premium ($10,000) from the median annual wages ($28,310) of fishers and related fishing workers, one would conclude that these workers are really only being paid $18,310, or approximately $8.80 per hour, which does not even meet the minimum wage in most U.S. states. Clearly these workers should be paid a significantly higher wage, commensurate with their risk.

In short, analyses based on nominal figures for the Value of Statistical Life (VSL) are useful in several contexts in today’s society, including reckonings of hazard pay and insurance premiums, and by governmental agencies in crafting environmental, health and safety programs. A recently published analysis of the cost-effectiveness of Covid-19 mitigation programs (social distancing and resulting shutdowns of large portions of the economy), summarized above, shows that these measures are indeed cost-effective, even though the costs are staggering.

Many recoil at the notion of assigning a numerical value to human life. The present author himself finds such reckonings rather disconcerting. So perhaps because of their controversial nature, these reckonings will mostly be relegated to academic studies, actuarial analyses, tort cases and governmental agencies, rather than directly in the public eye.

But given that glaring discrepancies in the value of life are implicitly in effect in numerous arenas of society, particularly in wages for workers in dangerous occupations, does it help to turn a blind eye to these injustices? Does it really help to refuse to do some hard reckoning in this area? Probably not.

For additional details and discussion, see this Wired article by Adam Rogers, this FiveThirtyEight.com article by Amelia Thomson-DeVeaux, this USA Today press report, this New York Times essay by Paul Krugman and this University of Wyoming study, mentioned above. See also this analysis on risks and benefits for Covid-19 strategies, by Marcos Lopez de Prado (Cornell University) and Alexander Lipton (Hebrew University of Jerusalem and MIT).

Exactly how life first emerged on Earth (the “abiogenesis” problem) remains a critical unsolved question in biology. Was it inevitable, given a favorable environment, or was it a fantastically improbable event? All we know for sure is that it occurred at least 3.8 billion years ago and possibly more than 4 billion years ago. The fact that life arose relatively soon after the surface of the Earth solidified indicates to some that abiogenesis was inevitable, but there is no way to know for sure. For further details, see this Math Scholar blog.

One leading hypothesis is that ribonucleic acid (RNA), which operates in biology alongside its more familiar cousin DNA, played a key role in the earliest abiogenesis events — a notion known as the RNA world hypothesis. For example, researchers recently found that certain RNA molecules can greatly increase the rate of specific chemical reactions, including, remarkably, the replication of parts of other RNA molecules. Thus perhaps RNA, or an even more primitive molecule similar to RNA, could have catalyzed its own replication in this manner, perhaps with the assistance of some related molecules. Perhaps some larger conglomerates of such compounds, packaged within membranes of simple hydrophobic compounds, could together have formed very primitive cells.

Nonetheless, the RNA world hypothesis faces major challenges as an explanation of abiogenesis.

In May 2009, a team led by John Sutherland of the University of Cambridge solved one problem that had perplexed researchers for at least 20 years, namely how the four basic nucleotides (building blocks) in RNA chains could have spontaneously assembled. Sutherland and his team first discovered one combination of chemicals assumed to be present on the primordial Earth that formed the RNA nucleotides cytosine and uracil. Then in May 2016, a team led by German chemist Thomas Carell found a plausible way to form adenine and guanine, the other two nucleotides. Finally, in November 2018, Carell’s team announced that they had found a single set of plausible reactions that could have formed all four RNA nucleotides on the early Earth. See this Nature article for additional details.

Nonetheless, researchers in the abiogenesis arena are still stuck with a stubborn unanswered question: How could large chains of RNA, sufficiently long to be the basis of primitive self-replicating evolutionary life, have spontaneously formed in the primordial Earth’s water-rich environment, which is thermodynamically unfavorable for the formation of such chains? The current consensus is that any such self-replicating RNA molecule would need at least 40-60 nucleotide bases (rungs in the chain), and most likely over 100, to possess even a minimal self-replicating function. What’s more, a pair of such molecules may be necessary, if one is to serve as a template for replication. Yet the largest number of bases that have been reproducibly demonstrated in laboratory experiments is 10, and the probability of successful formation drops sharply as the number of bases increases.

In a new paper published in *Nature Scientific Reports*, Japanese astronomer Tomonori Totani proposes a solution to this conundrum. He first reviews the relevant RNA world literature and analyzes the process of RNA formation and the prospects for this happening on a given planet in considerable detail. Then he calculates the probability of spontaneous assembly of a sufficiently long RNA chain to be the basis of life.

Interestingly, Totani finds that this probability is negligibly small on our planet, and minuscule even in the observable universe to which we belong, which contains approximately $10^{22}$ stars. But Totani finds that this probability would be virtually 100% in the much larger universe created in the inflationary epoch just following the Big Bang, which is estimated to contain approximately $10^{100}$ stars, most of which are beyond the “horizon” visible from Earth. Under this hypothesis, the fact that we reside on such an exceedingly fortunate planet to have been a home for RNA-based life is merely a consequence of the anthropic principle — if we did not reside on such a fortuitous planet, we would not be here to discuss the issue.
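The flavor of this argument can be illustrated with a toy model (this is NOT Totani’s actual calculation; the per-base success probability and trial count below are purely illustrative assumptions): if each added nucleotide succeeds independently with probability $p$, a chain of length $L$ forms with probability $p^L$, so the log of the expected number of successes falls linearly in $L$ while rising only logarithmically with the number of stars.

```python
import math

# Toy model only: illustrative numbers, not Totani's published calculation.
P_BASE = 0.06              # assumed per-nucleotide success probability
TRIALS_PER_STAR = 1e30     # assumed polymerization trials per star system

def log10_expected_chains(n_stars: float, length: int) -> float:
    """log10 of the expected count of successfully assembled chains."""
    return (math.log10(n_stars) + math.log10(TRIALS_PER_STAR)
            + length * math.log10(P_BASE))

print(log10_expected_chains(1e22, 100) > 0)   # observable universe -> False
print(log10_expected_chains(1e100, 100) > 0)  # inflationary universe -> True
```

With these (assumed) numbers, a 100-base chain is hopeless among $10^{22}$ stars yet expected among $10^{100}$, which is the shape of the argument, if not its details.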

By way of background, the inflationary Big Bang cosmology is the theory, first proposed in the 1980s by physicist Alan Guth, that in the first tiny fraction of a second after the Big Bang, the fabric of space expanded by a factor of roughly $10^{36}$ [Guth1997]. The inflation hypothesis explained two paradoxes: the “flatness problem” (in the very early universe, the ratio of the actual mass density of the universe to the critical density must have been exceedingly close to one), and the “horizon problem” (the fact that regions on opposite sides of the universe appear to have identical characteristics, even though no physical signal, not even light, could have traveled between them). The inflation theory is now widely accepted in the field, although some demur, as we will see later.

In previous Math Scholar articles (see article A and article B), we discussed the nagging conundrum known as Fermi’s paradox: If the universe, or even just the Milky Way, is teeming with life, why do we not see evidence of even a single other technological civilization? After all, if such a civilization exists at all, say in the Milky Way, it is almost certainly thousands or millions of years more advanced than ours, and thus exploring and communicating with habitable planets in the Milky Way would be a relatively simple and inexpensive undertaking, even for a small group of individuals.

Numerous solutions have been proposed to Fermi’s paradox, but almost all of them have devastating rejoinders. Arguments such as “extraterrestrial (ET) societies are under a strict global command not to disturb Earth,” or “ETs have lost interest in space research and exploration,” or “ETs are not interested in a primitive planet such as Earth,” or “ETs have moved on to more advanced communication technologies,” all collapse under the principle of diversity, a fundamental feature of evolution-based life (even assuming a very general, not-necessarily-carbon-based definition of life). In particular, it is hardly credible that in a vast, diverse ET society, and much less credible if there are numerous such societies, that not a single individual or group of individuals has ever attempted to contact Earth, using a means of communication that an emerging technological society such as ours could quickly and easily recognize. And note that once such a signal has been sent to Earth, it cannot be called back, according to known laws of physics.

Some researchers (see this PBS television show for instance) have claimed that since only 70 years or so have elapsed since radio/TV and radio telescope transmissions began on Earth, only ETs within 70 light-years of Earth, if any such exist, would even know of our existence. But this is clearly groundless: networks of lights have been visible on Earth for hundreds of years, other evidence of civilization (Egyptian pyramids, etc.) has been visible for thousands of years, large animal species, including early hominins, have been visible for millions of years, and atmospheric signatures of life have been evident for billions of years.

Arguments that exploration and/or communication are technologically “too difficult” for an ET society immediately founder on the fact that human society is on the verge of launching such technologies today, and ET societies, as mentioned above, are almost certainly thousands or millions of years more advanced. As a single example, since we now have rapidly improving exoplanet detection, analysis and imaging facilities, surely any ET society in the Milky Way galaxy has far superior facilities that can observe Earth. Also, within a few decades it will be possible to launch “von Neumann probes” that land on distant planets or asteroids, construct extra copies of themselves (with the latest software beamed from the home planet), and then launch these probes to other stars, thus exploring the entire galaxy within at most a million years or so [Nicholson2013]. Such probes could beam details of their discoveries back to the home planet and, importantly, also initiate communication with promising planets. Along this line, gravitational lenses, which utilize a star’s gravitational field as an enormously magnifying telescope, could be used to view images of distant planets such as Earth and to initiate communication with these planets [Landis2016].

So why have we not seen any such probes or communications? There are no easy answers. See this previous Math Scholar article for more discussion of proposed solutions and rejoinders to Fermi’s paradox.

One plausible resolution of Fermi’s paradox, although it is sternly resisted from many quarters, is the “rare Earth” explanation: Earth is a unique planet with characteristics fostering a long-lived biological regime leading to intelligent life [Ward2000,Gribbin2018]. Under this explanation, the reason we have not seen any evidence of the existence of ET civilizations, or any unmistakable attempts by an ET civilization to contact us, is simply that they do not exist, at least not within a vast distance from Earth. Clearly Totani’s analysis fits with the rare Earth explanation.

There are other arguments as well that suggest that Earth is significantly more special than typically recognized, in spite of promising observations of extrasolar planets (see this earlier Math Scholar article for details and references):

- To form more complex compounds, the RNA world scenario requires ultraviolet light at a certain moderate energy level, which the early Earth provided. Out of some 4000 recently discovered exoplanets, only one has both a moderate temperature regime for liquid water and satisfies the UV light criterion.
- Virtually all exoplanets orbiting red dwarf stars, which are much more numerous in the Milky Way than are planets orbiting our type of star, are unlikely to harbor life because of frequent flares of sterilizing X-ray radiation, and any atmosphere would be quickly stripped away.
- Many recently discovered exoplanets that have a solid crust, and thus are candidates for life, are “toffee planets,” with surface rocks hot enough to stretch like toffee candy, and are unlikely to feature plate tectonics, which is known to be essential for a long-lasting moderate regime for life.
- Our solar system is also quite special, in that it includes small planets like Earth but also large planets like Jupiter, which have cleared out debris and reduced asteroid impacts on Earth. Also, the solar system’s position in the Milky Way is rather special — close enough to the galaxy’s center to have sufficient concentrations of heavier elements for complex chemistry, yet not so close as to be bathed in sterilizing radiation.

For additional discussion why Earth and our solar system are quite possibly unique in the Milky Way for harboring life, see this 2018 Scientific American article by John Gribbin.

There are, of course, some significant qualifications and rejoinders to Totani’s analysis. To begin with, although the inflationary scenario of the early Big Bang offers elegant solutions to several vexing paradoxes of the observed universe, and is widely accepted in the field, it has significant difficulties that do not seem to be going away, such as how the inflation process started and stopped. Paul Steinhardt, one of the early proponents of inflation, has more recently expressed his doubts. At the least, it now seems likely that the inflation theory will need to be significantly modified, although various suggested modifications do not appear to affect Totani’s central conclusion. See this earlier Math Scholar article for details.

Secondly, Totani’s analysis applies only to a carbon-based (RNA-based) biology. But in his defense, although one can imagine living organisms based on other elements, carbon is by far the most suitable element for constructing the complex molecules required for any conceivable form of living or sentient being.

Finally, Totani’s calculations, although very well documented and based on the latest published research in the field, still are relatively tentative, and could easily be upset by a breakthrough in laboratory studies of the RNA world hypothesis. Totani himself offers at least one potential refutation of his analysis:

If extraterrestrial organisms of a different origin from those on Earth are discovered in the future, it would imply an unknown mechanism at work to polymerize nucleotides much faster than random statistical processes.

So was the origin of life on the early Earth an inevitable albeit remarkable event, bound to happen within a few tens or hundreds of millions of years after the formation of the early Earth? Or was it a freak accident of nature, and are we, as descendants of that exceedingly improbable event, the only sentient beings within a vast volume of the observable universe able to comprehend this astounding fact? Time will tell.

Either way, we await further research in this arena. It is an exciting time to be alive!

Pseudoscience in the age of Coronavirus

As this is being written (April 2020), the entire world is gripped in the throes of the rapidly spreading and deadly Covid-19 pandemic. International travel has been greatly curtailed worldwide; many businesses, large and small, have shut their doors; many K-12 schools and universities have closed; and entire regions and nations, encompassing well over one billion people, have been ordered to remain in their homes.

As of the current date (28 April 2020), the Johns Hopkins University Coronavirus Resource Center has tallied 3,062,000 confirmed cases and 212,000 deaths worldwide. The U.S. far and away leads in both statistics, with over 1,013,000 cases and 57,000 deaths. Spain is next, with 232,000 cases and 24,000 deaths, followed by Italy, France, Germany, U.K., Turkey, Iran, Russia and China. By the time you read this, these grim statistics are certain to be even higher.

Sadly, given the highly connected nature of modern global society, we almost certainly will face similar pandemics in the future, and so it is imperative to marshal all the scientific resources available to develop effective means to contain and treat the current pandemic and to prevent, contain and treat future pandemics. We have already seen the need to educate the public about the mathematics of exponential growth, the biology of transmission (e.g., the Covid-19 virus can be transmitted by someone who has no clear symptoms of infection), and the potential danger of mutant strains that may require new rounds of social distancing, business closures, treatment and vaccination.
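The exponential dynamics mentioned above can be illustrated with a minimal sketch (the initial count and doubling time below are purely illustrative assumptions, not epidemiological data):

```python
# Toy illustration of exponential growth: if cases double every 3 days,
# a small outbreak becomes enormous within a matter of weeks.

def cases_after(days, initial=100, doubling_time=3.0):
    """Cases after `days`, assuming a constant doubling time (illustrative only)."""
    return initial * 2 ** (days / doubling_time)

for d in (0, 15, 30, 45):
    print(f"day {d:2d}: ~{cases_after(d):,.0f} cases")
# day  0: ~100 cases
# day 15: ~3,200 cases
# day 30: ~102,400 cases
# day 45: ~3,276,800 cases
```

Even this toy model makes the point: measures that merely lengthen the doubling time change the outcome by orders of magnitude over a few weeks.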

All of this will require a substantially greater level of technical knowledge about Covid-19 and other potential diseases in the general public. At the very least, a large fraction of the public must clearly understand and be willing to comply with emergency decrees designed by scientists to stem the tide of infection.

Incredibly, just as modern society faces these daunting challenges, which, more than ever before, require a scientifically literate public, millions of reasonably well-educated and otherwise intelligent people in highly industrialized nations are embracing pseudoscience, ranging from bogus and untested medical “cures” to astrological predictions and treatments. Yes, astrology — the absurd and utterly unscientific notion that the future is determined by the positions of a few stars and planets when one is born.

A 2018 Pew Research poll found that 29% of U.S. adults believe in astrology. A 2018 National Science Foundation poll found that 37% of Americans view astrology as “very scientific” or “sort of scientific,” a figure that rose to 44% among 18-34-year-olds [NSB2018]. Studies have found similar trends in other nations, including the U.K.

Thus perhaps it is not surprising to read of astrological predictions and cures for Covid-19. One popular astrologer has claimed that he has learned, via astrology, that the coronavirus will quickly culminate before vanishing almost overnight. He added, “I see the collective south node, which represents our collective fate, is conjunct with Venus — an air planet of gas — in Capricorn which is the sign of suffering and material darkness.” Another astrologer notes that the first case in Wuhan, China emerged while “Jupiter, the planet of international travel, moved into Capricorn. Saturn, which can be a planet of limitation and restriction, rules Capricorn, and this influence placed a damper on Jupiter’s usually expansive nature.”

Some might dismiss these astrologers and their customers as engaging in harmless fun. But other groups are promoting and profiting from equally pseudoscientific medicinal agents to treat or prevent Covid-19, and, evidently, many of their customers believe that these remedies are effective.

As a single example, for many years regulatory agencies have targeted promoters of “essential oils” (various fragrant essences such as lemon oil or lavender oil), who have claimed a broad range of medicinal powers for their products. In 2014, the U.S. Food and Drug Administration (FDA) issued warning letters (see A and B for example) to various firms selling these products, citing unsubstantiated claims that certain oils are effective against the Ebola virus; can help prevent conditions including cancer and heart disease; can treat Alzheimer’s disease and other cognitive impairments; have an anti-tumor effect on various cancer cells; or can treat Herpes, MRSA, shingles, whooping cough, flu, lupus and warts.

Thus it is not surprising that some of these same firms, among others, are now under fire from government agencies such as the U.S. Federal Trade Commission for making spurious claims that their essential oil products can treat or prevent Covid-19, or that persons who have recently lost employment due to Covid-19 can make good money selling these products via these firms’ multilevel marketing operations.

Numerous other firms and individuals have promoted dubious Covid-19 cures. Conservative talk radio host Alex Jones has promoted a silver-based toothpaste that he claims will kill the coronavirus, saying (falsely), “The patented nanosilver we have, the Pentagon has come out and documented and Homeland Security has said this stuff kills the whole SARS-corona family at point-blank range.” The New York Attorney General’s office has issued a cease-and-desist order against Jones for these claims. A similar colloidal silver product was promoted on a TV show hosted by televangelist Jim Bakker, drawing a lawsuit from the Missouri Attorney General.

The FDA has also issued warnings to at least one firm that has marketed “fraudulent and dangerous” chlorine dioxide products, under names such as “Miracle Mineral Solution,” for various medical purposes, and, most recently, for prevention and treatment of Covid-19:

Despite previous warnings, the FDA is concerned that we are still seeing chlorine dioxide products being sold with misleading claims that they are safe and effective for the treatment of diseases, now including COVID-19. The sale of these products can jeopardize a person’s health and delay proper medical treatment.

Sadly, in this discussion we cannot avoid the “elephant in the room,” namely the unfortunate promotion of unproven Covid-19 treatments by U.S. President Donald J. Trump. Trump first mentioned the antimalarial drugs chloroquine and hydroxychloroquine for Covid-19 in a White House press briefing on 19 March 2020. He reiterated the recommendation on 4 April, and doubled down on the claim the next day, even after Anthony Fauci and other U.S. government scientists warned that these drugs were not proven treatments for Covid-19.

Many in the public took these recommendations quite seriously. U.S. pharmacies reported that after Trump’s comments on 4 April 2020, prescriptions for chloroquine and hydroxychloroquine surged by a factor of 46 compared with the average daily rate prevailing beforehand. Partly as a result, many patients suffering from lupus, for which these drugs are legitimately prescribed treatments, reported not being able to find the drugs for their condition.

Warnings by Fauci and others about these drugs being unproven for Covid-19 were confirmed a few days later when a Brazil study of chloroquine for Covid-19 was halted after several patients developed potentially fatal heart arrhythmia. The FDA then reiterated its directive that neither drug should be used for Covid-19, except possibly for patients in a closely monitored hospital or clinical trial setting.

Then on 23 April 2020, Trump suggested that sunlight or even ingesting disinfectants would be effective against the coronavirus. In the wake of negative reactions to these comments, Mr. Trump said the next day that he had only been speaking sarcastically, although the broadcast video of his comments shows no indication that he was not being serious.

Immediately after Trump’s disinfectant comments, emergency management authorities across the U.S. and in several other nations were flooded with inquiries, and found it necessary to broadcast emergency messages warning people not to ingest disinfectants. Officials in Washington State warned on Twitter, “Please don’t eat tide pods or inject yourself with any kind of disinfectant,” and reiterated their directive to rely only on official medical advice about Covid-19. Manufacturers of the U.S. consumer products Clorox and Lysol issued warnings not to use their products in this way.

Subsequent reactions to Mr. Trump’s comments have been largely predictable based on partisan affiliation, with opponents of Mr. Trump highlighting these comments, and defenders of Mr. Trump downplaying them and arguing that a focus on these comments is deflecting attention from more significant issues. But setting aside political squabbles, this episode has amply underscored a larger issue: the public’s relatively low level of scientific literacy and resulting susceptibility to misinformation and misdirection.

A sampling of the public’s level of scientific understanding of Covid-19 was given by the results of a Pew Research Center study, dated 8 April 2020. It found that 29% of the American public were convinced that the Covid-19 virus was made in a laboratory, either intentionally or accidentally (whereas genomic evidence quite conclusively shows that the virus had a natural origin). The poll also found that 25% of the American public expects that a vaccine will be publicly available in the next few months (whereas Anthony Fauci said on 15 April 2020 that a vaccine is at least one year to 18 months away, although a few months might be shaved off that schedule).

Misinformation on Covid-19 is not limited to the U.S. In the past week or two, more than twenty 5G cellular antenna towers in the U.K. were destroyed by arsonists convinced by conspiracy theorists that these antennas cause Covid-19 infections (such claims have been repeatedly debunked). Another conspiracy theory holds that Bill Gates designed the coronavirus in an attempt to save the planet by depopulating it (he did not).

Why in an age of unparalleled scientific progress, and unparalleled scientific challenges as well, are so many turning to utterly pseudoscientific products, pursuits and conspiracy theories? And why are so many so poorly informed on life-and-death issues related to Covid-19? While many point fingers at the failings of various public leaders, some of whom are indifferent if not downright hostile to science and technology, scientists themselves must shoulder some of the blame.

Indeed, while many of us have been successful in our day-to-day battles — proving theorems, computing simulations, performing laboratory work, analyzing data, authoring journal articles and obtaining grants — we are badly losing the war for the hearts and minds of the public. Relatively few researchers attempt to communicate directly to the public; relatively few operate a blog or other forms of outreach; relatively few visit public schools or venture outside their laboratories; and hardly any have tried to engage with the public in active research work.

And yet scientists have a great story to tell. What could be more exciting than the history of scientific progress over the past years, decades and centuries? Just within the past 100 years, researchers have discovered the theory of relativity, quantum mechanics and the standard model; unraveled the structure of DNA; sequenced the human genome; discovered the accelerating universe; observed planets orbiting thousands of distant stars; and detected the collisions of black holes and neutron stars (see this Math Scholar article for additional details).

Spurred by these scientific advances, human technology has also advanced at an astonishing pace: worldwide life expectancy has increased from 29 years as recently as 1880 to 71 today; transportation has advanced from horse-and-buggy to jet airplanes within the lifetimes of people still alive (4.3 *billion* airline passenger trips were taken in 2019); Moore’s Law has advanced by a whopping factor of 80 million since 1965, propelling computer technology to devices and capabilities unthinkable just 20 years ago; the internet now brings the entire world’s knowledge to one’s smartphone (roughly 50% of the world’s population now has one), and has greatly alleviated the isolation caused by the current pandemic; and genome sequencing and artificial intelligence are just getting started (see this Math Scholar article for additional details).
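The Moore’s Law figure above can be sanity-checked with a quick back-of-the-envelope calculation (the factor of 80 million is the article’s figure; treating Moore’s Law as a steady doubling process is a simplifying assumption):

```python
import math

# Rough check: a factor of 80 million between 1965 and ~2020
# corresponds to one doubling roughly every two years.
factor = 80e6
years = 2020 - 1965                # 55 years
doublings = math.log2(factor)      # about 26.3 doublings
print(f"{doublings:.1f} doublings -> one doubling every {years / doublings:.1f} years")
# 26.3 doublings -> one doubling every 2.1 years
```

A doubling period of about two years matches the classic statement of Moore’s Law, so the factor of 80 million is internally consistent.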

Yes, scientific progress is very real, a shining light in a world beset by gloom and doom. Let’s share the excitement!

“From Analysis to Visualization: A Celebration of the Life and Legacy of Jonathan M. Borwein”

The volume has been published by Springer, and is available for purchase from the Springer website, or from Amazon.com.

The individual papers are authored by many of Jonathan Borwein’s colleagues and collaborators. Here is the table of contents:

- Applied analysis, optimization and convexity:
- “Introduction,” by Regina S. Burachik and Guoyin Li.
- “Symmetry and the monotonicity of certain Riemann sums,” by David Borwein, Jonathan M. Borwein and Brailey Sims.
- “Risk and utility in the duality framework of convex analysis,” by R. Tyrrell Rockafellar.
- “Characterizations of robust and stable duality for linearly perturbed uncertain optimization problems,” by Nguyen Dinh, Miguel A. Goberna, Marco A. Lopez and Michel Volle.
- “Comparing averaged relaxed cutters and projection methods,” by Reinier Diaz Millan, Scott B. Lindstrom and Vera Roshchina.

- Education:
- “Introduction,” by Naomi Simone Borwein.
- “On the educational legacies of Jonathan M. Borwein,” by Naomi Simone Borwein and Judy-anne Heather Osborn.
- “How mathematicians learned to stop worrying and love the computer,” by Keith Devlin.
- “Crossing boundaries: Fostering collaboration between mathematics educators and mathematicians in initial teacher education,” by Merrilyn Goos.
- “Mathematics education in the computational age: Challenges and opportunities,” by Kathryn Holmes.
- “Mathematics education for indigenous students in preparation for engineering and information technologies,” by Collin Phillips and Fu Ken Ly.
- “Origami as a teaching tool for indigenous mathematics education,” by Michael Assis and Michael Donovan.
- “Dynamic visual models: Ancient ideas and new technologies,” by Damir Jungic and Veselin Jungic.
- “A random walk through experimental mathematics,” by Eunice Y. S. Chan and Robert M. Corless.

- Financial mathematics:
- “Introduction,” by David H. Bailey and Qiji J. Zhu.
- “A holistic approach to empirical analysis: The insignificance of P, hypothesis testing and statistical significance,” by Morris Altman.
- “Do financial gurus produce reliable forecasts?,” by David H. Bailey, Jonathan M. Borwein, Amir Salehipour and Marcos Lopez de Prado.
- “Entropy maximization in finance,” by Jonathan M. Borwein and Qiji J. Zhu.

- Number theory, special functions and pi:
- “Introduction,” by Richard P. Brent.
- “Binary constant-length substitutions and Mahler measures of Borwein polynomials,” by Michael Baake, Michael Coons and Neil Manibo.
- “The Borwein brothers, pi and the AGM,” by Richard P. Brent.
- “The road to quantum computational supremacy,” by Cristian S. Calude and Elena Calude.
- “Nonlinear identities for Bernoulli and Euler polynomials,” by Karl Dilcher.
- “Metrical theory for small linear forms and applications to interference alignment,” by Mumtaz Hussain, Seyyed Hassan Mahboubi and Abolfazl Seyed Motahari.
- “Improved bounds on Brun’s constant,” by Dave Platt and Tim Trudgian.
- “Extending the PSLQ algorithm to algebraic integer relations,” by Matthew P. Skerritt and Paul Vrbik.
- “Short walk adventures,” by Armin Straub and Wadim Zudilin.

Why are people embracing astrology in an age of science?

Though many do not recognize the fact, behind the disturbing headlines that dominate the news today, scientific progress marches forward, unabated and undiminished. Just within the past 100 years, researchers have discovered the theory of relativity, quantum mechanics and the standard model; unraveled the structure of DNA; sequenced the human genome; discovered the accelerating universe; observed extrasolar planets orbiting thousands of distant stars; and detected the collisions of black holes. See this Math Scholar article for additional details.

Spurred by these scientific advances, human technology has advanced at an astonishing pace: advances in medical technology and living conditions have increased worldwide life expectancy from 29 years as recently as 1880 to 71 today; transportation has advanced from horse-and-buggy to jet airplanes within the lifetimes of people still alive (currently 4.3 *billion* airline passenger trips are taken worldwide each year); Moore’s Law has advanced by a whopping factor of 80 million since 1965, propelling computer technology to devices and capabilities unthinkable just 20 years ago; the internet now brings the entire world’s knowledge to one’s smartphone, one of which is now in the hands of roughly 50% of the world’s population; and genome sequencing (which has advanced even faster than Moore’s Law) and artificial intelligence are just getting started. See this Math Scholar article for additional details.

This same spirit of relentless scientific progress has extended to a broader range of social and economic indicators: crime is down significantly over the past few decades (in spite of headlines to the contrary); so are deaths in war worldwide, normalized to the Earth’s population; many diseases and medical conditions have been conquered or controlled; hundreds of millions fewer worldwide live in extreme poverty (the number in extreme poverty drops by approximately 700,000 *every day*); and many more are living in democratic societies. See this Math Scholar article for additional details.

Yes, progress is real. Ours is truly a scientific age.

But none of this is license for complacency, since human society faces truly daunting problems in the years ahead, ranging from growing levels of income inequality to looming environmental and even biological threats.

As this is being written (March 2020), the entire world is gripped in the throes of the rapidly spreading and deadly COVID-19 pandemic. International travel has been greatly curtailed worldwide; many businesses, large and small, have shut their doors; many K-12 schools and universities have closed; and entire regions and nations have been ordered to remain in their homes. Sadly, given the more connected nature of modern society, we may well face similar pandemics in the future, and so we need to formulate better means to prevent them and deal with them. This will require a substantially greater level of scientific literacy in the general population worldwide, so as to appreciate the threats of mutating pathogens and the challenges of countering them.

At the same time, we face the ever-growing peril of global warming, which is even more potentially destructive and threatening to human life than COVID-19, because many of the potential long-term consequences of global warming may be irreversible. There is no question about the scientific consensus here. At least 97% of climate science researchers agree with the central conclusion that the Earth is warming *and* that human activity is the primary cause.

But sadly, in spite of years of public discussion, somber warnings by scientists, countless nature documentaries depicting the effects of global warming, as well as severe wildfires, storm surges and hurricanes (which are likely exacerbated by climate change), large segments of the public simply do not regard global warming as real, much less as a major threat. In a 2017 Pew Research Center survey, 23% of Americans denied that there is any solid evidence that the Earth has been warming, and of those who acknowledge warming, nearly half doubted that it is due to human activities. For additional details on the urgent challenge of global warming, see this Math Scholar article.

In short, it is abundantly clear that society today, at all levels, is more dependent on science and technology than ever before, both in everyday life and as a source of economic growth and stability. And society, more than ever before, also faces grim threats and challenges, which must be addressed soberly and scientifically if we are to find workable solutions. To deal with pandemics, we must develop a broad range of new vaccines and antiviral agents, and do so on a much more rapid time scale than in the past. To deal with global warming, we must develop new clean energy technologies, inexpensive and easily deployable, which can meet the energy needs of both the major industrialized nations and very poor regions. And hundreds of other challenges could be listed, including some that we can only dimly foresee at the present time.

Incredibly, just as modern society faces these daunting challenges, which, more than ever before, require a scientifically literate and scientifically involved public, millions of reasonably well-educated and otherwise intelligent people in highly industrialized nations are embracing astrology — yes astrology, the absurd and utterly unscientific notion that one’s personality and future life are determined by the positions of a few stars and planets at the moment one is born, in an enclosed hospital room, months after one’s genome was biologically set in place at conception!

According to a 2018 Pew Research poll, 29% of U.S. adults believe in astrology, and 61% believe in at least one of the following: “spiritual energy can be located in physical things”, “psychics”, “reincarnation” and “astrology”. Surprisingly, the percentages are substantially higher (47% and 78%, respectively) among those adults who list their religious preference as “nothing in particular.” The paradoxical conclusion is that the “nones,” in current parlance, may be turning away from traditional Judeo-Christian monotheism but are embracing astrology and other pseudoscientific worldviews [Gecewicz2018].

Other polls have found similar results. A 2018 poll published by the U.S. National Science Foundation found that 37% of Americans view astrology as “very scientific” or “sort of scientific,” a percentage that has increased in recent years, up from only 31% in 2006. While these increases are seen across all age groups in the survey, they are particularly pronounced among younger respondents: among 18-24-year-olds and 25-34-year-olds, the figures were both 44%, up from 36% and 33%, respectively, in 2006 [NSB2018].

In short, there has been a significant increase, not decrease, in acceptance of astrology as “scientific” in U.S. society, with the increase particularly pronounced among the younger “Millennial” and “Gen X” age groups. Studies have found similar trends in other nations, including the U.K.

In a 2018 article in *The Atlantic*, Julie Beck describes this “New Age of Astrology” [Beck2018] as having been greatly facilitated by the rise of the internet. One can find astrology-related websites to fit almost any hobby, interest or lifestyle, including cat breeds, types of French fries and poetry. Beck quotes Lucie Greene, director of a cultural innovation tracking group, saying, “Over the past two years, we’ve really seen a reframing of New Age practices, very much geared toward a Millennial and young Gen X quotient.” She quotes a senior editor at one of the more popular horoscope websites who says that traffic “has grown really exponentially.” Another editor says that their site received 150% more traffic in 2017 than the year before.

Along this line, there has been a significant increase in what might be termed “pseudomedicine” — the promotion of pseudoscientific medicinal products promising to treat numerous ills and conditions. One movement here is “essential oils” — the claims that certain fragrant essences have a broad range of medicinal powers. One catalogue used by essential oil devotees includes treatments for hundreds of conditions, covering hundreds of pages. Needless to say, such claims are utterly without peer-reviewed scientific basis.

In 2014, the U.S. Food and Drug Administration issued a warning letter to the CEO of one of the companies selling these products. The FDA letter cited numerous violations in marketing materials by this firm and its multi-level marketing agents, including claims that the Ebola virus cannot survive in the presence of certain oils; that regular use of a certain oil may help prevent conditions including cancer and heart disease, and can treat cognitive impairments; and that components of one particular essential oil have an anti-tumor effect on various cancer cells, including cancers of the prostate, colon, cervix, bladder and brain, as well as leukemia cells and fibrosarcoma cells.

Along this line, Hollywood actress Gwyneth Paltrow has been promoting numerous products via her lifestyle brand “Goop” as having medicinal effects. For example, Goop has promoted stickers to be attached to one’s body, called “Body Vibes,” which Goop claims can “rebalance the energy frequency in our bodies.” At one point Goop even claimed that these stickers are constructed out of material used by NASA to line space suits, but NASA quickly denied this. Other products marketed by Goop include objects the size of a small egg that are to be inserted by women for medicinal value, again an utterly unscientific claim for which Goop was fined $145,000. But Goop continues, even thrives. Its latest venture is a Netflix show Goop Lab, which presents hours of pseudoscientific discussions and promotions.

Why in an age of unparalleled scientific progress, and unparalleled scientific challenges as well, are so many turning to utterly pseudoscientific pursuits and products? While many point fingers at various public leaders who are indifferent, if not downright hostile, to science and technology, scientists themselves (encompassing a broad range of mathematical, physical, biological and social science disciplines) must shoulder some of the blame.

Indeed, while many of us have been successful in various battles — proving theorems, computing simulations, performing laboratory work, analyzing data, authoring journal articles and obtaining grants — we are badly losing the war for the hearts and minds of the public.

What can be done? Here are some suggestions:

- Start or contribute to a blog.
- Visit schools and give public lectures.
- Write articles for science news forums.
- Study creative writing, arts and humanities to sharpen communication skills.
- Ensure that those researchers who are effective communicators are properly recognized in hiring, promotion, tenure and research funding decisions.
- Promote interdisciplinary coursework and studies at universities that combine the arts with science, working in synergy rather than in opposition to other fields.
- Find ways to involve the public in research projects, for example by inviting the public to help with field studies or lending home computer cycles for data analysis.

Scientists have a great story to tell. What could be more exciting than the history of breathtaking progress over the past years, decades and centuries? Let’s share the excitement!
