![](https://media.springernature.com/full/springer-static/image/art%3A10.1038%2Fs41586-024-08252-9/MediaObjects/41586_2024_8252_Fig1_HTML.png)
Schematic of GenCast’s procedure to generate a weather forecast. Credit: Ilan Price, Alvaro Sanchez-Gonzalez, Ferran Alet, Tom R. Andersson, Andrew El-Kadi, Dominic Masters, Timo Ewalds, Jacklynn Stott, Shakir Mohamed, Peter Battaglia, Remi Lam and Matthew Willson [Nature]
- In 2011, IBM’s “Watson” computer system defeated two premier champions of the American quiz show Jeopardy!.
- In 2017, AlphaGo Zero, developed by DeepMind, a subsidiary of Alphabet (Google’s parent company), defeated an earlier program, also developed by DeepMind, which in turn had defeated the world’s best Go player, a feat that many observers had not expected to see for decades. By one measure, AlphaGo Zero’s performance is as far above the world’s best Go player as the world’s best Go player is above a typical amateur.
- In 2020, AlphaFold 2, also developed by DeepMind, scored 92% on the 2020 Critical Assessment of Protein Structure Prediction (CASP) test, far above the 62% achieved by the second-best program in the competition. Nobel laureate Venki Ramakrishnan of Cambridge University exulted, "This computational work represents a stunning advance on the protein-folding problem, a 50-year-old grand challenge in biology. It has occurred decades before many people in the field would have predicted." See below for additional details.
- In January 2023, researchers with the Search for Extraterrestrial Intelligence (SETI) project announced that they were deploying machine-learning techniques to sift through large datasets of microwave data. As Alexandra Witze wrote in Nature, "Will an AI be the first to discover alien life?"
- In 2022-2023, OpenAI launched ChatGPT, a software system with a remarkable facility to generate surprisingly cogent text on many topics and to interact with humans. This in turn has launched an incredible boom in AI research and development by numerous research laboratories and private firms that shows no sign of abating. In November 2024, Apple began providing OpenAI software in iPhones and other devices.
- Nvidia, which produces computer processor chips favored for many AI systems, has seen explosive demand for its products, with its stock increasing 10-fold from November 2022 to November 2024. This in turn has spurred a furious contest to develop competing processors by other tech firms, including Advanced Micro Devices, Amazon and several startups.
- The financial industry already relies heavily on machine-learning methods, and a major expansion of these technologies is coming, possibly displacing thousands of highly paid workers or rendering their skills obsolete.
- In December 2024, Renaissance Philanthropy and XTX Markets announced the “AI for Math Fund,” which will support research and development in production-grade tools and datasets for applying AI in mathematical research. Prominent UCLA mathematician Terence Tao will serve on the advisory board. Tao’s vision for the future is research mathematicians working in close-knit collaboration with AI-based tools; see this Math Scholar blog for details.
![](https://media.nature.com/w1219/magazine-assets/d41586-021-03499-y/d41586-021-03499-y_19875774.jpg)
Model of human nuclear pore complex, built using AlphaFold2; credit: Agnieszka Obarska-Kosinska, Nature
More details on the third item above: Computational protein folding
Proteins are the workhorses of biology. There are thousands of proteins in human biology, and many millions in the larger biological kingdom. Each protein is specified as a string of amino acids, typically several hundred to several thousand long, where each amino acid is encoded by a three-letter "word" of DNA. The key to biology, however, is the three-dimensional shape of the protein — how a protein "folds." Protein shapes can be investigated experimentally, but this is an expensive, error-prone and time-consuming laboratory operation, so for many years researchers have pursued computational protein folding.
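As a toy illustration of this encoding, the following sketch translates a short DNA string into its amino-acid sequence, reading three letters at a time. Only a handful of entries from the standard genetic code are included here for brevity; the sequence itself is invented for the example:

```python
# Toy illustration of the DNA -> amino-acid encoding described above.
# Only a few entries of the standard genetic code are included for brevity.
CODON_TABLE = {
    "ATG": "Met",   # methionine (also the "start" codon)
    "TTT": "Phe",   # phenylalanine
    "GGC": "Gly",   # glycine
    "AAA": "Lys",   # lysine
    "TGA": "STOP",  # stop codon: marks the end of the protein
}

def translate(dna: str) -> list[str]:
    """Read the DNA string three letters at a time, mapping each
    codon to its amino acid, stopping at a stop codon."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        amino = CODON_TABLE[dna[i:i + 3]]
        if amino == "STOP":
            break
        protein.append(amino)
    return protein

print(translate("ATGTTTGGCAAATGA"))  # ['Met', 'Phe', 'Gly', 'Lys']
```

Determining this one-dimensional sequence is the easy part; the hard part — the folding problem that AlphaFold addresses — is predicting the three-dimensional shape the resulting chain adopts.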
Given the daunting challenge and importance of the protein folding problem, in 1994 a community of researchers in the field organized a biennial competition known as the Critical Assessment of Protein Structure Prediction (CASP). At each iteration of the competition, the organizers announce a set of target problems, which teams of researchers worldwide then attempt to solve using their best current tools. In 2018, the CASP competition had a new entry: AlphaFold, a machine-learning-based program developed by DeepMind. For the 2020 CASP competition, the DeepMind team developed a new program, known as AlphaFold 2. It achieved a 92% average score, far above the 62% achieved by the second-best program in the competition.
“It’s a game changer,” exulted German biologist Andrei Lupas, who has served as an organizer and judge for the CASP competition. “This will change medicine. It will change research. It will change bioengineering. It will change everything.” Lupas mentioned how AlphaFold 2 helped to crack the structure of a bacterial protein that Lupas himself has been studying for many years. “The [AlphaFold 2] model … gave us our structure in half an hour, after we had spent a decade trying everything.” Nobel laureate Venki Ramakrishnan of Cambridge University added, “This computational work represents a stunning advance on the protein-folding problem, a 50-year-old grand challenge in biology. It has occurred decades before many people in the field would have predicted.”
DeepMind takes on climate modeling
Computational climate modeling is one of the most important grand challenges in the high-performance computing field. Multiple international research teams, and many thousands of individual researchers worldwide, are involved in developing and refining large-scale supercomputer simulations. These simulations typically involve general circulation models (GCMs), which apply the laws of physics to simulate physical processes in Earth's atmosphere and oceans. Researchers report their advances in large conferences, such as the annual meetings of the American Geophysical Union. The supercomputers used in these efforts are among the most powerful (and expensive) systems on the planet.
Given their success with protein folding, DeepMind researchers, among others, have turned their attention to the challenge of long-term climate modeling. Their approach is to apply machine learning programs to the large trove of historical weather data, or perhaps to employ hybrid approaches that combine analyses of historical datasets with physics-based simulations. And here again they have been very successful.
In July 2024, a team of DeepMind researchers announced that their NeuralGCM program can accurately track various climate metrics for multiple decades, and exhibit phenomena such as realistic frequency and trajectories of tropical hurricanes, all using substantially more modest computational resources than conventional GCMs require. Their Nature abstract claims, "our approach offers orders of magnitude computational savings over conventional GCMs."
DeepMind’s new weather forecasting system
Needless to say, virtually every sector of modern society relies on reliable weather forecasting, including airlines, trucking, package delivery, education and countless other businesses and governmental agencies. Accurate forecasts of extreme weather events (hurricanes, snowstorms, heat waves, etc.), which may require major deployments of equipment, emergency workers or even evacuations, are particularly crucial. Thus for several decades governmental bodies worldwide have relied on extensive large-scale monitoring networks and computer simulations, implementing ever-more-sophisticated physical models and numerical algorithms, and running on state-of-the-art supercomputers, to generate forecasts.
Weather forecasting is closely connected with computational climate modeling — indeed, in high-level terms a climate model is essentially a longer-term weather forecasting model: the two employ very similar computational techniques and run on comparable high-performance computing systems.
The latest development here is that the DeepMind team has applied its machine learning climate modeling technology to the problem of weather forecasting, and again they have been smashingly successful. In Nature articles published in December 2024 (see here and here), they present results for “GenCast,” a probabilistic weather model that, in a suite of tests, outperforms the best conventional physics-based numerical weather prediction program, and requires much more modest computational resources.
At present, the state of the art in operational weather forecasting is the ENS program employed by the European Centre for Medium-Range Weather Forecasts (ECMWF), which has centers in the U.K., Italy and Germany. In their paper, DeepMind researchers found that GenCast was able to generate an ensemble of 15-day global forecasts (five days longer than typical ENS runs), at 12-hour steps and an impressive 0.25-degree latitude-longitude resolution, for more than 80 surface and atmospheric variables, in only eight minutes of runtime. Its accuracy was superior to the ENS program on 97.2% of the 1320 targets they evaluated. Further, they found that GenCast was significantly better at predicting extreme weather, including hurricane tracks.
Three major advances
It is hard to overstate the impact of these developments. Consider just the three main items noted above:
- Development of practical computational protein folding technology.
- Development of practical (and much less expensive) long-term climate modeling technology.
- Development of practical (and much less expensive) 10-day and 15-day weather forecasts.
Any one of these developments would be a scientific achievement of the first order, truly cause for celebration. Yet each has been notched within just a few years, and by the same organization (DeepMind) — although it must be emphasized that the DeepMind work builds on conceptual and software contributions from thousands of other researchers, and researchers at other organizations are hard at work extending and improving it.
What will the future hold?
So where is all this heading? In a 2011 Time article, futurist Ray Kurzweil predicted an era, arriving around 2045, when machine intelligence would meet, then transcend human intelligence. Such future intelligent systems would then design even more powerful technology, resulting in a dizzying advance that we can only dimly foresee at present. Kurzweil provides more details of this vision in his book The Singularity Is Near.
Futurists such as Kurzweil certainly have their skeptics and detractors. Sun Microsystems co-founder Bill Joy has warned that humans could be relegated to minor players in the future, if not extinguished. Indeed, in many cases AI systems already make decisions that humans cannot readily understand or gain insight into. But even setting aside such worries, the societal, legal, financial and ethical challenges of these technologies are considerable, as the current backlash against technology, science and "elites" attests.
One implication of all this is that education programs in engineering, finance, medicine, law and other fields will need to change dramatically to train students in the use of emerging AI technology. The educational system itself will also need to change, perhaps along the lines of massive open online courses (MOOCs). It should also be noted that large technology firms such as Amazon, Apple, Facebook, Google and Microsoft are aggressively luring top AI talent, including university faculty, with huge salaries. But clearly the field cannot eat its seed corn in this way; some solution is needed to permit faculty to continue teaching while still participating in commercial R&D work.
But one way or the other, intelligent computers are coming. Society must find a way to accommodate this technology, and to deal respectfully with the many people whose lives will be affected. But not all is gloom and doom. Steven Strogatz envisions a mixed future:
> Maybe eventually our lack of insight would no longer bother us. After all, AlphaInfinity could cure all our diseases, solve all our scientific problems and make all our other intellectual trains run on time. We did pretty well without much insight for the first 300,000 years or so of our existence as Homo sapiens. And we'll have no shortage of memory: we will recall with pride the golden era of human insight, this glorious interlude, a few thousand years long, between our uncomprehending past and our incomprehensible future.