Evolutionary computing and artificial intelligence

Evolutionary computing

Evolution has been studied mathematically since the early 1900s, with works by D’Arcy Thompson, Ronald Fisher and others. Among other things, these analyses produced quantitative estimates of how many generations a given species would require to achieve a certain level of observed change. With the rise of computer technology in the 1960s, computational simulations were devised to study evolution.

From here it was a relatively straightforward step to apply these same evolution-mimicking simulations to other applications as well, an approach originally termed genetic algorithms. In a typical application, potential engineering design parameters are varied with each generation, often across many thousands of competing variants, and only the top-scoring variants are carried into the next generation.
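To make this concrete, here is a minimal sketch of a generic genetic algorithm in Python. The fitness function, mutation scale and population sizes are hypothetical placeholders chosen only for illustration, not taken from any study mentioned here.

```python
import random

POP_SIZE = 100        # candidate designs per generation
N_PARAMS = 8          # parameters describing each design
KEEP = 20             # top-scoring variants kept each generation
MUTATION_SCALE = 0.1  # size of the random perturbations

def fitness(params):
    # Placeholder score; a real application would evaluate an
    # engineering design, a material property, etc. Here the best
    # design is simply the one closest to the origin.
    return -sum(p * p for p in params)

def mutate(params):
    # Perturb every parameter slightly, mimicking mutation.
    return [p + random.gauss(0.0, MUTATION_SCALE) for p in params]

# Start from a random population of parameter vectors.
population = [[random.uniform(-1.0, 1.0) for _ in range(N_PARAMS)]
              for _ in range(POP_SIZE)]

for generation in range(50):
    # Score every variant and keep only the top performers.
    population.sort(key=fitness, reverse=True)
    survivors = population[:KEEP]
    # Refill the population with mutated copies of the survivors.
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(POP_SIZE - KEEP)]

print("best parameters found:", max(population, key=fitness))
```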

There are numerous variations and alternative strategies of this general genetic algorithm approach, going by names such as evolutionary strategies, evolutionary programming, neuroevolution and swarm intelligence.

As a single example of the application of this approach, a 2013 study used a genetic algorithm to search for an optimal solar light splitter material over a space of 19,000 perovskite materials (i.e., hybrid organic-inorganic lead or tin halide-based materials as the light-harvesting active layer). The researchers found that the genetic algorithm produced optimal parameter combinations six times faster than a random search, an advantage that rose to a factor of 12 to 17 when certain chemical arguments were added to the search.

Artificial intelligence

These are heady times for the fields of artificial intelligence and machine learning. AI-based systems are being fielded in science, engineering, speech recognition, image recognition, business management and finance, to name just a few areas, and many more are in development.

[Image: Go playing board]

As a single remarkable example, in 2016 researchers at DeepMind, a subsidiary of Alphabet (Google’s parent company), developed an AI-based computer program named AlphaGo, based on “deep neural networks,” that defeated Lee Se-dol, the world’s champion Go player. This defeat came as a considerable shock to the worldwide Go community, as computer supremacy was not expected for at least a decade or two, if ever, given the very subtle nature of successful Go strategies.

Then in October 2017, DeepMind researchers unveiled a new program called AlphaGo Zero. This program bypassed the first step of AlphaGo, namely inputting tens of thousands of recorded games. Instead, the DeepMind researchers merely programmed the rules of Go, together with a simple reward function that rewarded games won, and had the program play games against itself. Initially the program flailed, but its level of skill increased rapidly. After just three days of playing against itself, AlphaGo Zero had advanced to the point that it defeated the earlier AlphaGo program 100 games to zero. After 40 days of training, AlphaGo Zero was as far ahead of champion Lee Se-dol as Lee Se-dol is ahead of typical amateurs.
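The essential loop (play games against yourself, reward wins, improve) is simple enough to sketch. The following is a minimal, hypothetical illustration in Python using tic-tac-toe as a stand-in game. It learns a tabular value function from self-play alone; this is a far cry from AlphaGo Zero’s deep networks and tree search, but it shows the principle that the only inputs are the game rules and a reward for winning.

```python
import random
from collections import defaultdict

# Self-play value learning for tic-tac-toe (a stand-in for Go).
LINES = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]
values = defaultdict(float)  # value of a board, from the mover's perspective

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def legal_moves(board):
    return [i for i, cell in enumerate(board) if cell == "."]

def after(board, move, player):
    nxt = board[:]
    nxt[move] = player
    return nxt

def play_one_game(epsilon=0.2, learning_rate=0.1):
    board, player, history = ["."] * 9, "X", []
    while winner(board) is None and legal_moves(board):
        options = legal_moves(board)
        if random.random() < epsilon:
            move = random.choice(options)  # explore a random move
        else:                              # otherwise exploit learned values
            move = max(options,
                       key=lambda m: values["".join(after(board, m, player))])
        board = after(board, move, player)
        history.append(("".join(board), player))
        player = "O" if player == "X" else "X"
    # Simple reward function: +1 for the winner's positions, -1 for the loser's.
    win = winner(board)
    for state, mover in history:
        target = 0.0 if win is None else (1.0 if mover == win else -1.0)
        values[state] += learning_rate * (target - values[state])

for _ in range(20000):  # training is nothing but repeated self-play
    play_one_game()
```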

For additional details, see the previous Math Scholar blog.

Evolutionary computing in AI

With the many successes in engineering, computational biology and computational chemistry under its belt, it was inevitable that the methodology of genetic algorithms would be applied to problems in artificial intelligence, combining these two promising technologies into one.

For example, Google recently applied the methods of evolutionary computing combined with neural networks (a combination known as neuroevolution) to the problem of image recognition.

Researchers generated 1,000 image recognition algorithms, each of which was trained using state-of-the-art deep neural networks to recognize a selected set of images. An array of 250 computers then repeatedly selected pairs of algorithms and had each attempt to identify an image; only the higher-scoring algorithm of each pair proceeded to the next iteration. The survivor was then copied, and the copy was altered somewhat, mimicking mutations in natural evolution. The copy was trained on the same dataset as its parent and placed back into the working population of 1,000 algorithms. The process was then repeated.
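A schematic sketch of this tournament loop in Python might look like the following. The Model class, its hyperparameters and the evaluate function are hypothetical stand-ins for real neural-network training, which is far too heavy for a sketch; the selection-and-mutation structure is what matters.

```python
import copy
import random

POP_SIZE = 1000  # working population of image-recognition models

def evaluate(genes):
    # Placeholder for training a network on the image dataset and
    # measuring its validation accuracy; returns a synthetic score here.
    return random.random()

class Model:
    def __init__(self):
        # Stand-in for a real neural architecture: just hyperparameters
        # plus the accuracy obtained after (simulated) training.
        self.genes = {"layers": random.randint(2, 10),
                      "learning_rate": random.uniform(1e-4, 1e-1)}
        self.accuracy = evaluate(self.genes)

def mutate(parent):
    # Copy the winner, perturb one "gene," and retrain the copy.
    child = copy.deepcopy(parent)
    if random.random() < 0.5:
        child.genes["layers"] = max(1, child.genes["layers"] + random.choice([-1, 1]))
    else:
        child.genes["learning_rate"] *= random.uniform(0.5, 2.0)
    child.accuracy = evaluate(child.genes)
    return child

population = [Model() for _ in range(POP_SIZE)]

for step in range(100000):
    # Pick two models at random; only the higher scorer survives.
    i, j = random.sample(range(POP_SIZE), 2)
    winner, loser = (i, j) if population[i].accuracy >= population[j].accuracy else (j, i)
    # Replace the loser with a mutated, retrained copy of the winner.
    population[loser] = mutate(population[winner])

print("best accuracy in population:", max(m.accuracy for m in population))
```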

Google researchers found that their neuroevolution scheme could achieve accuracies as high as 94.6%. Interestingly, one challenge the researchers encountered was that the process often got “stuck” at a given level of performance for a while, which can be seen as an analogue of the phenomenon of punctuated equilibria popularized by biologist Stephen Jay Gould.

Google is now applying this technique in a more general context. Similar efforts, such as the independent OpenAI project, aim to produce open-source software for artificial intelligence that can be used in a wide variety of applications.

Some additional details of the Google work are available in this Quartz article.

Conclusion

Lately it seems that new advances in artificial intelligence are announced almost on a weekly basis. Where is this heading? Almost certainly there will be major labor dislocations, as AI-enabled systems drive cars and trucks, and cook and serve food.

Artificial intelligence is already having a major impact in the finance world, particularly when combined with systems that incorporate data as diverse as satellite images, weather records and social media.

As we asked in a recent Math Scholar blog, are we heading for utopia or dystopia? Will AI combined with evolutionary computing and big data lead us to the Garden of Eden, or to a Hades, with millions of angry, disgruntled laid-off workers and others who have lost meaning in life joining to “wreck it all”?

It all depends on decisions that we will have to make, one way or the other, in the next few years.
