2018: The year that artificial intelligence came of age

AI’s tortuous history

The field of artificial intelligence (AI) is actually rather old. Ancient Greek, Chinese and Indian philosophers developed principles of formal reasoning several centuries before Christ. In 1651, British philosopher Thomas Hobbes wrote in Leviathan that “reason … is nothing but reckoning (that is, adding and subtracting).” In 1843, Ada Lovelace, widely considered to be the first computer programmer, ventured that machines such as Babbage’s Analytical Engine “might compose elaborate and scientific pieces of music of any degree of complexity or extent.”

In 1950, Alan Turing’s landmark paper “Computing Machinery and Intelligence” outlined the principles of AI and proposed a test, now known as the Turing test, for establishing whether true AI had been achieved. Early computer scientists were confident that true AI systems would soon be a reality. In 1965 Herbert Simon predicted that “machines will be capable, within twenty years, of doing any work a man can do.” In 1970 Marvin Minsky declared, “In from three to eight years we will have a machine with the general intelligence of an average human being.”

But this early optimism collided with hard reality. For example, early attempts at producing practical machine translation systems, which were presumed to be imminent, were slammed in a 1966 government study (the ALPAC report). Weizenbaum’s ELIZA program attempted to emulate a psychotherapist, but the resulting dialogue was little more than a reassembly of the user’s input, and there was little indication of how to extend this approach to true AI. The inevitable backlash against inflated promises and expectations during the 1970s was dubbed the “AI winter,” a phenomenon that was sadly repeated in the late 1980s and early 1990s, when a second wave of AI systems also ended in disappointment.

In retrospect, these pioneers underestimated the true difficulties of constructing AI. These include limited computing power, the combinatorial explosion of logical branches, the requirement for commonsense knowledge, and the surprising difficulty of seemingly trivial human capabilities such as visual pattern recognition and physical motion. For additional details, see the Wikipedia article on the history of AI.

Machine learning and Moore’s Law to the rescue

A breakthrough of sorts came in the late 1990s and early 2000s with the development of machine learning: statistical methods, many based on Bayes’ theorem, which quickly displaced the older methods based mostly on formal reasoning. In other words, rather than trying to program an AI system as a large web of discrete logical reasoning operations, researchers were content to let statistical machine learning schemes automatically produce the reasoning structure from data. These new methods proved to be superior both in efficiency and in effectiveness.
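To make the Bayes’ theorem connection concrete, here is a toy illustration (not any specific historical system, and the numbers are invented for the example) of how a posterior probability drives a classification decision, the basic move underlying much statistical machine learning:

```python
# Toy Bayes' theorem classifier step: given how often a word appears in
# spam vs. legitimate mail, compute P(spam | word appears).
def bayes_posterior(p_word_given_spam, p_word_given_ham, p_spam):
    p_ham = 1.0 - p_spam
    # Total probability that the word appears at all (the "evidence")
    evidence = p_word_given_spam * p_spam + p_word_given_ham * p_ham
    return p_word_given_spam * p_spam / evidence

# If "free" appears in 60% of spam but only 5% of legitimate mail,
# and 20% of all mail is spam:
p = bayes_posterior(0.60, 0.05, 0.20)
print(round(p, 3))  # 0.75
```

A real system multiplies such evidence across many features, but the decision rule is the same: classify by the larger posterior.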

The other major development was the inexorable rise of computing power and memory, gifts of Moore’s Law that have continued unabated for over 50 years. A typical 2018-2019-era smartphone is based on 8 nm technology, features up to 512 Gbyte flash memory and can perform trillions of operations per second. With such huge computing power and memory, greater in many respects than that of the world’s most powerful 2000-era supercomputers, previously unthinkable AI capabilities, such as 3-D facial recognition, can be provided directly to the consumer.

Deep Blue defeats Kasparov in chess, and Watson defeats humans in Jeopardy

One highly publicized advance came in 1997, when Garry Kasparov, the reigning world chess champion, was defeated by an IBM-developed computer system named “Deep Blue.” Deep Blue employed some new techniques, but for the most part it simply applied enormous computing power: storing openings, looking ahead many moves, applying alpha-beta tree pruning and never making mistakes.
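Alpha-beta pruning is the classic trick that makes deep look-ahead feasible: branches that provably cannot affect the final choice are skipped. A minimal sketch (the game tree here is a toy nested list, not a chess position, and the function is my own illustration rather than Deep Blue’s code):

```python
import math

# Minimax search with alpha-beta pruning over a toy game tree.
# Leaves are static evaluations; inner nodes are lists of children.
def alphabeta(node, maximizing=True, alpha=-math.inf, beta=math.inf):
    if isinstance(node, (int, float)):   # leaf: return its evaluation
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:            # opponent would never allow this branch
                break                    # -> prune remaining children
        return value
    else:
        value = math.inf
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
            if beta <= alpha:
                break
        return value

print(alphabeta([[3, 5], [2, 9]]))  # 3: the maximizer picks the left branch
```

Note that the 9 in the right branch is never examined: once the minimizer finds the 2, the whole branch is provably worse than the 3 already in hand.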

This was followed in 2011 with the defeat of two champion contestants on the American quiz show “Jeopardy!” by an IBM-developed computer system named “Watson.” The Watson achievement was significantly more impressive as an AI demonstration, because it involved natural language understanding, i.e., the understanding of ordinary (and often tricky) English text. For example, the “Final Jeopardy” clue at the culmination of the contest, in the category “19th century novelists,” was the following: “William Wilkinson’s ‘An Account of the Principalities of Wallachia and Moldavia’ inspired this author’s most famous novel.” Watson correctly responded “Who is Bram Stoker?” [the author of Dracula], thus sealing the victory.

Legendary Jeopardy champ Ken Jennings conceded by writing on his tablet, “I for one welcome our new computer overlords.”

AlphaGo defeats the world’s champion Go players

The ancient Chinese game of Go involves placing black and white stones on a 19×19 grid. The game is notoriously complicated, with strategies that can only be described in vague, subjective terms. For these reasons, many observers did not expect Go-playing computer programs to beat the best human players for many years, if ever. See the earlier MathScholar blog for more details.

Go playing board

This pessimistic outlook changed abruptly in March 2016, when a computer program named “AlphaGo,” developed by researchers at DeepMind, a subsidiary of Alphabet (Google’s parent company), defeated Lee Se-dol, a South Korean Go master, 4-1 in a 5-game tournament. The DeepMind researchers further enhanced their program, which then in May 2017 defeated Ke Jie, a 19-year-old Chinese Go master thought to be the world’s best human Go player.

In developing the program that defeated Lee and Ke, DeepMind researchers fed their program 100,000 top amateur games and “taught” it to imitate what it observed. Then they had the program play itself and learn from the results, slowly increasing its skill.

AlphaGo Zero defeats AlphaGo 100 games to zero

In an even more startling development, in October 2017 DeepMind researchers unveiled a new program, developed from scratch, called AlphaGo Zero. For this program, the researchers merely programmed the rules of Go, together with a simple reward function that rewarded games won, and then instructed the program to play games against itself. The program was not given any records of human games, nor was it programmed with any strategies, general or specific.

Initially, the program merely scattered stones seemingly at random across the board. But it quickly became more adept at evaluating board positions, and gradually increased in skill. Interestingly, along the way the program rediscovered many well-known elements of Go strategy used by human players, including anticipating its opponent’s probable next moves. But unshackled from human experience, it developed complex new strategies never before seen in human Go games. After just three days of training and 4.9 million training games (with the program playing against itself), AlphaGo Zero had advanced to the point that it defeated the earlier AlphaGo program 100 games to zero.
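The recipe of “rules plus a win/loss reward plus self-play” can be illustrated at miniature scale. The sketch below (my own toy construction, not DeepMind’s method, which used deep neural networks and Monte Carlo tree search) learns the simple game of Nim purely by playing against itself, given only the rules and a +1/-1 reward for winning or losing:

```python
import random

random.seed(0)  # reproducible toy run

def legal_moves(stones):
    """Nim rules: take 1-3 stones; whoever takes the last stone wins."""
    return [m for m in (1, 2, 3) if m <= stones]

def train(episodes=20000, alpha=0.5, epsilon=0.2, start=10):
    """Tabular self-play: only the rules and a win/loss reward are given."""
    Q = {}  # (stones_remaining, move) -> value estimate for the player to move
    for _ in range(episodes):
        stones, history = start, []
        while stones > 0:
            moves = legal_moves(stones)
            if random.random() < epsilon:     # occasionally explore
                move = random.choice(moves)
            else:                             # otherwise exploit current knowledge
                move = max(moves, key=lambda m: Q.get((stones, m), 0.0))
            history.append((stones, move))
            stones -= move
        # The player who took the last stone wins; propagate +1/-1 backwards,
        # flipping the sign each ply because the two players alternate.
        reward = 1.0
        for state, move in reversed(history):
            old = Q.get((state, move), 0.0)
            Q[(state, move)] = old + alpha * (reward - old)
            reward = -reward
    return Q

Q = train()
best = lambda s: max(legal_moves(s), key=lambda m: Q.get((s, m), 0.0))
```

After training, the learned policy takes all remaining stones whenever it can win immediately (e.g. from 1, 2 or 3 stones), knowledge it was never given explicitly.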

Skill at Go (and several other games) is quantified by the Elo rating, which is based on the record of a player’s past games. Lee’s rating is 3526, while Ke’s rating is 3661. After 40 days of training, AlphaGo Zero’s Elo rating was over 5000. Thus AlphaGo Zero was as far ahead of Ke as Ke is ahead of a good amateur player. Additional details are available in an Economist article, a Scientific American article and a Nature article.
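The Elo system makes such gaps concrete via its standard expected-score formula: a 400-point rating advantage corresponds to roughly 10-to-1 odds per game. A minimal sketch (the formula is the standard Elo one; the function name and the use of the ratings quoted above are my own illustration):

```python
# Standard Elo expected-score formula: the expected score of player A
# against player B, given their ratings.
def expected_score(rating_a, rating_b):
    return 1.0 / (1.0 + 10.0 ** ((rating_b - rating_a) / 400.0))

# Ke Jie (3661) vs. Lee Se-dol (3526): a modest per-game edge.
p_ke = expected_score(3661, 3526)
# AlphaGo Zero (5000) vs. Ke Jie (3661): a 1300+ point gap, i.e. a
# near-certain win in every game.
p_zero = expected_score(5000, 3661)
print(p_ke, p_zero)
```

With a gap of more than three 400-point increments, the formula gives AlphaGo Zero better than 99.9% expected score per game against Ke, which is what “as far ahead of Ke as Ke is ahead of a good amateur” means in practice.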

AlphaZero conquers chess and shogi too

In December 2017, DeepMind announced that they had reconfigured the AlphaGo Zero program, dubbed AlphaZero for short, to play other games, including chess and shogi, a Japanese version of chess that is significantly more complicated and challenging. Recently (December 2018) the DeepMind researchers documented their groundbreaking work in a technical paper published in Science (see also this excellent New York Times analysis by mathematician Steven Strogatz).

In the paper, the researchers described various experiments comparing their AlphaZero program with championship-grade software, including Stockfish, the 2016 Top Chess Engine Championship champion (significantly more powerful than the 1997 IBM Deep Blue system), and Elmo, the 2017 Computer Shogi Association champion. In a match against Stockfish in chess, with AlphaZero playing white, AlphaZero won 29.0% of the games, drew 70.6% and lost 0.4%. In a similar match against Elmo in shogi, with AlphaZero playing white, AlphaZero won 84.2%, drew 2.2% and lost 13.6%. Other comparison results are presented in the technical paper.

Just as impressive as these statistics is the fact that AlphaZero seemed to play with a human-like style. As Strogatz explains, describing the chess program,

[AlphaZero] played like no computer ever has, intuitively and beautifully, with a romantic, attacking style. It played gambits and took risks. … Grandmasters had never seen anything like it. AlphaZero had the finesse of a virtuoso and the power of a machine. It was humankind’s first glimpse of an awesome new kind of intelligence.

Other AI achievements

AI systems are doing much more than defeating human opponents in games. Here are just a few of the current commercial developments:

What will the future hold?

So where is all this heading? A recent Time article features an interview with futurist Ray Kurzweil, who predicts an era, roughly in 2045, when machine intelligence will meet and then transcend human intelligence. Such future intelligent systems will then design even more powerful technology, resulting in a dizzying advance that we can only dimly foresee at present. Kurzweil outlines this vision in his book The Singularity Is Near.

Futurists such as Kurzweil certainly have their skeptics and detractors. Sun Microsystems co-founder Bill Joy worries that humans could be relegated to minor players in the future, if not extinguished altogether. Indeed, in many cases AI systems already make decisions that humans cannot readily understand or gain insight into. But even setting aside such fears, the societal, legal, financial and ethical challenges of these technologies remain considerable, as exhibited by the current backlash against technology, science and “elites.”

One implication of all this is that education programs in engineering, finance, medicine, law and other fields will need to change dramatically to train students in the use of emerging AI technology. Even the educational system itself will need to change, perhaps along the lines of massive open online courses (MOOCs). It should also be noted that large technology firms such as Amazon, Apple, Facebook, Google and Microsoft are aggressively luring top AI talent, including university faculty, with huge salaries. But clearly the field cannot eat its seed corn in this way; some solution is needed to permit faculty to continue teaching while still participating in commercial R&D work.

But one way or the other, intelligent computers are coming. Society must find a way to accommodate this technology, and to deal respectfully with the many people whose lives will be affected. But not all is gloom and doom. Steven Strogatz envisions a mixed future:

Maybe eventually our lack of insight would no longer bother us. After all, AlphaInfinity could cure all our diseases, solve all our scientific problems and make all our other intellectual trains run on time. We did pretty well without much insight for the first 300,000 years or so of our existence as Homo sapiens. And we’ll have no shortage of memory: we will recall with pride the golden era of human insight, this glorious interlude, a few thousand years long, between our uncomprehending past and our incomprehensible future.
