The future of artificial intelligence: Utopia or dystopia?

MIT cosmologist Max Tegmark is no stranger to controversy. In his 2014 book Our Mathematical Universe, Tegmark proposed that our universe and everything in it are merely mathematical structures operating according to certain rules of logic. He argued that this hypothesis answers Stephen Hawking’s question “What breathes fire into the equations?” — there is no need for anything breathing fire into mathematical equations to create the universe, because the universe is a set of mathematical equations.

In his latest book, Life 3.0: Being Human in the Age of Artificial Intelligence, Tegmark surveys some of the recent advances of the AI field, and then asks what kind of future we want. He argues that both utopia and dystopia are possible, depending on some difficult and wide-ranging decisions that we must make in the coming decade or two.

Progress in AI in the past few years has been undeniably breathtaking. Some of the more notable milestones include: (a) the 1997 defeat of chess champion Garry Kasparov by IBM’s Deep Blue system; (b) the 2011 defeat of two Jeopardy! champions by IBM’s Watson system; (c) the emergence of effective language translation facilities, exemplified by Google’s Translate app; (d) the rise of practical voice recognition and natural language understanding, exemplified by Apple’s Siri and Amazon’s Echo; and, most recently, the 2016 defeat of world champion Go player Lee Sedol by DeepMind’s AlphaGo program, a feat that had not been expected for decades, if ever.

It is worth mentioning here that since the publication of Tegmark’s book, DeepMind has developed a new Go-playing program, named AlphaGo Zero. This program taught itself to play Go with no human input (other than the rules of the game). After just a few days of playing games against itself, it defeated the earlier AlphaGo program 100 games to zero. After one month, AlphaGo Zero’s Elo rating was as far above the human world champion as the world champion is above a typical amateur.
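To give a feel for what an Elo gap implies, here is a minimal sketch of the standard Elo expected-score formula (the ratings used below are illustrative, not AlphaGo Zero’s actual figures):

```python
def elo_expected_score(r_a: float, r_b: float) -> float:
    """Expected score (win probability, ignoring draws) of player A
    against player B under the standard Elo model."""
    return 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))

# Evenly matched players: each expects to score 0.5.
print(elo_expected_score(2000, 2000))  # 0.5

# A 400-point gap gives the stronger player about a 91% expected score,
# so each additional few hundred points compounds the dominance.
print(elo_expected_score(2400, 2000))  # ~0.909
```

Under this model, a program rated several hundred points above the human world champion would be expected to win almost every game, which is what the 100–0 result reflects.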

The rise of superintelligence

With such examples, Tegmark argues that we can no longer dismiss the possibility of the emergence of an “artificial general intelligence” or “superintelligence,” namely a general-purpose intelligent system that exceeds human capabilities across a broad spectrum of potential applications. Such systems would have massive and possibly unimaginable consequences. For example, in one chapter Tegmark argues that advanced AI technology hugely changes the debate over Fermi’s paradox — if we can send intelligent software and nanotech spacecraft (such as those proposed by Yuri Milner and Stephen Hawking), instead of humans, to distant star systems, then the exploration of the universe is dramatically more feasible.

Closer to home, the rise of AI is raising concerns about how humans will find jobs in this brave new world. In the short run, self-driving cars and trucks will displace many thousands of human drivers, and increasingly capable intelligent robots are certain to displace humans both in manufacturing plants and at construction sites. In the medium term, many white-collar jobs will disappear, in the same way that many tax preparers have been replaced by software downloads and online tools. Already, IBM’s Watson system is being applied in medicine, such as in the treatment of cancer, and AI and big data are making huge waves in the finance world. In the long term, it is not clear that any job category will be unaffected. Will we face a future without jobs? How will we find purpose in life without meaningful work?

Tegmark illustrates the issues with an intriguing scenario: A team of computer scientists at a large corporation secretly develops an AI system, called “Prometheus.” To begin with, it becomes quite skilled in programming computer games that sell well, thus providing a significant income stream. From there the system starts producing online movies and TV shows, first entirely animated but later involving human actors, who use scripts and set designs produced by Prometheus, whose existence is hidden behind an opaque human organization. Then the Prometheus system expands its domain to engineer more powerful computer hardware (thus further enhancing its own capabilities), batteries with more storage life, powerful new pharmaceuticals (e.g., new anti-cancer drugs), remarkably capable robots, remarkably efficient vehicles, advanced educational systems and even political persuasion tools. Within just a few years, Prometheus effectively takes over the world.

Does this scenario seem futuristic and far-fetched, more in the realm of science fiction than reality? Don’t count on it.

Utopia or dystopia

Tegmark outlines a number of possible paths for the future of society:

  1. Libertarian utopia: Humans and superintelligent AI systems coexist peacefully, thanks to rigorously enforced property rights that cordon off the two domains.
  2. Benevolent dictator: Everyone knows that a superintelligent AI runs society, but it is tolerated because it does a rather good job.
  3. Egalitarian utopia: Humans and intelligent machines coexist peacefully, thanks to a guaranteed income and the abolition of private property.
  4. Gatekeeper: An intelligent AI is created that interferes as little as possible in human affairs, except to prevent the creation of a competing superintelligent AI. Human needs are met, but technological progress may be forever stymied.
  5. Enslaved god: A superintelligent AI is created to produce amazing technology and wealth, but it is strictly confined and controlled by humans.
  6. Conquerors: A superintelligent AI takes control, decides that humans are a threat, a nuisance and a waste of resources, and then gets rid of us, possibly by means that we do not even understand until it is too late.
  7. Descendants: Superintelligent AIs replace humans, but give us a gradual and graceful exit.
  8. Zookeeper: One or more superintelligent AI systems take control, but keep some humans around as amiable pets.
  9. Reversion: An Orwellian surveillance state blocks humans from engaging in advanced AI research and development.
  10. Self-destruction: Humanity extinguishes itself before superintelligent AI systems are deployed.

The goal alignment problem

So what do we need to do to ensure a utopia and not a dystopia? To begin with, Tegmark argues, we need to very carefully define what our goals and values are as a society. Then we need to ensure that AI systems learn, adopt and retain our goals and values — this is the goal alignment problem of AI. Note that there is an inherent tension here, in that as a superintelligent AI improves its capability and world model, it may shed its previously established set of goals.

All of this raises some of the most fundamental questions of philosophy and religion, for example how to objectively define an ethical worldview. Utilitarianism, diversity, autonomy and some sense of legacy must all be part of this discussion. When will intelligent AI systems be granted legal rights? Even more fundamentally, how do we determine whether or not an AI system is a conscious entity? What is consciousness, anyway?

The recent developments in AI raise these and a host of similar questions, which urgently need answers, or at least better understanding. As Tegmark notes, we have before us some of the most difficult and basic questions ever posed by humans, and a ticking time limit in which to answer them.

Along this line, Tegmark recommends that we study long-lasting human organizations (he mentions the Roman Catholic Church, among others) for clues into how to craft a society that would be stable for hundreds, thousands or millions of years, at the same time accommodating unimaginable advances in technology.

Talk versus action

In his Epilogue, Tegmark argues that it is no longer sufficient to merely highlight these challenges; real action is needed. To that end, he has co-founded the Future of Life Institute to focus efforts on understanding these issues and taking positive steps to guide future development. The institute’s activities include research programs on AI safety, ensuring explainability in AI software, aligning goals, managing the impact on jobs, reducing potential economic inequality, and balancing benefits and risks.

In the end, everyone will have to participate in examining these issues. This is one discussion we can’t sit out.
