

Last year, a curious nonfiction book became a Times best-seller: a dense meditation on artificial intelligence by the philosopher Nick Bostrom, who holds an appointment at Oxford. Titled “Superintelligence: Paths, Dangers, Strategies,” it argues that true artificial intelligence, if it is realized, might pose a danger that exceeds every previous threat from technology, even nuclear weapons, and that if its development is not managed carefully humanity risks engineering its own extinction.

Central to this concern is the prospect of an “intelligence explosion,” a speculative event in which an A.I. gains the ability to improve itself, and in short order exceeds the intellectual potential of the human brain by many orders of magnitude. Such a system would effectively be a new kind of life, and Bostrom’s fears, in their simplest form, are evolutionary: that humanity will unexpectedly become outmatched by a smarter competitor. He sometimes notes, as a point of comparison, the trajectories of people and gorillas: both primates, but with one species dominating the planet and the other at the edge of annihilation. “Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb,” he concludes.
