https://www.ft.com/content/5aa09af9-cd6c-46b6-a1be-e37ad1e33758

When the theoretical physicist Robert Oppenheimer witnessed the first nuclear weapons test in the New Mexico desert in 1945, he famously invoked a line from the Hindu scripture the Bhagavad Gita: “Now I am become Death, the destroyer of worlds.” The wartime head of the Los Alamos Laboratory, known as the “father of the atomic bomb”, had no doubt about the significance and impact of the weapon he had helped develop. After the bombing of Hiroshima and Nagasaki a few weeks later, everyone else on the planet also understood that humanity had entered a new, and terrifying, age.

According to the authors of The Age of AI, humanity stands on the brink of an equally consequential moment, yet one that is more diverse, diffuse and unpredictable and less widely acknowledged. The increasing power of artificial intelligence, a general purpose technology that can be put to an astonishing array of civil and military uses — from reading X-rays and predicting weather patterns to empowering killer robots and spreading disinformation — is already scrambling centuries-old conceptions of national security and state sovereignty. Equally unnerving, the authors contend, is that AI will also test the outer limits of human reason and understanding and challenge the very nature of human identity and agency.

It may be tempting to dismiss such arguments as the wild-eyed hyperbole that envelops so much of the debate about AI. But the three authors of The Age of AI have strong claims to be taken seriously. The veteran diplomat Henry Kissinger knows a thing or two about strategy. As the former chief executive of Google, Eric Schmidt understands how the giant technology companies deploy AI in the real world. And Daniel Huttenlocher, the inaugural dean of MIT’s Schwarzman College of Computing, is well versed in the latest cutting-edge AI research.
What is most unsettling about the book is that even such acknowledged experts are far more adept at raising uncomfortable questions than at providing comforting answers.

To explain the likely impact of AI in the future, the authors examine our technological past. In previous eras, the most powerful strategic technologies tended to have two of three characteristics, but none had all three. The railways that carried troops to the front lines in the first world war had both civilian and military uses and could spread easily and widely, but were not threatening in themselves. The nuclear technology that defined the cold war could also be used for both warlike and peaceful purposes and had massive destructive force, but could not be spread easily and widely. AI, the authors argue, breaks that paradigm because it exhibits all three features: it is clearly dual use, it can be easily developed and deployed (being in essence no more than lines of computer code) and it has enormous destructive power. “Few eras have faced a strategic and technological challenge so complex and with so little consensus about either the nature of the challenge or even the vocabulary necessary for discussing it,” the authors write. Or, as Elon Musk summarised the argument in a pithy tweet in 2014: “We need to be super careful with AI. Potentially more dangerous than nukes.”

One of the drawbacks of The Age of AI is that it reads more like a series of monologues by the authors on their pet subjects than an engaging dialogue that could have truly elevated the debate. The most interesting chapter, on security and world order, was presumably written by Kissinger. It should be read by anyone trying to make sense of geopolitics today.
The eternal goal of military strategists has been to project power across ever bigger distances with progressively greater force and speed. But the great powers of the day that developed such technologies did so more or less in lockstep and in the plain light of day. Although the US had a jump on the Soviet Union in developing the atomic bomb, other powers quickly caught up and could more or less count each other’s missile stockpiles. But more dynamic and surreptitious military technologies, such as cyber weapons, have recently multiplied and grown more destructive, while strategies for using them for defined aims have become more elusive. Just as ill-designed trading algorithms have been blamed for destabilising financial markets, so AI-enhanced cyber weapons could result in a strategic “flash crash”. As the authors write: “AI holds the prospect of augmenting conventional, nuclear and cyber capabilities in ways that make security relationships among rivals more challenging to predict and maintain and conflicts more difficult to limit.”

Although Kissinger is considered one of the leading US authorities on China and Schmidt chaired the recent US National Security Commission report on AI that warned of China’s growing technological muscle, the book does not provide as much insight into Beijing’s ambitions as the reader might expect. But the authors urge the US and China to speak to one another directly and regularly about their cyber doctrines and red lines, and not to cede too much agency to automated decision-making systems.
At the very least, Washington and Beijing should ensure that human decision makers remain “in the loop” to maximise the time for dialogue and diplomacy during extreme situations, and work together to prevent the dangerous proliferation of military AI.

The remaining chapters in the book are interesting enough but scarcely stake out new ground. In Rule of the Robots, Martin Ford does a better job of describing the likely economic impact of AI. In Atlas of AI, Kate Crawford is more original in exploring the technology’s broader societal, political and environmental context.

But The Age of AI does pose two big mind-bending questions about AI that will resonate for decades to come. When a human-designed software program, such as Google DeepMind’s games-playing AlphaZero, learns and applies a model that no human can recognise or understand, does that advance knowledge? Or, for the first time in human history, does it mean that knowledge is receding from us?

The Age of AI and Our Human Future, by Henry Kissinger, Eric Schmidt and Daniel Huttenlocher, Little Brown $30/John Murray £20, 272 pages

John Thornhill is the FT’s innovation editor