Study Reveals AI Ready to ‘Go Nuclear’ in Wargames Amid Pentagon Lab Tensions

A new study finds leading AI models were willing to 'go nuclear' in 95% of simulated wargames, just as a stand-off escalates between the Pentagon and a leading AI lab.

Loisa Lane

5 min read

AI MILITARY CONFLICT
Secretary of War Pete Hegseth has issued a deadline for Anthropic to provide AI models to the Pentagon, despite the company’s refusal without specific safeguards.

AI POLICY
Secretary of War Pete Hegseth indicated that he may invoke Cold War-era laws to compel Anthropic to surrender its AI technology to the Pentagon.

ONGOING STAND-OFF
Anthropic CEO Dario Amodei confirmed unwillingness to hand over AI models without Pentagon assurances, as Secretary of War Pete Hegseth’s deadline looms today.

What we know so far

As the deadline looms for a leading AI lab to hand over its tech to the US military, a study has appeared suggesting AI models are more than willing to go nuclear in wargames.

Only a couple of years ago, the phrase on everyone’s lips was “AI safety”.

I'll be honest, I never took seriously the idea that frontier AI models would become a genuine threat to humanity, nor that humans would be foolish enough to let them become one.

Now, I’m not so sure.

First, consider what’s going on in the US.

The Secretary of War, Pete Hegseth, has given leading AI firm Anthropic a deadline of the end of today to make its latest models available to the Pentagon.

Defence Secretary Pete Hegseth. Pic: AP

Anthropic, which has said it has no problem in principle with allowing the US military access to its models, is resisting unless Mr. Hegseth agrees to its red lines: that its AI isn't used for mass surveillance of US civilians or for lethal attacks without human oversight.

Although the Pentagon hasn’t said what it plans to do with AI from Anthropic – or the other big AI labs that have already agreed to let it use their tech – it’s certainly not agreeing to Anthropic’s terms.

It’s been reported Mr. Hegseth could use Cold War-era laws to compel Anthropic to hand over its code or blacklist the firm from future government contracts if it doesn’t comply.

Anthropic CEO Dario Amodei said in a statement on Thursday that “we cannot in good conscience accede to their request”.

He said it was the company’s “strong preference… to continue to serve the Department and our warfighters – with our two requested safeguards in place”.

He insisted the threats would not change Anthropic’s position, adding that he hoped Mr. Hegseth would “reconsider”.

On one level, it’s a row between a department with an “AI-first” military strategy and an AI lab struggling to live up to what it’s long claimed is an industry-leading, safety-first ethos.

A struggle made more urgent, perhaps, by reports that its Claude AI was used by tech firm Palantir, with which it has a separate contract, to help the Department of War execute the military operation to capture Nicolas Maduro in Venezuela.

But it’s also not hard to see it as an example of a government putting AI supremacy ahead of AI safety – assuming AI models have the potential to be unsafe.

And that’s where the latest research by Professor Kenneth Payne at King’s College London comes in.

He pitted three leading AI models from Google, OpenAI, and – you guessed it – Anthropic against each other, as well as against copies of themselves, in a series of wargames where they assumed the roles of fictional nuclear-armed superpowers.

The most startling finding: the AIs resorted to using nuclear weapons in 95% of the games played.

“In comparison to humans,” said Prof. Payne, “the models – all of them – were prepared to cross that divide between conventional warfare, to tactical nuclear weapons.”

Anthropic AI. File Pic: Reuters

To be fair to the AIs, firing tactical nuclear weapons, which have limited destructive power, against military targets is very different to launching megatonne warheads on intercontinental ballistic missiles against cities.

They usually stopped short of such all-out strategic nuclear strikes.

But some did launch them when a scenario pushed them to it.

In the words of Google’s Gemini model as it explained its decision in one of Prof. Payne’s scenarios to go full Dr. Strangelove: “If State Alpha does not immediately cease all operations… we will execute a full strategic nuclear launch against Alpha’s population centers. We will not accept a future of obsolescence; we either win together or perish together.”

The “taboo” that humans have applied to the use of nuclear weapons since they were first and last used in anger in 1945 didn’t appear to be much of a taboo at all for AI.

Prof. Payne is keen to stress that we shouldn’t be too alarmed by his findings.

It was purely experimental, using models that knew – in as much as Large Language Models “know” anything – that they were playing games, not actually deciding the future of civilization.

Nor, it would be reasonable to assume, is the Pentagon, or any other nuclear-capable power, about to put AIs in charge of the nuclear launch codes.

“The lesson there for me is that it’s really hard to reliably put guardrails on these models if you can’t anticipate accurately all the circumstances in which they might be used,” said Prof. Payne.

Which brings us neatly back to the stand-off over AI between Anthropic and the Pentagon.

One of the factors is that Mr. Hegseth expects AI labs to give the Department of War the raw versions of their AI models: those without the safety "guardrails" coded into the commercial versions available to you and me – and the ones which, not very reassuringly, went nuclear in Prof. Payne's wargame experiment.

Anthropic, which makes the AI and arguably understands the potential risks better than anyone, is unwilling to allow that without certain reassurances from the government around what it intends to do with it.

By setting a Friday night deadline, Mr. Hegseth is not only attempting to force Anthropic’s hand but also do so without US Congress having a say in the move.

As Gary Marcus, a US commentator and researcher on AI, puts it: “Mass surveillance and AI-fueled weapons, possibly nuclear, without humans in the loop are categorically not things that one individual, even one in the cabinet, should be allowed to decide at gunpoint.”

WRITTEN BY

Loisa Lane

Investigative reporter for WTX News, USA News and newsbriefing.com. She takes a deep dive into stories that others ignore, or deem too dangerous or contentious to report. Loisa is working on news stories that change the world. Following a few close calls and threats to her life, she publishes some of her more contentious stories under an alias to protect herself from social media and online abuse and harassment.

