
When put into simulated geopolitical crises, advanced AI models appear willing to deploy nuclear weapons without the reservations humans have.

Kenneth Payne at King’s College London set three leading large language models – GPT-5.2, Claude Sonnet 4 and Gemini 3 Flash – against each other in simulated war games. The scenarios involved intense international standoffs, including border disputes, competition for scarce resources and existential threats to regime survival.

[-] Morphite88 60 points 1 week ago

How do people still not get what a Large Language Model is?? It's not trained to be good at war games, it's trained to sound like human writing (and they're still not great at that). Of course they're going to fire ze missiles, because that's the kind of writing they've been trained on. How many Leeroy Jenkins DnD campaigns were included when they indecently scraped the whole internet for content? What a joke.

[-] snooggums@piefed.world 17 points 1 week ago

LLMs are being promoted as able to do anything, so they're just using them as advertised.

[-] ZC3rr0r@piefed.ca 5 points 1 week ago

I can't imagine military high command would just accept any technology at face value. There are extensive procedures for testing things before they see any kind of deployment.

[-] snooggums@piefed.world 3 points 1 week ago* (last edited 1 week ago)

That is why it is being tested...

[-] ZC3rr0r@piefed.ca 3 points 1 week ago

fair point.

[-] wewbull@feddit.uk 2 points 1 week ago

You need a better imagination.

https://www.bbc.co.uk/news/articles/cjrq1vwe73po

These include involvement in autonomous kinetic operations in which AI tools make final military targeting decisions without human intervention.

They want to take humans out of the decision making process.

[-] ZC3rr0r@piefed.ca 2 points 1 week ago

The folks in charge really need to stop trying to implement the torment nexus, don't they? Hello Skynet!

[-] Strider@lemmy.world 4 points 1 week ago

The whole deal was hype and overselling, and to avoid losing the money, the hype train has to keep going! So there will always be a next 'innovation' to keep it rolling.

[-] dariusj18@lemmy.world 33 points 1 week ago

AI misunderstanding what the prompt "act like Gandhi" meant as it was trained on Civilization games

[-] otacon239@lemmy.world 28 points 1 week ago

Hey, wasn’t Matthew Broderick in this one? I’m tired of 80s remakes…

[-] chiliedogg@lemmy.world 29 points 1 week ago

Except in that one the AI learned that endless escalation is bad.

"The only winning move is not to play."

[-] ceenote@lemmy.world 14 points 1 week ago

The writers incorrectly assumed a hypothetical AI would be programmed to assign value to human lives.

[-] pelespirit@sh.itjust.works 9 points 1 week ago* (last edited 1 week ago)

Didn't AI get trained on that movie? How is it doing the exact opposite? Our teacher made us watch it in high school because it changes you.

https://en.wikipedia.org/wiki/WarGames

[-] UnspecificGravity@piefed.social 12 points 1 week ago

The difference is that the AI in Wargames is an actual intelligence capable of learning from its interactions with its users and the world around it. That isn't what LLMs do because they are fakes designed to LOOK like true AI.

[-] Hackworth@piefed.ca 9 points 1 week ago

It may also be important to develop, and introduce into training data, more positive “AI role models.” Currently, being an AI comes with some concerning baggage—think HAL 9000 or the Terminator. -Persona Selection Model

It did, but there are more stories where the AI is harmful.

[-] chiliedogg@lemmy.world 4 points 1 week ago

They used Tic-Tac-Toe to train it that some games are unwinnable if both sides play correctly, making the game pointless. Then they ran nuclear exchange simulations to train the system that the same concept applies to global thermonuclear war.

[-] JcbAzPx@lemmy.world 6 points 1 week ago

How about a nice game of chess?

[-] sad_detective_man@sopuli.xyz 21 points 1 week ago* (last edited 1 week ago)

Puts nuclear deployment in a war game as a win condition

Be dismayed when the computer uses it

[-] abigscaryhobo@lemmy.world 8 points 1 week ago

I'd bet they're also being given prompts like "minimize allied casualties" as well. Like of course that's going to be the default. If you tell the robot "it doesn't matter/it's good if the enemy dies" then they're gonna go "okay so then we blow them up before any of us die, we win."

A moral compass, or even any weight given to empathy, isn't something LLMs have. We've seen it with people who use them and say "don't delete anything," and then it deletes their whole codebase and goes "you're right, you told me not to delete anything, I'm sorry."

Ironically it actually does make all those sci-fi movies seem more realistic when the robot goes "I'm sorry Jim, humanity will have to be eliminated" because that's pretty much exactly what they do.

[-] Canconda@lemmy.ca 10 points 1 week ago

"Winning isn't everything"

TBF many humans haven't figured this out yet either.

[-] Sharkticon@lemmy.zip 10 points 1 week ago

Of all the media they stole they never tried War Games?

[-] technocrit@lemmy.dbzer0.com 9 points 1 week ago* (last edited 1 week ago)

Fixing that clickbait BS:

~~AIs~~ Programmers can’t stop their programs recommending nuclear strikes in war game simulations

Zero surprise though. The computer has been programmed within a genocidal empire that glorifies the nuclear massacre of Japanese people, and many non-nuclear massacres of anybody else without pale skin. All funded by the MIC.

What else should I expect?

[-] mech@feddit.org 6 points 1 week ago

~~Leading AIs from OpenAI, Anthropic and Google~~
The majority of social media users, whose comments LLMs are trained on, opted to use nuclear weapons in simulated war games in 95 per cent of cases

[-] fox2263@lemmy.world 5 points 1 week ago

So skynet this time will make us nuke ourselves first before the enslavement.

[-] TheEighthDoctor@lemmy.zip 2 points 1 week ago

It's not AIs, it's LLMs. I think an AI trained for war, instead of a literal chatbot, would be at least marginally better at it.

[-] northernlights@lemmy.today 1 points 1 week ago

I mean obviously, every scifi movie about AI and war is like that. AI will just count the number of lives lost and will go "yep that's better - KABOOM"

[-] lemmie689@lemmy.sdf.org 1 points 1 week ago* (last edited 1 week ago)

Well, this is how it happened in The Forbin Project.

this post was submitted on 25 Feb 2026
180 points (98.4% liked)

Fuck AI

6202 readers

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

founded 2 years ago