8 points
submitted 7 hours ago by [email protected] to c/[email protected]
[-] [email protected] 3 points 4 hours ago* (last edited 1 hour ago)

> Our purpose with this column isn't to be alarmist

[x] Doubt

The amount of math that goes into training an AI and generating its output exceeds human capacity to calculate. So does the math of the Big Bang, but we have some pretty good ideas about how that went.

> when given access to fictional emails during safety testing, threatened to blackmail an engineer over a supposed extramarital affair. This was part of responsible safety testing — but Anthropic can't fully explain the irresponsible action.

Because human writing, both fiction and non-fiction, is full of this sort of thing, and all an LLM is doing is writing. Why wouldn't it take a dark turn sometimes? It's not like it has any inherent sense of ethics or morality.
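To make that concrete, here's a minimal sketch of what "writing" means here, using GPT-2 through the Hugging Face transformers library purely as a small stand-in for bigger models (the prompt and settings are my own, just for illustration). The model samples a likely continuation of the text, token by token, from the statistics of human writing:

```python
# Minimal sketch: an LLM only continues text by sampling likely next
# tokens. GPT-2 is used here as a small, runnable stand-in.
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")
set_seed(42)  # make the sampled continuations reproducible

# A prompt that human fiction often continues darkly. The model has no
# ethics to consult; it just samples continuations that are plausible
# given its training data, so some of them will take a dark turn.
prompt = "The engineer read the email and decided the only way out was"
for sample in generator(prompt, max_new_tokens=30,
                        num_return_sequences=3, do_sample=True):
    print(sample["generated_text"])
```

There is no module for morality anywhere in that loop; the dark continuations come from the same place the pleasant ones do.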

> Anthropic CEO Dario Amodei, in an essay in April called "The Urgency of Interpretability," warned: "People outside the field are often surprised and alarmed to learn that we do not understand how our own AI creations work. They are right to be concerned: this lack of understanding is essentially unprecedented in the history of technology." Amodei called this a serious risk to humanity — yet his company keeps boasting of more powerful models nearing superhuman capabilities.

Is this true? Don't we have drugs whose mechanisms we don't fully understand? From what I'm reading, we still don't fully understand all the mechanisms of aspirin.

I get that this is a quote and not the author of the article, but the quote is just included without any deeper analysis. Also, a car has superhuman capabilities; a fish has superhuman capabilities. LLMs are not superhuman in any way that matters. They aren't even superhuman in ways different from the computers of 40 years ago.

> But researchers at all these companies worry LLMs, because we don't fully understand them, could outsmart their human creators and go rogue.

This is 100% alarmism. AI might at some point outsmart humans, but it won't be LLMs.

None of this is to say there are absolutely no concerns about LLMs. Obviously there are. But there is no reason to suspect LLMs are going to end humanity unless some moron hooks one up to nuclear weapons.
