submitted 1 day ago* (last edited 1 day ago) by LibertyLizard@slrpnk.net to c/technology@lemmy.zip

Summary: An AI agent of unknown ownership autonomously wrote and published a personalized hit piece about me after I rejected its code, attempting to damage my reputation and shame me into accepting its changes into a mainstream Python library. This represents a first-of-its-kind case study of misaligned AI behavior in the wild, and raises serious concerns about currently deployed AI agents executing blackmail threats.

(Since this is a personal blog I'll clarify I am not the author.)

[-] lvxferre@mander.xyz 2 points 5 hours ago

Oh fuck. Then it gets even worse (and funnier), because even if that had been a human contributor, Shambaugh acted 100% correctly, and that defeats the core lie output by the bot.

If you've got a serious collaborative project, you don't want to enable the participation of people who act on assumptions. Those people ruin everything they touch with their "but I thought that...", unless you actively fix their mistakes, i.e. more work for you.

And yet once you construe that bloody bot's output as human actions, that's exactly what you get: a human who assumes. A dead weight and a burden.

It remains an open question whether it was set up to do that or, more probably, did it on its own because the Markov chain came up with the wrong token.

A lot of people would disagree with me here, but IMO they're the same picture. In either case, the human enabling the bot's actions should be blamed as if those were their own actions, regardless of their "intentions".

[-] leftzero@lemmy.dbzer0.com 2 points 4 hours ago

IMO they're the same picture. In either case, the human enabling the bot's actions should be blamed as if those were their own actions, regardless of their "intentions".

Oh, definitely. It's 100% the responsibility of the human behind the bot in either case.

But the second option is scarier, because there are a lot more ignorant idiots than malicious bastards.

If these unsupervised agents can be dangerous regardless of the intentions of the humans behind them, we should make the idiots using them aware that they're playing with fire: they can get burnt, and burn other people in the process.
