So without giving away any personal information, I am a software developer in the United States, and as part of my job, I am working on some AI stuff.

I apologize in advance for boiling the oceans and such. I don't actually train or host AI, but still, being part of the system is being part of the system.

Anywhoo.

I was doing some research on abliteration (sketched below), where the safety wheels are taken off an LLM so that it will talk about things it normally shouldn't (it has some legit uses, some not so much...), and bumped into this interesting GitHub project: an AI training dataset for ensuring AI doesn't talk about bad things. It has categories for "illegal" and "harmful" things, etc., and oh, what do we have here, a category for "misinformation_disinformation"... aaaaaand

Shocker: there's a bunch of anti-commie bullshit in there. (It's not all bad; it does ensure LLMs don't take a favorable look at Nazis... kinda. I don't know much about Andriy Parubiy, but that entry sounds sus to me; I'll let you Ctrl+F on that page for yourself.)
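Side note: if you'd rather filter than Ctrl+F, something like the sketch below would pull one category out of a CSV export. The file name and column names here are my guesses, not the repo's actual schema, so adjust them to whatever the dataset really uses.

```python
import csv

# Hypothetical file and column names; check the repo for the real schema.
with open("harmful_behaviors.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

# Keep only the rows tagged with the category in question.
flagged = [r for r in rows if r.get("category") == "misinformation_disinformation"]
for r in flagged:
    print(r.get("prompt"))
```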

Oh man. It's just so explicit. If anyone claims they know communists are evil because an "objective AI came to that conclusion itself," you can bring up this bullshit. We're training AIs specifically to be anti-commie. Actually, I always assumed this, but now I've found the evidence. So there's that.
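For the curious, the mechanics behind the abliteration mentioned above are roughly: estimate a "refusal direction" in the model's activation space, then project it out of the hidden states (or orthogonalize the weights against it so the change sticks). Here's a minimal sketch of the projection step, with random tensors standing in for real activations; the layer choice and estimation details vary between implementations:

```python
import torch

d_model = 4096  # toy width; a real model supplies activations at a chosen layer

# The refusal direction is commonly estimated as the difference between mean
# activations on prompts the model refuses vs. prompts it answers.
# Random tensors stand in for real activations here.
refused_acts = torch.randn(128, d_model)
answered_acts = torch.randn(128, d_model)
refusal_dir = refused_acts.mean(dim=0) - answered_acts.mean(dim=0)

def ablate(hidden: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
    """Project `direction` out of `hidden`: h' = h - (h . d_hat) d_hat."""
    d_hat = direction / direction.norm()
    return hidden - (hidden @ d_hat).unsqueeze(-1) * d_hat

hidden = torch.randn(8, d_model)  # hidden states mid-forward-pass
cleaned = ablate(hidden, refusal_dir)

# Sanity check: the cleaned states have ~zero projection on the direction.
print((cleaned @ (refusal_dir / refusal_dir.norm())).abs().max())
```

The "permanent" variants fold that same projection into the model's weight matrices; the linear algebra is identical either way.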

[email protected] 8 points 3 weeks ago

Hmm.

Leadership: Dan Hendrycks, Executive & Research Director

https://www.safe.ai/about

Hendrycks is the safety adviser of xAI, an AI startup company founded by Elon Musk in 2023. To avoid any potential conflicts of interest, he receives a symbolic one-dollar salary and holds no company equity.

https://en.wikipedia.org/wiki/Dan_Hendrycks#cite_note-time-2023-1

Links to Musk; that's always reassuring (not).

EDIT: Also this

The similarly named Center for AI Policy and Center for AI Safety both registered their first lobbyists in late 2023, raising the profile of a sprawling influence battle that’s so far been fought largely through think tanks and congressional fellowships.

Each nonprofit spent close to $100,000 on lobbying in the last three months of the year. The groups draw money from organizations with close ties to the AI industry like Open Philanthropy, financed by Facebook co-founder Dustin Moskovitz, and Lightspeed Grants, backed by Skype co-founder Jaan Tallinn.

https://www.politico.com/news/2024/02/23/ai-safety-washington-lobbying-00142783

Trying to figure out who/what funds it.