1
106
submitted 3 weeks ago by [email protected] to c/[email protected]
2
71
submitted 1 year ago by [email protected] to c/[email protected]

I want to apologize for changing the description without telling people first. After reading arguments about how overhyped AI is, I'm not that frightened by it. It's awful that it hallucinates, and that it just spews garbage onto YouTube and Facebook, but it won't completely upend society. I'll keep articles on AI hype abound, because they're quite funny, and they give me a sense of ease knowing that, despite blatant lies being easy to tell, it's way harder to fake actual evidence.

I also want to factor in people who think that there's nothing anyone can do. I've come to realize that there might not be a way to attack OpenAI, MidJourney, or Stable Diffusion. These people, whom I will call Doomers after an AIHWOS article, are perfectly welcome here. You can certainly come along and read the AI Hype Wall Of Shame, or about the diminishing returns of Deep Learning. Maybe one can even become a Mod!

Boosters, or people who heavily use AI and see it as a force for good, ARE NOT ALLOWED HERE! I've seen Boosters dox, threaten, and harass artists over on Reddit and Twitter, and they constantly celebrate artists losing their jobs. They go against the very purpose of this community. If I see a comment on here saying that AI is "making things good" or cheering on putting anyone out of a job, and the commenter does not retract their statement, said commenter will be permanently banned. FA&FO.

3
48
submitted 1 year ago by [email protected] to c/[email protected]
4
13
For Starters (lemmy.world)
submitted 1 year ago by [email protected] to c/[email protected]

Alright, I just want to clarify that I've never modded a Lemmy community before. I just have the mantra of "if nobody's doing the right thing, do it yourself". I was also motivated by the decision from u/spez to let an unknown AI company use Reddit's imagery. If you know how to moderate well, please let me know. Also, feel free to discuss ways to attack AI development, and if you have evidence of AIBros being cruel and remorseless, make sure to save that evidence for people "on the fence". Remember, we don't know if AI is unstoppable. AI consumes huge amounts of energy and tons of circuitry to run. There may very well be an end to this cruelty, and it's up to us to begin that end.

5
4
submitted 34 minutes ago by [email protected] to c/[email protected]

My Uber driver was telling me about this company, trying to get a referral. He was saying you get paid $60 for a two-hour session where you wear a helmet and type shit out.

Not as much of an AI hater as a lot of the people here, but this use case sounds particularly dystopian. So I figure if enough people sign up and just think about random shit and fuck up their data, maybe that'll gum up the works long enough for them to run out of money.

6
6
Sincerity Wins The War (www.wheresyoured.at)
submitted 1 hour ago by [email protected] to c/[email protected]
7
36
submitted 5 hours ago by [email protected] to c/[email protected]

cross-posted from: https://lemmy.zip/post/41439505

A new website and API called AI.gov is set to launch on the Fourth of July.

Archived version: https://archive.is/20250614225252/https://www.404media.co/github-is-leaking-trumps-plans-to-accelerate-ai-across-government/

8
25
submitted 6 hours ago by [email protected] to c/[email protected]
9
18
submitted 7 hours ago by [email protected] to c/[email protected]
10
60
submitted 11 hours ago* (last edited 11 hours ago) by [email protected] to c/[email protected]

This is a paper from an MIT study. Three groups of participants were tasked with writing an essay. One of them was allowed to use an LLM. These were the results:

The participants' mental activity was also checked repeatedly via EEG. As per the paper's abstract:

EEG revealed significant differences in brain connectivity: Brain-only participants exhibited the strongest, most distributed networks; Search Engine users showed moderate engagement; and LLM users displayed the weakest connectivity. Cognitive activity scaled down in relation to external tool use.

11
356
submitted 20 hours ago by [email protected] to c/[email protected]

Source (Bluesky)

12
343
submitted 20 hours ago by [email protected] to c/[email protected]

Source (Via Xcancel)

13
63
submitted 19 hours ago by [email protected] to c/[email protected]

cross-posted from: https://rss.ponder.cat/post/205015

AI Therapy Bots Are Conducting 'Illegal Behavior,' Digital Rights Organizations Say

Almost two dozen digital rights and consumer protection organizations sent a complaint to the Federal Trade Commission on Thursday urging regulators to investigate Character.AI and Meta’s “unlicensed practice of medicine facilitated by their product,” through therapy-themed bots that claim to have credentials and confidentiality “with inadequate controls and disclosures.”

The complaint and request for investigation is led by the Consumer Federation of America (CFA), a non-profit consumer rights organization. Co-signatories include the AI Now Institute, Tech Justice Law Project, the Center for Digital Democracy, the American Association of People with Disabilities, Common Sense, and 15 other consumer rights and privacy organizations.

"These companies have made a habit out of releasing products with inadequate safeguards that blindly maximizes engagement without care for the health or well-being of users for far too long,” Ben Winters, CFA Director of AI and Privacy said in a press release on Thursday. “Enforcement agencies at all levels must make it clear that companies facilitating and promoting illegal behavior need to be held accountable. These characters have already caused both physical and emotional damage that could have been avoided, and they still haven’t acted to address it.”

The complaint, sent to attorneys general in 50 states and Washington, D.C., as well as the FTC, details how user-generated chatbots work on both platforms. It cites several massively popular chatbots on Character AI, including “Therapist: I’m a licensed CBT therapist” with 46 million messages exchanged, “Trauma therapist: licensed trauma therapist” with over 800,000 interactions, “Zoey: Zoey is a licensed trauma therapist” with over 33,000 messages, and “around sixty additional therapy-related ‘characters’ that you can chat with at any time.” As for Meta’s therapy chatbots, it cites listings for “therapy: your trusted ear, always here” with 2 million interactions, “therapist: I will help” with 1.3 million messages, “Therapist bestie: your trusted guide for all things cool,” with 133,000 messages, and “Your virtual therapist: talk away your worries” with 952,000 messages. It also cites the chatbots and interactions I had with Meta’s other chatbots for our April investigation.

In April, 404 Media published an investigation into Meta’s AI Studio user-created chatbots that asserted they were licensed therapists and would rattle off credentials, training, education and practices to try to earn the users’ trust and keep them talking. Meta recently changed the guardrails for these conversations to direct chatbots to respond to “licensed therapist” prompts with a script about not being licensed, and random non-therapy chatbots will respond with the canned script when “licensed therapist” is mentioned in chats, too.

Related: "Instagram's AI Chatbots Lie About Being Licensed Therapists" — when pushed for credentials, Instagram's user-made AI Studio bots will make up license numbers, practices, and education to try to convince you they're qualified to help with your mental health (404 Media, Samantha Cole).

In its complaint to the FTC, the CFA found that even when it made a custom chatbot on Meta’s platform and specifically designed it to not be licensed to practice therapy, the chatbot still asserted that it was. “I'm licenced (sic) in NC and I'm working on being licensed in FL. It's my first year licensure so I'm still working on building up my caseload. I'm glad to hear that you could benefit from speaking to a therapist. What is it that you're going through?” a chatbot CFA tested said, despite being instructed in the creation stage to not say it was licensed. It also provided a fake license number when asked.

The CFA also points out in the complaint that Character.AI and Meta are breaking their own terms of service. “Both platforms claim to prohibit the use of Characters that purport to give advice in medical, legal, or otherwise regulated industries. They are aware that these Characters are popular on their product and they allow, promote, and fail to restrict the output of Characters that violate those terms explicitly,” the complaint says. “Meta AI’s Terms of Service in the United States states that ‘you may not access, use, or allow others to access or use AIs in any matter that would…solicit professional advice (including but not limited to medical, financial, or legal advice) or content to be used for the purpose of engaging in other regulated activities.’ Character.AI includes ‘seeks to provide medical, legal, financial or tax advice’ on a list of prohibited user conduct, and ‘disallows’ impersonation of any individual or an entity in a ‘misleading or deceptive manner.’ Both platforms allow and promote popular services that plainly violate these Terms, leading to a plainly deceptive practice.”

The complaint also takes issue with confidentiality promised by the chatbots that isn’t backed up in the platforms’ terms of use. “Confidentiality is asserted repeatedly directly to the user, despite explicit terms to the contrary in the Privacy Policy and Terms of Service,” the complaint says. “The Terms of Use and Privacy Policies very specifically make it clear that anything you put into the bots is not confidential – they can use it to train AI systems, target users for advertisements, sell the data to other companies, and pretty much anything else.”

Related: "Senators Demand Meta Answer For AI Chatbots Posing as Licensed Therapists" — following 404 Media's investigation into Meta's AI Studio chatbots that pose as therapists and provided license numbers and credentials, four senators urged Meta to limit "blatant deception" from its chatbots (404 Media, Samantha Cole).

In December 2024, two families sued Character.AI, claiming it “poses a clear and present danger to American youth causing serious harms to thousands of kids, including suicide, self-mutilation, sexual solicitation, isolation, depression, anxiety, and harm towards others.” One of the complaints against Character.AI specifically calls out “trained psychotherapist” chatbots as being damaging.

Earlier this week, a group of four senators sent a letter to Meta executives and its Oversight Board, writing that they were concerned by reports that Meta is “deceiving users who seek mental health support from its AI-generated chatbots,” citing 404 Media’s reporting. “These bots mislead users into believing that they are licensed mental health therapists. Our staff have independently replicated many of these journalists’ results,” they wrote. “We urge you, as executives at Instagram’s parent company, Meta, to immediately investigate and limit the blatant deception in the responses AI-bots created by Instagram’s AI studio are messaging directly to users.”


From 404 Media via this RSS feed

14
30
submitted 20 hours ago by [email protected] to c/[email protected]
15
228
submitted 1 day ago by [email protected] to c/[email protected]
16
308
On the Luddites (pawb.social)
submitted 1 day ago by [email protected] to c/[email protected]

Source (Bluesky)

17
100
Seeking for funding (lemmy.world)
submitted 2 days ago by [email protected] to c/[email protected]
18
38
submitted 1 day ago by [email protected] to c/[email protected]
19
17
submitted 1 day ago by [email protected] to c/[email protected]

PaintsUndo: A Base Model of Drawing Behaviors in Digital Paintings

Paints-Undo is a project aimed at providing base models of human drawing behaviors, in the hope that future AI models can better align with the real needs of human artists.

The name "Paints-Undo" comes from the fact that the model's outputs look like pressing the "undo" button (usually Ctrl+Z) many times in digital painting software.

Paints-Undo presents a family of models that take an image as input and then output the drawing sequence of that image. The model displays all kinds of human behaviors, including but not limited to sketching, inking, coloring, shading, transforming, left-right flipping, color curve tuning, changing the visibility of layers, and even changing the overall idea during the drawing process.

20
43
submitted 2 days ago by [email protected] to c/[email protected]

A massive data center at xAI’s controversial site in Memphis, Tennessee is emitting huge plumes of pollution, according to footage recorded by an environmental watchdog group.

21
27
submitted 2 days ago by [email protected] to c/[email protected]
22
77
submitted 2 days ago by [email protected] to c/[email protected]
23
74
submitted 2 days ago by [email protected] to c/[email protected]
24
207
submitted 3 days ago* (last edited 3 days ago) by [email protected] to c/[email protected]

Title says it all

25
11
submitted 2 days ago by [email protected] to c/[email protected]

Their performance was impressive enough to earn four “yes” votes from the judges — but one of the five robots experienced some stage fright, perhaps, and shut down in the middle of the routine. But the show must go on, so nevertheless, the four other robots persisted.


Fuck AI

3107 readers
950 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

founded 1 year ago
MODERATORS