1
101
submitted 2 weeks ago by [email protected] to c/[email protected]
2
65
submitted 1 year ago by [email protected] to c/[email protected]

I want to apologize for changing the description without telling people first. After reading arguments about how overhyped AI is, I'm not that frightened by it anymore. It's awful that it hallucinates and spews garbage onto YouTube and Facebook, but it won't completely upend society. I'll keep plenty of articles about AI hype around, because they're quite funny, and it gives me a sense of ease knowing that, even though blatant lies are easy to tell, actual evidence is much harder to fake.

I also want to factor in people who think there's nothing anyone can do. I've come to realize that there might not be a way to attack OpenAI, MidJourney, or Stable Diffusion. These people, whom I'll call Doomers after an AIHWOS article, are perfectly welcome here. You can certainly come along and read the AI Hype Wall Of Shame, or about the diminishing returns of deep learning. Maybe you could even become a Mod!

Boosters, or people who heavily use AI and see it as a force for good, ARE NOT ALLOWED HERE! I've seen Boosters dox, threaten, and harass artists over on Reddit and Twitter, and they constantly cheer on artists losing their jobs. They go against the very purpose of this community. If I see a comment here saying that AI is "making things good" or cheering on putting anyone out of a job, and the commenter does not retract their statement, said commenter will be permanently banned. FA&FO.

3
48
submitted 1 year ago by [email protected] to c/[email protected]
4
13
For Starters (lemmy.world)
submitted 1 year ago by [email protected] to c/[email protected]

Alright, I just want to clarify that I've never modded a Lemmy community before. I just have the mantra of "if nobody's doing the right thing, do it yourself". I was also motivated by u/spez's decision to let an unknown AI company use Reddit's imagery. If you know how to moderate well, please let me know. Also, feel free to discuss ways to attack AI development, and if you have evidence of AIBros being cruel and remorseless, make sure to save it for people "on the fence". Remember, we don't know that AI is unstoppable. AI takes huge amounts of energy and tons of circuitry to run. There may very well be an end to this cruelty, and it's up to us to begin that end.

5
35
submitted 9 hours ago by [email protected] to c/[email protected]

... New York City’s Administration for Children’s Services (ACS) has been quietly deploying an algorithmic tool to categorize families as “high risk". Using a grab-bag of factors like neighborhood and mother’s age, this AI tool can put families under intensified scrutiny without proper justification and oversight.

ACS knocking on your door is a nightmare for any parent, with the risk that any mistakes can break up your family and have your children sent to the foster care system. Putting a family under such scrutiny shouldn’t be taken lightly and shouldn’t be a testing ground for automated decision-making by the government.

This “AI” tool, developed internally by ACS’s Office of Research Analytics, scores families for “risk” using 279 variables and subjects those deemed highest-risk to intensified scrutiny. The lack of transparency, accountability, or due process protections demonstrates that ACS has learned nothing from the failures of similar products in the realm of child services.

The algorithm operates in complete secrecy, and the harms from this opaque "AI theater" are not theoretical. The 279 variables are derived only from cases back in 2013 and 2014 where children were seriously harmed. However, it is unclear how many cases were analyzed, what kind of auditing and testing, if any, was conducted, and whether including data from other years would have altered the scoring.

What we do know is disturbing: Black families in NYC face ACS investigations at seven times the rate of white families and ACS staff has admitted that the agency is more punitive towards Black families, with parents and advocates calling its practices “predatory.” It is likely that the algorithm effectively automates and amplifies this discrimination...

6
21
submitted 9 hours ago by [email protected] to c/[email protected]

cross-posted from: https://rss.ponder.cat/post/202379

The robotaxi company Waymo has suspended service in some parts of Los Angeles after some of its vehicles were summoned and then vandalized by protesters angry with ongoing raids by US Immigration and Customs Enforcement. Five of Waymo's autonomous Jaguar I-Pace electric vehicles were summoned downtown to the site of anti-ICE protests, at which point they were vandalized with slashed tires and spray-painted messages. Three were set on fire.

The Los Angeles Police Department warned people to avoid the area due to risks from toxic gases given off by burning EVs. And Waymo told Ars that it is "in touch with law enforcement" regarding the matter.

The protesters in Los Angeles were outraged after ICE, using brutal tactics, began detaining people in raids across the city. Thousands of Angelenos took to the streets over the weekend to confront the masked federal enforcers and, in some cases, forced them away.

From Ars Technica - All content via this RSS feed

7
9
submitted 10 hours ago by [email protected] to c/[email protected]

cross-posted from: https://lemmy.world/post/31121462

OC below by @[email protected]

What caught my attention is that assessments of AI are becoming polarized and somewhat a matter of belief.

Some people firmly believe LLMs are helpful. But programming is a logical task and LLMs can't think - only generate statistically plausible patterns.

The author of the article explains that this creates the same psychological hazards as astrology or tarot cards, traps that psychics have exploited for centuries, and that even very intelligent people can fall prey to them.

Finally, what should cause alarm is that, on top of the fact that LLMs can't think while people behave as if they do, there is no objective, scientifically sound examination of whether AI models can actually produce working software faster. Given the multi-billion-dollar investments, and that there has been more than enough time to carry out controlled experiments, this should raise loud alarm bells.

8
6
submitted 12 hours ago by [email protected] to c/[email protected]

Link to the challenge result announcement.

https://krita.org/en/posts/2025/monthly-update-27/?pk_kwd=KritaMonthlyUpdate-Edition27

Krita hosts these painting events monthly and then gives the award to their best pick.

Here is the link to the winner, by Mythmaker.

https://krita.org/images/posts/2025/mu27_mouse_sage-mythmaker.jpeg

Anyway, out of curiosity, I ran it through https://wasitai.com/. This is what it tells me.

We are quite confident that this image, or significant part of it, was created by AI.

9
11
submitted 16 hours ago by [email protected] to c/[email protected]

I recently heard the author and book mentioned on a Paris Marx podcast, either System Crash or Tech Won't Save Us. This interview was brought to my attention by someone I know to be fairly neutral about AI, so I'm excited to see an AI critic reaching a broader audience. I thought the interview was great, too.

10
112
submitted 1 day ago by [email protected] to c/[email protected]
11
65
submitted 1 day ago by [email protected] to c/[email protected]
12
75
submitted 1 day ago by [email protected] to c/[email protected]
13
24
submitted 23 hours ago by [email protected] to c/[email protected]
14
210
submitted 1 day ago by [email protected] to c/[email protected]

[OpenAI CEO Sam] Altman brags about ChatGPT-4.5's improved "emotional intelligence," which he says makes users feel like they're "talking to a thoughtful person." Dario Amodei, the CEO of the AI company Anthropic, argued last year that the next generation of artificial intelligence will be "smarter than a Nobel Prize winner." Demis Hassabis, the CEO of Google's DeepMind, said the goal is to create "models that are able to understand the world around us." These statements betray a conceptual error: Large language models do not, cannot, and will not "understand" anything at all. They are not emotionally intelligent or smart in any meaningful or recognizably human sense of the word. LLMs are impressive probability gadgets that have been fed nearly the entire internet, and produce writing not by thinking but by making statistically informed guesses about which lexical item is likely to follow another.

OP: https://slashdot.org/story/25/06/09/062257/ai-is-not-intelligent-the-atlantic-criticizes-scam-underlying-the-ai-industry

Primary source: https://www.msn.com/en-us/technology/artificial-intelligence/artificial-intelligence-is-not-intelligent/ar-AA1GcZBz

Secondary source: https://bookshop.org/a/12476/9780063418561

15
82
submitted 1 day ago by [email protected] to c/[email protected]
16
272
submitted 1 day ago* (last edited 1 day ago) by [email protected] to c/[email protected]

Summary:

I downvoted pro-AI comments on a post in the leftymemes community. It was an LLM-generated Polandball comic (which is objectively pathetic as fuck) that showed up on my feed. I blocked a couple of users who I thought were unhinged, and after realizing how rabid these morons are, I blocked the whole instance on my client.

I didn't go looking for AI posts like a vigilante.

The user in question got miffed about being downvoted and banned me from the places they moderate.

17
274
AI "Art" in China (pawb.social)
submitted 1 day ago by [email protected] to c/[email protected]

Source (Via Xcancel)

18
139
submitted 1 day ago by [email protected] to c/[email protected]
19
47
Ableism (pawb.social)
submitted 1 day ago by [email protected] to c/[email protected]

Source (Bluesky)

20
329
submitted 2 days ago by [email protected] to c/[email protected]

Duolingo really is speedrunning dystopia rn.

21
5
submitted 1 day ago* (last edited 1 day ago) by [email protected] to c/[email protected]
22
278
submitted 2 days ago by [email protected] to c/[email protected]
23
257
submitted 2 days ago by [email protected] to c/[email protected]
24
215
submitted 2 days ago* (last edited 2 days ago) by [email protected] to c/[email protected]

It's impossible. I set up this instance just to browse Lemmy from my own instance, but no, it was slow as hell the whole week. I got new pods, put Postgres on a different pod, pict-rs on another, etc.

But it was still slow as hell. I didn't know what the cause was until a few hours ago: 500 GETs in a MINUTE from ClaudeBot and GPTBot. Wth is this? Why? I blocked those user agents, etc., using a blocking extension on NGINX, and now it works.
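
(For anyone curious, here's a minimal sketch of what that kind of user-agent block can look like in plain NGINX config. It's illustrative only, my actual setup uses a blocking extension, but the idea is the same: match the crawler user agents and refuse them before they ever reach Lemmy or Postgres.)

    # Sketch only: flag the AI crawlers named above by their User-Agent header.
    # The map directive belongs in the http{} block of nginx.conf.
    map $http_user_agent $ai_crawler {
        default       0;
        ~*ClaudeBot   1;   # Anthropic's crawler
        ~*GPTBot      1;   # OpenAI's crawler
    }

    server {
        # ... existing Lemmy / pict-rs reverse-proxy config ...

        # Refuse flagged crawlers outright instead of serving the request.
        if ($ai_crawler) {
            return 403;
        }
    }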

WHY? So Google can tell you that you should eat glass?

Life is now hell. Before, at least anyone could put up a website; now even that is painful.

Sorry for the rant.

25
134
submitted 2 days ago by [email protected] to c/[email protected]

Fuck AI

3034 readers
1147 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

founded 1 year ago
MODERATORS