this post was submitted on 31 Dec 2023
464 points (99.2% liked)

Not The Onion

Michael Cohen, the former lawyer for Donald Trump, admitted to citing fake, AI-generated court cases in a legal document that wound up in front of a federal judge, as reported earlier by The New York Times. A filing unsealed on Friday says Cohen used Google’s Bard to perform research after mistaking it for “a super-charged search engine” rather than an AI chatbot.

I... don't even. I lack the words.

[–] [email protected] 13 points 10 months ago (1 children)

While the individuals have a responsibility to double check things, I think Google is a big part of this. They're rolling "AI" into their search engine, so people are being fed made up, inaccurate bullshit by a search engine that they've trusted for decades.

[–] [email protected] 11 points 10 months ago (1 children)

That's not what they're talking about here. Unless this is somehow different in the US, only Microsoft so far shows an LLM "answer" next to search results.

[–] [email protected] 8 points 10 months ago (2 children)

Google may not be showing an "AI"-tagged answer, but they're using AI to automatically generate web pages with information collated from outside sources, to keep you on Google instead of citing and directing you to the actual sources of the information they're using.

Here's an example. I'm on a laptop with a 1080p screen. I went to Google (which I basically never use, so it shouldn't be biased for or against me) and did a search for "best game of 2023". I got no actual results in the entire first screen. Instead, their AI or other machine learning algorithms collated information from other people and built a little chart for me right there on the search page and stuck some YouTube (also Google) links below that, so if you want to read an article you have to scroll down past all the Google generated fluff.

I performed the exact same search with DuckDuckGo, and here's what I got.

And that's not to mention all the "news" sites that have straight up fired their human writers and replaced them with AI whose sole job is to just generate word salads on the fly to keep people engaged and scrolling past ads, accuracy be damned.

[–] [email protected] 5 points 10 months ago* (last edited 10 months ago)

I mean, I kind of see your point, but calling those results AI isn't accurate unless you're calling any kind of data collation/wrangling, or even basic programming logic, "AI". What Google is doing is taking the number of times a game is mentioned in pages in the gaming category and trying to spoon-feed you what it thinks you want. But that isn't AI. The point of the person you were replying to is that it wasn't as if Cohen intended to perform a Google search and was misled; you have to go to Google Bard or ChatGPT or whatever and prompt it, meaning it's on you if you're a professional who's going to cite unverified word salad. The YouTube stuff is pretty obvious; it's part of their platform. What was done has nothing to do with web searches.

[–] [email protected] 4 points 10 months ago* (last edited 10 months ago) (1 children)

It was kinda funny to me when everyone freaked out about misinformation and the "death of search," when I already see a lot of people who never leave Google and treat Instant Answers as the truth, like they do with ChatGPT, despite those answers often being very inaccurate and taken out of context.

[–] [email protected] 0 points 10 months ago (1 children)

Never expect the bottom 80% of the bell curve to have self awareness. That's a bet you lose 9 times out of 10.

[–] [email protected] 1 points 10 months ago

Funny how "self awareness" has two meanings here. It's the essence of what makes humans the smartest animals, but the problem you're referring to, a lack of self-reflection, is one of the most common problems among people today. Common sense ain't so common.