this post was submitted on 07 May 2025
195 points (90.5% liked)

Not The Onion


[–] [email protected] 4 points 1 day ago (4 children)

This brings up an interesting question I like to ask my students about AI. A year or so ago, Meta talked about people making business personas of themselves: if a customer needs help, they can video-chat with an AI that looks like you and is trained to give the responses you want it to. But what if we did that just for ourselves, letting an AI shadow us for a number of years until it could mimic our language and thinking well enough to effectively stand in for us in casual conversation?

If the murder victim in this situation had trained his own AI in such a manner, would that AI, after years of shadowing and training, be able to mimic him well enough to give his most likely response to this situation? Would the AI in the video still have forgiven the murderer, and would that carry more significant meaning?

If you could snapshot yourself as you are right now and keep it as a "living photo" AI that behaved and talked like you when interacted with, what would you do with it? And if you could have a snapshot AI of anyone in the world in a picture frame on your desk, someone you could talk to and interact with, who would you choose?

[–] [email protected] 5 points 1 day ago* (last edited 1 day ago) (1 children)

it would hold the same meaning as now, which is nothing.

this is automatic writing with a computer. no matter what you train it on, you're using a machine built to produce things that match other things. the machine can't hold opinions, can't remember, can't answer from its training data. all it can do is generate a plausible transcript of a conversation and steer it with input.

one person does not generate enough data in a lifetime, so you're necessarily using aggregated data from millions of people as a base. there's also no meaning ascribed to anything in the training data. if you feed it all of a person's memories, the output conforms to that data the way water conforms to a shower nozzle. it's just a filter on top.
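the "producing things that match other things" point can be made concrete with a toy bigram generator. this is a deliberately crude sketch, nothing like a real LLM internally, but it shows the same property: the machine can only stitch together sequences that statistically resemble its training text, with no meaning attached to any token.

```python
from collections import defaultdict
import random

def train(corpus):
    # Record, for each word, which words ever followed it in training.
    model = defaultdict(list)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        model[a].append(b)
    return model

def generate(model, start, length=8, seed=0):
    # Walk the bigram table; every emitted pair existed in the corpus.
    rng = random.Random(seed)  # fixed seed for a reproducible "conversation"
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break  # dead end: the word never had a successor in training
        out.append(rng.choice(followers))
    return " ".join(out)
```

by construction, every adjacent word pair in the output already appeared in the training data; swapping the corpus (one person's writing vs. millions of people's) just changes which surface patterns get echoed back.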

as for the final paragraph, i want computers to exhibit as little personhood as possible, because i've read transcripts of the ELIZA experiments. it could only pick out subject-verb-object patterns and echo back the same nouns it was fed, and people were saying it should replace psychologists.
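for the curious, that echo trick fits in a few lines. this is a hypothetical miniature in the style of ELIZA, not Weizenbaum's actual program: regex rules grab a fragment of the user's sentence, flip the pronouns, and hand the user's own words back as a question.

```python
import re

# First/second-person swaps so the echoed fragment reads as a reply.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(fragment):
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

# Each rule: a pattern to match, and a template that re-emits the match.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def respond(sentence):
    for pattern, template in RULES:
        m = pattern.search(sentence)
        if m:
            return template.format(reflect(m.group(1)))
    return "Please go on."  # content-free fallback when nothing matches
```

there is no model of the user anywhere in this, only string surgery, which is exactly why people projecting a therapist onto it was so alarming.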

[–] [email protected] 4 points 1 day ago (1 children)

The deceased's sister wrote the script; AI/LLMs didn't write anything. It's in the article. So the assumptions you made in your middle two paragraphs don't really apply to this specific news article.

[–] [email protected] 1 points 1 day ago (1 children)

i was responding to the questions posted in the comment i replied to.

also, doesn't that make this entire thing worse?

[–] [email protected] 3 points 1 day ago (1 children)

also, doesn’t that make this entire thing worse?

No? This is literally a Victim Impact Statement. We see these all the time after the case has determined guilt and before sentencing. This is the opportunity granted to the victims to outline how they feel on the matter.

There have been countless court cases where the victims say things like "I know that my husband would have understood and forgiven [... drone on for a 6 page essay]" or even done this exact thing, but without the "AI" video/audio (home videos with dubbed overlay of a loved one talking about what the deceased person would want/think about it). It's not abnormal and has been accepted as a way for the aggrieved to voice their wishes to the court. All that's changed here was the presentation. This didn't affect the finding of if the person was guilty as it was played after the finding and was only played before sentencing. This is also the customary time where impact statements are made. The "AI" didn't make the script. This is just a mildly fancier impact statement and that's it. She could have dubbed it over home video with a fiverr voice actor. Would that change how you feel about it? I see no evidence that the court treated this anything different than any other impact statement. I don't think anyone would be fooled that the dead person is magically alive and directly making the statement. It's clear who made it the whole time.

[–] [email protected] 1 points 1 day ago (1 children)

i had no idea this was a thing in american courts. it just seems like an insane thing to include in a murder trial.

[–] [email protected] 1 points 1 day ago

Those statements come after the trial during the sentencing phase. They're not used to sway the initial verdict.

[–] [email protected] 1 points 1 day ago

So… Who Framed Roger Rabbit?

The book not the movie.

[–] [email protected] 1 points 1 day ago

I wouldn't want to talk to AI either. Just have it send me a voicemail recording of the video, but transcribed into a text, into my spam folder.