BlueMonday1984

joined 9 months ago
[–] [email protected] 3 points 3 months ago

I dunno, after 4 years of happily proclaiming that this is the thing we're going to sell, why have these guys never considered that fraud is bad, actually? Is fully automated luxury gay space fraud really so enticing?

It is if you expect to make an absolute crapload of money off of it and have absolutely zero soul. And, well, it's the AI industry - anyone with a soul probably left a couple years ago.

[–] [email protected] 20 points 3 months ago

Something you always have to consider: even if it is a shitty doctor by our standards, it might still be better than no doctor.

No doctor means your shit doesn't get treated. A false doctor (e.g. alternative medicine) gives you a false sense of hope at best and ruins your health at worst.

[–] [email protected] 5 points 3 months ago (1 children)

I'd be happy to see it - this place could do with a notawfulmusic sub

[–] [email protected] 7 points 3 months ago (1 children)

In other news, someone caught a former NFT artist using AI. Not shocked in the slightest that an NFT grifter jumped on the AI train.

[–] [email protected] 10 points 3 months ago (5 children)

Witnessed an AI doomer freaking out over a16z trying to deep-six SB1047.

Seems like the "AI doom" criti-hype is starting to become a bit of an albatross around the industry's neck.

[–] [email protected] 6 points 3 months ago (1 children)

Look on the bright side - we at least got some potential sci-fi gadgets out of it.

The temporal echo chamber sounds like the seed of a good story - you could really get some character analysis out of it if used well.

[–] [email protected] 7 points 3 months ago (2 children)

...I mean yeah that's a pretty obvious use case - if Elon's given you a checkmark against your will, might as well use the benefits to cause him as much grief as possible.

(Also, loved your series on Devs - any idea when the final part's gonna release? Seems it's gotten hit with some major delays.)

[–] [email protected] 10 points 3 months ago (2 children)

Update: Whilst the story's veracity remains unconfirmed as of this writing, it has gone on to become a shitshow for the AI industry anyways - turns out the story got posted on Twitter and proceeded to go viral.

Assuming it's fabricated, I suspect OP took their cues from this 404 Media report made a year ago, which warned about the flood of ChatGPT-generated mycology books and their potentially fatal effects.

As for people believing it, I'm not shocked - the AI bubble has caused widespread harm to basically every aspect of society, and the AI industry is viewed (rightfully so, I'd say) as having willingly caused said harm by developing and releasing AI systems, and as utterly unrepentant about it.

Additionally, those who use AI are viewed (once again, rightfully so) as unrepentant scumbags of the highest order, entirely willing to defraud and hurt others to make a quick buck.

With both those in mind, I wouldn't blame anyone for immediately believing it.

[–] [email protected] 7 points 3 months ago

You're not wrong. Precisely how this AI debacle will strengthen copyright, I don't know, but I fully anticipate it will be strengthened.

[–] [email protected] 12 points 3 months ago (2 children)

It hasn't been hashed out in court yet, but I suspect AI Mickey will be considered copyright infringement, rather than public domain.

[–] [email protected] 14 points 3 months ago

Is it just that these AI programs need no skill at all?

That's a major reason. That Grok's complete lack of guardrails is openly touted as a feature is another.

[–] [email protected] 78 points 3 months ago (9 children)

I've already seen people go absolutely fucking crazy with this - from people posting trans-supportive Muskrat pictures to people making fucked-up images with Nintendo/Disney characters, the utter lack of guardrails has led to predictable chaos.

Between the cost of running an LLM and the potential lawsuits this can unleash, part of me suspects this might end up being what ultimately does in Twitter.
