this post was submitted on 30 Jul 2024
103 points (96.4% liked)

[–] [email protected] 5 points 3 months ago (1 children)

No surprise, since there's not a lot of pressure to do any other regulation on the closed-source versions. Self-monitoring by a for-profit company always works out so well...

And for anyone saying "AGI won't happen, there's no danger"... what if, on the slightest chance, you're wrong? Is the maddening rush to get the next product out, without any research into what we're doing, worth a mistake? Sci-fi is fiction, but there are lessons there too, and we're ignoring them all because "that can't happen" is stronger than "let's be sure".

Besides, even with no AGI, humans alone can do huge damage with "bad" AI tools, which we're not looking into either.

[–] [email protected] 14 points 3 months ago (1 children)

> And for anyone saying "AGI won't happen, there's no danger"... what if, on the slightest chance, you're wrong? Is the maddening rush to get the next product out, without any research into what we're doing, worth a mistake? Sci-fi is fiction, but there are lessons there too, and we're ignoring them all because "that can't happen" is stronger than "let's be sure".

What sorts of scenarios involving the emergence of AGI do you think regulating the availability of LLM weights and training data (or of more closely regulating AI training, research, and development within the “closed source” shops like OpenAI) would help us avoid?

And how does that threat compare to impending damage from climate change if we don’t reduce energy consumption + reliance on fossil fuels?

> Besides, even with no AGI, humans alone can do huge damage with "bad" AI tools, which we're not looking into either.

When I search for “misuse of AI” I get a ton of results from people talking about exactly that.

[–] [email protected] 2 points 3 months ago

Good questions.

> What sorts of scenarios involving the emergence of AGI do you think regulating the availability of LLM weights and training data (or of more closely regulating AI training, research, and development within the "closed source" shops like OpenAI) would help us avoid?

Honestly, we might be too late for avoidance anyway, but it's specifically research into the alignment problem that I think regulation could help with. Since they're still self-regulating, they're free to do what OpenAI did with their alignment department... it's akin to someone manufacturing a new chemical and not bothering with any research on its side effects, only on what they can gain from it. Oh shit, never mind, that's standard operating procedure, isn't it, at least as long as the government isn't around to stop it.

> And how does that threat compare to impending damage from climate change if we don't reduce energy consumption + reliance on fossil fuels?

Another topic that I personally think we're doomed to ignore until things get bad enough to affect more than poor people and poor countries. How do they compare? Climate change, and the probable directions it takes the planet, is much more of a certainty than the unknowns of whether AGI is even possible and what effects it could have. Interesting that we're taking the same approach to both, even though climate change is the more obvious problem. Plus we're profiting via greenwashing rather than making a concerted effort to do effective things to mitigate what we still could.