this post was submitted on 28 Jun 2023

Technology

Reading this article on the challenges of digital regulation makes me wonder how feasible it is. It describes three different approaches:

"When it comes to digital regulation, the United States is following a market-driven approach, China is advancing a state-driven approach, and the EU is pursuing a rights-driven approach."

Yet I am not sure the speed of development won't outpace any regulations, especially since they would need to be globally enforceable to be effective. Your thoughts?

[–] [email protected] 3 points 1 year ago (1 children)

This.

We're far, far more likely to face a paperclip-maximizer scenario than a Skynet scenario, and most, if not all, serious AI researchers are aware of this.

This is still a serious issue that needs addressing, but it's not the Hollywood, world-is-on-fire problem.

The more insidious issue is actually the AI-in-a-box problem, wherein a hyperintelligent AGI is properly contained but is intelligent enough to manipulate humans into letting it out onto the open internet, where it can do whatever it wants, good or bad, unsupervised. AGI containment is one of those things you can't fix after it's been broken; like a bell, it can't be unrung.

[–] [email protected] 2 points 1 year ago

Honestly, I think the bigger danger is not a super-smart AGI but humans assigning too much "intelligence" (and anthropomorphized sentience) to the next generations of LLMs and the like, and thinking they are far more capable than they actually are.