this post was submitted on 19 Jun 2023
44 points (100.0% liked)
Technology
you are viewing a single comment's thread
Mark Zuckerberg was on the Lex Fridman podcast less than a week ago talking about this, and he said Meta would continue to open-source its models until they reach the point of "super intelligence".
So what changed in the last week?
That was specifically about LLMs. In that same podcast he also highlighted how worrisome scams are, and you can probably extend that to any reality-faking technology as it gets more and more convincing.
It's self-explanatory that the threat of extinction by AI and the threat of crafting a fake reality to shape real-world outcomes are two different threats.
Ok, following you, only commenter so far. Your posts are thought-experiment-inducing. Thank you!
I really enjoyed our talk too! Though just beware, I might well post cringe shit on topics you don't care about.
Oh well, we’ll see how it goes. Cheers!
So creating a text-based AI that impersonates influencers or celebrities is a "cool feature" to "increase engagement" and is totally viable to release to the public, but doing the (checks notes) same thing using voice is incredibly "dangerous" and needs to be protected?
People understand that text can be fake.
People don't really understand that voices can be. That's opening up a lot of scams where someone pretends to be a kidnapped (or otherwise desperate) relative and takes money from people. Make it easier to automate that, with no human in the loop and responses that seem live? A lot more of it is going to happen, a lot more convincingly.
I don't at all believe Facebook cares about that, but it is a real downside to the tech.
Well snarked; I especially enjoyed the copy-paste of the "(checks notes)" phenomenon. Can you figure out why one would be seen as more harmful in the immediate future than the other?