this post was submitted on 30 Sep 2023
56 points (95.2% liked)

Futurism

429 readers
1 user here now

A place to discuss the ideas, developments, and technology that can and will shape the future of civilization.

Tenets:

(1) Concepts are often better treated in isolation -- eg: "what if energy became near zero cost?"
(2) Consider the law of unintended consequences -- eg: "if this happens, then these other systems fail"
(3) Pseudoscience and speculative physics are not welcome. Keep it grounded in reality.
(4) We are here to explore the parameter spaces of the future -- this includes political-system changes that advances may trigger. Keep political discussions abstract and not about current affairs.
(5) No pumping of vapourware -- eg: battery tech announcements.

See also: [email protected] and [email protected]

founded 1 year ago

The article and release are interesting unto themselves. However, as this is c/Futurism, let's discuss what happens in the future. How do you folks think this ideological battleground plays out in 5, 50, or 500 years?

all 18 comments
[–] [email protected] 14 points 1 year ago (1 children)

Friendly reminder AI is just regurgitating information, not creating it or verifying it. It could happily tell you how to murder based on 14 posts other people have written about murder - none of whom know what the fuck they're talking about, so neither will the AI.

I am very unafraid.

[–] [email protected] 4 points 1 year ago (1 children)

Do you ever have the existential crisis where you wonder if your own brain is just doing the same thing? Maybe none of your thoughts are original and they're all just recombinations? What if the spark of consciousness is just a delusion?

[–] [email protected] 2 points 1 year ago

Shhh, a delusion is only comfortable when you are ignorant

[–] [email protected] 7 points 1 year ago (2 children)
[–] [email protected] 4 points 1 year ago

The AIs in Hyperion are some of the coolest sci-fi I've ever read.

[–] [email protected] 2 points 1 year ago

It's excellent :)

[–] [email protected] 6 points 1 year ago (2 children)

If nothing else, it shows that the LLM genie cannot be put back in the bottle.

5 years: bad actors are using unregulated LLMs to flood content into every possible medium where there's a chance to influence someone. It becomes increasingly hard to tell what is an authentic human interaction on the internet. CS:GO has fake 12yo kids swearing at each other. Social media websites are now popping up that require a human to physically visit one of their service centres with government ID to register, proving that they're human.

50 years: LLMs are so ubiquitous and self-tuning that you can have any model you want with any parameters. This becomes similar to TARS from Interstellar, sans robot body. Regulations have largely failed, but people tell their personal AIs to "tone it down in front of the family" like they would their belligerent uncles. At least one major diplomatic incident is caused when nation-state-sponsored LLMs argue with one another.

500 years: I am a paperclip.

[–] [email protected] 3 points 1 year ago

CS:GO went the way of Overwatch 1 😒

[–] [email protected] 3 points 1 year ago

At least one major diplomatic incident is caused when nation-state-sponsored LLMs argue with one another.

These are made by the internet; someone is going to leave one on horny mode in front of a bunch of world leaders...

[–] [email protected] 5 points 1 year ago

The "war" with LLMs is already over, that is to say there was no war. We embraced it.

Given how easily we accepted LLMs, and how slow legislation has been to have any meaningful impact on the development of the models, if we achieve true AGI we'll see a similar path to integration.

My opinion is different than some: in my eyes, AGI is a kind of natural evolution of us. If we create it and foster its early development, we will effectively be its parents. As our progeny, it will outlast humanity no matter its relationship with us.

In my eyes that's the only possible way to truly preserve a piece of us forever, and it may be the only way to get information out beyond the Fermi paradox, if the paradox's key tenet that proves true is the short lifespan/implosion of advanced societies.

Creating AGI is what religious people would call judgement day: we'll let an advanced and evolving entity make the judgements about our societies that we can't.

Maybe they'll let us live the same way we let mosquitoes live. We don't wage wars where every human goes out and kills mosquitoes, even though we find them completely useless and even a leech on each of us; if we found an easy way to eradicate them, we probably would.

Whatever happens, we deserve it. I don't say this as a way to say "the people researching and releasing LLMs are making mistakes"; in fact, I think those people are some of the best and brightest among us. Rather, I'm saying humanity on the whole is pretty fucked up. Genocide, torture, hatred, religious bigotry, nationalism, tribalism: humans are largely just kinda smart and honestly fairly evil primates.

We literally raise sentient creatures from birth in cages to murder and devour, and the vast majority of society accepts that?!

To be more concrete, I don't think LLMs will be the path to AGI, but I do think they are an essential component of a consciousness. The "stream" of consciousness, specifically.

5 years: nothing; LLMs are treated like Google is today, an incredible information source and partner for creation.

50 years: hard to conceive of this time period arriving without AGI. I think humans will suffer as they weakly combat the idea of a more advanced race being in charge. Not physical combat, but literal "back in my day" style combat.

500 years: I'd be surprised if we last that long with our current structure and march toward doom. Perhaps the new AGI race strips what it needs to leave the planet and we revert to 1800s technology. So hard to predict.

[–] [email protected] 2 points 1 year ago

Finally, I can get those ducks home from the park!

[–] [email protected] -5 points 1 year ago (2 children)

You can either be for censorship, or against it. There's no "oh let's just censor it a little" middle-ground.

[–] [email protected] 5 points 1 year ago* (last edited 1 year ago) (1 children)

You may need to visit a doctor, because there are some major symptoms of brainrot in this comment. Guys, we should allow child porn everywhere because "YoU CaN EiThEr Be FoR CeNsOrShIp, Or AgAiNsT It." Clearly there is no room for any nuance in anything; it's plainly a black-or-white issue. Sure, we can argue the slippery slope of who gets to decide what needs to be "censored", and that is something we have to navigate carefully, but it's brainmush to throw your hands up in the air and say you either have to let everything be out there or nothing.

[–] [email protected] -2 points 1 year ago (1 children)

Ah yes, the "think of the children!" argument, battle-cry of conservatives everywhere.

[–] [email protected] 4 points 1 year ago

Okay big guy, since you are whining, here's an easy target. It's ironic you're calling anyone here a conservative, since the Lemmy community leans heavily left. So are you saying revenge porn, snuff material, rape footage, etc. should be on every platform that has the ability to host video or images, because in your braindead take, not allowing it is "censorship"? I would hope you would say no, since I'm hoping you are a sensible individual who understands that some moderation of the material a host provides needs to happen, or we will be flooded with "illegal" material and spam. Spam blocking would also count as censorship, but again, I'm hoping you are sensible enough to understand that it's fine to censor some material. Sure, we may be able to agree on some things, but you need to understand that your initial stance is so broad it's almost irresponsible to suggest; I feel you pulled the trigger way too quickly on something without fully understanding the ramifications of what you are suggesting.

[–] [email protected] 1 points 1 year ago (1 children)

I disagree. There is middle ground. If an engineer gives bad advice, it shouldn't be propagated -- you know, bridges fall down and people die. Where possible, the invalid info should be scrubbed and replaced with valid info. The engineering firm also has their reputation, permits to practice, etc. at stake. But an AI does not. There's no one to sue for negligence when someone takes invalid advice from an AI which is masquerading as a doctor. Etc. The companies making AIs are mostly trying to protect themselves when they put the gates in place.

You could go stand on your soapbox and shout suicide tips to the crowd as they walk by. You might get locked up as you're abetting a crime (in most jurisdictions). But what if you're posting suicide advice into a forum, and the advice was generated by an AI? What if a script is posting it? Where does the legal responsibility for harm fall?

[–] [email protected] 2 points 1 year ago

A Large Language Model is just a set of computer algorithms designed to answer a user's question; it's just a tool. None of your arguments are at all relevant to the tool itself, but rather to how the tool is used. A hammer is designed to pound nails, but it can also be used to murder someone. Are you going to sue the hammer manufacturer because they didn't prevent that?

If someone uses a hammer to murder someone, do they get away with it because the hammer wasn't designed to kill, so clearly it's not their fault? No, of course not. This article is nothing but rage-bait. They may as well have taken a hammer, started hitting everything they could (except for nails, of course), and then written some bullshit about how Master-Craft produces items that can be used to perform abortions and kill Native Americans.

And as for my original post, this has to do with how the LLM is trained. There are several ways to "censor" the output of an LLM, including prompts and ban tokens. This is what services like GPT or Stable Diffusion do: they don't censor the training data, they censor the inputs and outputs shown to the user. So should the training data be scrubbed of all traces of anything we find objectionable? There are plenty of murders in Hamlet; do we exclude it because the model might suggest poisoning your partner by pouring poison in their ear while they sleep?
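
To make the "ban tokens" point concrete, here's a minimal sketch using the Hugging Face transformers library and its bad_words_ids generation parameter. The model name and banned phrases are placeholders I picked for illustration, not any real service's config:

```python
# Sketch of decode-time "censorship" via ban tokens: the training data is
# untouched; the listed token sequences are simply blocked during generation.
# Model name and banned phrases are placeholders for illustration.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # any causal LM works here
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Phrases the model is forbidden to emit, as token-id sequences.
# Note the leading-space variant: GPT-2 tokenizes " poison" and "poison" differently.
banned_phrases = ["poison", " poison"]
bad_words_ids = [tokenizer(p, add_special_tokens=False).input_ids for p in banned_phrases]

inputs = tokenizer("The prince died because", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=40,
    bad_words_ids=bad_words_ids,  # blocks these token sequences at decode time
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The point being: the filter only constrains what the decoder is allowed to emit. Hamlet stays in the training data either way.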