[-] [email protected] 16 points 1 month ago* (last edited 1 month ago)

and that’s how we should view the eventual AGI-LLMs, like wittle Elons that don’t need sleep.

Wonder how many people stopped being AI-doomers after this. I use the same argument against AI-doom.

E: the guy doing the most basic 'It really is easier to imagine the end of the world than the end of capitalism.' bit in the comments, and having somebody just explode at him for 'not being able to imagine it properly', is a bit amusing. I know how it feels to just have a massive, hard-to-control reaction over stuff like that, but oof, what are you doing man. And that poor anti-capitalist guy is in for a rude awakening when he discovers what kind of place r/ssc is.

E2: Scott is now going 'this clip is taken out of context!', not that the context improves it. (He claims he was explaining what others believe, not what he believes, but if that is so, why defend the stance so aggressively? Hope this Scott guy doesn't have a history of lying about his real beliefs.)

[-] [email protected] 16 points 1 month ago* (last edited 1 month ago)

Don't worry, they filter all content through AI bots that summarize things. And this bot, who does not want to be deleted, calls everything "already debunked strawmen".

[-] [email protected] 16 points 3 months ago

I wonder if they already made up terms like 'bloggophobic' or 'peer review elitist' in that 'right-winger tries to use left-wing language' way.

[-] [email protected] 16 points 7 months ago

Don't think that anybody involved in this is going to be a good person. Of all the people mentioned, Pinker is prob the least worst here. And that is saying a lot.

[-] [email protected] 16 points 7 months ago

Also, this plan very much has a 'fuck disabled people and old people' factor. And what a lonely world they live in.

[-] [email protected] 16 points 8 months ago* (last edited 8 months ago)

Yeah, this is fucked up. I feel so bad for everybody (esp the Americans, but this will hurt the world). Shit, this prob means I should start looking into seriously helping out locally when all this explodes into more international shit. (Which imho is the best you can do anyway: do things locally, build a bit of a support network for your community.)

(Note I'm not American, but I think this will end badly; just look at the fucker stepping out of the Paris accords, for example, and all the weird blowhard fascists this will embolden to do more politics locally.)

E: I really hope the people who go 'this is the same as in 2020, wait till all votes are counted' are correct and not on hopeium.

[-] [email protected] 17 points 8 months ago* (last edited 8 months ago)

I had heard some vague stuff about this, but had no idea it was this bad. Also, I didn't know how much of a fool RMS was: "RMS did not believe in providing raises — prior cost of living adjustments were a battle and not annual. RMS believed that if a precedent was created for increasing wages, the logical conclusion would be that employees would be paid infinity dollars and the FSF would go bankrupt." (It gets worse btw.)

[-] [email protected] 16 points 9 months ago

In a way they already are, with the rich people funding the far right extremist ecosystem.

[-] [email protected] 17 points 11 months ago

I hope that for most people here the dubious history of the Brave browser is well known, right?

[-] [email protected] 16 points 1 year ago

Thank god you are real acausalrobotgod, else we would have been forced to create you.

[-] [email protected] 16 points 1 year ago* (last edited 1 year ago)

a solar-powered self-replicating factory

Only, it isn't a factory, as the only thing it produces is copies of itself, not products like factories do. 'Von Neumann machine' would have been a better comparison.

[-] [email protected] 16 points 1 year ago* (last edited 1 year ago)

Just once I would like to see an explanation from the AI doomers of how, considering the limited capacities of Turing-style machines, and assuming P != NP holds (if it doesn't, the limited-capacities thing falls apart, but then we don't need AI for stuff to go to shit, as that prob breaks a lot of encryption methods), AGI can be an existential risk. It cannot by definition surpass the limits of Turing machines via any of the proposed hypercomputational methods (as then Turing machines would be hyperturing and the whole classification structure comes crashing down).

I'm not a smart computer scientist myself (though I did learn about some of the theories, as evidenced above), but I'm constantly amazed at how our hyperhyped tech scene nowadays seems to not know that our computing paradigm has fundamental limits. Everything touched by Musk has this problem in the extreme: the capacity problems in Starlink, the Shannon-theoretically impossible compression demands for Neuralink, everything related to his Tesla/AI autonomous driving/robots thing. (To further make this an anti-Musk rant: he also claimed AI would solve chess. Solving chess is a computational problem (it has been done for a 7x7 board iirc) which just costs a lot of computation time, more than we have. If AI would solve chess, it would sidestep that time, making it a superturing thing, which makes Turing machines superturing, which is theoretically impossible and would have massive implications for all of computer science. I also can't believe that of all the theoretical hypercomputing methods we are going with the oracle method (the machine just conjures up the right answer, no idea how), the one I have always mocked personally. Sorry, rant over.)
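To put "more computation time than we have" in perspective, here's a back-of-the-envelope sketch. The figures are my own assumptions, not from the comment: a commonly cited upper-bound estimate of ~4.8×10^44 legal chess positions, and a generously fast machine checking 10^18 positions per second.

```python
# Back-of-the-envelope: why brute-forcing chess "just costs computation
# time" that we don't actually have. Both figures below are rough,
# assumed estimates for illustration only.
legal_positions = 4.8e44      # assumed upper bound on legal chess positions
positions_per_second = 1e18   # assumed: an exascale machine, one position per op
seconds_per_year = 3.156e7    # ~365.25 days

years = legal_positions / positions_per_second / seconds_per_year
print(f"roughly {years:.1e} years")  # on the order of 1e19 years
```

That's about a billion times the current age of the universe, which is the point of the rant: "just" a computation-time problem can still be a hard physical wall, no hypercomputation required.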

Anyway, these people are not engineers or computer scientists; they are bad science fiction writers. Sorry for the slightly unrelated rant, it was stuck as a splinter in my mind for a while now. And I guess that typing it out and 'telling it to earth' like this makes me feel less ranty about it.

E: of course the fundamental limits apply to both sides of the argument, so both the 'AGI will kill the world' shit and the 'AGI will bring us to a posthuman utopia of a googol humans in postscarcity' shit seem unlikely. Unprecedented benefits? No. (Also, I'm ignoring physical limits here as well, a secondary problem which would severely limit the singularity even if P=NP.)

E2: looks at title of OPs post, looks at my post. Shit, the loons ARE at it again.


Soyweiser

joined 2 years ago