TechTakes
Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community
immediately know which represents pragmatism vs esoteric theory
What neither of them is doing is 'The Stare'. Moore can do it, Crowley could do it, Rasputin could do it. Get these low-budget farces out of here, and give me a proper Stare.
Moldbug tries, but he mostly just looks in a way that makes you think 'he just farted and is trying to figure out if I noticed'.
Ah yes, pragmatists, well known for their constantly sunny and optimistic outlook on the future, consequences be damned (?)
man, Wil Wheaton really turned to the dark side, didn't he?
got told to shut up one too many times. See what happens when you censor people, libs?
i ran the NVidia CEO's press conference through my ChatGPT based translator and it came out "lol this is gonna bomb in two quarters but holy shit it's fun while it lasts and i can def get a few scheduled insider sales done, now where's the coke"
Lawrence Lessig falls victim to the siren song of the blarney engines. Also, lol cnn
Many people refer to concerns about the technology as a question of “AI safety.” That’s a terrible term to describe the risks that many people in the field are deeply concerned about. Some of the leading AI researchers, including Turing Prize winner Yoshua Bengio and Sir Geoffrey Hinton, the computer expert and neuroscientist sometimes referred to as “the godfather of AI,” fear the possibility of runaway systems creating not just “safety risks,” but catastrophic harm.
And while the average person can’t imagine how anyone could lose control of a computer (“just unplug the damn thing!”), we should also recognize that we don’t actually understand the systems that these experts fear.
Companies operating in the field of AGI — artificial general intelligence, which broadly speaking refers to the theoretical AI research attempting to create software with human-like intelligence, including the ability to perform tasks that it is not trained or developed for — are among the least regulated, inherently dangerous companies in America today. There is no agency that has legal authority to monitor how the companies develop their technology or the precautions they are taking.
https://www.cnn.com/2024/06/06/opinions/artificial-intelligence-risks-chat-gpt-lessig/index.html
bit of a combo-sneer this morning
CNBC put out an article uncritically repeating yet another round of lieboy pulling the exact same shit. I don't recognize the authors immediately, not sure if they're typical bootlickers or not
openai is going in hard, hiring an ex-NSA person who was appointed by the walking talking racist mop (via dan gillmor). for the .... safety role! ah yes, I'm sure we'll all be so very surprised by this attempted consolidation of power.
to their credit, they did manage to get past the editor:
This is all far-out stuff even for Musk, who is notorious for making ambitious promises to investors and customers that don’t pan out — from developing software that can turn an existing Tesla into a self-driving vehicle with an upload, to EV battery swapping stations.
yeah, fair call on that. I just live in hope that we can get to a point where “lying fucker with history of failures and grandiose statements has made another ridiculous grandiose statement probably composed of lies” can be the actual type of headline, instead of this constant simping bullshit
Or just, you know, not write a headline on that in the first place? Who the fuck is in a dire need to know the last stupid thing a pathological liar said?
publishers who demand clicks
https://zhukeepa.substack.com/p/ai-alignment-and-the-distributed
This came across the dash and well…