this post was submitted on 05 Apr 2026
TechTakes
I aired some Reviewer #2 grievances in the bsky comments:
https://bsky.app/profile/ronanfarrow.bsky.social/post/3mitapp7j2s2c
"Kalanick now runs a robotics startup; in his free time, he said recently, he uses OpenAI’s ChatGPT “to get to the edge of what’s known in quantum physics.”"
As a physicist, I have never pressed F to doubt harder.
"In 2022, researchers at a pharmaceutical company tested whether a drug-discovery model could be used to find new toxins; within a few hours, it had suggested forty thousand deadly chemical-warfare agents." To the best of my knowledge, these suggestions were never evaluated by any other researchers.
(The original paper was published as a "comment": https://www.nature.com/articles/s42256-022-00465-9)
Similar claims of AI-facilitated discoveries have turned out to be overblown in other fields.
https://pubs.acs.org/doi/pdf/10.1021/acs.chemmater.4c00643
"In a 2025 study, ChatGPT passed the test more reliably than actual humans did."
If this is referring to Jones and Bergen's "Large Language Models Pass the Turing Test", that's a preprint (arXiv:2503.23674) that has yet to pass peer review more than a year after its posting.
"A classic hypothetical scenario in alignment research involves a contest of wills between a human and a high-powered A.I. In such a contest, researchers usually argue, the A.I. would surely win"
Which researchers?
(Hint: Eliezer Yudkowsky is not a researcher.)
AI: "I will convince you to let me out of this box"
Humanity (wringing hands): "Oh, where is our savior? Who will stand fast in the face of all entreaties?"
Bartleby the Scrivener: hello
"...a hub of the effective-altruism movement whose commitments included supporting the distribution of mosquito nets to the global poor."
Phrasing like this subtly underplays the fact that the (to put it briefly) weird people were part of EA all along.
https://repository.uantwerpen.be/docman/irua/371b9dmotoM74
"In late 2022, four computer scientists published a paper motivated in part by concerns about “deceptive alignment,” ... one of several A.I. scenarios that sound like science fiction—but, under certain experimental conditions, it’s already happening."
Barrett et al.'s arXiv:2206.08966? AFAIK, that was never peer-reviewed either; "posted" is not the same as "published". And claims in this area are rife with criti-hype:
https://pivot-to-ai.com/2025/09/18/openai-fights-the-evil-scheming-ai-which-doesnt-exist-yet/
Oh, right, the "Future of Life Institute". Pepperidge Farm remembers:
"In January 2023, Swedish magazine Expo reported that the FLI had offered a grant of $100,000 to a foundation set up by Nya Dagbladet, a Swedish far-right online newspaper."
https://en.wikipedia.org/wiki/Future_of_Life_Institute#Activism
"Tegmark also rejected any suggestion that nepotism could have played a part in the grant offer being made, given that his brother, Swedish journalist Per Shapiro ... has written articles for the site in the past."
https://www.vice.com/en/article/future-of-life-institute-max-tegmark-elon-musk/