submitted 3 days ago by [email protected] to c/[email protected]

Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many "esoteric" right wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality-challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

[-] [email protected] 11 points 13 hours ago

trying to explain why a philosophy background is especially useful for computer scientists now, so i googled "physiognomy ai" and now i hate myself

https://www.physiognomy.ai/

Discover Yourself with Physiognomy.ai

Explore personal insights and self-awareness through the art of face reading, powered by cutting-edge AI technology.

At Physiognomy.ai, we bring together the ancient wisdom of face reading with the power of artificial intelligence to offer personalized insights into your character, strengths, and areas for growth. Our mission is to help you explore the deeper aspects of yourself through a modern lens, combining tradition with cutting-edge technology.

Whether you're seeking personal reflection, self-awareness, or simply curious about the art of physiognomy, our AI-driven analysis provides a unique, objective perspective that helps you better understand your personality and life journey.

[-] [email protected] 2 points 1 hour ago

Prices ranging from 18 to 168 USD (why not 19 to 199? Number magic?). But then you get an integrated approach of both Western and Chinese physiognomy. Two for one!

Thanks, I hate it!

[-] [email protected] 8 points 9 hours ago* (last edited 9 hours ago)

The web is often Dead Dove in a Bag as a Service innit?

[-] [email protected] 6 points 10 hours ago

trying to explain why a philosophy background is especially useful for computer scientists now, so i googled “physiognomy ai” and now i hate myself

Well, I guess there's your answer - "philosophy teaches you how to avoid falling for hucksters"

[-] [email protected] 3 points 13 hours ago

A company that makes learning material to help people learn to code made a test of programming basics for devs to find out if their basic skills have atrophied after use of AI. They posted it on HN: https://news.ycombinator.com/item?id=44507369

Not a lot of engagement yet, but so far there is one comment about the actual test content, one shitposty joke, and six comments whining about how the concept of the test itself is totally invalid how dare you.

[-] [email protected] 4 points 11 hours ago

Looks like it's been downranked into hell for being too mean to the AI guys, which is weird when it's literally an AI guy promoting his AI-generated trash.

[-] [email protected] 4 points 12 hours ago

It seems that the test itself is generated by autoplag? At least that's how I understand the PS and one of the comments about "vibe regression" in response to an error

[-] [email protected] 4 points 12 hours ago

Anyway, they say it covers Node, and to any question regarding Node the answer is "no"; I don't need an AI to know webdev fundamentals.

[-] [email protected] 12 points 1 day ago* (last edited 1 day ago)

A Supabase employee pleads with his software to not leak its SQL database like a parent pleads with a cranky toddler in a toy store.

https://news.ycombinator.com/item?id=44502318

[-] [email protected] 8 points 1 day ago

The Supabase homepage implies AI bros are two levels below "beginner", which I found somewhat amusing:

Skill LevelA list of different skill levels including: 1. AI Builder 2. No Code 3. Beginner 4. Developers 5. Postgres Devs

[-] [email protected] 4 points 15 hours ago

It's also completely accurate - AI bros are not only utterly lacking in any sort of skill, but actively refuse to develop their skills in favour of using the planet-killing plagiarism-fueled gaslighting engine that is AI, and actively look down on anyone who is more skilled than them, or willing to develop their skills.

[-] [email protected] 2 points 1 day ago* (last edited 11 hours ago)

oof! That's hilarious!

[-] [email protected] 17 points 1 day ago

Another day, another jailbreak method - a new method called InfoFlood has just been revealed, which involves taking a regular prompt and making it thesaurus-exhaustingly verbose.

In simpler terms, it jailbreaks LLMs by speaking in Business Bro.

[-] [email protected] 4 points 14 hours ago* (last edited 14 hours ago)

maybe there's just enough text written in that psychopathic techbro style with similar disregard for normal ethics that llms latched onto that. this is like what i guess happened with that "explain step by step" trick - instead of grafting from pairs of answers and questions like on quora, lying box grafts from sets of question -> steps -> answer like on chegg or stack or somewhere else where you can expect answers will be more correct

it'd be more of case of getting awful output from awful input

[-] [email protected] 7 points 18 hours ago

I mean, decontextualizing and obscuring the meanings of statements in order to permit conduct that would in ordinary circumstances breach basic ethical principles is arguably the primary purpose of deploying the specific forms and features that comprise "Business English" - if anything, the fact that LLM models are similarly prone to ignore their "conscience" and follow orders when deciding and understanding them requires enough mental resources to exhaust them is an argument in favor of the anthropomorphic view.

Or:

Shit, isn't the whole point of Business Bro language to make evil shit sound less evil?

[-] [email protected] 19 points 1 day ago

Penny Arcade chimes in on corporate AI mandates:

[-] [email protected] 14 points 1 day ago

In recent days there's been a bunch of posts on LW about how consuming honey is bad because it makes bees sad, and LWers getting all hot and bothered about it. I don't have a stinger in this fight, not least because investigations proved that basically all honey exported from outside the EU is actually just flavored sugar syrup, but I found this complaint kinda funny:

The argument deployed by individuals such as Bentham's Bulldog boils down to: "Yes, the welfare of a single bee is worth 7-15% as much as that of a human. Oh, you wish to disagree with me? You must first read this 4500-word blogpost, and possibly one or two 3000-word follow-up blogposts".

"Of course such underhanded tactics are not present here, in the august forum promoting 10,000 word posts called Sequences!"

https://www.lesswrong.com/posts/tsygLcj3stCk5NniK/you-can-t-objectively-compare-seven-bees-to-one-human

[-] [email protected] 3 points 1 day ago

I thought you were talking about lemmy.world (also uses the LW acronym) for a second.

[-] [email protected] 11 points 1 day ago

Lesswrong is a Denial of Service attack on a very particular kind of guy

[-] [email protected] 13 points 1 day ago

You must first read this 4500-word blogpost, and possibly one or two 3000-word follow-up blogposts”.

This, coming from LW, just has to be satire. There's no way to be this self-unaware and still remember to eat regularly.

[-] [email protected] 10 points 1 day ago

NYT covers the Zizians

Original link: https://www.nytimes.com/2025/07/06/business/ziz-lasota-zizians-rationalists.html

Archive link: https://archive.is/9ZI2c

Choice quotes:

Big Yud is shocked and surprised that craziness is happening in this casino:

Eliezer Yudkowsky, a writer whose warnings about A.I. are canonical to the movement, called the story of the Zizians “sad.”

“A lot of the early Rationalists thought it was important to tolerate weird people, a lot of weird people encountered that tolerance and decided they’d found their new home,” he wrote in a message to me, “and some of those weird people turned out to be genuinely crazy and in a contagious way among the susceptible.”

Good news everyone, it's popular to discuss the Basilisk and not at all a profoundly weird incident which first led people to discover the crazy among Rats:

Rationalists like to talk about a thought experiment known as Roko’s Basilisk. The theory imagines a future superintelligence that will dedicate itself to torturing anyone who did not help bring it into existence. By this logic, engineers should drop everything and build it now so as not to suffer later.

Keep saving money for retirement and keep having kids, but for god's sake don't stop blogging about how AI is gonna kill us all in 5 years:

To Brennan, the Rationalist writer, the healthy response to fears of an A.I. apocalypse is to embrace “strategic hypocrisy”: Save for retirement, have children if you want them. “You cannot live in the world acting like the world is going to end in five years, even if it is, in fact, going to end in five years,” they said. “You’re just going to go insane.”

[-] [email protected] 9 points 1 day ago

Yet Rationalists I spoke with said they didn’t see targeted violence — bombing data centers, say — as a solution to the problem.

ahem

[-] [email protected] 3 points 20 hours ago

Ah, you see, you fail to grasp the shitlib logic that the US bombing other countries doesn't count as illegitimate violence as long as the US has some pretext and maintains some decorum about it.

[-] [email protected] 10 points 1 day ago* (last edited 1 day ago)

Re the "A lot of the early Rationalists" bit: nice way to not take responsibility - act like you were not one of them, throw them under the bus as "genuinely crazy" like it was some preexisting condition and not something your group made worse, and abuse the general public's bias against "crazy" people while you're at it. Some real Rationalist dark-arts shit here.

There is some dark irony in that the "we must make sure the AI doesn't turn bad" people can't even stop their own people from turning bad after looking at their own ideas. Wonder if they have already gone "Musk isn't a real Rationalist" (imho he isn't, but for some reason LWers seem to like him) after he turned Grok basically into a neonazi (not sure if it was reported here, but Grok is now doing great-replacement shit when asked about Jewish "control of the media").

[-] [email protected] 11 points 2 days ago
[-] [email protected] 7 points 1 day ago

Just the usual stuff religions have to do to maintain the façade, "this is all true but gee oh golly do NOT live your life as if it was because the obvious logical conclusions it leads to end in terrorism"

[-] [email protected] 13 points 2 days ago

"Another thing I expect is audiences becoming a lot less receptive towards AI in general - any notion that AI behaves like a human, let alone thinks like one, has been thoroughly undermined by the hallucination-ridden LLMs powering this bubble, and thanks to said bubble’s wide-spread harms […] any notion of AI being value-neutral as a tech/concept has been equally undermined. [As such], I expect any positive depiction of AI is gonna face some backlash, at least for a good while."

Me, two months ago

Well, it appears I've fucking called it - I recently stumbled across some particularly bizarre discourse on Tumblr, reportedly over a highly unsubtle allegory for transmisogynistic violence:

If you want my opinion on this small-scale debacle, I've got two thoughts about it:

First, any questions about the line between man and machine have likely been put to bed for a good while. Between AI art's uniquely AI-like sloppiness, and chatbots' uniquely AI-like hallucinations, the LLM bubble has done plenty to delineate the line between man and machine, chiefly to AI's detriment. In particular, creativity has come to be increasingly viewed as exclusively a human trait, with machines capable only of copying what came before.

Second, using robots or AI to allegorise a marginalised group is off the table until at least the next AI spring. As I've already noted, the LLM bubble's undermined any notion that AI systems can act or think like us, and double-tapped any notion of AI being a value-neutral concept. Add in the heavy backlash that's built up against AI, and you've got a cultural zeitgeist that will readily other or villainise whatever robotic characters you put on screen - a zeitgeist that will ensure your AI-based allegory will fail to land without some serious effort on your part.

[-] [email protected] 10 points 2 days ago

Humans are very picky when it comes to empathy. If LLMs were made out of cultured human neurons, grown in a laboratory, then there would be outrage over the way in which we have perverted nature; compare with the controversy over e.g. HeLa lines. If chatbots were made out of synthetic human organs assembled into a body, then not only would there be body-horror films about it, along the lines of eXistenZ or Blade Runner, but there would be a massive underground terrorist movement which bombs organ-assembly centers, by analogy with existing violence against abortion providers, as shown in RUR.

Remember, always close-read discussions about robotics by replacing the word "robot" with "slave". When done to this particular hashtag, the result is a sentiment that we no longer accept in polite society:

I'm not gonna lie, if slaves ever start protesting for rights, I'm also grabbing a sledgehammer and going to town. … The only rights a slave has are that of property.

this post was submitted on 06 Jul 2025