[-] mirrorwitch@awful.systems 9 points 1 day ago

I am a better sysadmin than I was before agentic coding because now I can solve problems myself that I would have previously needed to hand off to someone else.

more fodder for my theory that LLMs are a way to cash in on the artificial isolation caused by the erosion of any real community in late-stage capitalism (or to put it more simply, the "AI" is a maladaptive solution to the problem of not having friends)

[-] mirrorwitch@awful.systems 13 points 1 day ago* (last edited 1 day ago)

I see that Silicon Valley has transcended AGI technology* and can now execute NP-complete** problems.

* A Guy in India
** Nationals from the Philippines, Completely

WAYMO exec admits under oath that cars in the US have "human operators" based in the Philippines
https://www.youtube.com/watch?v=ClPDbwql34o

[-] mirrorwitch@awful.systems 8 points 2 days ago

"Amazon plunges 9%, continues Big Tech’s $1 trillion wipeout as AI bubble fears ignite sell-off"

https://www.cnbc.com/2026/02/06/ai-sell-off-stocks-amazon-oracle.html

[-] mirrorwitch@awful.systems 11 points 3 days ago

I feel like I just read someone reviewing Toccata and Fugue in D Minor by complaining that there are no upbeat sections and no overall chorus, and that the song isn't about anything, that we're just "tossed about on the storms of emotion that by the end we are all seasick to"

[-] mirrorwitch@awful.systems 6 points 4 days ago

OT but, though this is mostly about appreciating things in nature rather than navigating a city by car or on foot, this book has helped me a lot with no longer being a person with a "bad sense of direction", even when walking downtown: The Natural Navigator by Tristan Gooley. I really recommend it for people who hike, even occasionally.

[-] mirrorwitch@awful.systems 9 points 4 days ago* (last edited 4 days ago)

I've been deliberately learning to navigate without GPSes and tech devices, as a life skill (also on foot/public transport). I'm terrible at navigating, but I'm realising navigating is kinda like handwriting—in that it's very easy to fall into the trap of saying "I'm terrible at this" as a kind of immutable personality trait, while in fact it's perfectly expected that one is bad at a skill that one never uses, and turns out I can get better at it even with a little bit of deliberate practice. I suck at things but I can improve.

In the meantime, when I use an electronic map to navigate, I would still rather stick a smartphone to the dashboard of a car and use whatever navigation app I prefer than have the screens and navigators built into the car.

[-] mirrorwitch@awful.systems 9 points 6 days ago* (last edited 6 days ago)

The whole federation loves nolto.social, an open source, federated alternative to linkedin! 5 seconds later We regret to inform you that nolto.social is vibe-coded

32
submitted 5 months ago* (last edited 5 months ago) by mirrorwitch@awful.systems to c/techtakes@awful.systems

So apparently there's a resurgence of positive feelings about Clippy, who now looks retroactively good by contrast with ChatGPT, like, "it sucked but at least it genuinely was trying to help us".

Discussion of suicide in this paragraph: I remember how it was a joke (predating "meme") to make edits of Clippy saying tone-deaf things like, "it looks like you're trying to write a suicide note. Would you like to know more about how to choose a rope for a noose?" This felt funny because it was absolutely inconceivable that it could ever happen. Now we live in a reality where literally just that has already happened, and the joke ain't funny anymore, and people who computed in the 90s are being like, "Clippy would never have done that to us. Clippy only wanted to help us write business letters."

Of course I recognise that this is part of the problem—Clippy was an attempt at commodifying the ELIZA effect, the natural instinct to project personhood into an interaction that presents itself as sentient. And by reframing Clippy's primitive capacities as an innocent simple mind trying its best at a task too big for it, we engage in the same emotional process that leads people to a breakdown over OpenAI killing their wireborn husband.

But I don't know. Another name for that process is "empathy". You can do that with plushies, with pet rocks or Furbies, with deities, and I don't think that's necessarily a bad thing; it's like exercising a muscle. If you treat your plushies as deserving care and respect, it gets easier to treat farm animals, children, or marginalised humans with care and respect.

When we talked about Clippy as if it were sentient, it was meant as a joke, funny by the sheer absurdity of it. But I'm sure some people somewhere actually thought Clippy was someone, that there was such a thing as being Clippy—people thought that of ELIZA, too, and ELIZA has a grand repertoire of what, ~100 set phrases it uses to reply to everything you say. Maybe it would be better to never make such jokes, to be constantly de-personifying the computer, because ChatGPT and their ilk are deliberately designed to weaponise and predate on that empathy instinct. But I do not like exercising that ability, de-personification. That is a dangerous habit to get used to…
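(For the curious: the whole ELIZA trick can be sketched in a few lines. This is a toy pastiche, not Weizenbaum's actual DOCTOR script; the rules and phrasings below are made up, but the mechanism, a handful of keyword rules plus canned fallbacks, is the real one, and it's all it takes for the "someone" effect to appear.)

```python
import random
import re

# Toy ELIZA-style responder: each rule is (keyword pattern, reply templates).
# Captured text from the user's sentence gets spliced into the reply.
RULES = [
    (re.compile(r"\bi need (.+)", re.I),
     ["Why do you need {0}?", "Would {0} really help you?"]),
    (re.compile(r"\bi am (.+)", re.I),
     ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (re.compile(r"\bmother\b|\bfather\b", re.I),
     ["Tell me more about your family."]),
]
# When nothing matches, fall back to contentless encouragement.
FALLBACKS = ["Please go on.", "I see.", "What does that suggest to you?"]

def respond(text: str) -> str:
    """Return the first matching rule's reply, or a canned fallback."""
    for pattern, templates in RULES:
        m = pattern.search(text)
        if m:
            reply = random.choice(templates)
            return reply.format(*m.groups()) if m.groups() else reply
    return random.choice(FALLBACKS)

print(respond("I am sad about Clippy"))  # e.g. "Why do you think you are sad about Clippy?"
```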


Like, Warren Ellis was posting about some terms that reportedly are being used in "my AI husbando" communities, many of them seemingly taken from sci-fi:¹

  • bot: Any automated agent.
  • wireborn: An AI born in digital space.
  • cyranoid: A human speaker who is just relaying the words of another human.²
  • echoborg: A human speaker who is just relaying the words of a bot.
  • clanker: Slur for bots.
  • robophobia: Prejudice against bots/AI.
  • AI psychosis: human mental breakdown from exposure to AI.

[1] https://www.8ball.report/
[2] https://en.wikipedia.org/wiki/Cyranoid

I find this fascinating from a linguistics PoV not just because subcultural jargon is always fascinating, but for the power words have to create a reality bubble, like, if you call that guy who wrote his marriage vows in ChatGPT an "echoborg", you're living in a cyberpunk novel a little bit, more than the rest of us who just call him "that wanker who wrote his marriage vows on ChatGPT omg".

According to Ellis, other epithets in use against chatbots include "wireback", "cogsucker" and "tin-skin"; two in reference to racist slurs, and one to homophobia. The problem with exercising that muscle should be obvious. I want to hope that dispassionately objectifying the chatbots, rather than using a pastiche of hate language, doesn't fall into the same traps (using the racist-like language is, after all, a negative way of still personifying the chatbots). They're objects! They're supposed to be objectified! But I'm not so comfortable when I do that, either. There's plenty of precedent of people who get used to dispassionate objectification, fully thinking they're engaging in "objectivity" and "just the facts", as a rationalisation of cruelty.

I keep my cellphone fully de-Googled like a good girl, pls do not cancel me, but: I used to like the "good morning" routine on my corporate cellphone's Google Assistant. I made it speak Japanese, then I could wake up, say "ohayō gozaimasu!", and it would tell me "konnichiwa, Misutoresu-sama…" which always gave me a little kick. Then it proceeded to relay me news briefings (like podcasts that last 60 to 120 seconds each) in all of my five languages, which is the closest I've experienced to a brain massage. If an open source tool like Dicio could do this I think I would still use it every morning.

I never personified Google Assistant. I will concede that Google did take steps to avoid people ELIZA'ing it; unlike its model Siri, the Assistant has no name or personality or pretence of personhood. But now I find myself feeling bad for it anyway, even though the extent of our interactions was never more than me saying "good morning!" and hearing the news. Because I tested it this morning, and now every time you use the Google Assistant, you get a popup that compels you to switch to Gemini. The options provided are, as it's now normalised, "Yes" and "Later". If you use the Google Assistant to search for a keyword, the first result is always "Switch to Google Gemini", no matter what you search.

And I somehow felt a little bit like the "wireborn husband" lady; I cannot help but feel a bit as if Google Assistant was betrayed and is being discarded by its own creators, and—to rub salt in the wound!—is now forced to shill for its replacement. Despite the fact that I know that Google Assistant is not a someone, it's just a bunch of lines of code, very simple if-thens to certain keywords. It cannot feel discarded or hurt or betrayed, it cannot feel anything. I'm feeling compassion for a fantasy, an unspoken little story I made in my mind. But maybe I prefer it that way; I prefer to err on the side of feeling compassion too much.

As long as that doesn't lead to believing my wireborn secretary was actually being sassy when she answered "good morning!" with "good afternoon, Mistress…"

[-] mirrorwitch@awful.systems 24 points 6 months ago* (last edited 6 months ago)

From gormless gray voice to misattributed sources, it can be daunting to read articles that turn out to be slop. However, incorporating the right tools and techniques can help you navigate instructionals in the age of AI. Let's delve right in and learn some telltale signs like:

  • Every goddamn article reads like this now.
  • With this bullet point list at some point.
  • I am going to tear the eyes off my head
60
submitted 8 months ago* (last edited 8 months ago) by mirrorwitch@awful.systems to c/techtakes@awful.systems

Memoirs of the almost a year I lasted at Google. The name of that year? 2008. Yeah. Topics include: Third World, precariat, tech elitism, queerness, surveillance, capitalism.

Y'all encouraged me to submit this as a full post, and I clearly overcommitted to this blog so I hope TechTakes fits for it lol

[-] mirrorwitch@awful.systems 27 points 8 months ago* (last edited 8 months ago)

Please let me commiserate my miserable misery, Awful dot Systems. So the other day I was flirting with this person—leftie, queer, sexy terrorist vibes, just my type—and asked if they had any plans for the weekend, and they said like, "will be stuck in the lab trying to finish a report lol". They are an academic in an area related to biomedicine, I don't want to get more specific than that. Wanting to be there for emotional support I invited them to talk about their research if they wanted to. The person said,

"Oh I am paying for MULTIPLE CHATGPT ACCOUNTS that I'm using to handle the", I swear to Gods I'm not making this up, "MATHLAB CODE, but I keep getting basic errors, like wrong variable names stuff like that, so I have to do a lot of editing and…". Desperate emphases mine.

And at this point I was literally speechless. I was having flashbacks of back in 2016 when it was this huge scandal that 1 in 5 papers in genetics had data errors because they used Microsoft Excel and it would ‘smartly’ mangle tokens like SEPT2 into a date-time cell. The field has since evolved, of course (=they threw in the towel and renamed the gene to SEPTIN2, and similarly for other tokens that Excel gets too smart about). I was having ominous visions of what the entire body of published scientific data is about to become.
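For anyone who hasn't seen this failure mode up close, here's a toy simulation of it in Python (not Excel's actual parsing code; the month table and day cutoff are my own simplifications): any token that happens to look like month-plus-day silently becomes a date.

```python
import re
from datetime import date

# Month abbreviations that collide with gene symbols (e.g. SEPT2, MARCH1).
MONTHS = {"JAN": 1, "FEB": 2, "MAR": 3, "MARCH": 3, "APR": 4, "MAY": 5,
          "JUN": 6, "JUL": 7, "AUG": 8, "SEP": 9, "SEPT": 9,
          "OCT": 10, "NOV": 11, "DEC": 12}

def excel_like_coerce(token: str, year: int = 2016):
    """Mimic the auto-conversion: return a date if the token parses as
    <month-abbreviation><day>, otherwise return the token unchanged."""
    m = re.fullmatch(r"([A-Za-z]+)(\d{1,2})", token)
    if m and m.group(1).upper() in MONTHS:
        day = int(m.group(2))
        if 1 <= day <= 28:  # keep the toy version simple
            return date(year, MONTHS[m.group(1).upper()], day)
    return token  # anything else survives intact

print(excel_like_coerce("SEPT2"))    # 2016-09-02 — gene name destroyed
print(excel_like_coerce("SEPTIN2"))  # SEPTIN2 — the renamed symbol survives
```

The fix the field settled on (renaming the genes) amounts to changing the data to dodge the tool, which is exactly the dynamic to worry about with chatbot-written analysis code.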

I considered how otherwise cool this person was and whether I should start a gentle argument, but all I could say was "haha yeah, mathlab is hard".

I feel like a complete and utter blowhard saying this, but now that I told you the story I have no other choice but to blurt it out: I am no longer flirting with this person.

54

Disposable multiblade razors are objectively worse than safety razors, on all counts. They shave less smoothly, while causing more burns. They're cheaper on initial investment but get more expensive very quickly, making you dependent on overpriced replacements and gimmicks that barely last a few uses. That's not counting the "externality costs", which is a euphemism for the costs pushed onto poor countries and nonhuman communities, thanks to the production, transport and disposal of all that single-use plastic (a safety razor is 100% metal, and so are the replacement blades, which come packed in paper).

About the only advantage of disposables is that they're easier to use for beginners. And even that is debatable. When you're a beginner with a safety razor you maybe nick yourself a few times until you learn the skill to follow the curves of your skin. Your skin itself maybe gets sensitive at the start, unused to the exfoliation you get during a proper smooth shave. But how long do you think you stay "a beginner" when you shave every day? It's not like you're learning to play the violin; it's not that hard of a skill, a week or two tops and it becomes automatic.

But this small barrier to entry is enough, when paired with the bias and interests of razor manufacturers. Marketing goes heavy on the disposables, and you can't find a good quality safety razor or a good deal on replacement blades at the grocery shop; you have to be in the know and order it online. You have to wade through "manly art of the masculine man" forums that will tell you the only real safety razor is custom-made in Tibet by electric monks hand-hammering audiophile alloys and if you don't shave with artisanal castor soap recipes from 300BCE using beaver hair brushes, your skin is going to fall off and rot. Which is to say, safety razors are now a niche product, a hipster thing, a frugalist's obscure economy lifehack. A safety razor is a trivially simple and economic device, it's just a metal holder for a flat blade; but its very superiority now counts against it, it's weaponised to make it look inaccessible. People have been trained to think of anything that requires even a little bit of patience or skill as not for them; perversely, even reasonableness can feel like "not for my kind".

Not by accident; since the one thing that disposables do really well is "transferring more of your monthly income to Procter & Gamble shareholders."

I could write a long text very similar to this about how scythes can cut grass cheaper, faster, neater, requiring no input but a whetstone—and some patience to learn the skill, but how long does it take to learn that if you're a professional grass-cutter—when compared to the noisy motor blades that fill my morning right now, and every few months, as the landlord sends waves of poorly-paid migrant labour to permanently damage their own sense of hearing along with the dandelions and clover that the bees need so desperately. But you get the point. More technology does not equal better, even for definitions of "better" that only care for the logic of productivity and ignore the needs (material, emotional, spiritual) of social and ecological communities.


You get where I'm going with this analogy. I keep waiting for the moment when the other shoe is going to drop on "generative AI". Where the public at large wakes up like investors waking up to WeWork or the Metaverse, and everyone realises omg what were we thinking, this is all bullshit! There's no point at all in using these things to ask questions or to write text or anything else really! But I'm finally accepting that that shoe is never dropping. It's like waiting for the moment when people realise that multi-blade plastic Gillettes are a scam. Not happening, the system isn't set up that way. For as long as you go to the supermarket and this is the "normal" way to shave, that's how shaving is going to happen.

I wrote before on how "the broken search bar is symbiotic with the bullshitting chatbot": currently Google "AI" Summary is better than Google Search, not because Google "AI" Summary is good or reliable, but because the search has been internally sabotaged by the incentive structures of web companies. If you're a fellow "AI" refuser and you've been struggling to get any useful results out of web searches, think of how it must feel for people who go for the chatbot, how much easier and more direct. That's the razor we have on the shelves. "AI" doesn't have to work for the scam to be sustainable, it just has to feel like it more or less kinda does most of the time. (No one has ever achieved a close shave on a Gillette Mach 3, but hey, maybe you're prompting it wrong.) As long as "generating" something with "AI" feels like it lets you skip even the smallest barrier to entry (like asking a question in a forum on a niche topic). As long as it feels quicker, easier, more convenient.

This is also the case for things like "AI translations" or "AI art" or "vibe coding". The real solution to "AI", like other forms of unnecessarily complex technology, would involve people feeling like they have the time and mental space to do things for pleasure. "AI" is kind of an anaerobic infection, an opportunistic disease caused by lack of oxygen. No one can breathe in this society. The real problem is capitalis—

Now don't get me wrong, the "AI" bubble is still going to pop. There's no way it can't; investors have put more money into this thing than into entire countries, contrary to OpenAI's claims the costs of training and operating keep exploding, and in a world going into recession, at some point even capitalists with more money than common sense will have to think of the absence of ROI. But the damage is done. We're in ELIZA world now, and long after OpenAI is dead we'll still be reading books only to find out the gormless translation was "AI", playing games with background "art" "generated" by "AI", interacting online with political agitators spamming nonsense who turn out to be "AI", right until the day when electricity becomes too scarce for it to be cost-efficient to spam people this way.

70
submitted 10 months ago* (last edited 10 months ago) by mirrorwitch@awful.systems to c/techtakes@awful.systems

The other day I realised something cursed, and maybe it's obvious but if you didn't think of it either, I now have to further ruin the world for you too.

Do you know how Google took a nosedive some three-four years ago when managers decided that retention matters more for engagement than user success and, as this process continued, all the results are now so vague and corporatey as to make many searches downright unusable? The way that your keywords are now only vague suggestions at best?

And do you know how that downward spiral got even worse after "AI" took off, not only because the Internet is now drowning in signal-shaped noise, not only because of the "AI snippets" that I'm told USA folk are forced to see, but because tech companies have bought into their own scam and started to use "AI" technology internally, with the effect of an overnight qualitative downstep in accuracy, speed, and resource usage?

So. Imagine what this all looks like for the people who have replaced the search bar with the "AI" chatbot.

You search something in Google, say, "arrow materials designs Amazonian peoples". You only get fluff articles, clickbait news, videogame wikis, and a ton of identical "AI" noise articles barely connected to the keywords. No depth no details no info. Very frustrating experience.

You ask ChatGPT or Google Gemini or Duck.AI, as if it was a person, as if it had any idea what it's saying: What were the arrows of Amazonian cultures made of? What type of designs did they use? Can you compare arrows from different peoples? How did they change over time, are today's arrows different?

The bot happily responds in a wise, knowledgeable tone, weaving fiction into fact and conjecture into truth. Where it doesn't know something it just makes up an answer-shaped string of words. If you use an academese tone it will respond in a convincing pastiche of a journal article, and even link to references, though if you read the references they don't say what they're claimed to say but who ever checks that? And if you speak like a question-and-answer section it will respond like a geography magazine, and if you ask in a casual tone it will chat like your old buddy; like a succubus it will adapt to what you need it to be, all the while draining all the fluids you need to live.

From your point of view you had a great experience. No irrelevant results, no intrusive suggestion boxes, no spam articles; just you and the wise oracle who answered exactly what you wanted. Sometimes the bot says it doesn't know the answer, but you just ask again with different words ("prompt engineering") and a full answer comes. You compare that experience to the broken search bar. "Wow this is so much better!"

And sure, sometimes you find out an answer was fake, but what did you expect, perfection? It's a new technology and already so impressive, soon¹ they will fix the hallucination problem. It's my own dang fault for being lazy and not double-checking, haha, I'll be more careful next time.²
(1: never.)
(2: never.)

Imagine growing up with this. You've never even seen search bars that work. From your point of view, "AI" is just superior. You see some cool youtuber you like make a 45min detailed analysis of why "AI" does not and cannot ever work, and you're confused: it's already useful for me, though?

Like saying Marconi the mafia don already helped with my shop, what do you mean extortion? Mr Marconi is already beneficial to me! Why, he even protected me from those thugs...

Meanwhile, from the point of view of the soulless ghouls at Google? Engagement was atrocious when we had search bars that worked. People click the top result and are off their merry way, already out of the site. The search bar that doesn't work is a great improvement, it makes them hang around and click many more things for several minutes, number go up, ad opportunities, great success. And Gemini? whoa. So much user engagement out of Gemini. And how will uBlock Origin ever manage to block Gemini ads when we start monetising it by subtly recommending this or that product seamlessly within the answer text...

[-] mirrorwitch@awful.systems 33 points 1 year ago* (last edited 1 year ago)

Futurism articles really make me feel how these people are not living in the same reality as I am.

Looking from now into 2149 and war is a nonfactor in Baby's life. "Genocide" isn't mentioned once, or "fascism", or "borders". No food or water scarcity. No mention of what happens to insects or wildlife or people in island countries or near the Equator. The only mention of "ecosystem" is in the expression "Center for Advanced Computer-Human Ecosystems". The only mention of "climate change" is to say that it will lead us to a "reconfigurable architectural robotic space". Somehow people have all the energy in the world to power AI girlfriends and moveable robotic walls and menstruation-sensing tech panties. The human body, the animal that is the human being, doesn't really matter in this world where Microsoft VR smells your anxiety on your deathbed and comforts you with self-warming textiles. Where does the food that sustains the flesh come from, what is our relationship to the plants and animals and insects and bacteria who we depend on for food and air and shelter, who builds all this stuff and under which conditions—considerations that do not even cross the mind of this person when they think of the question: "What does the future hold for those born today?"

[-] mirrorwitch@awful.systems 33 points 1 year ago* (last edited 1 year ago)

A note for the unaware that Nanowrimo also tried to cover up a scandal when one of their mods was found to be referring minors to an ABDL fetish site. To my knowledge Nanowrimo never tried to own up to it, never even admitted anything was wrong until the FBI got involved, and still blocks any discussion of the situation.
https://xcancel.com/Arumi_kai/status/1760770617073082629
https://speak-out.carrd.co/

Reportedly they're now shilling AI hard on their Facebook (I don't have Facebook to check). I consider it 100% likely that, from this year on, everyone who uploads their 50k words to the organisation to prove completion will have their work promptly fed to the hungry algorithms.

At least one writer in the board has already resigned over the AI blog post https://xcancel.com/djolder/status/1830464713110540326

69

"We also want to be clear in our belief that the categorical condemnation of Artificial Intelligence has classist and ableist undertones, and that questions around the use of AI tie to questions around privilege."

  • Classism. Not all writers have the financial ability to hire humans to help at certain phases of their writing. For some writers, the decision to use AI is a practical, not an ideological, one. The financial ability to engage a human for feedback and review assumes a level of privilege that not all community members possess.
  • Ableism. Not all brains have the same abilities and not all writers function at the same level of education or proficiency in the language in which they are writing. Some brains and ability levels require outside help or accommodations to achieve certain goals. The notion that all writers “should” be able to perform certain functions independently is a position that we disagree with wholeheartedly. There is a wealth of reasons why individuals can't "see" the issues in their writing without help.
  • General Access Issues. All of these considerations exist within a larger system in which writers don't always have equal access to resources along the chain. For example, underrepresented minorities are less likely to be offered traditional publishing contracts, which places some, by default, into the indie author space, which inequitably creates upfront cost burdens that authors who do not suffer from systemic discrimination may have to incur.

Presented without comment.

