submitted 15 hours ago* (last edited 15 hours ago) by SwooshBakery624@programming.dev to c/fuck_ai@lemmy.world

Related:

This is in a PR where Shougo, another long-time contributor, communicates entirely in walls of unparseable AI slop text: https://github.com/vim/vim/pull/19413

Thank you for the detailed feedback! I've addressed all the issues:

Thank you for the feedback! I agree that following the Vim 8+ naming convention makes sense.

Thank you for the feedback on naming!

Thanks for the suggestion! After thinking about this more, I believe repeat_set() / repeat_get() is the right choice:

Thank you for the feedback. A brief clarification.

https://hachyderm.io/@AndrewRadev/116176001750596207

@AndrewRadev@hachyderm.io

[-] badbytes@lemmy.world 1 points 2 hours ago

IMHO, the logo shouldn't have the anti-AI symbol. I like the quill. Maybe a more positive DNA symbol.

[-] grandma@sh.itjust.works 14 points 6 hours ago

AI psychosis

[-] chonglibloodsport@lemmy.world 36 points 9 hours ago

Shougo is Japanese. I’m guessing he communicates like that because he uses translation rather than trying to communicate in broken English.

[-] SlurpingPus@lemmy.world 5 points 5 hours ago

TBF if the reviewer just quoted Claude at me, I would reply with Claude or ChatGPT.

[-] peanuts4life@lemmy.blahaj.zone 15 points 8 hours ago

I would like to mirror another commenter and mention that Shougo is Japanese and probably using Claude to communicate.

[-] hperrin@lemmy.ca 136 points 14 hours ago

I spent literally all day yesterday working on this:

https://sciactive.com/human-contribution-policy/

I’ve started to add it to my projects. Eventually, it will be on all of my projects. I made it so that any project could adopt it, or modify it to their needs. It’s got a thorough and clear definition of what is banned, too, so it should help any argument over pull requests.

Hopefully more projects will outright ban AI generated code (and other AI generated material).

[-] thethunderwolf@lemmy.dbzer0.com 4 points 3 hours ago

this is cool

you should make a post about this somewhere here on Lemmy

people should know about it

[-] hperrin@lemmy.ca 3 points 2 hours ago

Ok, yeah, I’ll make a post for it.

Feel free to share it anywhere. :)

[-] thethunderwolf@lemmy.dbzer0.com 1 points 3 hours ago

“AI generated” means that the subject material is in whole, or in meaningful part, the output of a generative AI model or models, such as a Large Language Model. This does not include code that is the result of non-generative tools, such as standard compilers, linters, or basic IDE auto-completions. This does, however, include code that is the result of code block generators and automatic refactoring tools that make use of generative AI models.

As "artificial intelligence" is not that well defined, you could clarify what the policy defines "AI" as by specifying that "AI" involves machine learning.

[-] hperrin@lemmy.ca 4 points 3 hours ago

“Generative AI model” is a pretty well defined term, so this prohibits all of those things like ChatGPT, Gemini, Claude Code, Stable Diffusion, Midjourney, etc.

Machine learning is a much more broad category, so banning all outputs of machine learning may have unintended consequences.

[-] PlutoniumAcid@lemmy.world 26 points 14 hours ago

I like this approach, but how can it be enforced? Would you have to read every line and listen to a gut feeling?

[-] Jankatarch@lemmy.world 14 points 8 hours ago* (last edited 8 hours ago)

Same mindset as "You don't need a perfect lock to protect your house from thieves, you just need one better than what your neighbors have."

If a vibecoder sees this they will not bother with obfuscation and simply move onto the next project.

[-] hperrin@lemmy.ca 61 points 13 hours ago

Basically the best you can do is continue as normal, and if someone submits something that says it is or obviously is AI, point to this policy and reject it. Just having the policy should be a decent deterrent.

[-] hayvan@piefed.world 64 points 13 hours ago

The devs do have my sympathy, they dedicate their time and energy for these projects and start burning out.
The solution obviously shouldn't be drowning it in slop. They should just be slowing down. Vim has been an excellent and functional tool for many years now; it doesn't need more speed.
There are better ways to use LLMs as a productivity tool.

[-] unexposedhazard@discuss.tchncs.de 44 points 11 hours ago* (last edited 11 hours ago)

I see this excuse of burnout every time it comes to LLM use, but I honestly do not buy it. You can't tell me every other dev out there just burnt out at the same time, in sync with the release of LLM coding assistants. If you use LLMs like this, you simply don't care about the project anymore and should move on with your life. It's better for everyone if it gets abandoned by the original dev and forked by ones that care. Sometimes you just gotta let go.

[-] hayvan@piefed.world 14 points 10 hours ago

Agreed. They need to take a break at least.

[-] cloudskater@piefed.blahaj.zone 9 points 8 hours ago

There aren't better ways, not in their current forms.

[-] mech@feddit.org 1 points 5 hours ago

What I'm wondering is, why does Vim need new features in the core repo at all?
It's finished software at this point.
The dev should just do security upgrades and let extensions developed by other people handle additional functionality.

[-] maegul@lemmy.ml 70 points 14 hours ago

Couldn’t help but notice the casual gendering of Claude to “he” as well.

Someone somewhere made the important observation not long ago that computer assistants tended to be gendered female when more like a secretary (Siri and Alexa) but now that AIs are “intelligent” and powerful … Claude now has to be a male.

Especially weird (and telling?) when it is objectively gender neutral as it’s not human.

[-] GrindingGears@lemmy.ca 6 points 6 hours ago

Let's not lose focus on the more immediate concern here: that this person is using a human pronoun to describe a computer.

[-] TheTechnician27@lemmy.world 43 points 13 hours ago* (last edited 13 hours ago)

Couldn’t help but notice the casual gendering of Claude to “he” as well.

"Claude" is a male given name. If you think it's actually a problem, blame Anthropic for giving their LLM a gendered name. I've never gendered AI assistants, but I'm not going to begrudge people who do when it's in the name (or in the case of old Siri, the voice, which would later be the default rather than only option).

Women named "Claude" exist, but they're staggeringly outnumbered by men to a point where most people don't even know of women named "Claude" – let alone would immediately associate it as masculine.

[-] maegul@lemmy.ml 11 points 10 hours ago

Not blaming anyone, this is social commentary.

But like the neutral “it” is right there.

In a world that’s both charged around gender and pronoun usage, and focused on the nature and value of LLMs … I think it’s weird that there isn’t more common pushback enforcing the non-human neutral, for the simple reason that it’s an objective fact amidst a swampy pool of (mis-)information synthesis.

A little like the bechdel test, I feel like it’s the casualness and indifference around this gender bias (at least at the moment) that’s interesting and telling.

[-] amino@lemmy.blahaj.zone 19 points 13 hours ago

the shift in marketing is extremely telling, however. i don't believe giving the coding plagiarism bot a male name is coincidental. most feminists would probably agree. we've known for decades that chatbots were given female names because they're trying to reenact some tradwife fetish and attract a male audience

[-] Retail4068@lemmy.world 11 points 11 hours ago

Or maybe, just maybe, it has a guys name.

Good Lord, y'all make up some crazy shit to whine about.

[-] xep@discuss.online 4 points 10 hours ago

Of all the problems with these things we're taking issue with the naming?

[-] hexagonwin@lemmy.today 22 points 12 hours ago

wtf. i really like vim. is everyone really using neovim instead and there's no good dev maintaining vim now?

[-] SaharaMaleikuhm@feddit.org 1 points 6 hours ago

Just use VScode, definitely no slop in there. Microslop would never

[-] pogmommy@lemmy.ml 1 points 6 hours ago

I switched to the Helix editor; last I saw, it appears to avoid AI use. That said, Neovim isn't awful and is less enthusiastically pro-AI. Helix also still doesn't have a plugin system, so for the time being you're kind of constrained to its built-in features and customization options.

[-] lemonhead2@lemmy.world 7 points 10 hours ago

i ❤️vim. used it for some 15 years.

switched to neovim cause of firenvim which allowed me to use neovim in text areas in firefox

[-] fdnomad@programming.dev 42 points 14 hours ago

It's such a monumental waste of LLMs to include these slop phrases.

Employee 1 enters a prompt to send a slop mail that is so garbage it is unbearable to read using a brain.

So employee 2 either summarizes the slop mail using an LLM too or skips obtaining the information entirely and just goes straight to answering by prompting the next slop mail.

I wonder if that's by design - to make interacting with slop so painful that human-to-human communication will not happen without a LLM in between anymore.

[-] user224@lemmy.sdf.org 6 points 9 hours ago

Reverse compression: making transmission larger (while still being lossy).

[-] Mothra@mander.xyz 15 points 12 hours ago

I originally meant to leave a much shorter comment; apologies.

I can't code to save my life. However I find your observation interesting. The way I see it, AI, no matter where, is eroding human to human interactions. It becomes the middleman for everything.

It's really obvious with personal research. A couple years ago if you wanted to start say, growing tomatoes in your backyard, you would have searched people's comments on a variety of media platforms, would have read a few books or blogs. You would have asked questions to a bunch of people with some experience, left a like or upvote on people posting photos of their tomatoes, you would have used your own judgement to discern what consisted good quality advice and what not.

It would have taken you days. But all that interaction is very rewarding especially for those authoring comments, blogs, books, and photos of their experiences. Because nobody makes something just to be ignored.

Now LLM does all that process for you. In a matter of seconds. And giving no feedback or interaction to anyone whose information was used. It's depressing, but I'm intrigued to see how it plays out.

[-] fdnomad@programming.dev 11 points 12 hours ago

I agree. Specifically for your example, I think the transformation has been going on for a while with the aggressive monetization of internet content / the ad industry and the general decline of Google search. LLMs could be the final nail in the coffin for niche expertise on the broader internet.

I too am curious to see how AI companies will try to overcome the lack of human generated content to train their models on.

[-] tristan@tarte.nuage-libre.fr 6 points 10 hours ago

I had this reflection 3 years ago, and I think that’s where we’re headed.

The internet is already unusable for search without prompting an LLM to gather the info you need for you, and it’s getting worse every month.

this post was submitted on 16 Mar 2026
271 points (95.0% liked)

Fuck AI


"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

founded 2 years ago