AI represents a lot of things people here dislike (large stock-bubble companies, scraping, energy waste, etc.)
I think Lemmy in general is very against AI
I agree, although for a counter-case, db0 has involvement in community-driven image generation (Horde, some communities they host and banners they use).
My view isn't as blanket as pro- or anti- "AI" (and I add quotes because I see that as a science-fiction term adapted into a marketing fantasy).
These technologies are powerful and there are legitimate, productive, pro-social uses of them (an obvious example is assisting in medical diagnosis). They are not inherently incompatible with social values, environmental progress, and the other concerns associated with them; these technologies are tools powered by electricity and materials. But the way they are currently implemented, the economic incentives around marketing them, and the lack of broad education about them are very, very dangerous.
-
The way regular people misunderstand and misuse the technology has already resulted in deaths, all kinds of mismanagement, and mass sackings of workers. Misplaced trust in technology is no new issue (see ELIZA in the '60s, for example), but this is so much more accessible, more misrepresented by marketing campaigns and social media, and more powerful. Furthermore, a huge proportion of people don't have the media literacy to instinctively doubt its output, and that was already a big enough issue with news media.
-
Processing is currently done largely with non-renewable energy in large centralized data centers (consuming water, creating noise pollution, etc.), which raises serious global and local environmental issues.
-
Most of our exposure to this technology as regular citizens is wasteful or actively harmful, such as propaganda/forgery, vapid industrialization of artistic aesthetics, unsolicited sexualization, scams and fraud attacks, automated bots, advertising and other 'slop', along with misguided attempts to eliminate workers from jobs which cannot be adequately delegated to these tools.
Even for people who generally like what AI does (who seem to be fairly rare here), the absolutely obscene climate impact, the implications for people's jobs and livelihoods, the privacy breaches, and the general internet enshittification are surely reason enough to be against it.
The jobs thing I don't understand: it's the distribution of productivity gains that's the issue. Why we keep voting for the same politicians who ensure the gains go to the wealthy is the real mystery.
The distribution of productivity gains and the development of new technology are intrinsically and historically connected. New technology is only developed in order to exploit workers: either to individualize something which was previously socialized, or to directly replace workers with industrial advances, and in many cases both.
Marx said it best: Machines were the weapon employed by the capitalist to quell the revolt of specialized labor.
This was true for the Luddites and it is true today.
Oh, I absolutely agree. But currently, the people in charge of making those decisions have demonstrated moral bankruptcy and will absolutely ensure the productivity gains funnel to the top. Until that changes, AI's impact on jobs will likely be devastating.
And I'm all for changing it. It's just going to be a long and/or violent process.
It isn't moral bankruptcy, it's systemic. The capitalist who produces profit stays in business; the capitalist who does not goes bankrupt. It isn't the morals of individuals; the dehumanization of the poor by the rich is a symptom of a system that prioritizes profits over humanity.
Capitalism is, among other things, a system of forced competition.
I'm glad to hear you are on the right side of it. But in order to be effective we have to name the actual problems. I am above all a humanist, and certainly the capitalist class contains some vile and hateful individuals. That is more clear now than ever before. But we are not made rich or poor by our morality; our morality comes from the conditions that dictate whether we are rich or poor.
Even individualism is structural.
That is why I like small, specialized, locally hosted AI. It runs acceptably fast and quiet on my gaming PC, it's private, and I can give it knowledge in small doses on specific topics and projects.
Which model do you use, and what are your specs? I ran a couple on an RTX 5060 with 16 GB, and it's too slow to be usable with larger models, while the smaller ones are mostly useless.
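A rough back-of-envelope way to see why 16 GB is the cutoff it is: model weights dominate VRAM use, and quantization packs each parameter into roughly `bits / 8` bytes. The ~20% overhead factor below for the KV cache and runtime buffers is an assumed ballpark, not an exact figure:

```python
# Back-of-envelope VRAM estimate for running a local LLM.
# Assumptions: weights dominate memory use; quantization stores each
# parameter in (bits / 8) bytes; ~20% extra for KV cache and buffers.

def estimated_vram_gb(params_billions: float, bits: int = 4,
                      overhead: float = 1.2) -> float:
    """Estimate VRAM in GB for a model with the given parameter count."""
    bytes_for_weights = params_billions * 1e9 * (bits / 8)
    return bytes_for_weights * overhead / 1e9

# A 7B model at 4-bit quantization fits comfortably in 16 GB...
print(round(estimated_vram_gb(7, bits=4), 1))   # → 4.2
# ...while a 70B model at 4-bit does not, so it spills to system RAM
# and slows to a crawl.
print(round(estimated_vram_gb(70, bits=4), 1))  # → 42.0
```

This matches the experience described above: on a 16 GB card, 4-bit models up to roughly the 14B-24B range fit, and anything larger has to offload layers to the CPU, which is where the speed collapses.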
It has its uses, but it feels like more of a 10-20% productivity boost when used effectively, not the 500%, "let's have openclaw replace my whole company!" kind of BS being pushed by AI companies.
If it is a productivity boost for you, it is at the cost of someone else who will have to proofread and test everything you do. LLMs (and genAI) are useless.
A lot of people outside Lemmy, who are not techbros or out-of-touch corporate zealots, don't like AI either. It is being treated as a solve-all solution for everyday problems when it does its job horribly, gets in the way, and messes up anything in reach.
Yup. I suspect on other social media that some of the positive sentiment towards AI is just astroturfing.
There's a fair amount of astroturfing even over on Mastodon. I imagine it's worse on the billionaire-owned socials.
If John Mastodon can't stop the astroturfing, there's no way those lesser founders can.
Yeah, Lemmy is a bit over-the-top anti-AI, but most of it is based in reality.
There are a bunch of problems with AI, and they outnumber the good points by a mile.
The main cause of that fact is the entire AI bubble.
AI wastes a fuckton of energy. Of course, this energy isn't free: communities pay. Electricity demand goes up, and so does price. Then, most electricity isn't green. And on top of that, the rise in demand causes more electricity peaks, which almost exclusively get "fixed" through fossil fuel-based methods.
From another angle, AI disrupts markets. And not in a good way. Companies dump millions into AI while neglecting their employees (who get laid off because AI "can replace" them), and their customers as well (since instead of doing useful stuff for consumers they pump out AI-branded bullshit no one wants or needs).
Then, big AI companies spit in the face of copyright and have the audacity to turn around and claim copyright on their models' outputs. If the inputs are fair game, so are the outputs. Copyright is a vague, misunderstood and misused term, and no argument I've heard claiming that feeding stuff into AI is fair use was grounded in reality.
That all being said, AI is here to stay. I've been thinking long and hard about similar fundamental changes to how human society functions, and I think I found one: photography.
Way back when, you had to do things painstakingly by hand: drawing, copying books by hand, etc.
Then the printing press came. Revolutionary? Sure. But not as revolutionary as photography. Instead of writing by hand, you had to typeset by hand before printing. This made the process scalable, but it was still painstaking work.
But photography is a different matter. You just have to make (or buy) a camera and other required supplies (film, developing media, etc.), and then you merely have to set up the camera, take the photo, develop the film, and print the photo.
Even in the early days of photography, while these processes took some time, it wasn't painstaking. To take a photo, you set up the camera, and wait. To develop film, you dunk the film into a chemical bath, and wait. To transfer the image onto paper - a similar ordeal. Set, forget.
Photography fundamentally changed how the entirety of society works. Painters complained and lost jobs and livelihoods, like the "jobs stolen" by AI. Instead of drawing stuff, which required a lot of skill, taking a photo is much simpler (and faster).
Yesterday, instead of having to paint stuff, you'd take a photo. Today, instead of taking a photo, you ask AI.
On the copyright front, the parallels are obvious: taking a photo of a page of a book is fair use, but photocopying the book isn't. The problem with AI is that it applies some transformations to the original, so the material is obfuscated inside the model. But the obfuscation can be undone, as AI often happily spits out certain inputs verbatim when asked. Take a photo of a page? Okay. Photocopy the entire book? Not okay.
The situation is the same when we look at artwork instead of books. Taking a photo of an artwork in a museum is okay. Scanning an artwork (duplicating it verbatim) - isn't. Same for movies. A frame is probably gonna be okay. The entire movie - won't.
Going by the closest analogue, there is absolutely no justification for indiscriminately feeding everything and anything into AI, since the same material is clearly protected against indiscriminate photocopying and verbatim copying.
Considering the username, I'm just sitting here wondering if we're just arguing against an LLM.
Looking at history.... Yeah. I think so.
I have been working with LLMs for decades. I know what they can do and what they can't. I admit they have grown in leaps and bounds in the last few years because of the hype, but therein lies the issue: there is still way too much hype, it's not the end all solution some think it is, it's driving up hardware prices, the environmental impact is horrendous, and it's a new bullshit business marketing term that serves only to artificially inflate stock prices. "Agentic" is the new "data driven".
Reality as an artist dictates that all my work was datamined without my consent, and anything I post in the future, should I choose to do so, will be too. The end result of this data mining is to drive artists like me out of business. I don't mind the average Joe getting their anime girl with three titties in five minutes, but company owners are making money out of this and paying nobody for their source material.
wait, you can ask for three titties....?..?
A tool becomes "good" or "bad" based on its implementation.
The current trend towards massive unsustainable data centers is pretty objectively "bad" for humans and other creatures for questionable benefit.
Localized AI, on the other hand, would be less harmful, and more useful. This would move the needle towards a more objective "good".
There’s usually a sub argument here of what the models are trained on - local or not.
Yeah, it's like GMOs. The biggest companies in the game are well documented as ill-intentioned profiteers, but the technology isn't inherently bad.
I am not "against" AI. I am against unfettered capitalism and how it is poisoning humanity. AI can hold the same kind of promise that Internet v1 had before the first eternal September. But because of the "success" of the capitalization of the web, folks are flocking to AI on the assumption that something similar will happen to it. I see it as a gold rush. Some boom towns may happen along the way. Some may endure. But it's still very early to know that.
People come to Lemmy precisely because they're tired of big algorithmic corporate platforms. They come here precisely to get away from AI slop on platforms like Facebook. Hell, half the people here have been banned from reddit based on comically flawed algorithmic AI moderation tools. This platform is heavily selected for people who dislike AI and AI content.
Yeah check out the very next article in my feed:

Yup looks like AI

I think there is a lot of misdirected frustration. The technology isn't the issue, the way it's been implemented is the issue. There are some useful use cases for AI.
Groupthink? No.
Does it seem like the majority are against it? Yes.
I’ve leaned pretty heavily into using LLMs personally and professionally.
There was a post on Mastodon that I sadly cannot find right now that really articulated the fact that there's not necessarily a single problem with LLMs and generative AI - the issue is that there's an entire stack of potential dangers associated with them. To paraphrase:
Use of and reliance on LLMs for certain tasks has been shown to have deleterious effects on critical thinking skills.
Even if that isn't true or I weren't concerned about it, I'd be concerned about its effects on my psychological wellbeing.
Even if I weren't concerned about that, I'd be concerned about the ethical issues of how their training data was and is acquired.
Even if I weren't concerned about that, I'd be concerned about its effects on the job market and the further upward concentration of wealth.
Even if I weren't concerned about that, I'd be concerned about the massive energy costs and the associated effects on utility bills and greenhouse gas emissions.
Even if I weren't concerned about that, I'd be concerned about the massive cooling requirements and its effects on the global availability of clean water.
Even if certain approaches to or implementations of GAI solve one or a couple of these concerns, I'd have to overcome all of them (and likely others I've forgotten to list) to feel comfortable using GAI in any serious capacity, and even then it looks like I would end up with a tool that I'd have to constantly double-check to avoid hallucinations. It's just not worth it.
And nearly all of these arguments also apply to others using GAI, so I'm forced to advocate against it.
Could one not conceive of a world where there is a group of actual human beings who hold different values from one's own?
Is that so inconceivable to a person's worldview that it breaks their sense of reality?
Seems like a weird place to start. But here we are.
Many people here know that "AI" as a term is pure snake oil. You aren't actually talking about anything until you say what you think it means, or specific examples.
AI research goes back to the early 1950s. Being "against" all of that old research is kinda meaningless... So it's your job to clarify what you mean, or not, and other users will respond accordingly.
I'm pissed at how it's able to license-wash FOSS code and people's IP. But it seems there are no rules for American or Chinese tech companies, because legislators refuse to act; in that case IP should be completely removed. There is no way any of their IP should be respected.
I'm against the LLM bubble. They're gobbling up all of our compute, electricity, water, and basically all investment capital while not even generating productivity gains or improving anyone's lives. Internet search is now dead, all my fan communities are just full of slop instead of art from artists, and the piggies that own the data centers are destroying all culture to feed their autocomplete machines. LLMs have accelerated the decay of civilization in a way that we might struggle to recover from when the bubble pops. Half the time it's not even AI, the real work is just outsourced to some superexploited workers in the Global South.
There are some legitimate use-cases for LLM technology, but the way they're trying to cram it into everything is actually just wrecking everything. It seems like they're destroying the world for a worse calculator that can pretend to be your girlfriend.
30 year IT professional here, whose company is starting to utilize AI. So far for my workflow it does not provide any benefit. With that said, I am working with my team to find somewhere in our business and technical processes to make things better. It just hasn't happened yet.
I am against it, but not dead set. What I am against are the insane things that are happening due to the overzealous investment in LLMs. The Three Mile Island Unit 1 reactor is being brought back into operation under a deal with Microsoft, just to power AI data centers.
That is absolutely insane. TMI Unit 1 uses a roughly 60-year-old reactor design and was built over 50 years ago, putting it at least two generations behind modern reactors. TMI Unit 2 experienced a meltdown back in 1979, which is why it is not an option to bring back into operation. There are several documented issues with that reactor design (one of which is what caused Unit 2 to melt down) that will require monitoring and processes to keep the reactor safe. Monitoring that is not needed on more modern reactor designs.
Western Digital has announced that their entire production run of hard drives is completely sold out. Micron exited the consumer market in order to supply AI. So hard drive and memory prices are going to get even higher than what they are now. That means computers, phones, and any consumer device that uses memory or HDD storage will see massive price increases.
That's the issue I have with LLMs. If the rollout were anywhere near sane, my attitude would be different. Right now it looks like massive amounts of resources and money are being thrown into a pit in the dim hope of some kind of return, instead of a deliberate, planned rollout that is sustainable in the long term.