35
submitted 1 week ago by [email protected] to c/[email protected]
[-] [email protected] 30 points 1 week ago

LLMs really might displace many software developers. That’s not a high horse we get to ride. Our jobs are just as much in tech’s line of fire as everybody else’s have been for the last 3 decades. We’re not East Coast dockworkers; we won’t stop progress on our own.

why did I do computer science god I fucking hate every person in this field it's amazing how much of an idiot everyone is.

[-] [email protected] 20 points 1 week ago* (last edited 1 week ago)

You can tell the ones that got A's in their comp sci classes and C's in their core/non-major classes by how bloodthirsty they are.

Me, the enlightened centrist, just got C's in everything

[-] [email protected] 8 points 1 week ago

Writer is the type of guy to only fail his ethics courses

[-] [email protected] 25 points 1 week ago

Every six months the tone of these "why won't you use my hallucinating slop generator" posts gets more and more shrill.

[-] [email protected] 22 points 1 week ago

I think his point that you basically give a slop generator a fitness function in the form of tests, compilation scripts, and static analysis thresholds was pretty good. I never really thought of forcing the slop generator to generate slop randomly until it passes tests. That's a pretty interesting idea. Wasteful for sure, but I can see it saving someone a lot of time.
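For the curious, the loop being described can be sketched in a few lines. This is a toy illustration, not anyone's actual agent: `generate_candidate` is a stub standing in for the model call, and the "fitness function" is just a test suite run against each candidate, with failures fed back as feedback.

```python
def generate_candidate(prompt, feedback):
    """Stand-in for an LLM call. A real agent would send the prompt plus
    the failure feedback back to the model; here we hard-code a wrong
    first attempt and a corrected retry to show the loop's shape."""
    if feedback:
        return "def add(a, b):\n    return a + b\n"
    return "def add(a, b):\n    return a - b\n"  # deliberately buggy first try

def fitness(code):
    """The fitness function: run tests against the candidate and return
    (passed, feedback) so failures can drive the next generation."""
    ns = {}
    try:
        exec(code, ns)
        assert ns["add"](2, 3) == 5
        return True, ""
    except Exception as e:
        return False, f"tests failed: {e!r}"

def agent_loop(prompt, max_iters=5):
    """Generate, test, retry until a candidate passes or we give up."""
    feedback = ""
    for _ in range(max_iters):
        code = generate_candidate(prompt, feedback)
        ok, feedback = fitness(code)
        if ok:
            return code
    return None

solution = agent_loop("write add(a, b)")
```

The waste complaint below is exactly the `max_iters` retries: every failed candidate is model inference thrown away.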

[-] [email protected] 22 points 1 week ago

you basically give a slop generator a fitness function in the form of tests, compilation scripts, and static analysis thresholds, was pretty good.

forcing the slop generator to generate slop randomly until it passes tests.

I have to chuckle at this because it's practically the same way that you have to manage junior engineers, sometimes.

It really shows how "barely good enough" is killing off all the junior engineers, and once I die, who's going to replace me?

[-] [email protected] 20 points 1 week ago

This is absolutely the crisis of aging hitting the software engineering labor pool hard. There are other industries where 60% or more of the trained people are retiring in 5 years. Software is now on the fast track to get there as well.

[-] [email protected] 8 points 1 week ago

This is a great point. I think what is most jarring to me is the speed at which this is happening. I may be wrong, but in those other industries it felt like it took at least a couple of decades, and it feels like tech is doing it in a matter of months?

[-] [email protected] 9 points 1 week ago

Nah. It's two different phenomena with the same end point. Those other industries lost young entrants because of the rise of the college pursuit. And yes, that took decades. But for software, we're still at least 20 years out from a retirement crisis.

Although, we already had one back in 2000, when not enough working-age people knew COBOL.

Anyway, it's a historical process. It's just one we've seen over and over without ever learning the lesson.

[-] [email protected] 16 points 1 week ago

I'd much rather the slop generator wastes its time doing these repetitive and boring tasks so I can spend my time doing something more interesting.

[-] [email protected] 20 points 1 week ago* (last edited 1 week ago)

wastes its time doing these repetitive and boring tasks

To me, this is sort of a code smell. I'm not going to say that every single bit of work that I have done is unique and engaging, but I think that if a lot of code being written is boring and repetitive, it's probably not engineered correctly.

It's easy for me to be flippant and say this and you'd be totally right to point that out. I just felt like getting it out of my head.

[-] [email protected] 17 points 1 week ago

If most of the code you write is meaningful code that's novel and interesting, then you are incredibly privileged. The majority of code I've seen in the industry is mostly boring, and a lot of it is just boilerplate.

[-] [email protected] 9 points 1 week ago

meaningful code that’s novel and interesting then you are incredibly privileged

This is possible but I doubt it. It's your usual CRUD web application with some business logic and some async workers.

[-] [email protected] 8 points 1 week ago

So then you do write a bunch of boilerplate such as HTTP endpoints, database queries, and so on.
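To make the point concrete, this is the per-entity pattern being called boilerplate: the same query/commit dance repeated for every resource. It's a made-up sketch with hypothetical table and field names, using stdlib sqlite3 so it stands alone; an HTTP framework would wrap another layer of the same pattern around it.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

def create_user(name):
    # Same shape for every entity: execute, commit, return the new id.
    cur = conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
    conn.commit()
    return cur.lastrowid

def get_user(user_id):
    row = conn.execute(
        "SELECT id, name FROM users WHERE id = ?", (user_id,)
    ).fetchone()
    return {"id": row[0], "name": row[1]} if row else None

def delete_user(user_id):
    conn.execute("DELETE FROM users WHERE id = ?", (user_id,))
    conn.commit()

uid = create_user("alice")
```

Multiply this by every entity in the app and you get the "mostly boring" majority of industry code mentioned above.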

[-] [email protected] 12 points 1 week ago

It's more that the iterative slop generation is pretty energy intensive when you scale it up like this. Tons of tokens in memory, multiple iterations of producing slop, running tests to tell it's slop, and starting it over again automatically. I'd love the time savings as well. I'm just saying we should keep in mind the waste aspect, as it's bound to catch up with us.

[-] [email protected] 14 points 1 week ago

I don't really find the waste argument terribly convincing myself. The amount of waste depends on how many tries it needs to get the answer, and how much previous work it can reuse. The quality of output has already improved dramatically, and there's no reason to expect that this will not continue to get better over time. Meanwhile, there's every reason to expect that iterative loop will continue to be optimized as well.

In a broader sense, we waste power all the time on all kinds of things. Think of all the ads, crypto, or consumerism in general. There's nothing uniquely wasteful about LLMs, and at least they can be put towards producing something of value, unlike many things our society wastes energy on.

[-] [email protected] 10 points 1 week ago

I do think there's something uniquely wasteful about floating point arithmetic, which is why we need specialized processors for it, and there is something uniquely wasteful about crypto and LLMs, both in terms of electricity and in terms of waste heat. I agree that generative AI for solving problems is definitely better than crypto, and it's better than using generative AI to produce creative works, do advertising and marketing, etc.

But it's not without its externalities, and putting that in an unmonitored iterative loop at scale requires us to at least consider the costs.

[-] [email protected] 10 points 1 week ago

Eventually we most likely will see specialized chips for this, and there are already analog chips being produced for neural networks which are a far better fit. There are selection pressures to improve this tech even under capitalism, since companies running models end up paying for the power usage. And then we have open source models with people optimizing them to run things locally. Personally, I find it mind blowing that we can already run local models on a laptop that perform roughly as well as models that required a whole data centre to run just a year ago. It's hard to say whether improvements will start to plateau once all the low-hanging fruit is picked, but so far it's been really impressive to watch.

[-] [email protected] 20 points 1 week ago

The argument that workers should capture AI instead of the ruling class is interesting, but let me ask you.

Has there been a single technology entirely captured by and for the workers in history, ever? Hasn't every piece of technology been used primarily by the working class, yes, but with the direction it develops and the value it produces decided by the ruling class? It always has been, unless we can remove them from controlling the mode of production.

I think China is an interesting example of this, where the workers' party controls the majority of the economy and wouldn't let a program like DeepSeek threaten to unemploy half of its economy (America probably does have a larger segment dedicated to programming, though, Silicon Valley and all). Even then, the average worker there has more safety nets.

[-] [email protected] 15 points 1 week ago* (last edited 1 week ago)

The threat I see is the dominance of AI services provided by an oligarchy of tech companies. Like Google's dominance of search. It's a service that they own.

Thankfully China is a source of alternative AI services AND open source models. The bonus is that Chinese companies like Huawei are also an alternative source of AI hardware. This allows you to run your own AI models so you don't necessarily need their services.

You're thinking of class war. There's only one proven way to win that war: the working class rises up, kills some MFers, and takes over. There's no point smashing the loom - kill the loom owners and take their looms.

[-] [email protected] 14 points 1 week ago* (last edited 1 week ago)

Has there been a single technology entirely captured and for the workers in history, ever?

No, technology has no ideology, which is why we shouldn't be opposed to using the tools that the ruling class uses against us. The Chinese communists didn't win the civil war without using guns or without studying military tactics and logistics.

[-] [email protected] 12 points 1 week ago

I mean, technology will be used to oppress workers under capitalism. That is why Marxists fundamentally reject capitalist relations. However, given that people in the west do live under capitalism currently, the question has to be asked whether this technology should be developed in the open and driven by community or owned solely by corporations. This is literally the question of workers owning their own tools.

[-] [email protected] 10 points 1 week ago

If people can build it, it can serve the people. Think of open-weights LLMs. If we got a couple of 32B models that score as high as GPT-4o and Claude-3.5, why not use them? They can be run on mid-to-high-end hardware. There are developers out there doing a good job. It doesn't need to be a datacenter/big-tech-company-centered scenario.

[-] [email protected] 19 points 1 week ago* (last edited 1 week ago)

Thanks for sharing these AI posts.

Paid employment could mean retraining under socialism. Remember, communism is moneyless, stateless, and classless. The aim of society is the socialisation of all labour to free up time for more leisure, including art. People will still want art from humans without AI, but there's a difference between that and preserving regression through Luddism to maintain less productive paid labour.

Equating anti-capitalism with anti-corporatism, the appeal to Luddism, the defense of proprietorship, or the appeal to metaphysical creativity is not going to cut it, and that is a low bar to clear for Marxists.

https://lemmygrad.ml/post/7917393/6409037

[-] [email protected] 19 points 1 week ago

My party is trying their best to understand and implement AI, and it's causing some friction within the party. The official stance now adopted is one of 'we need to understand it and use it to our advantage' and 'we need to prevent AI being solely a thing of the ruling class', and to me that makes sense. I wasn't around at the time, but I imagine it was the same with the coming of the internet some decades ago, and we can see how that ended. I hope socialist orgs don't miss the boat this time.

[-] [email protected] 18 points 1 week ago

I think that's precisely the correct stance. As materialists we have to acknowledge that this technology exists, and that it's not going away. The focus has to be on who will control this tech and how it will be developed going forward.

[-] [email protected] 16 points 1 week ago

See, for coding AI makes a lot of sense, since a lot of it is very tedious. You still need to understand the output to be able to debug it and make novel programs though, because the limitation is that the LLM can only recreate code it's seen before in fairly generic configurations.

[-] [email protected] 10 points 1 week ago

Right, I agree with the author of the article that LLMs are great at tackling boring code like boilerplate, and freeing you up to actually do stuff that's interesting.

[-] [email protected] 16 points 1 week ago* (last edited 1 week ago)

I find the tone kind of slapdash. Feels like the author could have condensed it to a small post about using AI agents in certain contexts, as that seems to be the crux of their argument for usefulness in programming.

I do think they have a valid point about some in tech acting squeamish about automation when their whole thing has been automation from day one. Though I also think the idea of AI doing "junior developer" level work is going to backfire massively on the industry. Seniors start out as juniors, and AI is probably not going to progress fast enough to replace seniors for decades (I could see it replacing some seniors, but not with the level of trust and competency that would allow it to replace all of them). But AI could replace a lot of juniors and effectively lock the field into a trajectory of aging itself out of existence, because it becomes too hard for enough humans to get the experience needed to take over the senior roles.

Edit: I mean, it's already the case that dated systems sometimes use languages nobody is learning anymore. That kind of thing could get much worse.

[-] [email protected] 15 points 1 week ago

The developer pipeline is the big question here. My experience using these tools is that you absolutely have to know what you're doing in order to evaluate the code LLMs produce. Right now we have a big pool of senior developers who can wrangle these tools productively and produce good code using them because they understand what the proper solution should look like. However, if new developers start out using these tools directly, without building prior experience by hand, then it might be a lot harder for them to build such intuition for problem solving.

[-] [email protected] 10 points 1 week ago

Yogthos is really relentless with all these AI posts. You're not fighting for the poor defenseless AI technologies against the tyrannical masses with these posts.

People are clearly pissed off at the current state of these technologies and their products. I would have expected that here, of all places, the current material reality would matter more than an idealistic view of what could be done with them.

I don't mean for this comment to sound antagonistic, I just feel that there's more worthwhile things to focus on than pushing back against people annoyed by AI-generated memes and comics and calling them luddites.

[-] [email protected] 19 points 1 week ago

This post is about what could be done with them though. It's not about image generators, it's about coding agents. LLMs are really good at programming certain things and it's gotten to the point where avoiding them puts you at a needless disadvantage. It's not like artisanally typed code is any better than what the bot generates.

[-] [email protected] 9 points 1 week ago

Irrational hate of new technology isn't going to accomplish anything of value.

[-] [email protected] 9 points 1 week ago

At this point, anti-AI sentiment is just cope. AI is here to stay. For the people against AI, what is the praxis that must be undertaken against AI? AI, like any other tool, is lifeless but has living users that use, support, and develop it, so the question of praxis against AI becomes a question of praxis against workers who use, develop, and propagate AI.

This is why the Luddites failed. The Luddites had enough people to conduct organized raids, but the fact that those machines were installed, and continued to be installed, by other workers meant that the Luddites represented a minority of workers. If they had had a critical mass of workers on their side, that machinery would quite simply not have been installed in the first place. Who else was going to install the machinery: the bourgeoisie, the gentry, and a bunch of merchants involved in the human trafficking of African slaves?

Those looms didn't sprout legs and install themselves. They were installed by other workers, workers who, for whatever reason, disagreed with the Luddites' praxis or ideology. Viewed in this context, it makes sense why the Luddites failed in the end. Who cares if 500 looms got smashed by the Luddites if 600 looms got installed by non-Luddite workers anyway?

Corps are already starting to build underground data centers, so you and your plucky guerrilla band of anti-AI insurgents can't just firebomb a data center that's built from a repurposed nuclear bunker. Pretty much all of the AI scientists who push the field forward are Chinese scientists safely located within the People's Republic of China, so liquidating AI scientists for being class traitors is out of the question. Then what else is left in terms of anti-AI praxis besides coping about it online and downvoting pro-AI articles on some cheap knockoff of R*ddit?

[-] [email protected] 9 points 1 week ago

Do you like fine Japanese woodworking? All hand tools and sashimono joinery?

this should sell it. Why would anyone want something more expensive just because it was handmade instead of mass produced?

[-] [email protected] 8 points 1 week ago

Turns out my assumptions about how LLM-assisted programming works were completely wrong or outdated. This new way sounds super efficient.

this post was submitted on 03 Jun 2025
35 points (88.9% liked)

Technology

1135 readers

A tech news sub for communists

founded 2 years ago