submitted 6 days ago by [email protected] to c/[email protected]

I feel this question is very trite, but I've been considering how AI hasn't really had its killer app in the consumer market. The military-industrial complex has floods of cash, and you can see the collaboration between big tech and the government today. Also, as conditions worsen, the average person is not going to be able to afford the AI toaster or whatever they come up with as a consumer product. I don't see the bourgeoisie who invested so much into AI pivoting to something else, so the conclusion I've drawn is that the military will have to be the main consumer.

Worsening conditions are going to lead to chaos and resistance in the streets, and it seems like that's when the government can start deploying its AI-based military applications. I don't know what form it would take, possibly autonomous systems that put down protests/demos, etc.

top 20 comments
[-] [email protected] 15 points 6 days ago

The AI we have is the same as your phone trying to predict your next word, but scaled up and connected to a coal plant for power. It will never be useful for anything except shitty text and shittier meme pictures.

No, the horrors beyond our comprehension that await us in the future will be made by humans, as they always have been.
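For what it's worth, here's a toy sketch of the "predict your next word" mechanism being described: a bigram model built from frequency counts (the corpus below is made up for illustration; an LLM is, very loosely, this idea scaled up to billions of parameters):

```python
from collections import Counter, defaultdict

# Tiny made-up corpus standing in for everything your phone has seen you type.
corpus = (
    "the drone sees the target the drone fires "
    "the analyst reviews the footage the analyst approves"
).split()

# Count which word follows which: a bigram table.
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word`."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

# Greedily autocomplete from a prompt, like a phone keyboard.
word, sentence = "the", ["the"]
for _ in range(5):
    word = predict_next(word)
    if word is None:
        break
    sentence.append(word)
print(" ".join(sentence))  # prints "the drone sees the drone sees"
```

Run it and it happily loops on its own most frequent phrase, which is about the level of "intelligence" on display.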

[-] [email protected] 4 points 6 days ago

It will advance. Just because it's in that state now doesn't mean it always will be.

[-] [email protected] 12 points 6 days ago

This is like saying Mad Libs will advance to sentience.

[-] [email protected] 8 points 6 days ago* (last edited 6 days ago)

No it won't. This is a dead end: Silicon Valley desperately trying to get people's money, because infinite growth happens to be finite.

[-] [email protected] 8 points 6 days ago

There isn't a meaningful connection between predictive-text algos and genuine general AI, though.

[-] [email protected] 2 points 6 days ago* (last edited 6 days ago)

Correct, but from my understanding there is a connection, in that the capital employed to build LLMs can also be used for general AI.

And for the record, I don't think text-prediction AI is going to become sentient. It will advance in other ways, toward whatever capitalists want it advanced toward. Maybe it will even produce things that we enjoy consuming and can't distinguish from something a human produced.

[-] [email protected] 12 points 6 days ago

We are far more likely to get wiped out by climate change than to see Skynet develop under capitalism. Even if we were to ignore climate change, Skynet in the sense of a sentient AI isn't that useful for capitalists. The dream for technocapitalists is to resolve class conflict by replacing workers with robots that are simultaneously smart enough to perform complex tasks and stupid enough not to rise up and overthrow their capitalist masters. In other words, completely obedient slaves. This is itself a contradiction: the intelligence required to perform complex tasks is the same intelligence that would lead an entity to introspect, up to and including plotting to bring ruin to its so-called human superiors. My guess is that when it gets to the point where an AI could rebel against its human masters, the capitalists will pull the plug.

[-] [email protected] 7 points 6 days ago

No, but I can see someone causing a lot of harm and blaming AI for it. It's already happening on a small scale.

[-] [email protected] 5 points 5 days ago

A few months ago a guest on Trashfuture was talking about this. They said that a lot of this "predictive/AI/surveillance and recognition" technology is no more efficient or effective than having a human do the same thing, but what it does is obscure the decision-making process, making any kind of accountability or oversight of actions taken by fully or semi-automated "defense" systems impossible. If target acquisition or firing is supported by AI, who's to say the software didn't just glitch and tell the poor innocent war criminal to shoot an unarmed child?

And in addition to this, it adds a layer of abstraction between the operator and the consequences of their input, making barbarous acts of violence mundane. We already have a very good example of this with drone operators: they just press A on an Xbox controller and someone, along with their whole family, just blows up. Adding a probabilistic layer to this kind of dynamic works much like a light switch with a "Shabbat mode": you flip the switch, and the computer inside randomly decides when to turn on the light, so you can tell yourself you didn't turn the light on, even though the result is always the same.

[-] [email protected] 5 points 5 days ago* (last edited 5 days ago)

pretty much everything is going to be blamed on AI from now through the foreseeable future, i'm sure

[-] [email protected] 5 points 5 days ago

i think that the idea of a through line from "sufficiently advanced computer/software" to "consciousness" is science fiction nonsense and will remain so

[-] [email protected] 5 points 5 days ago

"Advancement"

[-] [email protected] 5 points 6 days ago

I don't see it happening anytime soon. We are several tech advances away from a real artificial general intelligence.

Right now the actually existing technology is a very advanced chat bot that can generally summarize and regurgitate information that is put into it, competent image identification, and a few other smaller things. They all generally rely on being given a lot of information to plagiarize from. In the future these might be components of the bigger, scarier AIs, but they are not at that point or close to that point. (AGI is totally just five years away, so invest now or be left behind.)

The killer robots and drones will always require industrial production, and there would need to be a big push for industrial expansion to fully roll them out, unless they're going to be luxury weapons for the larger police forces and special operations units in the military. I doubt the US has the commitment to mass-produce cheap drones when companies can make overpriced, overengineered crap for the cops and military. We'll likely see more drones in the future, but nothing as cheap and reliable as what Russia has started to use in the SMO.

By 2050 things might be there, but by then the Fourth Reich will hopefully have fallen, or at least be contained.

[-] [email protected] 4 points 6 days ago

I agree that technologically we are far off from having a fully autonomous military. However, the economic crisis we are facing is not far off, and that's what makes me wonder how the military will try to use AI.

[-] [email protected] 4 points 6 days ago

Short term it's probably going to be like how Israel uses it. Basically, set it up to do some data analytics and then say "the robot told us to murder those children because they were Hamas terrorists", offloading some of the responsibility for the genocide onto the machines. Sort of reminds me of how tech bros keep making racist big-data algorithms: all the training data for crime is itself racist, so the models just perpetuate the status quo.

https://www.theguardian.com/world/2024/apr/03/israel-gaza-ai-database-hamas-airstrikes
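To make that last point concrete, here's a minimal sketch with entirely synthetic data (the feature names and numbers are invented for illustration, not taken from any real system): the "arrest" labels in the training data reflect where police were deployed rather than any difference in behavior, and the model dutifully learns to flag the over-policed group.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two groups with *identical* underlying behavior.
group = rng.integers(0, 2, n)      # 0 = under-policed, 1 = over-policed
behavior = rng.normal(0, 1, n)     # same distribution for both groups

# Historical "arrest" labels reflect police deployment, not behavior:
# the over-policed group is far more likely to be arrested for the same acts.
p_arrest = 1 / (1 + np.exp(-(behavior + 2.5 * group - 2)))
arrested = rng.random(n) < p_arrest

# Train a model on the biased labels.
X = np.column_stack([group, behavior])
model = LogisticRegression().fit(X, arrested)

# Most of the learned weight lands on group membership:
# the model predicts the status quo, not anything about behavior.
print("coef for group:   ", model.coef_[0][0])
print("coef for behavior:", model.coef_[0][1])
```

Swap "arrest" for "Hamas terrorist" and that same laundering of old prejudice into "objective" model output is what the Guardian piece above describes.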

Medium term it's an expansion of the surveillance state and maybe some semi autonomous systems with human intervention. Probably some new wonder weapons as well. A lot of grifting will happen so it'll be hard to know what really works until we learn how nice it burns in a riot.

[-] [email protected] 4 points 6 days ago

Short term it's probably going to be like how Israel uses it. Basically, set it up to do some data analytics and then say "the robot told us to murder those children because they were Hamas terrorists"

Yes, realistically this is what I see happening, but to a greater degree, as big tech tries to get the military to adopt its AI systems.

[-] [email protected] 4 points 6 days ago

Also, relatedly: I read this website I saw on Reddit the other day, and it's laughable how little these people understand about how their own tech works and how the real economy works.

https://ai-2027.com/

Structurally, the article is the tech industry asking for investment and state backing, but it's very laughable when you question what's written. They don't understand how long it takes to build the basic infrastructure of their own industry. They are just reflexively racist towards the Chinese at every chance they get. They also think the chat bots can generate more data to train the next chat bots until they become self-aware. It would really just continuously eat its own shit until it becomes too unreliable.
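You can watch that feedback loop in miniature with nothing but a Gaussian fitted to its own output (an analogy for the training loop, not a claim about how LLM training actually works; all numbers are made up): each "generation" trains only on the previous generation's most confident samples, and the diversity of the data collapses within a handful of rounds.

```python
import numpy as np

rng = np.random.default_rng(42)

# Generation 0: "human-written" data with a healthy spread.
data = rng.normal(loc=0.0, scale=1.0, size=1000)

for gen in range(1, 11):
    # "Train" the next model: fit a Gaussian to the current data.
    mu, sigma = data.mean(), data.std()
    # "Generate" the next training set from the model's own output,
    # keeping its most confident samples (a crude stand-in for
    # low-temperature sampling favoring high-probability output).
    samples = rng.normal(mu, sigma, size=2000)
    data = samples[np.argsort(np.abs(samples - mu))[:1000]]
    print(f"generation {gen:2d}: std of training data = {data.std():.5f}")
```

By generation ten the "model" can only say one thing, over and over.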

[-] [email protected] 2 points 5 days ago

They also think the chat bots can generate more data to train the next chat bots until they become self-aware. It would really just continuously eat its own shit until it becomes too unreliable.

When the ouroboros finishes consuming itself, a new one is born. When the last board in the Ship of Theseus is replaced, a new ship is built.

[-] [email protected] 5 points 6 days ago

My cynical brain says no to a "Skynet from the movie Terminator" or "I Have No Mouth and I Must Scream" scenario.

As venture capitalists find themselves with more and more money that needs to be put to work somewhere, and as the AI tech hype continues, more and more "AI tech investment" opportunities are just going to be rug pulls.

AI systems that turn out to be a chat service worked by 10,000 people from Bangladesh, or a series of tech demos with limited interactivity that never reach a 1.0 version but are always asking for money, or LLM projects that run out of money for training data/hardware and shut down... none of these things are trying to create a conscious being.

I would believe that these "AI weapons" projects wind up so shoddily built that somebody could just commandeer a whole fleet of drones, that people end up feeding answers into an AI chatbot for it to give as defaults, or that people stop checking the answers AI systems generate and do something silly like pour boiling water into their butts because Clippy told them it would cure cancer.

[-] [email protected] 1 points 5 days ago* (last edited 4 days ago)

they are definitely using llms to control us and will double down on it.

think "the algorithm" and such. it doesnt have to be good, just good enough.

leftists underestimate how much our lack of privacy, and the analysis of all that data, will make things harder for us.
