
Workers should learn AI skills and companies should use it because it's a "cognitive amplifier," claims Satya Nadella.

in other words please help us, use our AI

[-] kescusay@lemmy.world 410 points 2 weeks ago

"Cognitive amplifier?" Bullshit. It demonstrably makes people who use it stupider and more prone to believing falsehoods.

I'm watching people in my industry (software development) who've bought into this crap forget how to code in real-time while they're producing the shittiest garbage I've laid eyes on as a developer. And students who are using it in school aren't learning, because ChatGPT is doing all their work - badly - for them. The smart ones are avoiding it like the blight on humanity that it is.

[-] wizardbeard@lemmy.dbzer0.com 133 points 2 weeks ago

As evidence: How the fuck is a company as big as Microsoft letting their CEO keep making such embarrassing public statements? How the fuck has he not been forced into more public speaking training by the board?

This is like the 4th "gaffe" of his since the start of the year!

You don't usually need "social permission" to do something good. Mentioning it is, at best, publicly stating that you think you know what's best for society (and they don't). I think the more direct interpretation is that you're openly admitting you're doing the kind of thing you should have asked permission for, but didn't.

This is past the point of open desperation.

[-] devfuuu@lemmy.world 71 points 2 weeks ago

And they are all getting dependent on, and addicted to, something that is currently almost "free," but the monetization of it all will soon come in force. Good luck having the money to keep paying for it, or the capacity to handle all the advertising it will soon start pushing out. I guess the main strategy is to manipulate people into getting experience with it, with these two or three years basically being a free trial, and to ensure people will demand access to the tools from their employers, or else pay for them out of their own pockets. When barely anyone is able to get their employer to pay for things like IDEs... Oh well.

[-] ThunderWhiskers@lemmy.world 32 points 2 weeks ago

We watched this exact same tactic with Xbox Game Pass over the last 5 years. They introduced it and left in the ability to purchase the "upgrade" for $1. Now they're suddenly cranking it up to $30/month, and people are still paying because they feel like it's a service they "have to have."

[-] hushable@lemmy.world 68 points 2 weeks ago

I’m watching people in my industry (software development) who’ve bought into this crap forget how to code in real-time while they’re producing the shittiest garbage I’ve laid eyes on as a developer.

I just spent two days fixing multiple bugs introduced by AI-made changes. The person who submitted them, a senior developer, had no idea what the code was doing; he just prompted some words into Claude and submitted the result without checking whether it even worked. Then it was "reviewed" and blindly approved by another coworker who said, in his words, "if the AI made it, then it should be alright."

[-] MonkderVierte@lemmy.zip 43 points 2 weeks ago

"if the AI made it, then it should be alright"

Show him the error of his ways. People learn best by experience.

[-] Dogiedog64@lemmy.world 27 points 2 weeks ago

You're right, they SHOULD be fired.

[-] ech@lemmy.ca 38 points 2 weeks ago

And students who are using it in school aren’t learning, because ChatGPT is doing all their work - badly - for them.

This is the one that really concerns me. It feels like entire generations of students are just going to learn to push the slop button for anything and everything they have to do. Even if these bots were everything the techbros claim they are, this would still be devastating for society.

[-] floofloof@lemmy.ca 26 points 2 weeks ago

I've been programming professionally for 25 years. Lately we're all getting these messages from management that don't give requirements but instead give us a heap of AI-generated code and say "just put this in." We can see where this is going: management are convincing themselves that our jobs can be reduced to copy-pasting code generated by a machine, and the next step will be to eliminate programmers and just have these clueless managers. I think AI is robbing management of skills as well as developers. They can no longer express what they want (not that they were ever great at it): we now have to reverse-engineer the requirements from their crappy AI code.

[-] nulluser@lemmy.world 23 points 2 weeks ago* (last edited 2 weeks ago)

but instead give us a heap of AI-generated code and say “just put this in.”

we now have to reverse-engineer the requirements from their crappy AI code.

It may be time for some malicious compliance.

Don't reverse-engineer anything. Do as you're told: "just put this in" and deploy it. Everything will break and management will explode, but now you've demonstrated that they can't just replace you with AI.

Now explain what you've been doing (reverse-engineering their requirements from the code), and that you're not going to do it anymore. They need to either give you proper requirements so that you can write properly working code, or accept that when they hand you AI slop, you're just going to "put it in" without a second thought.

You'll need your whole team on board for this to work, but what are they going to do, fire the whole team and replace them with AI? You'll have already demonstrated that that's not an option.

[-] SeeMarkFly@lemmy.ml 213 points 2 weeks ago

So...he has something USELESS and he wants everybody to FIND a use for it before HE goes broke?

I'll get right on it.

[-] FlashMobOfOne@lemmy.world 114 points 2 weeks ago* (last edited 2 weeks ago)

"Social permission" is one term for it.

Most people don't realize this is happening until it hits their electric bills. Microslop isn't permitted to steal from us. They're just literal thieves and it takes time for the law to catch up.

[-] grue@lemmy.world 43 points 2 weeks ago

[Microsoft are] just literal thieves.

Always have been.

(But now it's worse because it's the entire public, not just their competitors)

[-] FreddiesLantern@leminal.space 103 points 2 weeks ago

How can you lose social permission that you never had in the first place?

[-] JoeBigelow@lemmy.ca 40 points 2 weeks ago

The peasants might light their torches

[-] NutWrench@lemmy.world 86 points 2 weeks ago* (last edited 2 weeks ago)

The whole point of "AI" is to take humans OUT of the equation, so the rich don't have to employ us and pay us. Why would we want to be a part of THAT?

AI data centers are also sucking up all the high-quality DRAM on the market, making everything that relies on that RAM ridiculously expensive. I can't wait for this fad to be over.

[-] danielton1@lemmy.world 39 points 2 weeks ago

Not to mention the water depletion and electricity costs that the people who live near AI data centers have to deal with, because tech companies can't be expected to be responsible for their own usage.

[-] morto@piefed.social 76 points 2 weeks ago
  • Denial
  • Anger
  • Bargaining <- They're here
  • Depression
  • Acceptance
[-] rumba@lemmy.zip 32 points 2 weeks ago

The five stages of corporate grief:

  • lies
  • venture capital
  • marketing
  • circular monetization
  • private equity sale
[-] OshagHennessey@lemmy.world 74 points 2 weeks ago

"Microsoft thinks it has social permission to burn the planet for profit" is all I'm hearing.

[-] rustydrd@sh.itjust.works 56 points 2 weeks ago
[-] DaddleDew@lemmy.world 52 points 2 weeks ago* (last edited 2 weeks ago)

Translation: Microslop's executives are finally starting to realize that they fucked up.

[-] HaraldvonBlauzahn@feddit.org 52 points 2 weeks ago* (last edited 2 weeks ago)

Literally burning the planet with power demand from data centers but not even knowing what it could possibly be good for?

That's eco-terrorism for lack of a better word.

Fuck you.

[-] Siegfried@lemmy.world 49 points 2 weeks ago

Social permission? I don't remember us getting a vote or anything on this bullshit.

[-] itistime@infosec.pub 47 points 2 weeks ago

The oligarch class is again showing why we need to upset their apple cart.

[-] llama@lemmy.zip 46 points 2 weeks ago

As far as I can tell there hasn't been any tangible reward in terms of pay increase, promotion or external recruitment from using the cognitive amplifier.

[-] _stranger_@lemmy.world 38 points 2 weeks ago

you never had it to begin with. Goddamn leeches.

[-] DrCake@lemmy.world 37 points 2 weeks ago

AI industry needs to encourage job seekers to pick up AI skills (undefined), in the same way people master Excel to make themselves more employable.

Has anyone in the last 15 years willingly learned Excel? It seems like one of those things you have to learn on the job because your boomer managers insist on using it.

[-] JensSpahnpasta@feddit.org 25 points 2 weeks ago

I did and it's awesome. People like to shit on Excel, but there is a reason why every business on earth runs on Excel. It's a great tool and if you really learn it, you can do great things with it.

[-] Buddahriffic@lemmy.world 22 points 2 weeks ago

Funny thing about "AI skills" that I've noticed so far is that they are actually just skills in the thing you're trying to get AI to help with. If you're good at that, you can often (though not always) get an effective result. Mostly because you can talk about it at a deeper level and catch mistakes the AI makes.

If you have no idea about the thing, it might look competent to you, but you just won't be catching the mistakes.

In that context, I would call them thought amplifiers, and pretty effective ones at the "talking about a problem helps debug it" effect: even if the other party contributes nothing of value, explaining the problem forces you to look at it differently, and that different perspective can make the solution more visible. On top of that, they can contribute some valuable pieces themselves.

[-] MBech@feddit.dk 21 points 2 weeks ago

Excel depends on the usage. Way too many people want to use it for what it's bad at, but technically can do, instead of using it for what it's good at.

I'm fairly decent at using Excel, and have automated some database dependent tasks for my coworkers through it, which saves us a lot of time doing menial tasks no one actually wants to do.
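The comment doesn't say how the automation was done (VBA or Power Query would be typical from inside Excel). As a rough sketch of the kind of menial database-to-spreadsheet chore being described, here's the same idea in Python; the table, columns, and file names are made up for illustration:

```python
import csv
import sqlite3

# Hypothetical chore: every week, someone hand-copies open orders from a
# database into a spreadsheet. A few lines can do it instead.
def export_open_orders(db_path: str, out_path: str) -> int:
    """Dump open orders to a CSV that Excel opens directly; returns row count."""
    conn = sqlite3.connect(db_path)
    try:
        rows = conn.execute(
            "SELECT id, customer, total FROM orders WHERE status = 'open'"
        ).fetchall()
    finally:
        conn.close()
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["id", "customer", "total"])  # header row for Excel
        writer.writerows(rows)
    return len(rows)
```

Nothing clever, which is the point: "Excel skills" here are really just knowing the data and the task well enough to script the boring part.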

[-] RamRabbit@lemmy.world 31 points 2 weeks ago

Just make Copilot its own program that can be uninstalled, remove it from everywhere else in the OS, and let it be. People who want it will use it; people who don't, won't. Nobody would be pissed at Microsoft over AI if that's what they had done from the start.

[-] H1AA6329S@lemmy.world 31 points 2 weeks ago

I hope all parties responsible for this garbage, including Microsoft, pay a huge price in the end. Fuck all these morons.

Stop shilling for these corporate assholes or you will own nothing and will be forced to be happy.

[-] matlag@sh.itjust.works 31 points 2 weeks ago* (last edited 2 weeks ago)

Takeaway:

  1. MS is well aware AI is useless.
  2. Nadella admits they invested billions in something without having the slightest clue what its use-case would be ("something something rEpLaCe HuMaNs").
  3. Nadella is blissfully unaware of the "social" image MS already has in the eyes of the public. You don't have our social permission to still exist as a company!
[-] AlexLost@lemmy.world 29 points 1 week ago

You already don't have social permission to do what you are doing, and that hasn't stopped you. The world is bigger than the 10 people around your board's table.

[-] kameecoding@lemmy.world 29 points 2 weeks ago

I will try to have a balanced take here:

The positives:

  • there are some uses for this "AI"
  • like an IDE, it can help speed up development, especially for menial but important tasks such as unit test coverage.
  • it can be useful for rewording things into the corpo slang that makes you puke, when you have to use it.
  • it is useful as a sort of better Google: for things that are documented, but where reading the documentation makes your head hurt, you can ask it to dumb things down, get the core concept, and go from there.

The negatives

  • the positives don't justify the environmental externalities of all these AI companies
  • the positives don't justify the PC hardware/silicon price hikes
  • shoehorning this into everything is capital R retarded.
  • AI is a fucking bubble keeping the US economy inflated instead of letting it crash like it should have a while ago
  • other than a paid product like Copilot, there is simply very little commercially viable use-case for all this public cloud infrastructure, other than targeting you with more ads that you can't block because they're in the text output.

Overall I wish the AI bubble burst already

[-] ViatorOmnium@piefed.social 32 points 2 weeks ago

menial tasks that are important such as unit test coverage

This is one of the cases where AI is worse. LLMs will generate the tests based on how the code works, not how it is supposed to work. Granted, lots of mediocre engineers also use the "freeze the results" method for meaningless test coverage, but at least human beings have the ability to reflect on what the hell they are doing at some point.

[-] Sam_Bass@lemmy.world 28 points 2 weeks ago

Best use for AI is CEO replacement

[-] just_another_person@lemmy.world 28 points 2 weeks ago

Fuck this loser. We have enough issues to deal with on a daily basis. We don't need to subsidize your fear of having wasted ungodly amounts of money and becoming irrelevant.

That's a YOU problem, fool.

[-] circuitfarmer@lemmy.sdf.org 26 points 2 weeks ago

Textbook definition of a solution searching for a problem.

[-] Doomsider@lemmy.world 25 points 2 weeks ago

Delusional. They created a solution to a problem that doesn't exist in order to usurp power from citizens and concentrate it in a minority.

This is the opposite of the information revolution. This is the information capture. It will be sold back to the people it was taken from while being distorted by special interests.

[-] utopiah@lemmy.world 23 points 2 weeks ago* (last edited 2 weeks ago)

"bend the productivity curve" is such a beautiful way to say that they are running out of ideas on how to sell that damn thing.

It basically went from :

  • it's going to change EVERYTHING! Humanity as we know it is a thing of the past!

... to "bend the productivity curve". It's no longer how it will "radically increase productivity"; no, it's a lot more subtle than that, to the point that the curve can actually bend down. What a shit show.

[-] rafoix@lemmy.zip 21 points 2 weeks ago

Avoid spending trillions on a product nobody wants to pay for.

[-] Aceticon@lemmy.dbzer0.com 20 points 2 weeks ago* (last edited 2 weeks ago)

AI isn't at all reliable.

Worse, its failures are uniformly distributed across the seriousness of their consequences - i.e. it's just as likely to make small mistakes with minuscule consequences as major mistakes with deadly consequences - which is worse than even the most junior of professionals.

(This is why, for example, an LLM can advise a person with suicidal ideas to kill themselves)

Then on top of this, it will simply not learn: if it makes a major, deadly mistake today and you try to correct it, it's just as likely to make a major, deadly mistake tomorrow as if you hadn't. Even if you have access to adjust the model itself, correcting one kind of mistake just moves the problem around; it's akin to trying to stop the tide on a beach with a sand wall - the only way to succeed is to wall the whole beach, by which point it's in practice not a beach anymore.

You can compensate for this with human oversight of the AI, but at that point you're back to paying humans for the work. Instead of the cost of a human doing the work, you now have the cost of the AI doing the work plus the cost of a human checking it - and the human has to check all of it, since problems can pop up anywhere and take any form. Worse, unlike a human's, the AI's output is not consistent, so its errors are unpredictable. And the AI will never improve: it won't accumulate the little tricks that humans doing the same work discover over time to make later work, or other parts of the work, easier (i.e. how increased experience teaches you small things that ease your own work and even the work of others).
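The cost argument above can be sketched with made-up numbers. Whether "AI plus human reviewer" beats "human does the work" hinges entirely on how much of the output the reviewer must still check:

```python
# Hypothetical, illustrative numbers only - none of these come from real data.
human_hourly = 100.0   # fully loaded cost of a human doing the work
ai_hourly = 10.0       # cost of the AI producing the same output
review_fraction = 0.8  # human-hours needed to verify one AI-hour of output

human_only = human_hourly                                     # 100.0 per unit
ai_plus_review = ai_hourly + review_fraction * human_hourly   # 10 + 80 = 90.0

# At these numbers the AI route wins only barely; if the reviewer must
# effectively redo the work (review_fraction near 1.0), the savings vanish.
print(ai_plus_review < human_only)  # True, but only just
```

The point of the comment is that unpredictable, inconsistent errors push `review_fraction` toward 1, at which point the AI is pure overhead.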

This seriously limits the use of AI to things where the consequences of failure can never be very bad. For businesses, "not very bad" includes things like "does not significantly damage client relations," which is much broader than merely "not life-threatening" - which is why, for example, lawyers using AI to produce legal documents are getting into trouble when the AI cites made-up precedents. So that leaves mostly entertainment, plus situations where the AI alerts humans to a potential finding in a massive dataset: if the AI misses something, nothing is lost, and if it flags something that isn't there, subsequent human validation can dismiss it as a false positive. For example, face recognition in video streams for general surveillance, where humans watching those streams are just as likely to miss it and an AI alert merely prompts a human check; or scientific research that tries to find unknown relations in massive datasets.

So AI is a nice new tool in a big toolbox, not the technological and business revolution that would justify the stock market valuations around it, the investment money sunk into it, or the huge amount of resources (such as electricity) it uses.

Specifically for Microsoft, there doesn't really seem to be any area where MS's core business value for customers gains from adding AI, in which case this "AI everywhere" strategy is an incredibly shit business choice that just burns money and damages brand value.

[-] ReallyCoolDude@lemmy.ml 20 points 2 weeks ago* (last edited 2 weeks ago)

I work in AI, and the only obvious profit is the ability to fire workers - whom they need to rehire after some months, at lower wages. It is indeed a powerful tool, but tools don't drive profits; they are a cost. Unless you run a disinformation botnet, scam websites, or porn. It is too unpredictable to really automate software creation ("fuzzy" is the term; we mitigate it somewhat with stochastic approaches). The movie industry is probably also cutting costs, but I'm not sure.

AI is the way capital is trying to acquire skills cutting off the skilled.

I have to say, though, that having an interface that understands natural language opens up so many possibilities. It could really democratize access to tech, but those uses are so niche that they would never really drive profit.

[-] FireWire400@lemmy.world 20 points 2 weeks ago* (last edited 2 weeks ago)

Dude, you never had "social permission" to do this in the first place, none of us asked for this shit. You're literally destroying the planet, the economy and our future for your personal gain.

You useless waste of space.

this post was submitted on 21 Jan 2026
1332 points (98.6% liked)

Technology
