280 points (98.3% liked) · submitted 12 Feb 2026 (4 hours ago) by Beep@lemmus.org to c/technology@lemmy.world · 28 comments
[-] muimota@lemmy.ml 9 points 48 minutes ago

Check the site's social media icons: purely AI slop.

[-] Dyskolos@lemmy.zip 6 points 1 hour ago

Why would anyone subscribe? LLMs are rarely actually helpful, and I really tried; I've been a damn tech nerd for decades. But most of the time it just takes longer to get worse results than doing it yourself.

I wouldn't pay one buck annually for this, and surely not 30 a month.

[-] termaxima@slrpnk.net 1 point 19 minutes ago

What about using it without a subscription, though? I'm unsure whether that's good or bad for them: it loses them money, but it also makes their user numbers look good, so idk.

[-] canofcam@lemmy.world 3 points 31 minutes ago

Not just ChatGPT.

Stop using Amazon, Meta products, Netflix, Spotify, etc.

All of these massive corporations are actively making life worse for people, and it will only get worse and worse as people continue to stay subscribed.

The only option is to log off and find an alternative.

[-] LibertyLizard@slrpnk.net 35 points 3 hours ago

All these boycotts I can't join since I never paid for them in the first place 😢

[-] truthfultemporarily@feddit.org 20 points 3 hours ago

You were just boycotting before it was cool.

[-] unspeakablehorror 35 points 4 hours ago

Off with their heads! Go self-hosted, go local... toss the rest in the trash can before this crap gets a foothold and fully enshittifies everything.

[-] ZILtoid1991@lemmy.world 1 point 33 minutes ago

I would, if I found even a remotely good use case for LLMs. One would be useful for contextual search across a bunch of API documentation and books on algorithms, but I don't want a sycophantic "copilot" or "assistant" that does a job so bad I'd be fired for it, all while being called ableist slurs and getting blacklisted from the industry.

[-] mushroommunk@lemmy.today 12 points 3 hours ago

LLMs are already shit. Going local still means burning the world just to run a glorified text-production machine.

[-] suspicious_hyperlink@lemmy.today -2 points 2 hours ago

Having just had an entire front end built for my website, I disagree. A few years ago I would have offshored this job to devs in some third-world country. Now AI can do the same thing for cents, without my having to wait a few days for the initial results and another day or two for each revision.

[-] mushroommunk@lemmy.today 9 points 1 hour ago

The fact that you see nothing wrong with anything you said really speaks volumes about the inhumanity inherent in using "AI".

[-] suspicious_hyperlink@lemmy.today -4 points 1 hour ago* (last edited 29 minutes ago)

Please enlighten me. I am working on systems that solve real-world issues, and now I can ship my solutions faster and at lower cost. Sounds like a win-win for everyone involved, except the offshore employees who have to look for new gigs now.

Edit: I would actually rather read a reply than just see you downvoting. The point is, what you call a "glorified text-production machine" has actual use cases.

[-] kutt@lemmy.world 1 point 25 minutes ago

I don't know if it's your fault, honestly. It's the system that makes you want to offshore your work to developing countries rather than hire local employees. I get it; it's cheaper. But when even independent developers start doing this, we have reached post-late-stage capitalism.

[-] ch00f@lemmy.world 2 points 2 hours ago

> Go self-hosted,

So yours and another comment I saw today got me to dust off an old Docker container I was playing with a few months ago, to run deepseek-r1:8b on my server's Intel A750 GPU with 8 GB of VRAM. Not exactly top-of-the-line, but not bad.

I knew it would be slow and not as good as ChatGPT or whatever, which I guess I can live with. I asked it to write some example Rust code today, which I hadn't even thought to try, and it worked.

But I also asked it to describe the characters in a popular TV show, and it got a ton of details wrong.

8b is the highest parameter count I can run on my card. How do you propose someone in my situation run an LLM locally? Can you suggest some better models?

[-] SirHaxalot@nord.pub 0 points 36 minutes ago

Honestly, you pretty much don't. LLMs are insanely expensive to run, since most model improvements come from simply growing the model. It's not realistic to run LLMs locally and compete with the hosted ones; it pretty much requires economies of scale. Even if you invest in a 5090, you're still behind the purpose-built GPUs with 80 GB of VRAM.

Maybe it could work for some use cases, but I'd rather just not use AI.

[-] lexiw@lemmy.world 2 points 1 hour ago

You are playing with ancient stuff that wasn't even good at release. Try these:

A 3b model performing like a 30b model: https://huggingface.co/Nanbeige/Nanbeige4.1-3B

Google's open-source counterpart to Gemini: https://huggingface.co/google/gemma-3-4b-it
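
If you want to kick the tires quickly, something like this usually works. Rough sketch only: it assumes the checkpoint loads as a plain text-generation pipeline and fits on an 8 GB card at bf16 (the multimodal Gemma 3 sizes may need a different pipeline task):

```python
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="Nanbeige/Nanbeige4.1-3B",  # or "google/gemma-3-4b-it"
    torch_dtype=torch.bfloat16,       # halves memory vs. fp32
    device_map="auto",                # place layers on the GPU where they fit
    trust_remote_code=True,           # some newer checkpoints ship custom code
)
out = pipe("Write a short Rust function that reverses a string.",
           max_new_tokens=200)
print(out[0]["generated_text"])
```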

[-] ch00f@lemmy.world 1 point 1 hour ago

Any suggestions on how to get these into GGUF format? I found a GitHub project that claims to convert them, but I'm wondering if there's a more direct way.
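
The route I keep seeing mentioned is llama.cpp's own converter script; is there anything more direct than this kind of thing? (The checkout path and output names below are just placeholders on my part.)

```python
import subprocess
from pathlib import Path

from huggingface_hub import snapshot_download  # pip install huggingface_hub

LLAMA_CPP_DIR = Path("~/src/llama.cpp").expanduser()  # assumed local checkout
REPO_ID = "Nanbeige/Nanbeige4.1-3B"

# Pull the safetensors checkpoint from the Hub.
model_dir = snapshot_download(REPO_ID)

# llama.cpp ships convert_hf_to_gguf.py at its repo root; it only handles
# architectures it knows about, so this may or may not work for a given model.
subprocess.run(
    [
        "python", str(LLAMA_CPP_DIR / "convert_hf_to_gguf.py"),
        model_dir,
        "--outfile", "nanbeige-3b-f16.gguf",
        "--outtype", "f16",
    ],
    check=True,
)
```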

[-] Mika@piefed.ca 1 point 2 hours ago

It comes down to how much VRAM / unified RAM you have. There is no magic that makes an 8b model perform like the top-tier subscription LLMs (likely in the 500b+ range; I wouldn't be surprised if it's trillions).

If you can get up to 32b / 80b models, that's where the magic starts to happen.
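
The napkin math, counting weights only (KV cache and activations add more on top):

```python
def weight_vram_gib(params_billions: float, bits_per_weight: float) -> float:
    """GiB needed just to hold the weights at a given quantization."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1024**3

for size_b in (8, 32, 80):
    print(f"{size_b}b @ 4-bit ~ {weight_vram_gib(size_b, 4):.1f} GiB")

# 8b  @ 4-bit ~  3.7 GiB  -> fits an 8 GB card, barely, with context
# 32b @ 4-bit ~ 14.9 GiB  -> needs a 16-24 GB card
# 80b @ 4-bit ~ 37.3 GiB  -> multi-GPU or unified-memory territory
```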

[-] CosmoNova@lemmy.world 3 points 3 hours ago

Going local is taxing on hardware that is extremely expensive to replace. Hell, it could soon become almost impossible to replace. I genuinely don't recommend it.

Even if you HAVE to use LLMs for some reason, there are free alternatives right now that let Silicon Valley bleed money, and they're quickly running out of it.

Cancelling any paid subscription probably hurts them more than anything else.

[-] Hubi@feddit.org 1 point 10 minutes ago

It's not really taxing on your hardware unless you load and unload huge models all day or if your cooling is insufficient.

[-] Mika@piefed.ca 1 point 1 hour ago* (last edited 1 hour ago)

If an LLM is tied to your productivity, going local is about owning and controlling the means of production.

You aren't supposed to run it on the machine you work on anyway; set up a server and send requests to it.
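
For example, with an ollama box on your LAN, any client can just hit its HTTP API (the address and model name below are made up, obviously):

```python
import requests

SERVER = "http://192.168.1.50:11434"  # wherever your ollama server lives

resp = requests.post(
    f"{SERVER}/api/generate",
    json={
        "model": "deepseek-r1:8b",  # whatever model the server has pulled
        "prompt": "Summarize what GGUF is in two sentences.",
        "stream": False,            # one JSON object instead of a stream
    },
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["response"])
```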

[-] dieICEdie@lemmy.org 1 point 1 hour ago

Self-host something as fast? For how much?

[-] Mika@piefed.ca 3 points 1 hour ago

Corporate would still use it 😒

[-] sircac@lemmy.world 4 points 2 hours ago

Any reference on Trump's donors to back up the claim that Gepeto is the biggest one? I would like to see the top 10 or top 100 list...

[-] Cruxifux@feddit.nl 9 points 3 hours ago

You can subscribe to ChatGPT?

[-] Dojan@pawb.social 15 points 3 hours ago* (last edited 3 hours ago)

Yes. I think it’s like $20 a month.

--

Edit: LMAO, so I was fuck-off wrong. It's $10, $30, and $280 per month, at least in my currency (Swedish kronor).

Don't use the stochastic parrot, and definitely don't fucking shell out 280 a month for it. Holy fuck.

[-] emmy67@lemmy.world 2 points 2 hours ago

Quit? Only a fool would waste their time on it.
