Check the site's social media icons: pure AI slop.

Why would anyone subscribe? LLMs are rarely actually helpful, and I really tried; I've been a damn tech nerd for decades. But most of the time it just takes longer to get worse results than doing it yourself.
I wouldn't pay one buck annually for this, and certainly not 30 a month.
What about using it without a subscription, though? I'm unsure whether that's good or bad for them: it loses them money, but it also makes their user numbers look good, so I don't know.
Not just ChatGPT.
Stop using Amazon, Meta products, Netflix, Spotify, etc.
All of these massive corporations are actively making life worse for people, and it will only get worse and worse as people continue to stay subscribed.
The only option is to log off and find an alternative.
All these boycotts I can't join since I never paid for them in the first place 😢
You were just boycotting before it was cool.
Off with their heads! Go self-hosted, go local... toss the rest in the trash can before this crap gets a foothold and fully enshittifies.
I would, if I found even a remotely good use case for LLMs. It would be useful for contextual search across a bunch of API documentation and books on algorithms, but I don't want a sycophantic "copilot" or "assistant" that does a job so bad I'd be fired for it, all while being called ableist slurs and getting blacklisted from the industry.
LLMs are already shit. Going local is still burning the world just to run a glorified text-production machine.
Having just finished getting an entire front end built for my website, I disagree. A few years ago I would have offshored this job to devs in some third-world country. Now AI can do the same thing, for cents, without my having to wait a few days for the initial results and another day or two for each revision.
The fact that you see nothing wrong with anything you said really speaks volumes about the inhumanity inherent in using "AI".
Please enlighten me. I'm working on systems that solve real-world issues, and now I can ship my solutions faster and at lower cost. Sounds like a win-win for everyone involved, except for the offshore employees who now have to look for new gigs.
Edit: I would actually rather read a reply than just see you downvoting. The point is, what you call a "glorified text-generating machine" has actual use cases.
I don't know if it's your fault, honestly. It's the system that makes you want to offshore your work to developing countries instead of hiring local employees. I get it, it's cheaper. But when even independent developers start doing this, we've reached post-late-stage capitalism.
Go self-hosted,
So your comment and another one I saw today got me to dust off an old Docker container I was playing with a few months ago to run deepseek-r1:8b on my server's Intel A750 GPU with 8 GB of VRAM. Not exactly top of the line, but not bad.
I knew it would be slow and not as good as ChatGPT or whatever, which I guess I can live with. I did ask it to write some example Rust code today, which I hadn't even thought to try, and it worked.
But I also asked it to describe the characters in a popular TV show, and it got a ton of details wrong.
8B is the highest parameter count I can run on my card. How do you propose someone in my situation run an LLM locally? Can you suggest some better models?
Honestly, you pretty much don't. LLMs are insanely expensive to run, and most of the model improvements come from simply growing the model. It's not realistic to run LLMs locally and compete with the hosted ones; it pretty much requires economies of scale. Even if you invest in a 5090, you're still going to be behind the purpose-made GPUs with 80 GB of VRAM.
Maybe it could work for some use cases, but I'd rather just not use AI.
You are playing with ancient stuff that wasn’t even good at release. Try these:
A 4B model performing like a 30B model: https://huggingface.co/Nanbeige/Nanbeige4.1-3B
Google's open-source version of Gemini: https://huggingface.co/google/gemma-3-4b-it
Any suggestions on how to get these into GGUF format? I found a GitHub project that claims to convert them, but I'm wondering if there's a more direct way.
It comes down to the amount of VRAM / unified RAM you have. There is no magic to make an 8B model perform like the top-tier subscription LLMs (likely in the 500B+ range; I wouldn't be surprised if it's trillions).
If you can get to 32B / 80B models, that's where the magic starts to happen.
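As a back-of-envelope sketch: weight memory is roughly parameter count times bytes per weight, plus some fixed runtime overhead. The 1.5 GB overhead figure below (KV cache, runtime buffers) is a rough assumption, not a measured value:

```python
def vram_needed_gb(params_billion: float, bits_per_weight: float,
                   overhead_gb: float = 1.5) -> float:
    """Rough VRAM estimate: weight memory plus a fixed runtime allowance.

    overhead_gb (KV cache, runtime buffers) is a ballpark assumption.
    """
    # GB of weights = billions of params * bytes per param
    weight_gb = params_billion * bits_per_weight / 8
    return weight_gb + overhead_gb

# An 8B model vs. a 32B model, both at 4-bit quantization:
print(vram_needed_gb(8, 4))    # 5.5 -> squeezes onto an 8 GB card
print(vram_needed_gb(32, 4))   # 17.5 -> needs a much bigger card
```

By this estimate, 8B at 4-bit is about the ceiling for an 8 GB card, and the 32B-and-up range needs roughly 16-24 GB of VRAM or more.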
Going local is taxing on your hardware, which is extremely expensive to replace. Hell, it could soon become almost impossible to replace. I genuinely don't recommend it.
Even if you HAVE to use LLMs for some reason, there are free alternatives right now that let Silicon Valley bleed money, and they're quickly running out of it.
Cancelling any paid subscription probably hurts them more than anything else.
It's not really taxing on your hardware unless you load and unload huge models all day, or your cooling is insufficient.
If LLMs are tied to your productivity, going local is about owning and controlling the means of production.
You aren't supposed to run it on the machine you work on anyway; set up a server and send requests.
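For example, if the server runs Ollama, it exposes an OpenAI-compatible HTTP API (by default on port 11434), and any machine on the network can send chat requests to it. A minimal Python sketch; the host, port, and model name are assumptions you'd adjust for your own setup:

```python
import json
import urllib.request

# Assumed endpoint for an Ollama server's OpenAI-compatible API;
# replace localhost with your server's address.
OLLAMA_URL = "http://localhost:11434/v1/chat/completions"

def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat payload that the server accepts."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # one complete response, not a token stream
    }

def send_chat_request(payload: dict) -> str:
    """POST the payload and return the assistant's reply text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

payload = build_chat_request("deepseek-r1:8b", "Write a short example Rust function.")
# send_chat_request(payload) performs the actual network call.
```

Since the API is OpenAI-compatible, any existing OpenAI client library should also work by pointing its base URL at the server.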
Self-host something as fast? For how much?
Corporate would still use it 😒
Any reference on Trump's donors to back up the claim that Gepeto is the biggest one? I'd like to see the top 10 or 100 list...
You can subscribe to chatGPT?
Yes. I think it’s like $20 a month.
--
Edit: LMAO, so I was fuck-off wrong. It's 10, 30, and 280 per month, at least in my currency (Swedish crowns).
Don't use the stochastic parrot, and definitely don't fucking shell out 280 a month for it. Holy fuck.
Quit? Only a fool would waste their time on it.
Great job