Lavalamp too hot (thelemmy.club)
submitted 3 weeks ago* (last edited 3 weeks ago) by swiftywizard@discuss.tchncs.de to c/programmer_humor@programming.dev
[-] MotoAsh@piefed.social 6 points 2 weeks ago

You have to pay for tokens on many of the "AI" tools that you do not run on your own computer.

[-] Feathercrown@lemmy.world 8 points 2 weeks ago* (last edited 2 weeks ago)

Hmm, interesting theory. However:

  1. We know this is an issue with language models, it happens all the time with weaker ones - so there is an alternative explanation.

  2. LLMs are running at a loss right now; the company would lose more money than it gains from you - so there is no motive.

[-] MotoAsh@piefed.social -3 points 2 weeks ago

Of course there's a technical reason for it, but they have incentive to try and sell even a shitty product.

[-] Feathercrown@lemmy.world 1 points 2 weeks ago

I don't think this really addresses my second point.

[-] MotoAsh@piefed.social 0 points 2 weeks ago

How does it not? This isn't a fucking debate. How would artificially bloating the number of tokens they sell not help their bottom line?

[-] Feathercrown@lemmy.world 0 points 2 weeks ago

Because they currently lose money for every token sold. They're operating at a loss to generate a userbase so that they can monetize later. They're currently in the pre-enshittification (I still don't like that word) phase where they want to offer a good product at a loss and lure in customers, not phase 2 where they monetize their userbase.

[-] MotoAsh@piefed.social 0 points 2 weeks ago* (last edited 2 weeks ago)

And? How do you not understand that more money is better for them even if they're not in the black yet?

Two things can be true at once.

[-] Feathercrown@lemmy.world 0 points 2 weeks ago* (last edited 2 weeks ago)

Creating additional tokens LOSES them money. For a single token, the cost of generating it exceeds the revenue it brings in.

I genuinely don't understand what would drive someone to be this condescending when they don't even understand the argument I have clearly laid out four times now.

[-] MotoAsh@piefed.social 0 points 2 weeks ago

Do you think they don't want people using their product? Are you really that dense?

[-] Feathercrown@lemmy.world 1 points 2 weeks ago

Are you? Because now we've agreed on every fact to determine my conclusion is correct. Yes they do want people using their product; they want to lure in customers. Wasting tokens generating unhelpful output would both drive customers away with a worse experience, and cost them more money. So there's no reason for them to do that. Like I said in my first post.

[-] piccolo@sh.itjust.works 0 points 2 weeks ago

Don't they charge by input tokens? E.g. your prompt, not the output.

[-] MotoAsh@piefed.social 4 points 2 weeks ago* (last edited 2 weeks ago)

I think many of them do, but there are also many "AI" tools that will automatically add a ton of stuff to try and make it spit out more intelligent responses, or even re-prompt the tool multiple times to try and make sure it's not handing back hallucinations.

It really adds up in their attempt to make fancy autocomplete seem "intelligent".

[-] piccolo@sh.itjust.works 1 points 2 weeks ago

Yes, reasoning models... but I don't think they would charge for that... that would be insane. But AI executives are insane, so who the fuck knows.

[-] MotoAsh@piefed.social 1 points 2 weeks ago* (last edited 2 weeks ago)

Not the models. AI tools that integrate with the models. The "AI" would be akin to the backend of the tool. If you're using Claude as the backend, the tool would be asking Claude more questions, and repeat questions, via the API. As in, more input.
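To make that concrete, here's a rough back-of-the-envelope sketch (all numbers hypothetical, and the `billed_tokens` helper is made up for illustration) of why re-prompting tools blow up the *input* side of the bill: each extra API call resends the system prompt plus the growing conversation history as input tokens.

```python
# Hypothetical per-token prices, purely illustrative.
PRICE_IN = 3.00 / 1_000_000    # $/input token (assumed)
PRICE_OUT = 15.00 / 1_000_000  # $/output token (assumed)

def billed_tokens(system_tokens, user_tokens, output_tokens, extra_rounds):
    """Total (input, output) tokens billed when a tool re-prompts the model
    `extra_rounds` additional times, resending the full history each call."""
    history = system_tokens + user_tokens
    total_in = total_out = 0
    for _ in range(1 + extra_rounds):
        total_in += history         # full context resent as input each call
        total_out += output_tokens  # each round generates a fresh answer
        history += output_tokens    # the answer joins the history
    return total_in, total_out

# A single plain call vs. a tool that bolts on a 2000-token system prompt
# and re-checks the answer twice:
plain = billed_tokens(0, 200, 400, 0)     # (200, 400)
tool = billed_tokens(2000, 200, 400, 2)   # (7800, 1200)
print(plain, tool)
```

Under these made-up numbers the tool-mediated exchange bills roughly 39x the input tokens of the plain call, which is the "it really adds up" effect described above.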

this post was submitted on 25 Jan 2026
483 points (97.4% liked)
