submitted 6 days ago by [email protected] to c/[email protected]
[-] [email protected] 45 points 6 days ago* (last edited 6 days ago)

Funny thing is, correct JSON is easy to "force" with grammar-based sampling (i.e. the model literally can't output invalid JSON) plus completion prompting (i.e. start the reply with the correct opening and let the model fill in what's left, a feature OpenAI has since deprecated), but LLM UIs/corporate APIs are kinda shit, so no one does that...
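A toy sketch of both tricks, just to illustrate the idea (this is not any real API: the prefix check brute-forces a few closing sequences, whereas a real constrained decoder like llama.cpp's GBNF sampler walks an actual grammar state machine, and the prefill string is the "completion prompting" part):

```python
import json

def is_valid_json_prefix(s: str) -> bool:
    # Toy check: a string is an acceptable prefix if appending one of a
    # few closing sequences yields valid JSON. A real implementation
    # walks a grammar instead of guessing closers like this.
    closers = ["", '"', '"}', "}", "]", '"]', '"}]']
    for c in closers:
        try:
            json.loads(s + c)
            return True
        except json.JSONDecodeError:
            continue
    return False

def constrained_sample(step_candidates, prefix=""):
    # At each step, mask out candidate "tokens" that could never extend
    # to valid JSON, then pick greedily from whatever survives.
    out = prefix
    for candidates in step_candidates:
        allowed = [t for t in candidates if is_valid_json_prefix(out + t)]
        if not allowed:
            break
        out += allowed[0]
    return out

# Completion prompting: we start the answer for the model...
prefill = '{"answer": '
# ...and the grammar mask rejects the chatty non-JSON candidates.
steps = [
    ['Sure! Here is the answer: ', '"42'],
    ['"}', ' Hope that helps!'],
]
result = constrained_sample(steps, prefix=prefill)
print(result)  # {"answer": "42"}
```

The prose token "Sure! Here is the answer:" gets masked at step one because no continuation of it can ever parse, which is the whole point: invalid output is unrepresentable, no retries needed.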

A conspiratorial part of me thinks that's on purpose. It encourages burning (read: buying) more tokens to get the right answer, encourages using big models (where smaller, dumber, (gasp) prompt-cached open-weight ones could get the job done), and keeps the users dumb. And it fits the Altman narrative of "we're almost at AGI, I just need another trillion to scale up with no other improvements!"

[-] [email protected] 19 points 5 days ago

There's nothing conspiratorial about it. Goosing queries by ruining the reply is the bread and butter of Prabhakar Raghavan's playbook. Other companies saw that.

[-] [email protected] 2 points 6 days ago* (last edited 6 days ago)

Edit: wrong comment

this post was submitted on 02 Jun 2025
671 points (98.8% liked)