this post was submitted on 11 Feb 2025
523 points (98.7% liked)
Technology
What temperature and sampling settings? Which models?
I've noticed that the AI giants seem to be encouraging “AI ignorance,” as they just want you to use their stupid subscription app without questioning it, instead of understanding how the tools work under the hood. They also default to bad, cheap models.
I find my local thinking models (FuseAI, Arcee, or Deepseek 32B 5bpw at the moment) are quite good at summarization at a low temperature, which is not what these UIs default to, and I get to use better sampling algorithms than any of the corporate APIs. Same with “affordable” flagship API models (like base Deepseek, not R1). But small Gemini/OpenAI API models are crap, especially with default sampling, and Gemini 2.0 in particular seems to have regressed.
My point is that LLMs as locally hosted tools you understand the mechanics/limitations of are neat, but how corporations present them as magic cloud oracles is like everything wrong with tech enshittification and crypto-bro type hype in one package.
I have been pretty impressed by Gemini 2.0 Flash.
It's slightly worse than the very best on the benchmarks I have seen, but it is pretty much instant and incredibly cheap. Maybe a loss leader?
Anyways, which model of the commercial ones do you consider to be good?
Benchmarks are so gamed, even Chatbot Arena is kinda iffy. TBH you have to test them with your prompts yourself.
Honestly I am getting incredible/creative responses from Deepseek R1; the hype is real, though it's frequently overloaded. Tencent's API is a bit underrated. If Llama 3.3 70B is smart enough for you, the Cerebras API is super fast.
Qwen Max is... not bad? The reasoning models kinda spoiled me, but I think they have more reasoning releases coming.
MiniMax is ok for long context, but I still tend to lean on Gemini for this.
I dunno about Claude these days, as it's just so expensive. I haven't touched OpenAI in a long time.
Oh, and sometimes "weird" finetunes you can find on OpenRouter or whatever will serve niches much better than "big" API models.
EDIT:
Locally, I used to hop around, but now I pretty much always run a Qwen 32B finetune. Either coder, Arcee Distill, FuseAI, R1, EVA-Gutenberg, or Openbuddy, usually.
What are the local use cases? I'm running on a 3060ti but output is always inferior to the free tier of the various providers.
Can I justify an upgrade to a 4090 (or more)?
So there aren't any trustworthy benchmarks I can currently use to evaluate them? That, in combination with my personal anecdotes, is how I have been evaluating them.
I was pretty impressed with Deepseek R1. I used their app, but not for anything sensitive.
I don't like that OpenAI defaults to a model I can't pick. I have to select it each time; even when I use a special URL, it will change after the first request.
I am having a hard time deciding which models to use besides a random mix between o3-mini-high, o1, Sonnet 3.5 and Gemini 2 Flash
Heh, only obscure ones that they can't game, and only if they fit your use case. One example is the ones in EQ bench: https://eqbench.com/
…And again, the best mix of models depends on your use case.
I can suggest using something like Open Web UI with APIs instead of native apps. It gives you a lot more control, more powerful tooling to work with, and the ability to easily select and switch between models.
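One advantage of going through an API instead of a native app is that the model is pinned per request rather than chosen for you. A minimal sketch of the payload most OpenAI-compatible endpoints (which frontends like Open Web UI talk to) accept — the model name and prompt here are placeholders, not recommendations:

```python
import json

def build_chat_request(model: str, prompt: str, temperature: float = 0.3) -> dict:
    """Build an OpenAI-compatible /v1/chat/completions payload.

    The model is pinned explicitly in every request, so the provider's
    default (or a silent switch after the first message) never applies.
    """
    return {
        "model": model,
        "temperature": temperature,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_chat_request("deepseek-chat", "Summarize this article: ...", temperature=0.2)
print(json.dumps(payload, indent=2))
```

You'd POST this to whatever endpoint your provider or local server exposes; the point is just that model and sampling settings travel with the request.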
They were actually really vague about the details. The paper itself says they used GPT-4o for ChatGPT, but apparently they didn't even note what versions of the other models were used.
I've found Gemini overwhelmingly terrible at pretty much everything. It responds more like a 7B model running on a home PC, or a model from two years ago, than a mid-tier commercial model, in how it completely ignores what you ask it and just latches on to keywords. It's almost like they've played with their tokenisation, or trained it exclusively for providing tech support where it links you to an irrelevant article or something.
Gemini 1.5 used to be the best long context model around, by far.
Gemini Flash Thinking from earlier this year was very good for its speed/price, but it regressed a ton.
Gemini 1.5 Pro is literally better than the new 2.0 Pro in some of my tests, especially long-context ones. I dunno what happened there, but yes, they probably overtuned it or something.
Bing/ChatGPT is just as bad. It loves to tell you it's doing something and then just ignores you completely.
I don’t think giving the temperature knob to end users is the answer.
Turning it down doesn't simply give you maximum correctness and minimum creativity in an intuitive way.
Sure, turning it up from the balanced middle value will make the output more “creative” and unexpected, and this is useful for idea generation, etc. But a knob that goes from “good” to “sort of off the rails, but in a good way” isn't a great user experience for most people.
Most people understand this stuff as intended to be intelligent, correct, etc. Or at least they understand that's the goal. Once you give them a knob to adjust the “intelligence level,” you'll have more pushback on these things not meeting their goals. “I clearly had it in factual/correct/intelligent mode, not creativity mode. I don't understand why it left out these facts and invented a back story to this small thing mentioned…”
Not everyone is an engineer. Temp is an obtuse thing.
But you do have a point about presenting these as cloud genies that will do spectacular things for you. This is not a great way to be executing this as a product.
I loathe how these things are advertised by Apple, Google and Microsoft.
Temperature isn't even “creativity,” per se; it's more a band-aid to patch looping and dryness in long responses.
Lower temperature is much better with modern sampling algorithms, e.g. MinP, DRY, and maybe dynamic temperature like Mirostat. Ideally, structured output, too. Unfortunately, corporate APIs usually don't offer this.
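For reference, MinP is simple to state: keep only tokens whose probability is at least some fraction of the top token's probability, then renormalize. Because the cutoff adapts to the model's confidence, it tolerates lower temperatures without the output collapsing. A sketch (the example probabilities and `min_p` value are illustrative):

```python
def min_p_filter(probs, min_p=0.1):
    """MinP sampling filter: keep tokens whose probability is at least
    min_p * (probability of the most likely token), zero out the rest,
    and renormalize the survivors."""
    cutoff = min_p * max(probs)
    kept = [p if p >= cutoff else 0.0 for p in probs]
    total = sum(kept)
    return [p / total for p in kept]

probs = [0.5, 0.3, 0.15, 0.04, 0.01]
filtered = min_p_filter(probs, min_p=0.1)  # cutoff = 0.05, so the tail is dropped
```

Contrast with top-p, where the cutoff is a fixed cumulative mass regardless of whether the model is confident or uncertain.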
It can be mitigated with finetuning against looping/repetition/slop, but most models are the opposite, massively overtuning on their own output which "inbreeds" the model.
And yes, domain-specific queries are best. Basically, the user needs separate prompt boxes for coding, summaries, creative suggestions and such, each with their own tuned settings (and ideally tuned models). You are right, this is a much better idea than offering a temperature knob to the user, but... most UIs don't even do this for some reason?
What I am getting at is that this is not a problem companies seem interested in solving. They want to treat the users as idiots without the attention span to even categorize their question.
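The per-category setup described above amounts to little more than a lookup table of presets; a sketch, where the category names and numeric values are purely hypothetical, not anyone's published recommendations:

```python
# Hypothetical per-task sampling presets. The values are illustrative
# placeholders, not tuning advice from any vendor or model card.
PRESETS = {
    "summary":  {"temperature": 0.2, "min_p": 0.1},
    "coding":   {"temperature": 0.1, "min_p": 0.1},
    "creative": {"temperature": 0.9, "min_p": 0.05},
}

def settings_for(task: str) -> dict:
    """Return sampling settings for a user-chosen task category,
    falling back to a conservative default for unknown tasks."""
    return PRESETS.get(task, {"temperature": 0.3, "min_p": 0.1})
```

The UI work is just exposing the category choice (separate prompt boxes, a dropdown, whatever) instead of a raw temperature slider.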
This is really a non-issue, as the LLM itself should have no problem setting a reasonable value itself. User wants a summary? Obviously maximum factual. They want gaming ideas? Etc.
For local LLMs, this is an issue because it breaks your prompt cache and slows things down, without a specific tiny model to "categorize" text... which few have really worked on.
I don't think the corporate APIs or UIs even do this. You are not wrong, but it's just not done for some reason.
It could be that the trainers don't realize it's an issue. For instance, “0.5-0.7” is the recommended range for Deepseek R1, but I find much lower or slightly higher is far better, depending on the category and other sampling parameters.
It's rare that people argue for LLMs like that here; usually it's the same kind of "uga suga, AI bad, why didn't it already solve world hunger."
Your comment would be acceptable if AI was not advertised as solving all our problems, like world hunger.
So the ads are the problem? Do you have a link to such an ad?
Not ads, whole governments talking about it and funding that crap like Altman/Musk in the USA or Macron in Europe.
What a nuanced representation of the position, I just feel trustworthiness oozes out of the screen.
In case you're using a random-word-generation machine to summarise this comment for you: it was sarcasm, and I meant the opposite.
So many arguments... Wow!
Ask a forest-burning machine to read the surrounding threads for you; then you will find the arguments you're looking for. You have at least an 80% chance it will produce something coherent, and an unknown chance of there being something correct, but hey, reading is hard, amirite?
"If you try hard you might find arguments for my side"
What kind of meta-argument is that supposed to be?
If you read what people write, you will understand what they're trying to tell you. Shocking concept, I know. It's much easier to imagine someone in your head, paint him as a soyjack and yourself as a chadjack, and epicly win an argument.
Wrong thread?
Lemmy is understandably sympathetic to self-hosted AI, but I get chewed out or even banned literally anywhere else.
In one fandom (the Avatar fandom), there used to be enthusiasm for a "community enhancement" of the original show since the official DVD/Blu-ray looks awful. Years later in a new thread, I don't even mention the word "AI," just the idea of restoration, and I got bombed and threadlocked for the mere tangential implication.