this post was submitted on 04 Jul 2023

LocalLLaMA


Community to discuss about LLaMA, the large language model created by Meta AI.

This is intended to be a replacement for r/LocalLLaMA on Reddit.


While I'm always hoping the newest model will be The One, in the meantime I'm sticking with Airoboros 33B gpt4 1.4 as the most powerful model I can currently run on my 3090.
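For anyone curious, here's a minimal sketch of how a 4-bit quantized 33B can be loaded on a 24 GB card. This uses llama-cpp-python (not necessarily my exact setup), and the model filename is just a placeholder for whichever quant you grab:

```python
# Sketch: load a 4-bit GGML quant of a 33B model and offload all layers to the GPU.
# The model_path below is a placeholder, not a specific file from this thread.
from llama_cpp import Llama

llm = Llama(
    model_path="./airoboros-33b-gpt4-1.4.ggmlv3.q4_K_M.bin",  # placeholder path
    n_ctx=2048,       # LLaMA-1 context window
    n_gpu_layers=60,  # LLaMA-33B has 60 transformer layers; offload them all
)

out = llm(
    "Summarize the following text in three sentences:\n\n<your text here>",
    max_tokens=256,
)
print(out["choices"][0]["text"])
```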

It's not perfect by a long shot, but after extensive use I've learned what I can trust it with. It's good enough to summarize long text, rephrase sentences, and give me a decent starting point when I'm curious about a topic. Its reasoning skills are a bit below GPT-3.5, I'd say.

I also occasionally switch back to Guanaco 33B because it generates a different flavor of text, but I find it to be factually weaker than other similar models.

What are your favorites?

top 4 comments
[–] [email protected] 3 points 1 year ago

For me I have to run down at a measly 13B, so I've mainly been using Wizard and Airoboros. I'm hoping the new Orca (or Dolphin) models will work well, especially down at 7B. I want to build a highly specialized home AI, and it would be so handy if I could QLoRA a really powerful but small model.
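Roughly what a QLoRA setup looks like with transformers + peft + bitsandbytes (just a sketch; the 7B base model name below is a placeholder, not a specific recommendation):

```python
# Sketch of QLoRA: 4-bit NF4 base weights plus small trainable LoRA adapters.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base = "openlm-research/open_llama_7b"  # placeholder 7B base model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # quantize base weights to 4-bit
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bf16
)

tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(
    base,
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

lora = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # LLaMA attention projections
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # only the LoRA adapters are trained
```

From there it's a normal fine-tuning loop (e.g. the transformers Trainer) on your own dataset, which is what makes it attractive for a specialized home assistant.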

[–] [email protected] 2 points 1 year ago

Being stuck on ROCm, and having an allergy to the reams of garbage Python throws out when it doesn't work... I'm still on the launchpad.

[–] [email protected] 1 points 1 year ago

Yeah, ROCm is very limiting :/

[–] [email protected] 2 points 1 year ago

The same (65B versions), plus Wizard Vicuna Uncensored 30B.