Hi,

Just like the title says:

I'm trying to run:

With:

  • koboldcpp:v1.43 using HIPBLAS on a 7900XTX / Arch Linux

Running:

--stream --unbantokens --threads 8 --usecublas normal
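
Spelled out, that's roughly the following (just a sketch: the model path is a placeholder, and I'm showing the bare koboldcpp.py call rather than my exact Docker command):

# sketch of the invocation; the model path is a placeholder
python koboldcpp.py --model /path/to/model.gguf \
  --stream --unbantokens --threads 8 --usecublas normal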

I get very limited output with lots of repetition.

Illustration

I mostly didn't touch the default settings:

Settings

Does anyone know how I can make things run better?

EDIT: Sorry for multiple posts, Fediverse bugged out.

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago) (1 children)

Looks good to me.

For reference: I think I got the settings in my screenshot from Reddit, but they seem to have updated the post since. The currently recommended settings use a temperature and a few other values that are closer to what I've seen in the defaults. I've tested those (new to me) settings and they also work for me. Maybe I also adapted the settings from here.
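
If it helps to see them spelled out: those sampler settings can also be sent per request through koboldcpp's KoboldAI-compatible API instead of setting them in the UI. The values below are just placeholder examples of that kind of request, not the exact numbers from the Reddit post:

# sketch with placeholder sampler values, sent to koboldcpp's KoboldAI-compatible endpoint
curl -s http://localhost:5001/api/v1/generate \
  -H 'Content-Type: application/json' \
  -d '{
        "prompt": "Once upon a time,",
        "max_length": 200,
        "max_context_length": 4096,
        "temperature": 0.7,
        "top_p": 0.9,
        "top_k": 40,
        "rep_pen": 1.1,
        "rep_pen_range": 1024
      }'

In my experience the rep_pen / rep_pen_range pair matters most for the kind of repetition you're seeing.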

And I linked a 33b MythoMax model in the previous post that's probably not working properly; I've edited that part and crossed it out. But you seem to be using a 13b version anyway, so that's good.

I've tried a few models today. I think another promising model for writing stories is Athena. For reference, I get inspiration from this list. But beware: it's an ERP (erotic role play) ranking, so some of the models on it are probably not safe for work (or for minors). Other benchmarks often test factual knowledge and question answering instead, and in my experience the models that do well there aren't necessarily good at creative tasks. That's more my impression than a proven fact, though, and this ranking isn't very scientific either.

[–] [email protected] 2 points 1 year ago (1 children)

Ah, thank you for the trove of information. What would you say is the best general-knowledge model?

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago) (1 children)

Well, I'm not that up to date anymore. I think MythoMax 13b is pretty solid, also for knowledge. But I can't be bothered to read up on things twice a week anymore, so that news is probably already three weeks old and there's likely a (slightly) better one out there by now. It also gets outperformed by pretty much every one of the big 70b models, but I can't run those on my hardware, so I wouldn't know.

This benchmark ranks them by several scientific tests. If you hide the 70b models, scarlett-33b seems to be a good contender, or the older Platypus models directly below it. But be cautious: sometimes these models look better on paper than they really are.

Also, regarding 'knowledge': I don't know your use case, but just in case you're not aware of this: language models hallucinate and regularly just make things up. Even big, expensive models do this, and the models we play with even more so. Just be aware of it.

And lastly: there is another good community here on Lemmy: [email protected]. You can find a few tutorials and more people there, too. Also have a look at the 'About' section or the stickied posts there; they link more benchmarks and info.

[–] [email protected] 2 points 1 year ago

Alright, thanks for the info & additional pointers.