submitted 2 years ago by [email protected] to c/[email protected]

I'm trying to learn more about LLMs, but I haven't found any explanation for what determines which prompt template format a model requires.

For example, meta-llama's llama-2 requires this format:

<s>[INST] <<SYS>>
{system_prompt}
<</SYS>>

{prompt} [/INST]

(the [INST] and <<SYS>> tags, plus the BOS and EOS tokens)

But if I instead download TheBloke's version of llama-2, the prompt template should instead be:

SYSTEM: ...

USER: {prompt}

ASSISTANT:

I thought this would have been determined by how the original training data was formatted, but afaik TheBloke only converted the llama-2 models from one format to another. Looking at the documentation for the GGML format, I don't see anything related to the prompt template being embedded in the model file.
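For context, here's roughly how I've been assembling the prompt string myself. The tag layout is taken from meta-llama's model card, but the exact whitespace is my assumption, and the helper name is just something I made up:

```python
def build_llama2_prompt(system: str, user: str) -> str:
    """Build a single-turn Llama-2 chat prompt string.

    Note: the BOS token <s> is written literally here for illustration;
    in practice the tokenizer usually adds BOS/EOS for you.
    """
    return f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"

prompt = build_llama2_prompt("You are a helpful assistant.", "Hello!")
print(prompt)
```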

Could anyone who understands this stuff point me in the right direction?

top 4 comments
[-] [email protected] 3 points 2 years ago* (last edited 2 years ago)

You're right. It's solely based on how the training data was formatted.

I'm pretty sure this is an error in TheBloke's description.

(Oobabooga's webui also includes those tags: https://github.com/oobabooga/text-generation-webui/blob/main/characters/instruction-following/Llama-v2.yaml )

[-] [email protected] 1 points 2 years ago

Thanks! I'm going to do some experiments and see if I get different results. I've been using TheBloke's format and it worked mostly well, but perhaps switching to meta-llama's format will eliminate the occasional bugs I've had.

[-] [email protected] 2 points 2 years ago

That's probably the most reasonable thing you can do.

I'm not sure how much of a difference to expect between the 100% correct prompt and something roughly in that direction. I've been tinkering with instruction-tuned models (from the previous/first LLaMA) and sometimes it doesn't seem to matter. I also sometimes used a 'wrong' prompt for days and couldn't tell. Maybe the models are 'intelligent' enough to compensate for that. I'm not sure. I usually try to get it right to get all the performance out of it.

this post was submitted on 26 Jul 2023
to c/[email protected]