Alright, thanks for the info & additional pointers.
Well, the thing with that "enabled EAC on Linux to see where it gets us" stance is that it's non-binding and non-committal. And it's deliberately worded that way so that support cannot be demanded by Linux users, unlike Windows users, who are explicitly listed among the systems the game supports.
Legally, we have no grounds to demand the same support as Windows users.
That's the problem with unofficial support: you're basically in a never-ending beta-testing phase. There's no easy solution here, I'm afraid.
Ah, thank you for the trove of information. What would be the best general-knowledge model, in your opinion?
This. It's not easy or trivial, but as a long-term strategy they should already be planning to invest effort into consolidating around something like Godot or another FOSS engine. They should play it the way you placate an abuser you can't escape yet, while quietly planning their demise for when the time comes.
Thanks a lot for your input. It's a lot to stomach, but it's very descriptive, which is what I need.
I run KoboldCpp in a container.
What I ended up doing, which was semi-working, is:
--model "/app/models/mythomax-l2-13b.ggmlv3.q5_0.bin" --port 80 --stream --unbantokens --threads 8 --contextsize 4096 --useclblas 0 0
In the KoboldCpp UI, I set the max response tokens to 512, switched to the instruction/response mode, and kept prompting with "continue the writing", using the MythoMax model.
But I'll be re-checking your way of doing it, because the SuperCOT model seemed less streamlined but higher quality in its story writing.
MythoMax looks nice, but I'm using it in story mode and it seems to have trouble progressing once it has reached the max tokens; it appears stuck:
Generating (1 / 512 tokens)
(EOS token triggered!)
Time Taken - Processing:4.8s (9ms/T), Generation:0.0s (1ms/T), Total:4.8s (0.2T/s)
Output:
And then it stops when I try to prompt it to continue the story.
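One thing I might try, to tell whether it's the model firing EOS immediately or the UI getting stuck, is to call the KoboldAI-compatible API that KoboldCpp exposes directly, with the story so far as the prompt. The field names below are just how I understand that API, so treat the exact JSON as a sketch, and the port depends on how the container maps --port 80:

curl -s http://localhost:80/api/v1/generate \
  -H "Content-Type: application/json" \
  -d '{"prompt": "<story so far>", "max_length": 512, "max_context_length": 4096}'

If that also comes back empty straight away, it's the model emitting EOS immediately rather than the UI misbehaving.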
I'll try that model. However, your option doesn't work for me:
koboldcpp.py: error: argument model_param: not allowed with argument --model
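If I'm reading that argparse error right, the model path ended up being passed twice, once as the positional model_param and once via --model, so dropping one of the two should clear it. Something along these lines, with the same flags as my earlier command (an assumption about what the full invocation looks like):

python koboldcpp.py "/app/models/mythomax-l2-13b.ggmlv3.q5_0.bin" \
  --port 80 --stream --unbantokens --threads 8 --contextsize 4096 --useclblast 0 0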
My bad. I think I confused this with the previous popular Unigine benchmarks.