[-] [email protected] 1 points 3 hours ago

to get the ball rolling usually yeah

I've always been a little skeptical of the up/downvote mechanism on any social media platform. It makes it so easy to weaponize the human bias towards group conformity, by doing exactly what you described.

I won't try to argue that voting has no benefits. It can help reduce the reach of truly bad-faith posts, for example, and everyone loves the little dopamine hit of seeing one's own posts upvoted - me included. It's just that it also has real drawbacks, and I'm not sure the good outweighs the bad.

[-] [email protected] 1 points 4 hours ago

At this point, I hope Nv etc realize that even if selling AI cards to data centers gets them 10 times the profit per unit, it really is best for them in the long run to have a healthy and vibrant gamer and enthusiast market too. It's never good to have all your eggs in one basket.

[-] [email protected] 2 points 17 hours ago

Or be a 90s computer text adventure

Zork on steroids!

[-] [email protected] 2 points 17 hours ago

Thanks for your comments and thoughts! I appreciate hearing from more experienced people.

I feel like a little bit of prompt engineering would go a long way.

Yah, probably so. I tried to write a system prompt to steer the model toward what I wanted, but it's going to take a lot more refinement and experimenting to dial it in. I like your idea of asking it to be unforgiving about rules. I hadn't put anything like that in.

That's a great idea about putting a D&D manual, or at least the important parts, into a RAG system. I haven't tried RAG yet, but it's on my list of things to learn. I know what it is; I just haven't used it yet.
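Just to get the shape of it in my head, here's roughly what I imagine a minimal version looking like. This is only a sketch - the sentence-transformers model name, the chunk size, and the dnd_rules.txt file are all placeholder assumptions, not anything I've actually run:

```python
# Minimal RAG sketch: embed rule-book chunks once, then pull the most
# relevant chunks into the prompt at query time.
# Assumes sentence-transformers is installed; the model name is just an example.
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")

def chunk_text(text, size=800):
    """Split the manual into fixed-size character chunks (naive but workable)."""
    return [text[i:i + size] for i in range(0, len(text), size)]

with open("dnd_rules.txt") as f:          # hypothetical plain-text rules dump
    chunks = chunk_text(f.read())
chunk_vecs = embedder.encode(chunks, normalize_embeddings=True)

def retrieve(query, k=3):
    """Return the k chunks most similar to the query (cosine similarity)."""
    q = embedder.encode([query], normalize_embeddings=True)[0]
    scores = chunk_vecs @ q
    return [chunks[i] for i in np.argsort(scores)[::-1][:k]]

# The retrieved text would then be prepended to the system prompt or the
# current turn before it goes to the model.
relevant_rules = "\n---\n".join(retrieve("How does the grappling rule work?"))
```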

I've for sure seen that output quality starts to decline around 16K of context, even on models that claim to support 128K. I also feel like the system prompt is more effective when the context is still small, say 4K tokens or so. As the context grows, the model becomes less and less inclined to follow the system prompt. I've been guessing that's because any given piece of the context becomes more dilute as it grows, but I don't really know.

For those reasons, I'm trying to use summarization to keep the context size under control, but I haven't found a good approach yet. SillyTavern has an automatic summary-injection system, but either I'm misunderstanding it or I don't like how it works, and I end up summarizing manually.
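For what it's worth, the manual approach I've been fumbling toward looks roughly like this sketch. It assumes llama-server is running with its OpenAI-compatible endpoint on localhost:8080, and the turn threshold and prompt wording are arbitrary:

```python
# Rolling-summary sketch: once the transcript gets long, ask the model to
# compress the oldest turns into a short summary and keep only recent turns
# verbatim. Assumes llama-server's OpenAI-compatible API on localhost:8080.
import requests

API_URL = "http://localhost:8080/v1/chat/completions"

def summarize(old_turns):
    """Ask the model for a compact summary of the oldest part of the story."""
    prompt = ("Summarize the following RPG transcript in under 300 words, "
              "keeping plot points, injuries, inventory changes, and NPC names:\n\n"
              + "\n".join(old_turns))
    r = requests.post(API_URL, json={
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.3,
    })
    return r.json()["choices"][0]["message"]["content"]

def compact_history(turns, keep_recent=10):
    """Replace everything but the last few turns with a single summary entry."""
    if len(turns) <= keep_recent:
        return turns
    summary = summarize(turns[:-keep_recent])
    return [f"[Story so far: {summary}]"] + turns[-keep_recent:]
```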

I tried a few CoT models, but not since I moved to ST as a front end. I was using them with the standard llama-server web interface, which is a rather simple affair. My problem was that the thinking output spammed up the context, leaving much less context space for my own use - each think block was something like 500-800 tokens. It looks like ST may be able to keep only the most recent think block in the context, so I need to do more experimenting. The other problem was that the thinking could simply take a lot of time.
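If ST can't do that, my fallback plan is to strip the think blocks out of older turns myself before they go back into the context. A rough sketch, assuming the model wraps its reasoning in <think> tags (which not every CoT model does):

```python
# Strip reasoning blocks from earlier turns so they don't eat context.
# Assumes the model emits its chain of thought inside <think>...</think> tags.
import re

THINK_RE = re.compile(r"<think>.*?</think>\s*", flags=re.DOTALL)

def strip_think(message: str) -> str:
    """Remove any <think>...</think> sections, keeping only the visible reply."""
    return THINK_RE.sub("", message)

# Example: strip reasoning from earlier assistant turns before resending them.
history = [
    "<think>The player is low on HP, raise the stakes.</think>The cave mouth looms ahead...",
    "<think>Check whether they still have the rope.</think>You tie off the rope and descend.",
]
history = [strip_think(turn) for turn in history]
```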

[-] [email protected] 1 points 2 days ago* (last edited 2 days ago)

What was your setup for this experiment?

I'm using llama.cpp + SillyTavern. I'm very much in learning mode with ST, though, so I'm sure there are more effective ways to use it than I know at the moment. It seems like koboldcpp + ST ought to be similar to what I'm doing.

[-] [email protected] 2 points 2 days ago

but it’s no fun when the LLM simply says “yeah, sure whatever.”

I hear ya. LLMs tend to heavily tilt toward what the user wants, which is not ideal for an RPG.

Have you tried any of the specialized RPG models? The one I'm using now has, at least twice so far, put me into a situation where I felt my party (2 chars, me and the AI) was going to die unless we ran away. We had just finished a very difficult fight, used everything at our disposal, and sustained several serious injuries in the process. Then an even more powerful foe appeared, and it genuinely felt like that would be the end unless we ran. Would it really have killed us? I can't say, but I did get a real sense that it might. It might help that I had put this in the system prompt:

The story should be genuinely dangerous and frightening, but survivable if we use our wits.

I have the feeling the generalist models are much more tilted in the "yeah, sure, whatever" direction. I tried at least one RPG-focused model (Dan's dangerous winds, or something like that) which was downright brutal, and would kill me off right away with no opportunity to do anything about it. That wasn't fun for the opposite reason. But like you say, it's also not fun to have no risk and no boundaries to test one's mettle. The sweet spot can be elusive.

I'm thinking that a non-LLM rules system around an LLM for descriptive purposes could really help here too, to enforce a kind of rigor on the experience.
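Even something as small as rolling the dice outside the model and only handing it the outcome might add that rigor. A toy sketch of what I mean (the DC, modifier, and prompt wording are all made up):

```python
# Tiny rules layer: the program rolls the dice and decides pass/fail,
# then the LLM only narrates the result it is given.
# The difficulty class and wording here are placeholders.
import random

def skill_check(modifier: int, dc: int) -> tuple[int, bool]:
    """Classic d20 check: roll + modifier against a difficulty class."""
    roll = random.randint(1, 20)
    return roll, roll + modifier >= dc

roll, success = skill_check(modifier=3, dc=15)
outcome = "succeeds" if success else "fails"
narration_prompt = (
    f"The player attempts to pick the lock and {outcome} "
    f"(rolled {roll} + 3 vs DC 15). Narrate the result in two sentences, "
    "without changing the outcome."
)
# narration_prompt would then be sent to the model as the next user turn.
```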

[-] [email protected] 3 points 2 days ago* (last edited 2 days ago)

To add to my lame noob answer, I found this, which has a better rundown of ollama vs llama.cpp. I don't know if it's considered bad form to link to ##ddit on lemmy, so ~~I'll just put the title here and you can search for it there if you want~~ link added per comment from mutual_ayed below. There are a couple of informative, upvoted posts in it. The title is "There is a big difference between use LM-Studio, Ollama, LLama.cpp?"

21
submitted 2 days ago by [email protected] to c/[email protected]

Hey everybody. I'm just getting into LLMs. Total noob. I started using llama-server's web interface, but I'm experimenting with a frontend called SillyTavern. It looks much more powerful, but there's still a lot I don't understand about it, and some design choices I found confusing.

I'm trying the Harbinger-24B model to act as a D&D-style DM, and to run one party character while I control another. I tried several general purpose models too, but I felt the Harbinger purpose-built adventure model was noticeably superior for this.

I'll write a little about my experience with it, and then some thoughts about LLMs and D&D. (Or D&D-ish. I'm not fussy about the exact thing, I just want that flavour of experience).

General Experience

I've run two scenarios. My first try was a 4/10 for my personal satisfaction, and the second was an 8/10. I made no changes to the prompts or anything in between, so the difference is all down to the story the model settled into. I'm trying not to give the model any story details, so it makes everything up and I don't know about it in advance. The first story the model invented was so-so. The second was surprisingly fun. It had historical intrigue, a tie-in to a dark family secret from the ancestors of the AI-controlled char, and the dungeon-diving mattered to the overarching story. Solid marks.

My suggestion for others trying this is, if you don't get a story you like out of the model, try a few more times. You might land something much better.

The Good

Harbinger provided a nice mixture of combat and non-combat. I enjoy combat, but I also like solving mysteries and advancing the plot by talking to NPCs or finding a book in the town library, as long as it feels meaningful.

It writes fairly nice descriptions of areas you encounter, and thoughts for the AI-run character.

It seems to know D&D spells and abilities. It lets you use them in creative but very reasonable ways that you could do in a pen-and-paper game but can't do in a standard CRPG engine. It might let you get away with too much, though, so you have to keep yourself honest.

The Bad

You may have to try multiple times until the RNG gives you a nice story. You could also inject a story into the base prompt, but I want the LLM to act as a DM for me, where I'm going in completely blind. Also, in my first 4/10 game, the LLM forced a really bad case of main character syndrome on me. The whole thing was about me, me, me, I'm special! I found that off-putting, but the 2nd 8/10 attempt wasn't like that at all.

As an LLM, it's loosey-goosey about things like inventory, spells, rules, and character progression.

I had a difficult time giving the model OOC (out-of-character) instructions; OOC remarks tended to be "heard" by the other characters.

Thoughts about fantasy-adventure RP and LLMs

I feel like the LLM is very good at providing descriptions, situations, and locations. It's also very good at understanding how you're trying to be creative with abilities and items, and it lets you solve problems in creative ways. It's more satisfying than a normal CRPG engine in this way.

As an LLM, though, it lets you steer things in ways you shouldn't be able to in an RPG with fixed rules - it won't do things like disallow a spell you don't know, or remember how many feet of rope you're carrying. I enjoy the character leveling and crunchy stats part of pen-and-paper games and CRPGs, and I haven't found a good way to get the LLM to handle that without tracking everything manually and whacking it into the context.

That leads me to think that using an LLM for creativity inside a non-LLM framework to enforce rules, stats, spells, inventory, and abilities might be phenomenal. Maybe AI-dungeon does that? Never tried, and anyway I want local. A hybrid system like that might be scriptable somehow, but I'm too much of a noob to know.
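To make the idea concrete, here's a bare-bones sketch of what I'm picturing. None of this is anything Harbinger or SillyTavern actually does - it's just hypothetical state-tracking wrapped around the model, with a snapshot of the hard state injected into the context each turn:

```python
# Bare-bones hybrid sketch: hard state lives in ordinary data structures,
# and a rendered snapshot is injected into the prompt every turn so the
# LLM narrates around facts it cannot quietly change.
from dataclasses import dataclass, field

@dataclass
class Character:
    name: str
    hp: int
    spell_slots: dict[int, int] = field(default_factory=dict)
    inventory: dict[str, int] = field(default_factory=dict)

    def cast(self, level: int) -> bool:
        """Only allow a spell if a slot of that level remains."""
        if self.spell_slots.get(level, 0) <= 0:
            return False
        self.spell_slots[level] -= 1
        return True

    def sheet(self) -> str:
        """Render the character state for injection into the context."""
        items = ", ".join(f"{n} x{q}" for n, q in self.inventory.items())
        slots = ", ".join(f"L{lvl}:{n}" for lvl, n in self.spell_slots.items())
        return f"{self.name}: {self.hp} HP | slots {slots} | {items}"

pc = Character("Theron", hp=21, spell_slots={1: 2, 2: 1},
               inventory={"rope (50 ft)": 1, "healing potion": 2})

if pc.cast(level=2):
    system_note = f"[STATE] {pc.sheet()}\n[EVENT] Theron casts Scorching Ray."
else:
    system_note = f"[STATE] {pc.sheet()}\n[EVENT] Theron has no level-2 slots left."
# system_note would be appended to the context before the model's next turn.
```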

[-] [email protected] 4 points 2 days ago

What’s the advantage over Ollama?

I'm very new to this so someone more knowledgeable should probably answer this for real.

My impression was that ollama somehow uses the llama.cpp source internally, but wraps it up to provide features like auto-downloading of models. I didn't care about that, but I liked the very tiny dependency footprint of llama.cpp. I haven't tried ollama for network inference.

There are other backends too which support network inference, and some posts allege they are better for that than llama.cpp is. vllm and ... exllama or something like that? I haven't looked into either of them. I'm running on inertia so far with llama.cpp, since it was so easy to get going and I'm kinda lazy.

[-] [email protected] 2 points 2 days ago

I like this project. Very nice!

I haven't tried RAG yet, nor the fancy vector space whatsit which looks like it requires a specialized model(?) to create. I've been wanting to do something similar in spirit to your project here, but for an online RPG, so I dig this.

18
submitted 2 days ago by [email protected] to c/[email protected]

Hey everybody, brand new to running local LLMs, so I'm learning as I go. Also brand new to lemmy.

I have a 16 GB VRAM card, and I was running some models that would overflow 16GB by using the CPU+RAM to run some of the layers. It worked, but was very slow, even for only a few layers.

Well, I noticed llama.cpp has an rpc-server feature, so I tried it. It was very easy to use. I'm on Linux, but it's probably similar on Windows or Mac. I had an older gaming rig sitting around with a GTX 1080 in it. It's much slower than my 4080, but using it to run a few layers is still FAR faster than using the CPU. Night and day, almost.

The main drawbacks I've experienced so far are,

  • By default it tries to split the model evenly between machines. That's fine if you have the same card in all of them, but I wanted to put as much of the model as possible on the fastest card. You can do that using the --tensor-split parameter, but it requires some experimenting to get it right.

  • It loads the rpc machine's part of the model across the network every time you start the server, which can be slow on a 1-gigabit network. I didn't see any way to tell rpc-server to load the model from a local copy. It makes my startups go from 1-2 seconds up to something like 30-50 seconds.

  • Q8 quantized KV cache works, but Q4 does not.

Lots of people may not be able to run 2 or 3 GPUs in one PC, but might have another PC they can add over the network. Worth a try, I'd say, if you want more VRAM space.
