submitted 2 weeks ago* (last edited 1 week ago) by SuspciousCarrot78@lemmy.world to c/privacy@lemmy.ml

Hello!

As a handsome local AI enjoyer™ you’ve probably noticed one of the big flaws with LLMs:

It lies. Confidently. ALL THE TIME.

(Technically, it “bullshits”: https://link.springer.com/article/10.1007/s10676-024-09775-5)

I’m autistic and extremely allergic to vibes-based tooling, so … I built a thing. Maybe it’s useful to you too.

The thing: llama-conductor

llama-conductor is a router that sits between your frontend (OWUI / SillyTavern / LibreChat / etc) and your backend (llama.cpp + llama-swap, or any OpenAI-compatible endpoint). Local-first (because fuck big AI), but it should talk to anything OpenAI-compatible if you point it there (note: experimental so YMMV).
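If “router” sounds abstract: conceptually it’s just a proxy that speaks the OpenAI chat-completions API on both sides and gets to apply policy in the middle. A minimal sketch of that shape (the port, endpoint and code below are illustrative only, not llama-conductor’s actual internals):

```python
# Sketch of the "router in the middle" idea - NOT llama-conductor's real code.
# Backend URL and behaviour are made up for illustration.
from fastapi import FastAPI, Request
import httpx

app = FastAPI()
BACKEND = "http://localhost:8080/v1/chat/completions"  # llama.cpp / llama-swap / any OpenAI-compatible server

@app.post("/v1/chat/completions")
async def route(request: Request):
    payload = await request.json()
    # This is where a router gets to enforce policy before the model sees anything:
    # trim context, inject grounding facts, or refuse outright.
    async with httpx.AsyncClient(timeout=None) as client:
        resp = await client.post(BACKEND, json=payload)
    return resp.json()
```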

I tried to make a glass-box that makes the stack behave like a deterministic system, instead of a drunk telling a story about the fish that got away.

TL;DR: “In God we trust. All others must bring data.”

Three examples:

1) KB mechanics that don’t suck (1990s engineering: markdown, JSON, checksums)

You keep “knowledge” as dumb folders on disk. Drop docs (.txt, .md, .pdf) in them. Then:

  • >>attach <kb> — attaches a KB folder
  • >>summ new — generates SUMM_*.md files with SHA-256 provenance baked in (rough sketch below)
  • the original doc then gets moved to a sub-folder, so you always know what’s already been summarized
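The provenance part is deliberately boring 1990s engineering. Roughly this shape (an illustrative sketch, not the actual implementation — file layout and header format here are made up):

```python
# Illustrative sketch of "SHA-256 provenance baked in" - not llama-conductor's
# actual code; the SUMM filename and header format are made up.
import hashlib
from pathlib import Path

def summ_with_provenance(doc: Path, summary_text: str) -> Path:
    # Hash the original source file so every claim can be traced to an exact input.
    digest = hashlib.sha256(doc.read_bytes()).hexdigest()
    out = doc.parent / f"SUMM_{doc.stem}.md"
    out.write_text(
        f"<!-- source: {doc.name} | sha256: {digest} -->\n\n{summary_text}\n",
        encoding="utf-8",
    )
    return out
```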

Now, when you ask something like:

“yo, what did the Commodore C64 retail for in 1982?”

…it answers from the attached KBs only. If the fact isn’t there, it tells you - explicitly - instead of winging it. E.g.:

The provided facts state the Commodore 64 launched at $595 and was reduced to $250, but do not specify a 1982 retail price. The Amiga’s pricing and timeline are also not detailed in the given facts.

Missing information includes the exact 1982 retail price for Commodore’s product line and which specific model(s) were sold then. The answer assumes the C64 is the intended product but cannot confirm this from the facts.

Confidence: medium | Source: Mixed

No vibes. No “well probably…”. Just: here’s what’s in your docs, here’s what’s missing, don't GIGO yourself into stupid.

And when you’re happy with your summaries, you can:

  • >>move to vault — promote those SUMMs into Qdrant for the heavy mode.

2) Mentats: proof-or-refusal mode (Vault-only)

Mentats is the “deep think” pipeline against your curated sources. It’s enforced isolation:

  • no chat history
  • no filesystem KBs
  • no Vodka
  • Vault-only grounding (Qdrant)

It runs triple-pass (thinker → critic → thinker). It’s slow on purpose. You can audit it. And if the Vault has nothing relevant? It refuses and tells you to go pound sand:

FINAL_ANSWER:
The provided facts do not contain information about the Acorn computer or its 1995 sale price.

Sources: Vault
FACTS_USED: NONE
[ZARDOZ HATH SPOKEN]

Also yes, it writes a mentats_debug.log, because of course it does. Go look at it any time you want.

The flow is basically: Attach KBs → SUMM → Move to Vault → Mentats. No mystery meat. No “trust me bro, embeddings.”
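For the curious, the shape of the Mentats loop is roughly this (an illustrative sketch only — function names, prompts and the refusal string are made up, not the real code):

```python
# Rough shape of the triple-pass: thinker -> critic -> thinker,
# with the refusal enforced by the router, not by a model.
# Names and signatures are illustrative, not llama-conductor's actual code.

def mentats(question: str, vault_hits: list[str], llm) -> str:
    if not vault_hits:
        # Nothing relevant in the Vault: the router answers, the model never runs.
        return "FINAL_ANSWER:\nThe provided facts do not contain this information.\nFACTS_USED: NONE"

    facts = "\n".join(vault_hits)
    draft = llm("thinker", f"Answer ONLY from these facts:\n{facts}\n\nQ: {question}")
    critique = llm("critic", f"Find unsupported claims or gaps in this answer:\n{draft}\n\nFacts:\n{facts}")
    final = llm("thinker", f"Revise the answer. Address the critique or refuse.\n"
                           f"Critique:\n{critique}\n\nFacts:\n{facts}\n\nQ: {question}")
    return final
```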

3) Vodka: deterministic memory on a potato budget

Local LLMs have two classic problems: goldfish memory + context bloat that murders your VRAM.

Vodka fixes both without extra model compute. (Yes, I used the power of JSON files to hack the planet instead of buying more VRAM from NVIDIA).

  • !! stores facts verbatim (JSON on disk)
  • ?? recalls them verbatim (TTL + touch limits so memory doesn’t become landfill)
  • CTC (Cut The Crap) hard-caps context (last N messages + char cap) so you don’t get VRAM spikes after 400 messages

So instead of:

“Remember my server is 203.0.113.42” → “Got it!” → [100 msgs later] → “127.0.0.1 🥰”

you get:

!! my server is 203.0.113.42
?? server ip → 203.0.113.42 (with TTL/touch metadata)

And because context stays bounded: stable KV cache, stable speed, your potato PC stops crying.
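If you’re wondering what’s under the hood: no model is involved in recall at all. It’s roughly a JSON key-value store with a TTL and a touch counter (a sketch — the field names and limits below are made up, not Vodka’s actual schema):

```python
# Sketch of verbatim fact memory with TTL + touch limits.
# File name, fields and limits are illustrative, not Vodka's real format.
import json, time
from pathlib import Path

STORE = Path("facts.json")
TTL_SECONDS = 7 * 24 * 3600   # facts expire after a week (made-up default)
MAX_TOUCHES = 50              # recall limit before a fact is retired (made-up)

def _load() -> dict:
    return json.loads(STORE.read_text()) if STORE.exists() else {}

def store_fact(key: str, value: str) -> None:          # the "!!" path
    facts = _load()
    facts[key] = {"value": value, "created": time.time(), "touches": 0}
    STORE.write_text(json.dumps(facts, indent=2))

def recall_fact(key: str) -> str | None:               # the "??" path
    facts = _load()
    entry = facts.get(key)
    if not entry:
        return None
    expired = time.time() - entry["created"] > TTL_SECONDS
    worn_out = entry["touches"] >= MAX_TOUCHES
    if expired or worn_out:
        facts.pop(key)                                  # memory doesn't become landfill
        STORE.write_text(json.dumps(facts, indent=2))
        return None
    entry["touches"] += 1
    STORE.write_text(json.dumps(facts, indent=2))
    return entry["value"]                               # verbatim, no model in the loop
```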


There’s more (a lot more) in the README, but I’ve already over-autism’ed this post.

TL;DR:

If you want your local LLM to shut up when it doesn’t know and show receipts when it does, come poke it:

PS: Sorry about the AI slop image. I can't draw for shit.

PPS: A human with ASD wrote this using Notepad++. If the formatting is weird, now you know why.

top 50 comments
[-] BaroqueInMind@piefed.social 79 points 2 weeks ago

I have no remarks, just really amused with your writing in your repo.

Going to build a Docker and self host this shit you made and enjoy your hard work.

Thank you for this!

[-] SuspciousCarrot78@lemmy.world 27 points 2 weeks ago

Thank you <3

Please let me know how it works...and enjoy the >>FR settings. If you've ever wanted to be trolled by Bender (or a host of other 1990s / 2000s era memes), you'll love it.

[-] FrankLaskey@lemmy.ml 27 points 2 weeks ago

This is very cool. Will dig into it a bit more later but do you have any data on how much it reduces hallucinations or mistakes? I’m sure that’s not easy to come by but figured I would ask. And would this prevent you from still using the built-in web search in OWUI to augment the context if desired?

[-] SuspciousCarrot78@lemmy.world 9 points 2 weeks ago

Comment removed by (auto-mod?) cause I said sexy bot. Weird.

Restating again: On the stuff you use the pipeline/s on? About 85-90% in my tests. Just don't GIGO (Garbage In, Garbage Out) your source docs...and don't use a dumb LLM. That's why I recommend Qwen3-4B 2507 Instruct. It does what you tell it to (even the abliterated one I use).

[-] WolfLink@sh.itjust.works 22 points 2 weeks ago

I’m probably going to give this a try, but I think you should make it clearer for those who aren’t going to dig through the code that it’s still LLMs all the way down and can still have issues - it’s just there are LLMs double-checking other LLMs work to try to find those issues. There are still no guarantees since it’s still all LLMs.

[-] skisnow@lemmy.ca 8 points 1 week ago

I haven’t tried this tool specifically, but I do on occasion ask both Gemini and ChatGPT’s search-connected models to cite sources when claiming stuff and it doesn’t seem to even slightly stop them bullshitting and claiming a source says something that it doesn’t.

[-] SuspciousCarrot78@lemmy.world 7 points 1 week ago* (last edited 1 week ago)

Fair point on setting expectations, but this isn’t just LLMs checking LLMs. The important parts are non-LLM constraints.

The model never gets to “decide what’s true.” In KB mode it can only answer from attached files. Don't feed it shit and it won't say shit.

In Mentats mode it can only answer from the Vault. If retrieval returns nothing, the system forces a refusal. That’s enforced by the router, not by another model.

The triple-pass (thinker → critic → thinker) is just for internal consistency and formatting. The grounding, provenance, and refusal logic live outside the LLM.

So yeah, no absolute guarantees (nothing in this space has those), but the failure mode is “I don’t know / not in my sources, get fucked” not “confidently invented gibberish.”

[-] SlimePirate@lemmy.dbzer0.com 20 points 2 weeks ago

Voodoo is not magic btw, it was sullied by colonists

[-] SuspciousCarrot78@lemmy.world 14 points 2 weeks ago

Damn Englishmen. With their..ways.

[-] bilouba@jlai.lu 15 points 2 weeks ago

Very impressive! Do you have benchmarks to test the reliability? A paper would be awesome to contribute to the science.

[-] SuspciousCarrot78@lemmy.world 14 points 2 weeks ago

Just bush-league ones I did myself, that have no validation or normative values. Not that any of the LLM benchmarks seem to have those either LOL

I'm open to ideas, time willing. Believe it or not, I'm not a code monkey. I do this shit for fun to get away from my real job.

[-] floquant@lemmy.dbzer0.com 14 points 1 week ago* (last edited 1 week ago)

Holy shit I'm glad to be on the autistic side of the internet.

Thank you for proving that fucking JSON text files are all you need and not "just a couple billion more parameters bro"

Awesome work, all the kudos.

[-] angelmountain@feddit.nl 14 points 2 weeks ago

Super interesting build

And if programming doesn't pan out please start writing for a magazine, love your style (or was this written with your AI?)

[-] SuspciousCarrot78@lemmy.world 14 points 2 weeks ago

Once again: I am a meat popsicle (with ASD), not AI. All errors and foibles are mine :)

[-] Karkitoo@lemmy.ml 7 points 2 weeks ago* (last edited 2 weeks ago)

meat popsicle

( ͡° ͜ʖ ͡°)

Anyway, the other person is right. Your writing style is great !

I successfully read your whole post and even the README. Probably the random outbursts grabbed my attention back to the text.

Anyway version 2: this is a very cool idea! I cannot wait to either:

  • incorporate it to my workflows
  • let it sit in a tab to never be touched ever again
  • theorycraft, do tests, and request features so much as to burn out

Last but not least, thank you for not using github as your primary repo

[-] itkovian@lemmy.world 13 points 2 weeks ago

Based AF. Can anyone more knowledgeable explain how it works? I am not able to understand.

[-] SuspciousCarrot78@lemmy.world 9 points 2 weeks ago

Hell yes I can explain. What would you like to know?

[-] itkovian@lemmy.world 18 points 2 weeks ago

As I understand it, it corrects the output of LLMs. If so, how does it actually work?

[-] SuspciousCarrot78@lemmy.world 24 points 2 weeks ago

Good question.

It doesn’t “correct” the model after the fact. It controls what the model is allowed to see and use before it ever answers.

There are basically three modes, each stricter than the last. The default is "serious mode" (governed by serious.py). Low temp, punishes chattiness and inventiveness, forces it to state context for whatever it says.

Additionally, Vodka (made up of two sub-modules - "cut the crap" and "fast recall") operates at all times. Cut the crap trims context so the model only sees a bounded, stable window. You can think of it like a rolling summary of what's been said. That summary isn't LLM-generated either - it's concatenation (dumb text matching), so no made-up vibes.

Fast recall OTOH stores and recalls facts verbatim from disk, not from the model’s latent memory.

It writes what you tell it to a text file, and when you ask about it later, it spits it back out verbatim (!! / ??).

And that's the baseline
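To give you a feel for how dumb (on purpose) the CTC part is, it's roughly this shape — the numbers and names below are illustrative, not the shipped defaults:

```python
# Sketch of "Cut The Crap": keep only the last N messages, then trim to a
# character budget. Pure list/string slicing - no model, no embeddings.
# The limits are illustrative, not llama-conductor's actual defaults.

MAX_MESSAGES = 12
MAX_CHARS = 8000

def cut_the_crap(history: list[dict]) -> list[dict]:
    window = history[-MAX_MESSAGES:]
    # Walk backwards so the most recent messages always survive the char cap.
    kept, budget = [], MAX_CHARS
    for msg in reversed(window):
        cost = len(msg["content"])
        if cost > budget:
            break
        kept.append(msg)
        budget -= cost
    return list(reversed(kept))
```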

In KB mode, the LLM answers under all of the above settings, and with reference to your docs ONLY (in the first instance).

When you >>attach a KB, the router gets stricter again. Now the model is instructed to answer only from the attached documents.

Those docs can even get summarized via an internal prompt if you run >>summ new, so that extra details are stripped out and you are left with just baseline who-what-where-when-why-how.

The SUMM_*.md files come with SHA-256 provenance, so every claim can be traced back to a specific origin file (which gets moved to a subfolder).

TL;DR: If the answer isn’t in the KB, it’s told to say so instead of guessing.
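In sketch form, that constraint is just prompt construction plus a hard rule (illustrative only — this is not the actual serious.py prompt or file handling):

```python
# Sketch of KB-mode grounding: the model only ever sees the attached SUMM
# files, plus an instruction to refuse when the answer isn't in them.
# Directory layout, prompt wording and function name are made up.
from pathlib import Path

def build_kb_prompt(kb_dir: Path, question: str) -> str:
    summs = "\n\n".join(p.read_text() for p in sorted(kb_dir.glob("SUMM_*.md")))
    return (
        "Answer ONLY from the facts below. If the answer is not in the facts, "
        "say so explicitly and state what is missing.\n\n"
        f"FACTS:\n{summs}\n\nQUESTION: {question}"
    )
```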

Finally, Mentats mode (Vault / Qdrant). This is the “I am done with your shit” path.

It's all three of the above PLUS a counter-factual sweep.

It runs ONLY on stuff you've promoted into the vault.

What it does is take your question and frame it in a particular way so that all of the particulars must be answered in order for there to BE an answer. Any part missing or not in context? No soup for you!

In step 1, it runs that past the thinker model. The answer is then passed on to a "critic" model (a different LLM). That model's job is to look at the thinker's output and say "bullshit - what about xyz?".

It sends that back to the thinker... who then answers and provides the final output. But if it CANNOT answer the critic's questions based on the stored info, it will tell you. No soup for you, again!

TL;DR:

The “corrections” happen by routing and constraint. The model never gets the chance to hallucinate in the first place, because it literally isn’t shown anything it’s not allowed to use. Basic premise - trust but verify (and I've given you all the tools I could think of to do that).

Does that explain it better? The repo has a FAQ but if I can explain anything more specifically or clearly, please let me know. I built this for people like you and me.

[-] itkovian@lemmy.world 9 points 2 weeks ago* (last edited 2 weeks ago)

That is much clearer. Thank you for making this. It actually makes LLMs useful with far fewer downsides.

[-] SuspciousCarrot78@lemmy.world 14 points 2 weeks ago

God, I hope so. Else I just pissed 4 months up the wall and shouted a lot of swears at my monitor for nada :)

Let me know if it works for you

[-] sp3ctr4l@lemmy.dbzer0.com 11 points 2 weeks ago

This seems astonishingly more useful than the current paradigm, this is genuinely incredible!

I mean, fellow Autist here, so I guess I am also... biased towards... facts...

But anyway, ... I am currently uh, running on Bazzite.

I have been using Alpaca so far, and have been successfully running Qwen3 8B through it... your system would address a lot of problems I have had to figure out my own workarounds for.

I am guessing this is not available as a flatpak, lol.

I would feel terrible to ask you to do anything more after all of this work, but if anyone does actually set up a podman installable container for this that actually properly grabs all required dependencies, please let me know!

[-] ThirdConsul@lemmy.zip 11 points 1 week ago

I want to believe you, but that would mean you solved hallucination.

Either:

A) you're lying

B) you're wrong

C) KB is very small

[-] SuspciousCarrot78@lemmy.world 17 points 1 week ago

D) None of the above.

I didn’t "solve hallucination". I changed the failure mode. The model can still hallucinate internally. The difference is it’s not allowed to surface claims unless they’re grounded in attached sources.

If retrieval returns nothing relevant, the router forces a refusal instead of letting the model free-associate. So the guarantee isn’t “the model is always right.”

The guarantee is “the system won’t pretend it knows when the sources don’t support it.” That's it. That's the whole trick.

KB size doesn’t matter much here. Small or large, the constraint is the same: no source, no claim. GTFO.

That’s a control-layer property, not a model property. If it helps: think of it as moving from “LLM answers questions” to “LLM summarizes evidence I give it, or says ‘insufficient evidence.’”

Again, that’s the whole trick.

You don't need to believe me. In fact, please don't. Test it.

I could be wrong...but if I'm right (and if you attach this to a non-retarded LLM), then maybe, just maybe, this doesn't suck balls as much as you think it might.

Maybe it's even useful to you.

I dunno. Try it?

[-] ThirdConsul@lemmy.zip 6 points 1 week ago

So... RAG with extra steps and RAG summarization? What about facts that are not RAG retrieval?

[-] SuspciousCarrot78@lemmy.world 12 points 1 week ago* (last edited 1 week ago)

Parts of this are RAG, sure

RAG parts:

  • Vault / Mentats is classic retrieval + generation.
  • Vector store = Qdrant
  • Embedding and reranker

So yes, that layer is RAG with extra steps.

What’s not RAG -

KB mode (filesystem SUMM path)

This isn’t vector search. It’s deterministic, file-backed grounding. You attach folders as needed. The system summarizes and hashes docs. The model can only answer from those summaries in that mode. There’s no semantic retrieval step. It can style and jazz around the answer a little, but the answer is the answer is the answer.

If the fact isn’t in the attached KB, the router forces a refusal. Put up or shut up.

Vodka (facts memory)

That’s not retrieval at all, in the LLM sense. It's verbatim key-value recall.

  • JSON on disk
  • Exact store (!!)
  • Exact recall (??)

Again, no embeddings, no similarity search, no model interpretation.

"Facts that aren’t RAG"

In my setup, they land in one of two buckets.

  1. Short-term / user facts → Vodka. That's for things like numbers, appointments, lists, one-off notes, etc. Deterministic recall, no synthesis.

  2. Curated knowledge → KB / Vault. Things you want grounded, auditable, and reusable.

In response to the implicit "why not just RAG then"

Classic RAG failure mode is: retrieval is fuzzy → model fills gaps → user can’t tell which part came from where.

The extra "steps" are there to separate memory from knowledge, separate retrieval from synthesis and make refusal a legal output, not a model choice.

So yeah; some of it is RAG. RAG is good. The point is this system is designed so not everything of value is forced through a semantic search + generate loop. I don't trust LLMs. I am actively hostile to them. This is me telling my LLM to STFU and prove it, or GTFO. I know that's maybe a weird way to operate (adversarial, assume the worst, engineer around the issue) but that's how ASD brains work.

[-] UNY0N@lemmy.wtf 10 points 2 weeks ago

THIS IS AWESOME!!! I've been working on using an obsidian vault and a podman ollama container to do something similar, with VSCodium + continue as middleware. But this! This looks to me like it is far superior to what I have cobbled together.

I will study your codeberg repo, and see if I can use your conductor with my ollama instance and vault program. I just registered at codeberg, if I make any progress I will contact you there, and you can do with it what you like.

On an unrelated note, you can download wikipedia. Might work well in conjunction with your conductor.

https://en.wikipedia.org/wiki/Wikipedia:Database_download

[-] termaxima@slrpnk.net 9 points 1 week ago

Hallucination is mathematically proven to be unsolvable with LLMs. I don't deny this may have drastically reduced it, or not, I have no idea.

But hallucinations will just always be there as long as we use LLMs.

[-] Terces@lemmy.world 9 points 2 weeks ago

Fuck yeah...good job. This is how I would like to see "AI" implemented. Is there some way to attach other data sources? Something like a locally hosted wiki?

[-] SuspciousCarrot78@lemmy.world 7 points 2 weeks ago

Hmm. I dunno - never tried. I suppose if the wiki could be imported in a compatible format... it should be able to chew thru it just fine. Wikis are usually just gussied-up text files anyway :) Drop the contents of your wiki in there as .md's and see what it does.

[-] Disillusionist@piefed.world 9 points 2 weeks ago

Awesome work. And I agree that we can have good and responsible AI (and other tech) if we start seeing it for what it is and isn't, and actually being serious about addressing its problems and limitations. It's projects like yours that can demonstrate pathways toward achieving better AI.

[-] recklessengagement@lemmy.world 8 points 1 week ago

I strongly feel that the best way to improve the usability of LLMs is through better human-written tooling/software. Unfortunately most of the people promoting LLMs are tools themselves and all their software is vibe-coded.

Thank you for this. I will test it on my local install this weekend.

[-] nagaram@startrek.website 7 points 1 week ago

This + Local Wikipedia + My own writings would be sick

[-] SuspciousCarrot78@lemmy.world 11 points 1 week ago* (last edited 1 week ago)

I’m not claiming I “fixed” bullshitting. I said I was TIRED of bullshit.

So, the claim I’m making is: I made bullshit visible and bounded.

The problem I’m solving isn’t “LLMs sometimes get things wrong.” That’s unsolvable AFAIK. What I'm solving for is "LLMs get things wrong in ways that are opaque and untraceable".

That's solvable. That’s what hashes get you. Attribution, clear fail states and auditability. YOU still have to check sources if you care about correctness.

The difference is - YOU are no longer checking a moving target or a black box. You're checking a frozen, reproducible input.

> That’s… not how any of this works…

Please don't teach me to suck lemons. I have very strict parameters for fail states. When I say three strikes and you're out, I do mean three strikes and you're out. Quants ain't quants, and models ain't models. I am very particular in what I run, how I run it and what I tolerate.

[-] rollin@piefed.social 6 points 2 weeks ago

At first blush, this looks great to me. Are there limitations on what models it will work with? In particular, can you use this on a lightweight model that will run in 16 GB of RAM to prevent it hallucinating? I've experimented a little with running ollama as an NPC AI for Skyrim - I'd love to be able to ask random passers-by if they know where the nearest blacksmith is, for instance. It was just far too unreliable, and worse, it was always confidently unreliable.

This sounds like it could really help these kinds of uses. Sadly I'm away from home for a while so I don't know when I'll get a chance to get back on my home rig.

[-] SuspciousCarrot78@lemmy.world 14 points 2 weeks ago

My brother in virtual silicon: I run this shit on a $200 p.o.s with 4 GB of VRAM.

If you can run an LLM at all, this will run. BONUS: because of the way "Vodka" operates, you can run with a smaller context window without eating shit from OOM errors. So... that means... if you could only run a 4B model before (because the GGUF itself is ~3 GB without the overheads... then you add in the drag from KV cache accumulation)... maybe you can now run the next size up... or enjoy no-slowdown chats with the model size you have.
