submitted 3 months ago* (last edited 3 months ago) by SuspciousCarrot78@lemmy.world to c/privacy@lemmy.ml

Hello!

As a handsome local AI enjoyer™ you’ve probably noticed one of the big flaws with LLMs:

It lies. Confidently. ALL THE TIME.

(Technically, it “bullshits”: https://link.springer.com/article/10.1007/s10676-024-09775-5)

I’m autistic and extremely allergic to vibes-based tooling, so … I built a thing. Maybe it’s useful to you too.

The thing: llama-conductor

llama-conductor is a router that sits between your frontend (OWUI / SillyTavern / LibreChat / etc) and your backend (llama.cpp + llama-swap, or any OpenAI-compatible endpoint). Local-first (because fuck big AI), but it should talk to anything OpenAI-compatible if you point it there (note: experimental so YMMV).
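For a sense of what "OpenAI-compatible" buys you: any OpenAI-style client should work unchanged. A minimal smoke test in Python (the URL, port and model name are placeholders - check the README for the real defaults):

```python
# Pokes the router directly, like any frontend would.
# URL/port/model are placeholders, not conductor's actual defaults.
import requests

resp = requests.post(
    "http://localhost:8000/v1/chat/completions",
    json={
        "model": "local",
        "messages": [{"role": "user", "content": "hello"}],
    },
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])
```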

I tried to make a glass box: something that makes the stack behave like a deterministic system, instead of a drunk telling a story about the fish that got away.

TL;DR: “In God we trust. All others must bring data.”

Three examples:

1) KB mechanics that don’t suck (1990s engineering: markdown, JSON, checksums)

You keep “knowledge” as dumb folders on disk. Drop docs (.txt, .md, .pdf) in them. Then:

  • >>attach <kb> — attaches a KB folder
  • >>summ new — generates SUMM_*.md files with SHA-256 provenance baked in
  • `>> moves the original to a sub-folder

Now, when you ask something like:

“yo, what did the Commodore C64 retail for in 1982?”

…it answers from the attached KBs only. If the fact isn’t there, it tells you so - explicitly - instead of winging it. E.g.:

The provided facts state the Commodore 64 launched at $595 and was reduced to $250, but do not specify a 1982 retail price. The Amiga’s pricing and timeline are also not detailed in the given facts.

Missing information includes the exact 1982 retail price for Commodore’s product line and which specific model(s) were sold then. The answer assumes the C64 is the intended product but cannot confirm this from the facts.

Confidence: medium | Source: Mixed

No vibes. No “well probably…”. Just: here’s what’s in your docs, here’s what’s missing, don't GIGO yourself into stupid.

And when you’re happy with your summaries, you can:

  • >>move to vault — promote those SUMMs into Qdrant for the heavy mode.

2) Mentats: proof-or-refusal mode (Vault-only)

Mentats is the “deep think” pipeline against your curated sources. It’s enforced isolation:

  • no chat history
  • no filesystem KBs
  • no Vodka
  • Vault-only grounding (Qdrant)

It runs triple-pass (thinker → critic → thinker). It’s slow on purpose. You can audit it. And if the Vault has nothing relevant? It refuses and tells you to go pound sand:

FINAL_ANSWER:
The provided facts do not contain information about the Acorn computer or its 1995 sale price.

Sources: Vault
FACTS_USED: NONE
[ZARDOZ HATH SPOKEN]

Also yes, it writes a mentats_debug.log, because of course it does. Go look at it any time you want.
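If you want the shape of the triple-pass in your head, it's roughly this (toy sketch; retrieve_vault and call_llm are stand-ins, not the real internals):

```python
# Toy sketch of the thinker -> critic -> thinker shape. The two helpers
# below are stubs for illustration, not llama-conductor's actual API.
def retrieve_vault(question: str) -> list[str]:
    return []  # stub: imagine a Qdrant similarity search over promoted SUMMs

def call_llm(role: str, *context: str) -> str:
    return f"[{role} output]"  # stub: one backend call per pass

def mentats(question: str) -> str:
    facts = retrieve_vault(question)  # Vault only: no chat history, no FS KBs, no Vodka
    if not facts:
        # Nothing relevant -> refuse instead of improvising
        return "FINAL_ANSWER:\nThe provided facts do not contain that information.\nFACTS_USED: NONE"
    draft = call_llm("thinker", question, *facts)
    critique = call_llm("critic", draft, *facts)   # audits each claim against the facts
    return call_llm("thinker", question, critique, *facts)  # revise using the critique
```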

The flow is basically: Attach KBs → SUMM → Move to Vault → Mentats. No mystery meat. No “trust me bro, embeddings.”

3) Vodka: deterministic memory on a potato budget

Local LLMs have two classic problems: goldfish memory + context bloat that murders your VRAM.

Vodka fixes both without extra model compute. (Yes, I used the power of JSON files to hack the planet instead of buying more VRAM from NVIDIA).

  • !! stores facts verbatim (JSON on disk)
  • ?? recalls them verbatim (TTL + touch limits so memory doesn’t become landfill)
  • CTC (Cut The Crap) hard-caps context (last N messages + char cap) so you don’t get VRAM spikes after 400 messages

So instead of:

“Remember my server is 203.0.113.42” → “Got it!” → [100 msgs later] → “127.0.0.1 🥰”

you get:

!! my server is 203.0.113.42
?? server ip → 203.0.113.42 (with TTL/touch metadata)

And because context stays bounded: stable KV cache, stable speed, your potato PC stops crying.
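For the curious, the whole trick is about this much code (illustrative sketch - the path, TTL and touch limit are made-up defaults, not Vodka's real ones):

```python
import json
import pathlib
import time

STORE = pathlib.Path("vodka_facts.json")  # illustrative path
TTL_SECONDS = 7 * 24 * 3600               # made-up TTL
MAX_TOUCHES = 20                          # made-up touch limit

def remember(key: str, value: str) -> None:
    """!! -- store a fact verbatim as JSON on disk."""
    facts = json.loads(STORE.read_text()) if STORE.exists() else {}
    facts[key] = {"value": value, "stored": time.time(), "touches": 0}
    STORE.write_text(json.dumps(facts, indent=2))

def recall(key: str) -> str | None:
    """?? -- recall verbatim, with TTL + touch limits so memory doesn't rot."""
    facts = json.loads(STORE.read_text()) if STORE.exists() else {}
    fact = facts.get(key)
    if not fact or time.time() - fact["stored"] > TTL_SECONDS or fact["touches"] >= MAX_TOUCHES:
        return None  # expired or over the touch limit: it dies, it never mutates
    fact["touches"] += 1
    STORE.write_text(json.dumps(facts, indent=2))
    return fact["value"]  # verbatim, straight off disk

def cut_the_crap(messages: list[dict], last_n: int = 12, char_cap: int = 8000) -> list[dict]:
    """CTC: keep the last N messages, then drop oldest until under the char cap."""
    kept = messages[-last_n:]
    while len(kept) > 1 and sum(len(m["content"]) for m in kept) > char_cap:
        kept.pop(0)
    return kept
```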


There’s more (a lot more) in the README, but I’ve already over-autism’ed this post.

TL;DR:

If you want your local LLM to shut up when it doesn’t know and show receipts when it does, come poke it:

PS: Sorry about the AI slop image. I can't draw for shit.

PPS: A human with ASD wrote this using Notepad++. If the formatting is weird, now you know why.

[-] rollin@piefed.social 6 points 3 months ago

At first blush, this looks great to me. Are there limitations on which models it will work with? In particular, can you use this on a lightweight model that runs in 16 GB of RAM to prevent it hallucinating? I've experimented a little with running ollama as an NPC AI for Skyrim - I'd love to be able to ask random passers-by if they know where the nearest blacksmith is, for instance. It was just far too unreliable, and worse, it was always confidently unreliable.

This sounds like it could really help these kinds of uses. Sadly I'm away from home for a while so I don't know when I'll get a chance to get back on my home rig.

[-] Murdoc@sh.itjust.works 6 points 3 months ago

I wouldn't know how to get this going, but I very much enjoyed reading it and your comments and think that it looks like a great project. 👍

(I mean, as a fellow autist I might be able to hyperfocus on it for a while, but I'm sure that the ADHD would keep me from finishing to go work on something else. 🙃)

[-] SuspciousCarrot78@lemmy.world 4 points 3 months ago* (last edited 3 months ago)

Ah - ASD, ADHD and Lemmy. You're a triple threat, Harry! :)

Glad if it was entertaining, if even a little!

[-] SuspciousCarrot78@lemmy.world 6 points 3 months ago

Responding to my own top post like an FB boomer: May I make one request?

If you found this little curio interesting at all, please share in the places you go.

And especially, if you're on Reddit, where normies go.

I used to post heavily on there, but then Reddit did a Reddit and I'm done with it.

https://lemmy.world/post/41398418/21528414

Much as I love Lemmy and HN, they're not exactly normcore, and I'd like to put this into the hands of people :)

PS: I'm thinking of taking some of the questions you all asked me here (de-identified), writing a "Q&A_with_drBobbyLLM.md" and sticking it on the repo. It might explain some common concerns.

And, if nothing else, it might be mildly amusing.

[-] null@piefed.nullspace.lol 5 points 3 months ago

This is awesome. Definitely gonna dig into this later.

[-] 7toed@midwest.social 5 points 3 months ago

Okay, pardon the double comment, but I now have no choice but to set this up after reading your explanations. Doing what TRILLIONS of dollars hasn't cooked up yet... I hope you're ready, by whatever means you deem fit, for when someone else "invents" this.

[-] domi@lemmy.secnd.me 5 points 3 months ago

I have a Strix Halo machine with 128GB VRAM so I'm definitely going to give this a try with gpt-oss-120b this weekend.

[-] pineapple@lemmy.ml 5 points 3 months ago

This is amazing! I will either abandon all my other commitments and install this tomorrow or I will maybe hopefully get it done in the next 5 years.

Likely-accurate jokes aside, this will be a perfect match for my Obsidian vault, and it should help me research things much more quickly.

[-] SuspciousCarrot78@lemmy.world 4 points 3 months ago

I hope it does what I claim it does for you. Choose a good LLM model. Not one of the sex-chat ones. Or maybe, exactly one of those. For uh...research.

[-] Zexks@lemmy.world 4 points 3 months ago* (last edited 3 months ago)

This is awesome. I've been working on something similar. You're not likely to get much useful feedback here though. Anything AI is bad by default here.

[-] SuspciousCarrot78@lemmy.world 6 points 3 months ago

Well, to butcher Sinatra: if it can make it on Lemmy and HN, it can make it anywhere :)

[-] Pudutr0n@lemmy.world 4 points 3 months ago* (last edited 3 months ago)

Re: the KB tool, why not just skip the LLM and do two chained fuzzy finds (one for the knowledge base, one for the question keywords)?

[-] PolarKraken@lemmy.dbzer0.com 4 points 3 months ago

This sounds really interesting, I'm looking forward to reading the comments here in detail and looking at the project, might even end up incorporating it into my own!

I'm working on something that addresses the same problem in a different way, the problem of constraining or delineating the specifically non-deterministic behavior one wants to involve in a complex workflow. Your approach is interesting and has a lot of conceptual overlap with mine, regarding things like strictly defining compliance criteria and rejecting noncompliant outputs, and chaining discrete steps into a packaged kind of "super step" that integrates non-deterministic substeps into a somewhat more deterministic output, etc.

How involved was it to build it to comply with the OpenAI API format? I haven't looked into that myself but may.

[-] SuspciousCarrot78@lemmy.world 4 points 3 months ago* (last edited 3 months ago)

Cheers!

Re: OpenAI API format: 3.6 - not great, not terrible :)

In practice I only had to implement a thin subset: POST /v1/chat/completions + GET /v1/models (most UIs just need those). The payload is basically {model, messages, temperature, stream...} and you return a choices[] with an assistant message. The annoying bits are the edge cases: streaming/SSE if you want it, matching the error shapes UIs expect, and being consistent about model IDs so clients don’t scream “model not found”. Which is actually a bug I still need to squash some more for OWUI 0.7.2. It likes to have its little conniptions.
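If it helps, the skeleton is roughly this (sketched with FastAPI purely for illustration - not necessarily what conductor actually uses; the response fields follow the OpenAI spec):

```python
# Thin subset of the OpenAI chat API: GET /v1/models + POST /v1/chat/completions.
# FastAPI is my tooling choice for the sketch, not the project's actual stack.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class ChatRequest(BaseModel):
    model: str
    messages: list[dict]
    temperature: float | None = None
    stream: bool = False

@app.get("/v1/models")
def list_models():
    # Stay consistent about model IDs, or clients scream "model not found"
    return {"object": "list", "data": [{"id": "conductor", "object": "model"}]}

@app.post("/v1/chat/completions")
def chat(req: ChatRequest):
    reply = "stub: forward req to llama.cpp / llama-swap here"
    return {
        "object": "chat.completion",
        "model": req.model,
        "choices": [{
            "index": 0,
            "message": {"role": "assistant", "content": reply},
            "finish_reason": "stop",
        }],
    }
```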

But TL;DR: more plumbing than rocket science. The real pain was sitting down with pen and paper and drawing what went where and what wasn't allowed to do what. Because I knew I'd eventually fuck something up (I did, many times), I needed a thing that told me "no, that's not what this is designed to do. Do not pass go. Do not collect $200".

shrug I tried.

[-] PolarKraken@lemmy.dbzer0.com 3 points 3 months ago

The very hardest part of designing software, and especially designing abstractions that aim to streamline use of other tools, is deciding exactly where you draw the line(s) between intended flexibility (the user should be able, and find it easy, to do what they want) and the opinionated "do it my way here, and I'll constrain options for doing otherwise".

You have very clear and thoughtful lines drawn here, about where the flexibility starts and ends, and where the opinionated "this is the point of the package/approach, so do it this way" parts are, too.

Sincerely, that's a big compliment and something I see as a strong signal about your software design instincts. Well done! (I haven't played with it yet, to be clear, lol)
