[-] SuspciousCarrot78@lemmy.world 9 points 20 hours ago

It already happened. And didn't happen. At the same time.

[-] SuspciousCarrot78@lemmy.world 5 points 6 days ago* (last edited 6 days ago)

BTW, I had to put all my media in chronological folders yesterday so Nova Media Player could see / stream it from my NAS correctly while I fix my Raspberry Pi / finally bite the bullet and install Proxmox.

Firefly, Harold and Kumar, A Knight's Tale, Constantine, Austin Powers, Iron Man 1, The Matrix, and a bunch of other stuff circa 1999-2009.

It took me right back to people and places. And then it hit me -

"All those moments will be lost in time, like tears in rain".

Fuck you for hitting me while I'm down.

[-] SuspciousCarrot78@lemmy.world 19 points 6 days ago* (last edited 6 days ago)

It's 21 years old this year 😭

Take solace that old != obsolete.

I still play Just Cause 2, Fallout 3 and a bunch of 360 GOATs

[-] SuspciousCarrot78@lemmy.world 17 points 1 week ago

D) None of the above.

I didn’t "solve hallucination". I changed the failure mode. The model can still hallucinate internally. The difference is it’s not allowed to surface claims unless they’re grounded in attached sources.

If retrieval returns nothing relevant, the router forces a refusal instead of letting the model free-associate. So the guarantee isn’t “the model is always right.”

The guarantee is “the system won’t pretend it knows when the sources don’t support it.” That's it. That's the whole trick.

KB size doesn’t matter much here. Small or large, the constraint is the same: no source, no claim. GTFO.

That’s a control-layer property, not a model property. If it helps: think of it as moving from “LLM answers questions” to “LLM summarizes evidence I give it, or says ‘insufficient evidence.’”
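
If it helps to see the shape of it, here's a minimal sketch of that control-layer idea. It's my own toy version, not the actual llama-conductor code; retrieve and llm are placeholder callables:

    REFUSAL = "Insufficient evidence in the attached sources."

    def answer(question, retrieve, llm):
        # retrieve() returns grounded source chunks; llm() only ever sees those chunks
        evidence = retrieve(question)
        if not evidence:
            return REFUSAL  # nothing relevant -> forced refusal, the model is never invoked
        prompt = ("Answer ONLY from the evidence below. "
                  "If it does not contain the answer, say so.\n\n"
                  + "\n".join(evidence)
                  + "\n\nQuestion: " + question)
        return llm(prompt)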

Again, that’s the whole trick.

You don't need to believe me. In fact, please don't. Test it.

I could be wrong...but if I'm right (and if you attach this to a non-braindead LLM), then maybe, just maybe, this doesn't suck balls as much as you think it might.

Maybe it's even useful to you.

I dunno. Try it?

[-] SuspciousCarrot78@lemmy.world 16 points 2 weeks ago* (last edited 2 weeks ago)

"I don't see how it addresses hallucinations. It's really cool! But it seems to still be inherently unreliable (because LLMs are)."

LLMs are inherently unreliable in “free chat” mode. What llama-conductor changes is the failure mode: it only allows the LLM to argue from user-curated ground truth, and it leaves an audit trail.

You don't have to trust it (black box). You can poke it (glass box). Failure leaves a trail and it can’t just hallucinate a source out of thin air without breaking LOUDLY and OBVIOUSLY.

TL;DR: it won't piss in your pocket and tell you it's rain. It may still piss in your pocket (but much less often, because it's house trained)

[-] SuspciousCarrot78@lemmy.world 20 points 2 weeks ago

Probably the latter. I unironically used "Obeyant" the other day, like a time-traveling barrister from the 1600s.

I have 2e ASD and my hyperfocus is language.

[-] SuspciousCarrot78@lemmy.world 24 points 2 weeks ago

Good question.

It doesn’t “correct” the model after the fact. It controls what the model is allowed to see and use before it ever answers.

There are basically three modes, each stricter than the last. The default is "serious mode" (governed by serious.py). Low temp, punishes chattiness and inventiveness, forces it to state context for whatever it says.

Additionally, Vodka (made up of two sub-modules - "cut the crap" and "fast recall") operates at all times. Cut the crap trims context so the model only sees a bounded, stable window. You can think of it like a rolling summary of what's been said. That summary isn't LLM-generated either - it's concatenation (dumb text matching), so no made-up vibes.
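
In code terms, the cut-the-crap half is roughly this kind of thing (a toy sketch with made-up numbers, not the real module):

    def cut_the_crap(messages, max_messages=12, max_chars=8000):
        window = messages[-max_messages:]   # bounded, rolling window of recent turns
        text = "\n".join(window)            # dumb concatenation - no LLM summarizing
        return text[-max_chars:]            # hard character cap, so context never balloons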

Fast recall OTOH stores and recalls facts verbatim from disk, not from the model’s latent memory.

It writes what you tell it to a text file and then, when you ask about it, spits it back out verbatim (!! / ??).
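
The mechanics are about as dumb as that sounds - something in the spirit of this (my sketch; the real file layout and names differ):

    import json, pathlib

    STORE = pathlib.Path("facts.json")              # hypothetical path

    def store_fact(key, value):                     # the !! path
        facts = json.loads(STORE.read_text()) if STORE.exists() else {}
        facts[key] = value                          # stored verbatim, on disk
        STORE.write_text(json.dumps(facts, indent=2))

    def recall_fact(key):                           # the ?? path
        facts = json.loads(STORE.read_text()) if STORE.exists() else {}
        return facts.get(key)                       # spat back verbatim - no latent "memory"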

And that's the baseline.

In KB mode, you make the LLM answer based on the above settings + with reference to your docs ONLY (in the first instance).

When you >>attach a KB, the router gets stricter again. Now the model is instructed to answer only from the attached documents.

Those docs can even get summarized via an internal prompt if you run >>summ new, so that extra details are stripped out and you are left with just baseline who-what-where-when-why-how.

The SUMM_*.md files come with SHA-256 provenance, so every claim can be traced back to a specific origin file (which gets moved to a subfolder).
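
The provenance bit is plain old hashing - conceptually something like this (a sketch, not the repo's exact format):

    import hashlib, pathlib

    def summ_header(doc_path):
        data = pathlib.Path(doc_path).read_bytes()
        digest = hashlib.sha256(data).hexdigest()
        # baked into the top of the SUMM_*.md file, so claims trace back to this exact file
        return f"source: {doc_path}\nsha256: {digest}\n---\n"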

TL;DR: If the answer isn’t in the KB, it’s told to say so instead of guessing.

Finally, Mentats mode (Vault / Qdrant). This is the "I am done with your shit" path.

It's all three of the above PLUS a counter-factual sweep.

It runs ONLY on stuff you've promoted into the vault.

What it does is take your question and reframe it in a particular way, so that all of the particulars must be answered in order for there to BE an answer. Any part missing or not in context? No soup for you!

In step 1, it runs that past the thinker model. The answer is then passed on to a "critic" model (a different LLM). That model's job is to look at the thinker's output and say "bullshit - what about xyz?".

It sends that back to the thinker...who then answers and provides the final output. But if it CANNOT answer the critic's questions (based on the stored info), it will tell you. No soup for you, again!
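
Shape-wise it's a three-call loop - roughly this (illustrative only; the real prompts and refusal format live in the repo):

    def mentats(question, facts, thinker, critic):
        ctx = "\n".join(facts)                        # Vault facts only - nothing else
        draft = thinker(f"Facts:\n{ctx}\n\nQuestion: {question}")
        objections = critic(f"Facts:\n{ctx}\n\nDraft answer:\n{draft}\n\n"
                            "List anything unsupported or missing.")
        return thinker(f"Facts:\n{ctx}\n\nDraft:\n{draft}\n\nCritic says:\n{objections}\n\n"
                       "Answer only if every objection is resolved by the facts; "
                       "otherwise state what is missing.")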

TL;DR:

The “corrections” happen by routing and constraint. The model never gets the chance to hallucinate in the first place, because it literally isn’t shown anything it’s not allowed to use. Basic premise - trust but verify (and I've given you all the tools I could think of to do that).

Does that explain it better? The repo has a FAQ but if I can explain anything more specifically or clearly, please let me know. I built this for people like you and me.

[-] SuspciousCarrot78@lemmy.world 27 points 2 weeks ago

Thank you <3

Please let me know how it works...and enjoy the >>FR settings. If you've ever wanted to be trolled by Bender (or a host of other 1990s / 2000s era memes), you'll love it.

15
submitted 2 weeks ago* (last edited 1 week ago) by SuspciousCarrot78@lemmy.world to c/localllm@lemmy.world

Hello!

As a handsome local AI enjoyer™ you’ve probably noticed one of the big flaws with LLMs:

It lies. Confidently. ALL THE TIME.

(Technically, it “bullshits” - https://link.springer.com/article/10.1007/s10676-024-09775-5)

I’m autistic and extremely allergic to vibes-based tooling, so … I built a thing. Maybe it’s useful to you too.

The thing: llama-conductor

llama-conductor is a router that sits between your frontend (OWUI / SillyTavern / LibreChat / etc) and your backend (llama.cpp + llama-swap, or any OpenAI-compatible endpoint). Local-first (because fuck big AI), but it should talk to anything OpenAI-compatible if you point it there (note: experimental so YMMV).

I tried to make a glass-box that makes the stack behave like a deterministic system, instead of a drunk telling a story about the fish that got away.

TL;DR: “In God we trust. All others must bring data.”

Three examples:

1) KB mechanics that don’t suck (1990s engineering: markdown, JSON, checksums)

You keep “knowledge” as dumb folders on disk. Drop docs (.txt, .md, .pdf) in them. Then:

  • >>attach <kb> — attaches a KB folder
  • >>summ new — generates SUMM_*.md files with SHA-256 provenance baked in
  • the original then gets moved to a sub-folder (rough dispatch sketch below)
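
The dispatch is about this boring (a hand-wavy sketch, not the actual router code):

    def route(user_input, attach_kb, summ_new):
        if user_input.startswith(">>attach "):
            kb_name = user_input[len(">>attach "):].strip()
            return attach_kb(kb_name)    # attach the named KB folder
        if user_input.strip() == ">>summ new":
            return summ_new()            # write SUMM_*.md files, move originals to a sub-folder
        return None                      # not a command -> normal chat path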

Now, when you ask something like:

“yo, what did the Commodore C64 retail for in 1982?”

…it answers from the attached KBs only. If the fact isn’t there, it tells you - explicitly - instead of winging it. E.g.:

The provided facts state the Commodore 64 launched at $595 and was reduced to $250, but do not specify a 1982 retail price. The Amiga’s pricing and timeline are also not detailed in the given facts.

Missing information includes the exact 1982 retail price for Commodore’s product line and which specific model(s) were sold then. The answer assumes the C64 is the intended product but cannot confirm this from the facts.

Confidence: medium | Source: Mixed

No vibes. No “well probably…”. Just: here’s what’s in your docs, here’s what’s missing, don't GIGO yourself into stupid.

And when you’re happy with your summaries, you can:

  • >>move to vault — promote those SUMMs into Qdrant for the heavy mode.

2) Mentats: proof-or-refusal mode (Vault-only)

Mentats is the “deep think” pipeline against your curated sources. It’s enforced isolation:

  • no chat history
  • no filesystem KBs
  • no Vodka
  • Vault-only grounding (Qdrant)

It runs triple-pass (thinker → critic → thinker). It’s slow on purpose. You can audit it. And if the Vault has nothing relevant? It refuses and tells you to go pound sand:

FINAL_ANSWER:
The provided facts do not contain information about the Acorn computer or its 1995 sale price.

Sources: Vault
FACTS_USED: NONE
[ZARDOZ HATH SPOKEN]

Also yes, it writes a mentats_debug.log, because of course it does. Go look at it any time you want.

The flow is basically: Attach KBs → SUMM → Move to Vault → Mentats. No mystery meat. No “trust me bro, embeddings.”

3) Vodka: deterministic memory on a potato budget

Local LLMs have two classic problems: goldfish memory + context bloat that murders your VRAM.

Vodka fixes both without extra model compute. (Yes, I used the power of JSON files to hack the planet instead of buying more VRAM from NVIDIA).

  • !! stores facts verbatim (JSON on disk)
  • ?? recalls them verbatim (TTL + touch limits so memory doesn’t become landfill)
  • CTC (Cut The Crap) hard-caps context (last N messages + char cap) so you don’t get VRAM spikes after 400 messages

So instead of:

“Remember my server is 203.0.113.42” → “Got it!” → [100 msgs later] → “127.0.0.1 🥰”

you get:

!! my server is 203.0.113.42
?? server ip → 203.0.113.42 (with TTL/touch metadata)

And because context stays bounded: stable KV cache, stable speed, your potato PC stops crying.
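
If you're wondering what "TTL + touch limits" means in practice, it's roughly this sort of check (my guess at the shape, not the project's exact scheme):

    import time

    def still_valid(entry, max_touches=20):
        # entry layout is hypothetical: {"value": ..., "expires_at": epoch_seconds, "touches": int}
        if time.time() > entry["expires_at"]:
            return False                 # TTL expired -> pruned, so memory doesn't become landfill
        if entry["touches"] >= max_touches:
            return False                 # recalled too many times -> also pruned
        return True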


There’s more (a lot more) in the README, but I’ve already over-autism’ed this post.

TL;DR:

If you want your local LLM to shut up when it doesn’t know and show receipts when it does, come poke it:

PS: Sorry about the AI slop image. I can’t draw for shit.

PPS: A human with ASD wrote this using Notepad++. If the formatting is weird, now you know why.


[-] SuspciousCarrot78@lemmy.world 64 points 2 weeks ago

Instructions unclear. Vending machine now prangent.

[-] SuspciousCarrot78@lemmy.world 21 points 3 weeks ago

I like to secretly imagine it stands for SIG SAUER. Bang = process ded

[-] SuspciousCarrot78@lemmy.world 22 points 4 weeks ago* (last edited 4 weeks ago)

I'm doing exactly this atm. I'm running a homelab on a $200 USD Lenovo P330 Tiny with a Tesla P4 GPU, via Proxmox, CasaOS and various containers. I'm about 80% finished with what I want it to do.

Uses 40W at the wall (peak around 100W). IOW about the cost of a light bulb. Here's what I run -

LXC 1: Media stack

Radarr, Sonarr, SABnzbd, Jellyfin. Bye bye Netflix, D+, etc.

LXC 2: Gaming stack

Emulation and PC gaming I like. Lots of fun indie titles, older games (GameCube, Wii, PS2). Stream from the homelab to any TV in the house via Sunshine / Moonlight. Bye bye GeForce Now.

LXC 3: AI stack

  • Llama.cpp + llama-swap (AI backends)

  • Qdrant server (document server)

  • Open WebUI (frontend)

A bespoke MoA system I designed (which I affectionately call my Mixture of Assholes, not agents), using a Python router and some clever tricks to make a self-hosted AI that doesn't scrape my shit and is fully auditable and non-hallucinatory...which would otherwise be impossible with typical cloud "black box" approaches. I don't want a black box; I want a glass box.

Bye bye ChatGPT.

LXC 4: Telecom stack

Vocechat (self-hosted family chat, replacement for WhatsApp / Messenger),

Lemmy node (TBC).

Bye bye WhatsApp and Reddit

LXC 5: Security stack

WireGuard (own VPN). NPM (reverse proxy). Fail2Ban. Pi-hole (blocks ads).

LXC 6: Document stack

Immich (Google Photos replacement), Joplin (Google Keep), Snapdrop (AirDrop), Filedrop (Dropbox), SearXNG (search engine).

Once I have everything tuned perfectly, I'm going to share everything on GitHub / Codeberg. I think the LLM stack alone is interesting enough to merit attention. Everyone makes big claims, but I've got the data and method to prove it. I welcome others poking it.

Ultimately, people need to know how to do this, and I'm doing my best to document what I did so that someone could replicate and improve it. Make it easier for the next person. That's the only way forward - together. Faster alone, further together and all that.

PS: It's funny how far spite will take someone. I got into media servers after YouTube premium, Netflix etc jacked their prices up and baked in ads.

I got into lowendgaming when some PCMR midwit said "you can't play that on your p.o.s. rig". Wrong - I can and I did. It just needed know-how, not "throw money at the problem till it goes away".

I got into self hosting LLM when ChatGPT kept being...ChatGPT. Wasting my time and money with its confident, smooth lies. No, unacceptable.

The final straw was when Reddit locked my account and shadow-banned me for using different IP addresses while travelling / staying at different Airbnbs during a holiday, "for my safety".

I had all the pieces there...but that was the final "fine...I'll do it myself" Thanos moment.
