• 1 Post
  • 105 Comments
Joined 5 months ago
Cake day: August 27th, 2025

  • Thank you for saying that and for noticing it! Seeing you were kind enough to say that, I’d like to say a few things about how/why I made this stupid thing. It might be of interest to people. Or not LOL.

    To begin with, when I say I’m not a coder, I really mean it. It’s not false modesty. I taught myself this much over the course of a year, plus the reactivation of some very old skills (dormant for about 30 years). When I decided to do this, it wasn’t from any school of thought or design principle. I don’t know how CS professionals build things. The last time I looked at an IDE was Turbo Pascal. (Yes, I’m that many years old. I think it probably shows, what with the >> ?? !! ## all over the place. I stopped IT-ing when Pascal, Amiga and BBS were still the hot new things.)

    What I do know is - what was the problem I was trying to solve?

    IF the following are true;

    1. I have ASD. If you tell me a thing, I assume you’re telling me a thing. I don’t assume you’re telling me one thing but meaning something else.
    2. An LLM could “lie” to me, and I would believe it, because I’m not a subject matter expert on the thing (usually). Also see point 1.
    3. I want to believe it, because why would a tool say X but mean Y? See point 1.
    4. An LLM could lie to me in a way that is undetectable, because I have no idea what it’s reasoning over or how it’s reasoning over it. It’s literally a black box. I ask a Question—>MAGIC WIRES---->Answer.

    AND

    1. “The first principle is that you must not fool yourself and you are the easiest person to fool”

    THEN

    STOP.

    I’m fucked. This problem is unsolvable.

    Assuming LLMs are inherently hallucinatory within bounds (AFAIK, the current iterations all are), if there’s even a 1% chance that it will fuck me over (it has), then for my own sanity, I have to assume that such an outcome is a mathematical certainty. I cannot operate in this environment.

    PROBLEM: How do I interact with a system that is dangerously mimetic and dangerously opaque? What levers can I pull? Or do I just need to walk away?

    1. Unchangeable. Eat shit, BobbyLLM. Ok.
    2. I can do something about that…or at least, I can verify what’s being said, if the process isn’t too mentally taxing. Hmm. How?
    3. Fine, I want to believe it…but, do I have to believe it blindly? How about a defensive position - “Trust but verify”? Hmm. How?
    4. Why does it HAVE to be opaque? If I build it, why do I have to hide the workings? I want to know how it works, breaks, and what it can do.

    Everything else flowed from those ideas. I actually came up with a design document (list of invariants). It’s about 1200 words or so, and unashamedly inspired by Asimov :)

    MoA / Llama-swap System

    System Invariants


    0. What an invariant is (binding)

    An invariant is a rule that:

    • Must always hold, regardless of refactor, feature, or model choice
    • Must not be violated temporarily, even internally. The system must not fuck me over silently.
    • Overrides convenience, performance, and cleverness.

    If a feature conflicts with an invariant, the feature is wrong. Do not add.


    1. Global system invariant rules:

    1.1 Determinism over cleverness

    • Given the same inputs and state, the system must behave predictably.

    • No component may:

      • infer hidden intent,
      • rely on emergent LLM behavior,
      • or silently adapt across turns without explicit user action.

    1.2 Explicit beats implicit

    • Any influence on an answer must be inspectable and user-controllable.

    • This includes:

      • memory,
      • retrieval,
      • reasoning mode,
      • style transformation.

    If something affects the output, the user must be able to:

    • enable it,
    • disable it,
    • and see that it ran.

    Assume the system is going to lie. Make its lies loud and obvious.


    On and on it drones LOL. I spent a good 4-5 months just revising a tighter and tighter series of constraints, so that 1) it would be less likely to break, and 2) if it did break, it would do so in a loud, obvious way.
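
    To give a flavour of invariant 1.2, the “enable it, disable it, see that it ran” rule ends up looking roughly like this in practice (a minimal sketch with made-up names, not the actual code):

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Stage:
        """One output-affecting component: memory, retrieval, reasoning mode, style."""
        name: str
        enabled: bool = True   # the user can switch it off

    @dataclass
    class TurnReport:
        """Visible record of which stages actually ran for one answer."""
        ran: list = field(default_factory=list)
        skipped: list = field(default_factory=list)

    def run_turn(stages, report):
        for stage in stages:
            if stage.enabled:
                # ... the stage's actual work would happen here ...
                report.ran.append(stage.name)      # user can see that it ran
            else:
                report.skipped.append(stage.name)  # user can see that it did not

    # Example: memory disabled for this turn, everything else on.
    stages = [Stage("memory", enabled=False), Stage("retrieval"), Stage("style")]
    report = TurnReport()
    run_turn(stages, report)
    print("ran:", report.ran, "| skipped:", report.skipped)
    ```

    The point isn’t the code; it’s that every output-affecting stage leaves a visible trace.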

    What you see on the repo is the best I could do, with what I had.

    I hope it’s something and I didn’t GIGO myself into stupid. But no promises :)



  • Agree-ish

    Hallucination is inherent to unconstrained generative models: if you ask them to fill gaps, they will. I don’t know how to “solve” that at the model level.

    What you can do is make “I don’t know” an enforced output, via constraints outside the model.

    My claim isn’t “LLMs won’t hallucinate.” It’s “the system won’t silently propagate hallucinations.” Grounding + refusal + provenance live outside the LLM, so the failure mode becomes “no supported answer” instead of “confident, slick lies.”

    So yeah: generation will always be fuzzy. Workflow-level determinism doesn’t have to be.
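
    For the curious, “enforced outside the model” means roughly this shape (a sketch only; retrieve and generate stand in for whatever backend you actually run):

    ```python
    def answer_with_refusal(question, retrieve, generate, min_sources=1):
        """The workflow, not the model, decides whether an answer is allowed.
        If retrieval finds nothing to ground on, refusal IS the output."""
        sources = retrieve(question)            # deterministic step, outside the LLM
        if len(sources) < min_sources:
            return {"answer": "No supported answer in the attached sources.",
                    "sources": []}
        draft = generate(question, sources)     # the LLM only ever sees retrieved text
        return {"answer": draft,
                "sources": [s["id"] for s in sources]}  # assumes each source carries an id
    ```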

    I tried yelling, shouting, and even percussive maintenance but the stochastic parrot still insisted “gottle of geer” was the correct response.


  • Cheers!

    Re: OpenAI API format: 3.6 - not great, not terrible :)

    In practice I only had to implement a thin subset: POST /v1/chat/completions + GET /v1/models (most UIs just need those). The payload is basically {model, messages, temperature, stream…} and you return a choices[] with an assistant message. The annoying bits are the edge cases: streaming/SSE if you want it, matching the error shapes UIs expect, and being consistent about model IDs so clients don’t scream “model not found”. Which is actually a bug I still need to squash some more for OWUI 0.7.2. It likes to have its little conniptions.
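
    If anyone wants the flavour of it, the non-streaming happy path is roughly this (a bare-bones sketch using Flask, which may not be what you’d pick; streaming and proper error shapes omitted):

    ```python
    from flask import Flask, request, jsonify

    app = Flask(__name__)
    MODEL_ID = "my-local-model"   # keep this consistent or UIs complain

    @app.get("/v1/models")
    def list_models():
        # Most UIs only need id + object here.
        return jsonify({"object": "list",
                        "data": [{"id": MODEL_ID, "object": "model", "owned_by": "me"}]})

    @app.post("/v1/chat/completions")
    def chat_completions():
        body = request.get_json()
        messages = body.get("messages", [])
        # ... hand messages to the backend here; hard-coded reply for the sketch ...
        reply = "placeholder answer"
        return jsonify({
            "id": "chatcmpl-local-0",
            "object": "chat.completion",
            "model": body.get("model", MODEL_ID),
            "choices": [{
                "index": 0,
                "message": {"role": "assistant", "content": reply},
                "finish_reason": "stop",
            }],
        })

    if __name__ == "__main__":
        app.run(port=8000)
    ```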

    But TL;DR: more plumbing than rocket science. The real pain was sitting down with pen and paper and drawing what went where and what wasn’t allowed to do what. Because I knew I’d eventually fuck something up (I did, many times), I needed a thing that told me “no, that’s not what this is designed to do. Do not pass go. Do not collect $200”.

    shrug I tried.



  • Replying to specifics

    “SUMM -> human reviews That would be fixed, but will work only for small KBs, as otherwise the summary would be exhaustive.”

    Correct: filesystem SUMM + human review is intentionally for small/curated KBs, not “review 3,000 entities.” The point of SUMM is curation, not bulk ingestion at scale. If the KB is so large that summaries become exhaustive, that dataset is in the wrong layer.

    “Case in point: assume a Person model with 3-7 facts per Person. Assume small 3000 size set of Persons. How would the SUMM of work?”

    Poorly. It shouldn’t work via filesystem SUMM. A “Person table” is structured data; SUMM is for documents. For 3,000 people × (3–7 facts), you’d put that in a structured store (SQLite/CSV/JSONL/whatever) and query it via a non-LLM tool (exact lookup/filter) or via Vault retrieval if you insist on LLM synthesis on top.

    “Do you expect a human to verify that SUMM?”

    No - not for that use case. Human verification is realistic when you’re curating dozens/hundreds of docs, not thousands of structured records. For 3,000 persons, verification is done by data validation rules (schema, constraints, unit tests, diff checks), not reading summaries.

    “How are you going to converse with your system to get the data from that KB Person set?”

    Not by attaching a folder and “asking the model nicely.” You’d do one of these -

    • Exact tool lookup: person(“Alice”) -> facts, or search by ID/name, return rows deterministically.
    • Hybrid: tool lookup returns the relevant rows, then the LLM formats/summarizes them.
    • Vault retrieval: embed/chunk rows and retrieve top-k, but that’s still weaker than exact lookup for structured “Person facts.”

    So: conversation is fine as UX, but the retrieval step should be tool-based (exact) for that dataset.
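
    To be concrete about what I mean by “exact tool lookup” above, something in this ballpark (a sketch with made-up file and field names):

    ```python
    import json

    def load_people(path="people.jsonl"):
        """One Person per line: {"name": ..., "facts": {...}} - a structured store, not SUMM."""
        people = {}
        with open(path, encoding="utf-8") as fh:
            for line in fh:
                rec = json.loads(line)
                people[rec["name"].lower()] = rec
        return people

    def person(people, name):
        """Deterministic lookup: exact match or nothing. No model in the loop."""
        rec = people.get(name.lower())
        return rec["facts"] if rec else None

    # Usage (the LLM, if involved at all, only formats what this returns):
    # people = load_people()
    # print(person(people, "Alice"))
    ```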

    But actually, you give me a good idea here. It wouldn’t be the work of ages to build a >>look or >>find function into this thing. Maybe I will.

    My mental model for this was always “1 person, 1 box, personal scale” but maybe I need to think bigger. Then again, scope creep is a cruel bitch.

    “Because to me that sounds like case C, only works for small KBs.”

    For filesystem SUMM + human review: yes. That’s the design. It’s a personal, “curate your sources” workflow, not an enterprise entity store.

    This was never designed to be a multi-tenant lookup system. I don’t know how to build that and still keep it 1) small, 2) potato-friendly, and 3) able to handle ALL the moving-part nightmares that brings.

    What I built is STRICTLY for personal use, not enterprise use.

    “Fair. Except that you are still left with the original problem of you don’t know WHEN the information is incorrect if you missed it at SUMM time.”

    Sort of. Summarization via LLM was always going to be a lossy proposition. What this system changes is the failure mode:

    • Without this: errors can get injected and later you can’t tell where they came from.
    • With this: if a SUMM is wrong, it is pinned to a specific source file hash + summary hash, and you can fix it by re-summarizing or replacing the source.

    In other words: it doesn’t guarantee correctness; it guarantees traceability and non-silent drift. You still need to “trust but verify”.
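
    Concretely, “pinned” means the record carries both hashes, something like this (illustrative sketch, not the actual schema):

    ```python
    import hashlib
    import time

    def sha256_bytes(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()

    def make_summ_record(source_path: str, summary_text: str) -> dict:
        """A summary never floats free: it carries the hash of the exact bytes it came from."""
        with open(source_path, "rb") as fh:
            source_bytes = fh.read()
        return {
            "source_path": source_path,
            "source_sha256": sha256_bytes(source_bytes),
            "summary_sha256": sha256_bytes(summary_text.encode("utf-8")),
            "summary": summary_text,
            "created": time.time(),
        }
    ```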

    TL;DR:

    You don’t query big, structured datasets (like 3,000 “Person” records) via SUMM at all. You use exact tools/lookup first (DB/JSON/CSV), then let the LLM format or explain the result. That can probably be added reasonably quickly, because I tried to build something that future me wouldn’t hate past me for. We’ll see if he/I succeeded.

    SUMM is for curated documents, not tables. I can try adding a >>find, >>grep, or similar tool (the system is modular, so I should be able to accommodate a few things like that, but I don’t want to end up with 1500 “micro tools” and hating my life).

    And yeah, you can still miss errors at SUMM time - the system doesn’t guarantee correctness. That’s on you. Sorry.

    What it guarantees is traceability: every answer is tied to a specific source + hash, so when something’s wrong, you can see where it came from and fix it instead of having silent drift. That’s the “glass box, not black box” part of the build.

    Sorry - really. This is the best I could figure out for caging the stochastic parrot. I built this while I was physically incapacitated and confined to bed rest, shooting the shit with Gippity all day. Built it for myself and then thought “hmm, this might help someone else too. I can’t be the only one that’s noticed this problem”.

    If you or anyone else has a better idea, I’m willing to consider it.







  • I’m not claiming I “fixed” bullshitting. I said I was TIRED of bullshit.

    So, the claim I’m making is: I made bullshit visible and bounded.

    The problem I’m solving isn’t “LLMs sometimes get things wrong.” That’s unsolvable AFAIK. What I’m solving for is “LLMs get things wrong in ways that are opaque and untraceable”.

    That’s solvable. That’s what hashes get you: attribution, clear fail states, and auditability. YOU still have to check sources if you care about correctness.

    The difference is - YOU are no longer checking a moving target or a black box. You’re checking a frozen, reproducible input.

    That’s… not how any of this works…

    Please don’t teach me to suck lemons. I have very strict parameters for fail states. When I say three strikes and you’re out, I do mean three strikes and you’re out. Quants ain’t quants, and models ain’t models. I am very particular in what I run, how I run it, and what I tolerate.


  • Oh boy. So hallucination will occur here, and all further retrievals will be deterministically poisoned?

    Huh? That is the literal opposite of what I said. Like, diametrically opposite.

    Let me try this a different way.

    Hallucination in SUMM doesn’t “poison” the KB, because SUMMs are not authoritative facts, they’re derived artifacts with provenance. They’re explicitly marked as model output tied to a specific source hash. Two key mechanics that stop the cascade you’re describing:

    1. SUMM is not a “source of truth”

    The source of truth is still the original document, not the summary. The summary is just a compressed view of it. That’s why it carries a SHA of the original file. If a SUMM looks wrong, you can:

    a) trace it back to the exact document version, b) regenerate it, c) discard it, or d) read the original doc yourself and manually curate it.

    Nothing is “silently accepted” as ground truth.

    2. Promotion is manual, not automatic

    The dangerous step would be: model output -> auto-ingest into long-term knowledge.

    That’s explicitly not how this works.

    The Flow is: Attach KB -> SUMM -> human reviews -> Ok, move to Vault -> Mentats runs against that

    Don’t like a SUMM? Don’t push it into the vault. There’s a gate between “model said a thing” and “system treats this as curated knowledge.” That’s you - the human. Don’t GI and it won’t GO.
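
    The gate itself is nothing clever; the point is that it exists and only a human can pull it (sketch, made-up names):

    ```python
    def promote_to_vault(summ_record: dict, vault: list, approved_by_human: bool) -> bool:
        """Nothing enters the Vault without an explicit human yes.
        Model output can never promote itself."""
        if not approved_by_human:
            return False              # stays in the review pile
        vault.append(summ_record)     # only now does it count as curated knowledge
        return True
    ```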

    Determinism works for you here. The hash doesn’t freeze the hallucination; it freezes the input snapshot. That makes bad summaries:

    • reproducible
    • inspectable
    • fixable

    Which is the opposite of silent drift.

    If SUMM is wrong and you miss it, the system will be consistently wrong in a traceable way, not creatively wrong in a new way every time.

    That’s a much easier class of bug to detect and correct. Again: the proposition is not “the model will never hallucinate.” It’s “it can’t silently propagate hallucinations without a human explicitly allowing it to, and when it does, you can trace it back to the source version”.

    And that is ultimately what keeps the pipeline from becoming “poisoned”.


  • Parts of this are RAG, sure

    RAG parts:

    • Vault / Mentats is classic retrieval + generation.
    • Vector store = Qdrant
    • Embedding and reranker

    So yes, that layer is RAG with extra steps.

    What’s not RAG -

    KB mode (filesystem SUMM path)

    This isn’t vector search. It’s deterministic, file-backed grounding. You attach folders as needed. The system summarizes and hashes docs. The model can only answer from those summaries in that mode. There’s no semantic retrieval step. It can style and jazz around the answer a little, but the answer is the answer is the answer.

    If the fact isn’t in the attached KB, the router forces a refusal. Put up or shut up.

    Vodka (facts memory)

    That’s not retrieval at all, in the LLM sense. It’s verbatim key-value recall.

    • JSON on disk
    • Exact store (!!)
    • Exact recall (??)

    Again, no embeddings, no similarity search, no model interpretation.
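
    Mechanically it’s about this simple (a sketch; the real thing has more guard rails, and the key = value syntax here is just for illustration):

    ```python
    import json
    import os

    VODKA_PATH = "vodka.json"   # plain JSON on disk, human-readable

    def _load():
        if os.path.exists(VODKA_PATH):
            with open(VODKA_PATH, encoding="utf-8") as fh:
                return json.load(fh)
        return {}

    def handle(line: str):
        """!! key = value -> exact store.   ?? key -> exact recall.   No model involved."""
        facts = _load()
        if line.startswith("!!"):
            key, _, value = line[2:].partition("=")
            facts[key.strip()] = value.strip()
            with open(VODKA_PATH, "w", encoding="utf-8") as fh:
                json.dump(facts, fh, indent=2)
            return "stored"
        if line.startswith("??"):
            return facts.get(line[2:].strip(), "no such fact")
        return None

    # handle("!! dentist = Tuesday 3pm")
    # handle("?? dentist")   -> "Tuesday 3pm"
    ```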

    “Facts that aren’t RAG”

    In my setup, they land in one of two buckets.

    1. Short-term / user facts → Vodka. That’s for things like numbers, appointments, lists, one-off notes, etc. Deterministic recall, no synthesis.

    2. Curated knowledge → KB / Vault. Things you want grounded, auditable, and reusable.

    In response to the implicit “why not just RAG then”

    Classic RAG failure mode is: retrieval is fuzzy → model fills gaps → user can’t tell which part came from where.

    The extra “steps” are there to separate memory from knowledge, separate retrieval from synthesis, and make refusal a legal output, not a model choice.

    So yeah; some of it is RAG. RAG is good. The point is this system is designed so not everything of value is forced through a semantic search + generate loop. I don’t trust LLMs. I am actively hostile to them. This is me telling my LLM to STFU and prove it, or GTFO. I know that’s maybe a weird way to operate (adversarial, assume the worst, engineer around the issue) but that’s how ASD brains work.





  • Yeah.

    The SHA isn’t there to make the model smarter. It’s there to make the source immutable and auditable.

    Having been burnt by LLMs (far too many times), I now start from a position of “fuck you, prove it”.

    The hash proves which bytes the answer was grounded in, should I ever want to check it. If the model misreads or misinterprets, you can point to the source and say “the mistake is here, not in my memory of what the source was.”

    If it does that more than twice, straight in the bin. I have zero chill any more.

    Secondly, drift detection. If someone edits or swaps a file later, the hash changes. That means yesterday’s answer can’t silently pretend it came from today’s document. I doubt my kids are going to sneak in and change the historical prices of 8 bit computers (well, the big one might…she’s dead keen on being a hacker) but I wanted to be sure no one and no-thing was fucking with me.

    Finally, you (or someone else) can re-run the same question against the same hashed inputs and see if the system behaves the same way.
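
    Mechanically, the drift check is just “rehash and compare” (illustrative sketch):

    ```python
    import hashlib

    def file_sha256(path: str) -> str:
        with open(path, "rb") as fh:
            return hashlib.sha256(fh.read()).hexdigest()

    def check_drift(source_path: str, pinned_sha256: str) -> bool:
        """True if the file on disk is no longer the file the answer was grounded in."""
        return file_sha256(source_path) != pinned_sha256
    ```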

    So: the hashes don’t fix hallucinations (I don’t even think that’s possible, even with magic). The hashes make it possible to audit the answer and spot why hallucinations might have happened.

    PS: You’re right that interpretation errors still exist. That’s why Mentats does the triple-pass and why the system clearly flags “missing / unsupported” instead of filling gaps. The SHA is there to make the pipeline inspectable, instead of “trust me, bro.”

    Guess what? I don’t trust you. Prove it or GTFO.