

I see what you did there :)
Claude! Look how they massacred my boy!


I’d love to put a custom OS on mine, even if it tripped the Knox fuse (which disables the Samsung Pay NFC option). The issue I have is that no CFW allows/guarantees compatible VoLTE…and without that, phones don’t really work on Australian networks. You have to have 4G + whitelisted VoLTE.
It’s a mess down here.
Ironically, my Duoquin F21 Pro works perfectly. How they got whitelisted I have no idea.


Cheers for that!


Yeah. I had ChatGPT (more than once) take the code given, cut it in half, scramble it and then claim “see? I did it! Code works now”.
When you point out what it did, by pasting its own code back in, it will say “oh, why did you do that? There’s a mistake in your code at XYZ”. No…there’s a mistake in your code, buddy.
When you paste in what you want it to add, it “fixes” XYZ … and … surprise surprise … it’s either your OG code again or more breakage.
The only one I’ve seen that doesn’t do this (or does it a lot less) is Claude.
I think Lumo for the most part is really just Mistral, Nemotron and Openhands in a trench coat. ICBW.
I think Lumo’s value proposition is around data retention and privacy, not SOTA LLM tech.


Ah; as I recall, it’s because they polled users and there was an overwhelming “yes please”, based on Proton’s privacy stance.
Given Proton is hosted in the EU, they’re likely quite serious about GDPR and zero data retention.
Lumo is interesting. Architecturally, I mean, as an LLM enjoyer. I played around with it a bit, and stole a few ideas from them when I jury-rigged my system. Having said that, you could get a ton more with $10 on OpenRouter. Hell, the free models on there are better than Lumo, and you can choose to only use privacy-respecting providers.


Thank you for saying that and for noticing it! Seeing you were kind enough to say that, I’d like to say a few things about how/why I made this stupid thing. It might be of interest to people. Or not LOL.
To begin with, when I say I’m not a coder, I really mean it. It’s not false modesty. I taught myself this much over the course of a year, plus the reactivation of some very old skills (dormant for 30 years). When I decided to do this, it wasn’t from any school of thought or design principle. I don’t know how CS professionals build things. The last time I looked at an IDE was Turbo Pascal. (Yes, I’m that many years old. I think it probably shows, what with the >> ?? !! ## all over the place. I stopped IT-ing when Pascal, the Amiga and BBSes were still the hot new things.)
What I do know is - what was the problem I was trying to solve?
IF the following are true;
AND
THEN
STOP.
I’m fucked. This problem is unsolvable.
Assuming LLMs are inherently hallucinatory within bounds (AFAIK, the current iterations all are), if there’s even a 1% chance that it will fuck me over (it has), then for my own sanity, I have to assume that such an outcome is a mathematical certainty. I cannot operate in this environment.
PROBLEM: How do I interact with a system that is dangerously mimetic and dangerously opaque? What levers can I pull? Or do I just need to walk away?
Everything else flowed from those ideas. I actually came up with a design document (list of invariants). It’s about 1200 words or so, and unashamedly inspired by Asimov :)
MoA / Llama-swap System
System Invariants
0. What an invariant is (binding)
An invariant is a rule that:
If a feature conflicts with an invariant, the feature is wrong. Do not add.
1. Global system invariant rules:
1.1 Determinism over cleverness
Given the same inputs and state, the system must behave predictably.
No component may:
1.2 Explicit beats implicit
Any influence on an answer must be inspectable and user-controllable.
This includes:
If something affects the output, the user must be able to:
Assume system is going to lie. Make its lies loud and obvious.
On and on it drones LOL. I spent a good 4-5 months just revising a tighter and tighter series of constraints, so that 1) it would be less likely to break, and 2) if it did break, it would do so in a loud, obvious way.
What you see on the repo is the best I could do, with what I had.
I hope it’s something and I didn’t GIGO myself into stupid. But no promises :)


You’re welcome. Hope it’s of some use to you.


Agree-ish
Hallucination is inherent to unconstrained generative models: if you ask them to fill gaps, they will. I don’t know how to “solve” that at the model level.
What you can do is make “I don’t know” an enforced output, via constraints outside the model.
My claim isn’t “LLMs won’t hallucinate.” It’s “the system won’t silently propagate hallucinations.” Grounding + refusal + provenance live outside the LLM, so the failure mode becomes “no supported answer” instead of “confident, slick lies.”
So yeah: generation will always be fuzzy. Workflow-level determinism doesn’t have to be.
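To make “I don’t know as an enforced output” concrete, here’s a minimal sketch of the shape of it. Names and structures are illustrative, not my actual router code:

```python
# Hypothetical refusal gate: the model never gets the final say.
# If retrieval produced no grounded evidence, plain code returns a refusal
# string instead of whatever the model would have generated.

REFUSAL = "No supported answer in the attached sources."

def answer_with_refusal(query: str, evidence: list[dict], generate) -> dict:
    """evidence: list of {"text": ..., "source": ..., "sha256": ...} chunks."""
    if not evidence:
        # Refusal is decided outside the model, not by the model's mood.
        return {"answer": REFUSAL, "sources": []}

    context = "\n\n".join(chunk["text"] for chunk in evidence)
    draft = generate(f"Answer ONLY from the context below.\n\n{context}\n\nQ: {query}")

    # Provenance travels with the answer so a human can audit it later.
    return {
        "answer": draft,
        "sources": [(c["source"], c["sha256"]) for c in evidence],
    }
```

The point isn’t the ten lines of code; it’s that the refusal path and the provenance live in boring deterministic code, not in the prompt.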
I tried yelling, shouting, and even percussive maintenance but the stochastic parrot still insisted “gottle of geer” was the correct response.


Cheers!
Re: OpenAI API format: 3.6 - not great, not terrible :)
In practice I only had to implement a thin subset: POST /v1/chat/completions + GET /v1/models (most UIs just need those). The payload is basically {model, messages, temperature, stream…} and you return a choices[] with an assistant message. The annoying bits are the edge cases: streaming/SSE if you want it, matching the error shapes UIs expect, and being consistent about model IDs so clients don’t scream “model not found”. Which is actually a bug I still need to squash some more for OWUI 0.7.2. It likes to have its little conniptions.
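For the curious, the thin subset is roughly this shape. A minimal sketch assuming FastAPI; the real router isn’t literally this, but the endpoints and payload are:

```python
# Minimal OpenAI-compatible facade: just /v1/models and /v1/chat/completions.
# Enough for most UIs (OWUI etc.) to treat this as "an OpenAI server".
import time
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
MODEL_ID = "llama-conductor"  # stay consistent or clients scream "model not found"

class ChatRequest(BaseModel):
    model: str
    messages: list[dict]
    temperature: float = 0.7
    stream: bool = False  # SSE streaming omitted from this sketch

def my_router(messages: list[dict]) -> str:
    # placeholder: the real pipeline (routing, grounding, refusal) goes here
    return "hello from the router"

@app.get("/v1/models")
def list_models():
    return {"object": "list", "data": [{"id": MODEL_ID, "object": "model"}]}

@app.post("/v1/chat/completions")
def chat(req: ChatRequest):
    reply = my_router(req.messages)
    return {
        "id": "chatcmpl-local",
        "object": "chat.completion",
        "created": int(time.time()),
        "model": MODEL_ID,
        "choices": [{
            "index": 0,
            "message": {"role": "assistant", "content": reply},
            "finish_reason": "stop",
        }],
    }
```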
But TL;DR: more plumbing than rocket science. The real pain was sitting down with pen and paper and drawing what went where and what wasn’t allowed to do what. Because I knew I’d eventually fuck something up (I did, many times), I needed a thing that told me “no, that’s not what this is designed to do. Do not pass go. Do not collect $200”.
*shrug* I tried.


Thanks. It’s not perfect but I hope it’s a step in a useful direction


Replying to specifics
“SUMM -> human reviews That would be fixed, but will work only for small KBs, as otherwise the summary would be exhaustive.”
Correct: filesystem SUMM + human review is intentionally for small/curated KBs, not “review 3,000 entities.” The point of SUMM is curation, not bulk ingestion at scale. If the KB is so large that summaries become exhaustive, that dataset is in the wrong layer.
“Case in point: assume a Person model with 3-7 facts per Person. Assume small 3000 size set of Persons. How would the SUMM of work?”
Poorly. It shouldn’t work via filesystem SUMM. A “Person table” is structured data; SUMM is for documents. For 3,000 people × (3–7 facts), you’d put that in a structured store (SQLite/CSV/JSONL/whatever) and query it via a non-LLM tool (exact lookup/filter) or via Vault retrieval if you insist on LLM synthesis on top.
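To be concrete about what I mean by a non-LLM tool, something in this spirit (table and column names invented for the example):

```python
# Exact, deterministic lookup: no embeddings, no model in the loop.
# The LLM (if involved at all) only formats what this returns.
import sqlite3

def find_person(db_path: str, name: str) -> list[dict]:
    con = sqlite3.connect(db_path)
    con.row_factory = sqlite3.Row
    rows = con.execute(
        "SELECT name, fact_key, fact_value FROM person_facts WHERE name = ?",
        (name,),
    ).fetchall()
    con.close()
    return [dict(r) for r in rows]

# e.g. find_person("people.db", "Ada Lovelace")
# -> [{"name": "Ada Lovelace", "fact_key": "born", "fact_value": "1815"}, ...]
```

3,000 people × 7 facts is a trivial query for SQLite and an awful job for a summarizer.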
“Do you expect a human to verify that SUMM?”
No - not for that use case. Human verification is realistic when you’re curating dozens/hundreds of docs, not thousands of structured records. For 3,000 persons, verification is done by data validation rules (schema, constraints, unit tests, diff checks), not reading summaries.
“How are you going to converse with your system to get the data from that KB Person set?”
Not by attaching a folder and “asking the model nicely.” You’d do one of these -
So: conversation is fine as UX, but the retrieval step should be tool-based (exact) for that dataset.
But actually, you give me a good idea here. It wouldn’t be the work of ages to build a >>look or >>find function into this thing. Maybe I will.
My mental model for this was always “1 person, 1 box, personal scale” but maybe I need to think bigger. Then again, scope creep is a cruel bitch.
“Because to me that sounds like case C, only works for small KBs.”
For filesystem SUMM + human review: yes. That’s the design. It’s a personal, “curate your sources” workflow, not an enterprise entity store.
This was never designed to be a multi-tenant lookup system. I don’t know how to build that and still keep it 1) small, 2) potato-friendly, and 3) able to account for ALL the moving-part nightmares that brings.
What I built is STRICTLY for personal use, not enterprise use.
“Fair. Except that you are still left with the original problem of you don’t know WHEN the information is incorrect if you missed it at SUMM time.”
Sort of. Summation via LLM was always going to be a lossy proposition. What this system changes is the failure mode: wrong-but-traceable instead of wrong-and-invisible.
In other words: it doesn’t guarantee correctness; it guarantees traceability and non-silent drift. You still need to “trust but verify”.
TL;DR:
You don’t query big, structured datasets (like 3,000 “Person” records) via SUMM at all. You use exact tools/lookup first (DB/JSON/CSV), then let the LLM format or explain the result. That can probably be added reasonably quickly, because I tried to build something that future me wouldn’t hate past me for. We’ll see if he/I succeeded.
SUMM is for curated documents, not tables. I can try adding a >>find >>grep or similar tool (the system is modular so I should be able to accommodate a few things like that, but I don’t want to end up with 1500 “micro tools” and hating my life)
And yeah, you can still miss errors at SUMM time - the system doesn’t guarantee correctness. That’s on you. Sorry.
What it guarantees is traceability: every answer is tied to a specific source + hash, so when something’s wrong, you can see where it came from and fix it instead of having silent drift. That’s the “glass box, not black box” part of the build.
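Roughly what “tied to a specific source + hash” means in practice; a sketch with made-up field names, not the real record format:

```python
# Every SUMM carries the SHA-256 of the exact file version it was built from.
# If an answer looks wrong, you check the summary against that frozen input,
# not against "whatever the model remembers".
import hashlib
import pathlib

def make_summ_record(doc_path: str, summary_text: str) -> dict:
    data = pathlib.Path(doc_path).read_bytes()
    return {
        "source": doc_path,
        "sha256": hashlib.sha256(data).hexdigest(),
        "summary": summary_text,   # derived artifact, NOT ground truth
        "authoritative": False,    # the original document stays the source of truth
    }

def source_unchanged(record: dict) -> bool:
    """Re-hash the file; False means the summary no longer matches its source version."""
    data = pathlib.Path(record["source"]).read_bytes()
    return hashlib.sha256(data).hexdigest() == record["sha256"]
```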
Sorry - really. This is the best I could figure out for caging the stochastic parrot. I built this while I was physically incapacitated and confined to bed rest, shooting the shit with Gippity all day. Built it for myself and then thought “hmm, this might help someone else too. I can’t be the only one that’s noticed this problem”.
If you or anyone else has a better idea, I’m willing to consider.


Thank you. I appreciate you saying so!


Please enjoy. Make sure you use >>FR mode at least once. You probably won’t like the seed quotes, but maybe, just maybe, you might, and I’ll be able to hear the “ha” from here.


Spite-based inference?
You dirty pirate hooker.
I don’t believe you.


Thank you! I appreciate you.
PS: Where’s the guy this should be targeted at?


Well, to butcher Sinatra: if it can make it on Lemmy and HN, it can make it anywhere :)


I’m not claiming I “fixed” bullshitting. I said I was TIRED of bullshit.
So, the claim I’m making is: I made bullshit visible and bounded.
The problem I’m solving isn’t “LLMs sometimes get things wrong.” That’s unsolvable AFAIK. What I’m solving for is “LLMs get things wrong in ways that are opaque and untraceable”.
That’s solvable. That’s what hashes get you. Attribution, clear fail states and auditability. YOU still have to check sources if you care about correctness.
The difference is - YOU are no longer checking a moving target or a black box. You’re checking a frozen, reproducible input.
That’s… not how any of this works…
Please don’t teach me to suck lemons. I have very strict parameters for fail states. When I say three strikes and you’re out, I do mean three strikes and you’re out. Quants ain’t quants, and models ain’t models. I am very particular in what I run, how I run it, and what I tolerate.


“Oh boy. So hallucination will occur here, and all further retrievals will be deterministically poisoned?”
Huh? That is the literal opposite of what I said. Like, diametrically opposite.
Let me try this a different way.
Hallucination in SUMM doesn’t “poison” the KB, because SUMMs are not authoritative facts, they’re derived artifacts with provenance. They’re explicitly marked as model output tied to a specific source hash. Two key mechanics that stop the cascade you’re describing:
The source of truth is still the original document, not the summary. The summary is just a compressed view of it. That’s why it carries a SHA of the original file. If a SUMM looks wrong, you can:
a) trace it back to the exact document version
b) regenerate it
c) discard it
d) read the original doc yourself and manually curate it.
Nothing is “silently accepted” as ground truth.
The dangerous step would be: model output -> auto-ingest into long-term knowledge.
That’s explicitly not how this works.
The Flow is: Attach KB -> SUMM -> human reviews -> Ok, move to Vault -> Mentats runs against that
Don’t like a SUMM? Don’t push it into the vault. There’s a gate between “model said a thing” and “system treats this as curated knowledge.” That’s you - the human. Don’t GI and it won’t GO.
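In code terms, the gate is conceptually nothing fancier than this (illustrative, not the actual implementation):

```python
# Nothing enters the Vault unless a human explicitly promotes it.
# The dangerous path (model output -> auto-ingest) simply doesn't exist.

def promote_to_vault(summ_record: dict, vault: list[dict], approved_by_human: bool) -> bool:
    if not approved_by_human:
        # Stays a reviewable draft; never treated as curated knowledge.
        return False
    vault.append(summ_record)
    return True
```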
Determinism works for you here. The hash doesn’t freeze the hallucination; it freezes the input snapshot. That makes bad summaries reproducible, inspectable and pinned to a specific source version, which is the opposite of silent drift.
If SUMM is wrong and you miss it, the system will be consistently wrong in a traceable way, not creatively wrong in a new way every time.
That’s a much easier class of bug to detect and correct. Again: the proposition is not “the model will never hallucinate.” It’s “it can’t silently propagate hallucinations without a human explicitly allowing it to, and when it does, you can trace it back to the source version.”
And that is ultimately what keeps the pipeline from becoming “poisoned”.


Parts of this are RAG, sure
RAG parts:
So yes, that layer is RAG with extra steps.
What’s not RAG -
KB mode (filesystem SUMM path)
This isn’t vector search. It’s deterministic, file-backed grounding. You attach folders as needed. The system summarizes and hashes docs. The model can only answer from those summaries in that mode. There’s no semantic retrieval step. It can style and jazz around the answer a little, but the answer is the answer is the answer.
If the fact isn’t in the attached KB, the router forces a refusal. Put up or shut up.
Vodka (facts memory)
That’s not retrieval at all, in the LLM sense. It’s verbatim key-value recall.
Again, no embeddings, no similarity search, no model interpretation.
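Conceptually it’s just this, and nothing more (a toy sketch, not the actual Vodka code; the file name is made up):

```python
# Verbatim key-value recall: what you stored is exactly what you get back.
# No embeddings, no similarity search, no model paraphrasing your dentist appointment.
import json
import pathlib

VODKA_PATH = pathlib.Path("vodka.json")  # illustrative path

def vodka_set(key: str, value: str) -> None:
    store = json.loads(VODKA_PATH.read_text()) if VODKA_PATH.exists() else {}
    store[key] = value
    VODKA_PATH.write_text(json.dumps(store, indent=2))

def vodka_get(key: str) -> str | None:
    store = json.loads(VODKA_PATH.read_text()) if VODKA_PATH.exists() else {}
    return store.get(key)  # exact hit or nothing; no "close enough" answers
```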
“Facts that aren’t RAG”
In my setup, they land in one of two buckets.
Short-term / user facts → Vodka. That’s for things like numbers, appointments, lists, one-off notes etc. Deterministic recall, no synthesis.
Curated knowledge → KB / Vault. Things you want grounded, auditable, and reusable.
In response to the implicit “why not just RAG then”
Classic RAG failure mode is: retrieval is fuzzy → model fills gaps → user can’t tell which part came from where.
The extra “steps” are there to separate memory from knowledge, separate retrieval from synthesis and make refusal a legal output, not a model choice.
So yeah; some of it is RAG. RAG is good. The point is this system is designed so not everything of value is forced through a semantic search + generate loop. I don’t trust LLMs. I am actively hostile to them. This is me telling my LLM to STFU and prove it, or GTFO. I know that’s maybe a weird way to operate (adversarial, assume the worst, engineer around the issue), but that’s how ASD brains work.
I haven’t tried wiring it up to Claude, that might be fun.
Claude has done alright by me :) Swears a lot, helps me fix code (honestly, I have no idea where he gets that from… :P). Expensive tho.
Now ChatGPT… well… Gippity being Gippity is the reason llama-conductor exists in the first place.
Anyway, I just added some OCR stuff into the router. So now, you can drop in a screenshot and get it to mull over that, or extract text directly from images etc.
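The gist of the OCR side-car, as a simplified sketch; this uses pytesseract and Pillow for illustration, the actual wiring in the router differs:

```python
# Image in, plain text out; the extracted text then goes through the same
# grounding rules as any other attached document.
from PIL import Image
import pytesseract  # requires a local Tesseract install

def ocr_image(path: str) -> str:
    text = pytesseract.image_to_string(Image.open(path))
    return text.strip()
```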
I have a few other little side-cars I’m thinking of adding over the next few months, based on what folks here have mentioned:
!!LIST (list all memories stored in Vodka)
!!FLUSH (flush the rolling chat summary)
>>RAW (keep all the router mechanics but remove presentation/polish prompts and just raw dog it)
>>JSON (schema + validity verifier)
>>CALC (math, unit conversion, percentages, timestamps, sizes etc)
>>FIND (pulls IPs, emails, URLs, hashes, IDs, etc from documents and returns exact structured output)
I’m open to other suggestions / ideas.
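For >>FIND, the idea is plain regex over the document text, exact output, zero LLM. A rough sketch with deliberately simplified patterns:

```python
# >>FIND sketch: deterministic extraction, exact structured output, no model involved.
import re

PATTERNS = {
    "email":  r"[\w.+-]+@[\w-]+\.[\w.-]+",
    "ipv4":   r"\b(?:\d{1,3}\.){3}\d{1,3}\b",
    "url":    r"https?://\S+",
    "sha256": r"\b[a-fA-F0-9]{64}\b",
}

def find_entities(text: str) -> dict[str, list[str]]:
    return {name: re.findall(rx, text) for name, rx in PATTERNS.items()}

# e.g. find_entities(open("report.txt").read())
# -> {"email": [...], "ipv4": [...], "url": [...], "sha256": [...]}
```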
PS: It’s astonishing to me (and I built it!) just how FAST .py commands run. Basically instantaneous. So, I’m all for adding a few more “useful” cheat-codes like this.