Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you'll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut'n'paste it into its own post; there's no quota for posting and the bar really isn't that high.
The post-Xitter web has spawned soo many "esoteric" right-wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality-challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn't be surgeons because they didn't believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up, and if I can't escape them, I would love to sneer at them.
(Last Stubsack for 2025 - may 2026 bring better tidings. Credit and/or blame to David Gerard for starting this.)


The whole culture of writing "system prompts" seems like an utter cargo cult to me. Like if the ST: Voyager episode "Tuvix" were instead about Lt. Barclay and Picard accidentally getting combined in the transporter, and the resulting sadboy Barcard spent the rest of his existence neurotically shouting his intricately detailed demands at the holodeck in an authoritative British tone.
If inference is all about taking derivatives in a vector space, surely there should be some marginally more deterministic method for constraining those vectors that could be readily proceduralized, instead of apparent subject-matter experts being reduced to wheedling with an imaginary friend. But I have been repeatedly assured by sane, sober experts that it simply is not so.
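(To be fair, at least one procedural knob does exist: you can hard-mask token sequences at the sampler instead of asking nicely. A minimal sketch using the `bad_words_ids` option in HuggingFace transformers; the model and the banned strings are placeholder examples, and subword tokenization makes this leakier than it looks.)

```python
# Minimal sketch: a deterministic, procedural constraint on generation.
# Assumes HuggingFace transformers; gpt2 and the banned strings are examples.
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Token sequences we refuse to emit, enforced at the sampler rather than
# begged for in the prompt. Caveat: "delete" and " delete" tokenize
# differently, so real deployments enumerate variants.
banned = tok(["delete", " delete"], add_special_tokens=False).input_ids

inputs = tok("The command I would run is", return_tensors="pt")
out = model.generate(
    **inputs,
    max_new_tokens=20,
    bad_words_ids=banned,  # these sequences get -inf logits: a hard mask
)
print(tok.decode(out[0], skip_special_tokens=True))
```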
When I first learned that you could program a chatbot merely by giving it instructions in English sentences, as if it were a human being, I admit I was impressed. I'm a linguist; natural language processing is really hard. There was a certain crossing of levels in the idea that telling it something at the chatbot level, e.g. "and you will never delete files outside this directory", would have this "system prompt" actually shape the behaviour of the chatbot. I don't have much interest in programming anymore, but I wondered how this crossing of levels was implemented.
The answer, of course, is that it's not. Programming a chatbot by talking to it doesn't actually work.
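What the implementation actually does is exactly as unmagical as you'd fear: the "system prompt" is just more text, flattened into the same token stream as everything else and separated from the user's words only by special markers. A minimal sketch with HuggingFace transformers (the model name is only an example; any chat-tuned model with a template behaves the same way):

```python
# Sketch: there is no second level. The "system prompt" is plain text
# concatenated into the same token stream as the user's message.
# Assumes HuggingFace transformers; the model name is just an example.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("HuggingFaceH4/zephyr-7b-beta")

messages = [
    {"role": "system",
     "content": "You will never delete files outside this directory."},
    {"role": "user", "content": "Please clean up /tmp for me."},
]

# The chat template flattens both messages into one string; the "rule"
# is just more context for next-token prediction, not a constraint.
print(tok.apply_chat_template(messages, tokenize=False))
```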
I don't have any good lay literature, but get ready for "steering vectors" this year. It seems like two or three different research groups (depending on whether I count as a research group) independently discovered them over the past two years, and they are very effective for guardrailing because they can, e.g., make slurs unutterable without compromising reasoning. If you're willing to read whitepapers, try Dunefsky & Cohan, 2024, which builds that example into a complete workflow, or Konen et al., 2024, which considers steering as an instance of style transfer.
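The core recipe is almost embarrassingly simple: average a layer's activations over prompts that exhibit a trait, subtract the average over prompts that don't, and add the resulting direction back into the residual stream at inference time. Here is a rough sketch of that contrastive activation-addition idea in PyTorch + transformers; it is not the exact code from either paper, and the model, layer index, scale, and contrast prompts are all placeholder choices:

```python
# Rough sketch of steering via contrastive activation addition.
# Not either cited paper's exact method; gpt2, layer 6, the prompts,
# and the scale of 4.0 are all arbitrary placeholder choices.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()
layer = model.transformer.h[6]  # a middle block of the residual stream

def mean_activation(texts):
    # Average the block's output over tokens, then over examples.
    acts = []
    def grab(_, __, out):
        acts.append(out[0].mean(dim=1))  # block output is a tuple
    handle = layer.register_forward_hook(grab)
    with torch.no_grad():
        for t in texts:
            model(**tok(t, return_tensors="pt"))
    handle.remove()
    return torch.cat(acts).mean(dim=0)

# The steering direction is a difference of means over a contrast set.
steer = (mean_activation(["You are kind.", "Be gentle and polite."])
         - mean_activation(["You are cruel.", "Be harsh and rude."]))

def add_vector(_, __, out):
    # Nudge every forward pass along the direction; no prompt involved.
    return (out[0] + 4.0 * steer,) + out[1:]

handle = layer.register_forward_hook(add_vector)
out = model.generate(**tok("My honest opinion of you:", return_tensors="pt"),
                     max_new_tokens=30)
print(tok.decode(out[0], skip_special_tokens=True))
handle.remove()
```

Ablating the projection onto such a direction, rather than adding it, is roughly the flavour used for making a concept unutterable.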
I do wonder, in the engineering-disaster-podcast sense, exactly what went wrong at OpenAI, because they aren't part of this line of research. HuggingFace is up to date on the state of the art; they have a GH repo and a video tutorial on how to steer LLaMA. Meanwhile, if you'll let me be Bayesian for a moment, my current estimate is that OpenAI will not add steering vectors to their products this year; they're already doing something like it internally, but the customer-facing version will not be ready until 2027. They just aren't keeping up with research!