Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you'll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut'n'paste it into its own post - there's no quota for posting and the bar really isn't that high.
The post-Xitter web has spawned soo many "esoteric" right wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be)
Like, there was one dude a while back who insisted that women couldn't be surgeons because they didn't believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can't escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
Picked up a sneer in the wild (through trawling David Gerard's Bluesky):
You want my take, Kathryn's on the money - future expectations on how people speak will actively shift away from anything that could be mistaken for sounding like an LLM, whether because you want to avoid being falsely accused of posting slop, or because the slop-nami has pushed your writing habits away from slop-like traits.
kinda related but wouldn't it be fun to believe that LLMs were invented by Big Em Dash as a conspiracy
I fucking hate them for ruining the em dash, I liked to use it from time to time
somewhere out there, there's a writer who really likes the em dash, the word "delve," and answering questions with a one-word hyper-chipper affirmative, followed by three sentences of people pleasing. He can't get a job because he keeps being accused of using AI
It's not just blank, it's blank
Forgot to save who said it, but on bsky somebody said they or their friends had come up with a slur for people who use genAI for everything: sloppers.
More people should have read Zima Blue.
Zima Blue
Didn't know it was something readable! I just know it as an episode of Love Death + Robots. It was a standout episode in an otherwise pretty boring first two seasons.
Forgot to save who said it, but on bsky somebody said they or their friends had come up with a slur for people who use genAI for everything: sloppers.
Finally, a slur my British ass can sling at people guilt-free
https://www.astralcodexten.com/p/your-review-the-astral-codex-ten lol but good for lore:
The most toxic the comments section has ever got (beyond the very early days) was on the post Gupta on Enlightenment. I feel like the comments section on this post should be part of the ACX main canon because it is so cosmically hilarious. It concerns a man named Vinay Gupta (founder of a blockchain-based dating website) and his claims to have reached enlightenment. Some people in the comments are sceptical that Vinay Gupta is indeed an enlightened being, citing that enlightened people don't typically found blockchain-based dating websites. A new forum poster with the handle "Vinay Gupta", claiming to be Vinay Gupta and writing in a very similar style to the actual Vinay Gupta, turns up and starts arguing with everyone in an extremely toxic way (in the objective sense that his comments score very highly on the toxic-bert scoring system), which provokes more merriment that a self-described enlightened being would deploy such classic internet tough-guy approaches as "I don't think much of a four-on-one face off against untrained opponents" (link) and "this board is filled with self-satisfied assholes who feel free to hold forth on whatever subject crosses their minds, with the absolute certainty that they're the smartest people in the room"
this board is filled with self-satisfied assholes who feel free to hold forth on whatever subject crosses their minds, with the absolute certainty that they're the smartest people in the room
If your system flagged that as toxic, that makes me wonder about the system. Also check your bias against people saying this because it def comes off as true. (And hey, if this truth hurts, remember that he didn't claim y'all are not the smartest people, y'all have 130+ IQs remember).
Damn you, Scott! Stop making me agree with people who created blockchain-based dating apps!
I didn't see this article here yet, but I just saw it elsewhere and it's pretty good: Potemkin Understanding in Large Language Models
Copilot will be given a little avatar with a "room" and will "age". In other words: we have now reached the Microsoft Bob stage of the AI bubble.
Fuck you Microsoft, I'm gonna have to pretend to be autoplag's dad now, at least have the courtesy not to make it look like a cum blob.
This sounds like the plot of an oglaf comic
At least Microsoft Bob gave us comic sans.
This is literally just a Tamagotchi but worse
EDIT: This was supposed to be an offhanded comment, but reading further makes me think Mustafa Suleyman has literally never heard of a Tamagotchi
Those who do not study history are doomed to recreate neopets.
Neopets at least brought joy to a generation of nascent furries. Copilot is fixing to have the exact opposite impact on internet infrastructure.
Of course, it is now funded by growth at all cost VC people, they do not understand fun or joy, all they want to see is n = n+1
I try to talk to ChatGPT for 2 straight hours but go crazy and have to stop | Daniel Hentschel
I'm here to help in whatever way I can. Just let me know what you'd like to do next.
What I'd like to do next is fucking hang myself. Oh shit I shouldn't have said that. No, I'm joking. That was a joke. I was joking about that. Joking.
I'm sorry he feels that way. I'm here for him if he wants to talk about anything, just let me know.
The guy who thinks it's important to communicate clearly (https://awful.systems/comment/7904956) wants to flip the number order around
https://www.lesswrong.com/posts/KXr8ys8PYppKXgGWj/english-writes-numbers-backwards
I'll consider that when the Yanks abandon middle-endian date formatting.
Edit: it's now tagged as "Humor" on LW. Cowards. Own your cranks.
Okay what the fuck, this is completely deranged. How can anyone's intuitions about reading be this wrong? Is he secretly illiterate, did he dictate the article?
damn, a clanker pretending to be a human. humans read entire words at once, and this includes numbers; length and first digit already give some indication of magnitude
i like how on lw there's a comment saying exactly the same thing but 5x more verbose
Yes, and largest place value is literally called the most significant digit. It makes perfect sense that it comes first.
Computers use both big endian and little endian and it doesn't seem to matter much. Yet humans should switch their entire number system?
E: this guy can't grasp the concept that left-to-right is arbitrary, which is really ironic given his point. Ok so in arabic it's exactly how this guy wants it, except no, the universally correct reading direction is left-to-right and arabic does it backwards just to be quirky, and humans, just like programs, flip a bit to read it left-to-right, where it's the opposite of how you should be reading it! of course.
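(Purely to make the byte-order aside concrete - a minimal Python sketch, not from the LW post or anyone in this thread: the same integer can be stored most-significant-byte-first or least-significant-byte-first and round-trips fine either way, which is why the convention barely matters to computers.)

```python
value = 0x12345678

big = value.to_bytes(4, byteorder="big")        # bytes: 12 34 56 78
little = value.to_bytes(4, byteorder="little")  # bytes: 78 56 34 12

# Either convention recovers the same number, as long as the reader
# uses the same order the writer did.
assert int.from_bytes(big, "big") == value
assert int.from_bytes(little, "little") == value
print(big.hex(), little.hex())  # 12345678 78563412
```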
Ok so in arabic it's exactly how this guy wants it
reminded me of this
text of image/tweet
@amasad. Apr 22
Silicon Valley will rediscover Islam:
- fasting for clarity and focus
- mindfulness 5 times a day
- no alcohol that ruins the soul & body
- long-term faithful relationships for a fulfilling happy family life
- effective altruism by giving zakat directly to the poor
The argument would be stronger (not strong, but stronger) if he could point to an existing numbering system that is little-endian and somehow show it's better
Starting this fight and not "stop counting at zero you damn computer nerds!" is a choice. DIJKSTRAAAAAAAA shakes fist
(There is more to it in a way, as he is trying to be a Dijkstra, changing an ingrained system in a way that would confuse everybody and cause so many problems down the line. See all the off-by-one errors made by programmers. Damn DIJKSTRAAAAAA!)
i needed this giggle, gods bless our dubbers
Sometimes while browsing a website I catch a glimpse of the cute jackal girl and it makes me smile. Anubis isn't a perfect thing by any means, but it's what the web deserves for its sins.
Even some pretty big name sites seem to use it as-is, down to the mascot. You'd think the software is pretty simple to customize into something more corporate and soulless, but I'm happy to see the animal eared cartoon girl on otherwise quite sterile sites.
Xe has talked a bit about the mascot in question before - by her own testimony, it's there to act as a shopping cart test to see who's willing to support the project. Reportedly, she's planning to exploit it to make some more elaborate checks as well.
Huh, interesting approach. So the idea is that you either use the free version and (preferably) retain the anime girl mascot to promote Anubis itself, or you pay for a commercial license to remove animu in a way that is officially supported.
Basically. It's to explicitly prevent Xe from becoming the load-bearing peg for a massive portion of the Internet, thus ensuring this project doesn't send her health down the shitter.
You want my prediction, I suspect future FOSS projects may decide to adopt mascots of their own, to avoid the "load-bearing maintainer" issue in a similar manner.
Seems a bit early to say whether others are going to do that. This experiment hasn't had much time to prove itself and so far I haven't recognized anyone using a corporate branded BotStopper instance, only the jackal girl version.
Responsibility management through branding is an interesting idea and I wouldn't mind seeing it working, but a prediction like that seems like jumping to conclusions prematurely. Then again, I guess that's kinda what "prediction" means in general.
certainly better than seeing the damned cloudflare Click Here To Human box, although I suspect a number of these deployments still don't sponsor Xe or the project development :/
Should we give up on all altruist causes because the AGI God is nearly here? the answer may surprise you!
tldr; actually you shouldn't give because the AGI God might not be quite omnipotent and thus would still benefit from your help and maybe there will be multiple Gods, some used for Good and some for Evil so your efforts are still needed. Shrimp are getting their eyeballs cut off right now!
Stomatopodcasting
Alex O'Connor platformed Sabine on his philosophy podcast. I'm irritated that he is turning into Lex Fridman simply by being completely uncritical. Well, no, wait, he was critical of Bell's theorem, and even Sabine had to tell him that Bell's work is mathematically proven. This is what a philosophy degree does to your epistemology, I guess.
My main sneer here is just some links. See, Mary's Room is answered by neuroscience; Mary does experience something new when color vision is restored. In particular, check out the testimonials from this 2021 Oregon experiment that restored color vision to some folks born without it. Focusing on physics, I'd like to introduce you all to Richard Behiel, particularly his explanations of electromagnetism and the Anderson-Higgs mechanism; there are deeper explanations for electricity and magnets, my dude. Also, if you haven't yet, go read Alex's Wikipedia article, linked at the top of the sneer.
In the case of O'Connor and people like him, I think it's about much more than his philosophy background. He's a YouTube creator who creates content on a regular schedule and makes a living off it. Once you start doing that, you're exposed to all the horrible incentives of the YouTube engagement algorithm, which inevitably leads you to start seeking out other controversial YouTubers to platform and become friendly with. It's an "I'll scratch your back if you scratch mine" situation dialed up to 11.
The same thing has happened to Sabine herself. She's been captured by the algorithm, which has naturally shifted her audience to the right, and now she's been fully captured by that new audience.
I fully expect Alex O'Connor to remain on this treadmill. <remind me in 12 months>
What getting c*cked by Mikhaila Peterson does to a mfer.
New piece from Brian Merchant, and a new edition of AI Killed My Job just dropped
It's hard to come up with analogies for AI because it's so goddamn stupid. It's like if asbestos was flammable.
it's like leaded gasoline for the internet - it makes people stupid and aggressive, kids are hit the worst by it, fallout will be felt for decades, cleanup might be hard to impossible, and ultimately it's a product of corporate greed. except even leaded gasoline solved some problem
it's also like gambling as in the hook model. it's like cocaine in that it has been marketed to the managerial class as a status symbol of sorts
Or like the radium craze of the early 20th century (even if radium may have a lot more legitimate use cases than current-day LLMs).
One of the products was removal of unwanted hair. You radiated and the hair just fell off! How practical!
To be fair to the radium people, I don't think the correlation between radiation and cancer was established until the aftermath of the bombings of Hiroshima and Nagasaki. Still, one could see hair falling off as a warning sign of sorts.
Someone I know called AI "a non-invasive procedure to lobotomise people" after I mentioned this Pivot to AI, and it's stuck with me ever since
today's least surprising news: substack's coterie of totally-not-nazis fucking love AI https://on.substack.com/p/the-substack-ai-report
So this blog post was framed positively towards LLMs and is too generous in accepting many of the claims around them, but even so, the end conclusions are pretty harsh on practical LLM agents: https://utkarshkanwat.com/writing/betting-against-agents/
Basically, the author has tried extensively, in multiple projects, to make LLM agents work in various useful ways, but in practice:
The dirty secret of every production agent system is that the AI is doing maybe 30% of the work. The other 70% is tool engineering: designing feedback interfaces, managing context efficiently, handling partial failures, and building recovery mechanisms that the AI can actually understand and use.
The author strips down, simplifies, and sanitizes everything going into the LLMs, and then implements both automated checks and human confirmation on everything they put out. At that point it makes you question what value you are even getting out of the LLM. (The real answer, which the author only indirectly acknowledges, is attracting idiotic VC funding and upper management approval.)
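To make that pattern concrete, here's a minimal, hypothetical sketch (not the author's code; the action names and the ALLOWED_ACTIONS whitelist are invented for illustration): the model only proposes an action as strict JSON, and the deterministic validation plus human sign-off wrapped around it is the "70% tool engineering" being described.

```python
import json

ALLOWED_ACTIONS = {"create_ticket", "close_ticket"}  # invented tool surface

def validate(proposal):
    """Return an error string if the proposed action is unusable, else None."""
    if proposal.get("action") not in ALLOWED_ACTIONS:
        return f"unknown action: {proposal.get('action')!r}"
    if not isinstance(proposal.get("args"), dict):
        return "args must be a JSON object"
    return None

def run_agent_step(llm_output):
    # Sanitize the input side: accept nothing but strict JSON from the model.
    try:
        proposal = json.loads(llm_output)
    except json.JSONDecodeError as exc:
        print(f"rejected: model output was not valid JSON ({exc})")
        return
    # Automated check on the output side.
    error = validate(proposal)
    if error is not None:
        print(f"rejected: {error}")
        return
    # Human confirmation before any side effect, as the post describes.
    answer = input(f"execute {proposal['action']} with {proposal['args']}? [y/N] ")
    if answer.strip().lower() != "y":
        print("skipped by reviewer")
        return
    print("would dispatch to the real tool here")

run_agent_step('{"action": "create_ticket", "args": {"title": "example"}}')
```

Even in a toy like this, the model contributes one line of JSON and everything else is scaffolding, which is roughly the 30/70 split the author reports.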
Even as critical as they are, the author doesn't acknowledge a lot of the bigger problems. The API cost is a major expense and design constraint on the LLM agents they have made, but the author doesn't acknowledge that prices are likely to rise dramatically once VC subsidization runs out.