

Probably one part normalisation, one part AI supporters throwing tantrums when people don’t treat them like the specialiest little geniuses they believe they are. These people have incredibly fragile egos, after all.


Checked back on the smoldering dumpster fire that is Framework today.
Linux Community Ambassadors Tommi and Fraxinas have jumped ship, sneering at the company’s fash turn on the way out.


they’ll just heat up a metal heat sink per request and then eject that into the sun
I know you’re joking, but I ended up quickly skimming Wikipedia to determine the viability of this (assuming the metal heatsinks were copper, since copper’s great for handling heat). Far as I can tell:
- The sun isn’t hot enough or big enough to fuse anything heavier than hydrogen, so the copper’s gonna be doing jack shit when it gets dumped into the core
- Fusing elements heavier than iron loses you energy rather than gaining it, and copper’s a heavier element than iron (atomic number 29, compared to iron’s 26), so the copper undergoing fusion is a bad thing
- The conditions necessary for fusing copper into anything else only happen during a supernova (i.e. the star is literally exploding)
So, this idea’s fucked from the outset. Does make me wonder if dumping enough metal into a large enough star (e.g. a Dyson sphere collapsing into a supermassive star) could kick off a supernova, but that’s a question for another day.


The question of how to cool shit in space is something that BioWare asked themselves when writing the Mass Effect series, and they came up with some pretty detailed answers that they put in the game’s Codex (“Starships: Heat Management” in the Secondary section, if you’re looking for it).
That was for a series of sci-fi RPGs which haven’t had a new installment since 2017, and yet nobody’s bothering to even ask these questions when discussing technological proposals which could very well cost billions of dollars.


It also integrates Stake into your IDE, so you can ruin yourself financially whilst ruining the company’s codebase with AI garbage


“you can set the sycophancy engines so they aren’t sycophancy engines”
I’ll take “Shit that’s Impossible” for 500, Alex


I wonder: when the market finally realises that AI is not actually smart and is not bringing any profits, and the bubble subsequently bursts, will it change this perception, and in what direction? I would wager that crashing the US economy will give a big incentive to change it, but will it be enough?
Once the bubble bursts, I expect artificial intelligence as a concept will suffer a swift death, with the many harms and failures of this bubble (hallucinations, plagiarism, the slop-nami, etcetera) coming to be viewed as the ultimate proof that computers are incapable of humanlike intelligence (let alone Superintelligence™). There will likely be a contingent of true believers even after the bubble’s burst, but the vast majority of people will respond to the question of “Can machines think?” with a resounding “no”.
AI’s usefulness to fascists (for propaganda, accountability sinks, misinformation, etcetera) and the actions of CEOs and AI supporters involved in the bubble (defending open theft, mocking their victims, cultural vandalism, denigrating human work, etcetera) will also pound a good few nails into AI’s coffin, by giving the public plenty of reason to treat any use of AI as a major red flag.


Checked back in on the ongoing Framework dumpster fire - Project Bluefin’s quietly cut ties, and the DHH connection is the reason why.


This entire news story sounds like the plotline for a rejected Captain Planet episode. What the fuck.


A judge has given George RR Martin the green light to sue OpenAI for copyright infringement.
We are now one step closer to the courts declaring open season on the slop-bots. Unsurprisingly, there’s jubilation on Bluesky.


Decided to check the Grokipedia “article” on the Muskrat out of morbid curiosity.
I haven’t seen anything this fawning since that one YouTube video which called him, and I quote its title directly, “The guy who is saving the world”.


I have a nasty feeling there’s a lot of ordinary people who are desperate to throw their money away on OpenAI stock. It’s the AI company! The flagship of the AI bubble! AI’s here to stay, you know! OpenAI? Sure bet!
Remember when a bunch of people poured their life savings into GameStop and started a financial doomsday cult once they lost everything? That will happen again if OpenAI goes public. (I recommend checking out This Is Financial Advice if you want a deep-dive into the GameStop apes, it is a trip)
One really bad consequence this deal just opened the gates to is to make it much easier for corporations to gut charities. A proper charity can run very like a business, but it gets a lot of free rides — and it can grow into quite the juicy plum. The California and Delaware decisions on OpenAI are precedents for large investors to come in and drain a charity if they say the right forms of words. I predict that will become a problem.
…why do I get the feeling companies are gonna start immediately gutting charities once the bubble pops


The follow-up’s worth mentioning too:
It’s interesting they’re citing specifically DHH and Ladybird as examples to follow, considering:
https://drewdevault.com/2025/09/24/2025-09-24-Cloudflare-and-fascists.html


Performing the SPARTAN Program’s original aim, sir.


Baldur Bjarnason’s (indirectly) given his thoughts on the piece, treating its existence (and the subsequent fallout) as a cautionary tale on why journalistic practices exist and how conflicts of interest can come back to haunt you.
(In particular, Baldur notes that Zitron could’ve nipped this problem in the bud by firing his AI-related clients after he became the premier AI critic.)


OpenAI’s data stealing scheme disguised as a browser can be prompt injected. In other news, water is wet.
EDIT: How did I not notice I was referring to OpenAI as ChatGPT (anyways, fixed it now)


Watched Once Upon A Time in Space recently - pretty damn good documentary series.


Trump Administration Providing Weapons Grade Plutonium to Sam Altman
The “Weapons Grade” part is almost certainly editorializing (hopefully), but this whole shit sounds like another Chernobyl waiting to happen


so is the moral decline a side effect, or technocapitalism working as designed.
AI is an accountability sink by design; it’s technocapitalism working as designed
If even a single case pops up, I’d be surprised - AFAIK, cybercriminals are exclusively using AI as a social engineering tool (e.g. voice cloning scams, AI-extruded phishing emails, etcetera). Humans are the weakest part of any cybersec system, after all.
Given AI’s track record on security, that sounds like an easy way to become an enticing target.