Stubsack: weekly thread for sneers not worth an entire post, week ending 9 March 2025 - eviltoast

Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

  • YourNetworkIsHaunted@awful.systems
    13 hours ago

    I mean, it’s obviously true that games have their own internal structures and languages that aren’t always obvious without knowledge or context, and the FireRed comparison is a neat case where you can see that language improving as designers get both more tools (here meaning colors and pixels) and more experience in using them.

    But even in the LW thread they mention that when humans run into that kind of problem they don’t just act randomly for 6 hours: they come up with some systematic approach to solving it, walk away from the game to ask for help, or try something else entirely. Humans also have the metacognition to easily understand “that rug at the bottom marks the exit” once it’s explained, which I’m pretty sure the LLM doesn’t have the ability to process.

    It’s not even like a particularly dumb 6-year-old. Even if the 6-year-old is prone to similar levels of overmatching and pattern-recognition errors, they have an actual conscious brain to help solve those problems. The whole thing shows once again that pattern recognition and reproduction can get you impressively far in terms of imitating thought, but there’s a world of difference between that imitation and the real deal.