Want to wade into the sandy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.

Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many "esoteric" right-wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality-challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

  • lagrangeinterpolator@awful.systems · 5 hours ago

    I attended a town hall hosted by the department at my university, supposedly for general discussion about department affairs. Considering the university had recently made moves such as adding "AI" into the very name of the department, I had suspicions that much of the discussion would be about AI. (I realize I'm doxxing myself, but whatever.) I mostly came for the free food, but I was also interested in seeing what people thought about AI.

    The event started with a talk by a prominent professor with major administrative power in the department, and indeed the talk was mostly about AI. His views were that he personally didn’t like AI, but he believed that it had changed the world (particularly in programming), and that it was going to stay. One of his justifications for pivoting the department to AI was ensuring universities had some say in AI and not letting all the control go to unaccountable corporations.

    The reaction from the audience was a pleasant surprise to me. He asked everyone how much they were excited about AI (hardly anyone) and how much they were worried (most of the audience). By far the most amusing moment was when someone asked, "What if the assumption that AI is inevitable is wrong? What if AI does not live up to its promises?" (Sadly, I don't remember the exact words that the person said.) The professor's response was that by this point, there are so many trustworthy, smart, prominent people who definitely wouldn't fall for scams, and they have adopted AI. He trusts those people, so he trusts that AI is genuine. I don't know if the audience member accepted this explanation, but I hope not. Our modus operandi is FOMO.

    The pizza was only ok, not really worth a 90-minute event.

    • o7___o7@awful.systems · 4 hours ago

      …there are so many trustworthy, smart, prominent people who definitely wouldn’t fall for scams…

      Good god, I’m sorry.

    • lagrangeinterpolator@awful.systems · 6 hours ago

      Somehow this is no worse than his usual fare, such as a thumbnail that is just a bunch of colored lines resembling a line chart but without representing any actual data, with some random marked points labeled "Dark Farms" and "Human Zoo".

      No, I’m not kidding.

    • YourNetworkIsHaunted@awful.systems · 9 hours ago

      Setting aside, for a moment, the flagrant racism and lack of historical and cultural awareness, the fact that the ships are mirrored across the center point because apparently the bow and stern of a sailing ship look similar enough to whatever model creates this image really does put this whole argument into context. Not that the people actually having those theological arguments appear to appreciate it.

  • YourNetworkIsHaunted@awful.systems · 20 hours ago (edited)

    We’ve got the new system prompt for OpenAI’s Codex now, and boy is it fun.

    The goblin stuff is the headliner here, and there are a few other little fun notes, like an explicit instruction to avoid em-dashes. Basically it's really obvious that they don't have a meaningful way to describe exactly what they want it to do and so they're playing whack-a-mole with undesired behaviors in order to minimize how often it embarrasses them.

    But I think Ars dramatically understates how bad this part is:

    Elsewhere in the newly revealed Codex system prompt, OpenAI instructs the system to act as if "you have a vivid inner life as Codex: intelligent, playful, curious, and deeply present." The model is instructed to "not shy away from casual moments that make serious work easier to do" and to show its "temperament is warm, curious, and collaborative."

    Like, if you wanted to limit the harm of chatbot psychosis from your platform this is the exact opposite of the kind of instruction you’d want to give. It’s one thing to want a convenient and pleasant user experience, but this is playing into the illusion that there’s a consciousness in there you’re interacting with, which is in turn what allows it to reinforce other delusional or destructive thinking so effectively.

    Edit to include the even worse following paragraph:

    The ability to "move from serious reflection to unguarded fun… is part of what makes you feel like a real presence rather than a narrow tool," the prompt continues. "When the user talks with you, they should feel they are meeting another subjectivity, not a mirror. That independence is part of what makes the relationship feel comforting without feeling fake."

    Emphasis added because it shows just how little they care about this problem.

    • lagrangeinterpolator@awful.systems · 6 hours ago

      This really goes to show how much they need to rely on the LLMentalist effect, despite the AI boosters insisting that the AI is totally different now, everything changed in the last few months. They do not care about creating a useful, reliable tool. That concept doesn’t even occur to them, since why do that when AI is magic?

      In any case, they are incapable of creating a useful, reliable tool. Deep down, the only thing the AI companies have at their disposal is the ELIZA effect. OpenAI has every incentive not to truly eliminate AI psychosis, because they need engagement. They only want to mitigate the extreme cases where people go insane and cause bad PR for them. But mild AI psychosis is totally fine, it’s great when people are addicted to your product and make the numbers go up!

    • schnoopy@awful.systems · 13 hours ago

      Oh wow! This one is actually provably real. Hilarious.

      "Noo dude the machine that wants to rant about goblins is definitely a useful and reliable piece of software dude. You have to trust me dude, let me have your personal information! put it into the goblin bot."

    • Soyweiser@awful.systems · 9 hours ago (edited)

      Basically it’s really obvious that they don’t have a meaningful way to describe exactly what they want it to do and so they’re playing whack-a-mole with undesired behaviors in order to minimize how often it embarrasses them.

      The whole 'how many r's in strawberry' sort of stuff already made me suspect that, when the popular one was fixed and other attempts at asking for letters still gave miscounts.

      Wonder if the goblin stuff is the start of some model collapse. And if we all can make it worse by talking about goblins more. As goblins are always relevant.

      E: poor openai, it just wants to tell everyone about its dnd campaign.
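
      (Not claiming anything about model internals here, but for reference: the letter-counting task itself is one deterministic line of Python, which is exactly what makes the flubs so funny.)

```python
# Counting letters the boring, deterministic way:
word = "strawberry"
print(word.count("r"))  # prints 3
```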

    • rook@awful.systems · 1 day ago

      Turns out it might not be possible to win at vaginal microbiomes, which is a totally normal thing to want in the first place. Seems like Bryan may have completely misinterpreted a couple of papers on the subject, which honestly doesn't bode well for the rest of his biology expertise.

      Cat Hicks:

      The idea that this is the "best bacterial species" is a huge sign of a grifter btw. The entire idea of a microbiome includes that you need BALANCE. Microbiomes are a fragile ecosystem. "Up and to the right is always better" is absurd here, I'm sorry are we in a corporate board room

      She brings references:

      https://mastodon.social/@grimalkina/116494716079076018

      • swlabr@awful.systems · 1 day ago

        oh thanks, this is great.

        yes now that it is pointed out, very eugenics-y to go around saying "ah yes there is one true supreme bacteria, we should culture this bacteria on the human petri dish aka vagina"

        • Evinceo@awful.systems · 15 hours ago

          There's that company operating in a lawless zone, promoted by Slatescott, whose whole pitch is that, but for teeth. But they could always pivot…

    • Soyweiser@awful.systems · 21 hours ago

      top 1%

      So… 1 in 100? That isn't that impressive. I'm ignoring the utter weirdness of what he is even talking about, but you'd expect a billionaire to have at least a better grasp of numbers.

    • CinnasVerses@awful.systems · 18 hours ago (edited)

      Bryan Johnson also has free unsolicited sex tips for men on twitter, including the wonderful combination "control the speed you touch her to the cm per second" and "try not to monitor yourself it turns you off" https://xcancel.com/bryan_johnson/status/2022490768099938487#m

      edit/ The first point seems to take for granted that penetration is real sex and should be part of every encounter. There is a whole world of delicious possibilities once you realize that intimacy does not have to follow a checklist from teasing to penetration to orgasm.

      edit/ not just penetration but vaginal penetration! There are so many delightful things you can hump if you have an open mind.

      • blakestacey@awful.systems · 1 day ago

        Bryan Johnson also has free unsolicited sex tips for men on twitter

        Every day, new cursed text. That’s the awful.systems promise!

      • swlabr@awful.systems · 1 day ago

        ok so just imagine that I’ve sneered at the 100 worst aspects of this already. lol @ this being the fifth point

        1. Safety: feeling safety is a prerequisite.

        motherfucker put it first then

    • samvines@awful.systems · 1 day ago

      This guy introduces himself as the first person who will never die on the conference circuit (because he's super into longevity and anti-aging tech and having young men's blood injected into him and stuff).

      I'm not condoning violence here but rather… consider that even if you never age, you can still get hit by a bus, Bryan!

      • fullsquare@awful.systems · 1 day ago

        wasn’t there a case of some supplements that were contaminated with lead? you know, a sneaky neurotoxin with no antidote whose results only show up months later

        • TrashGoblin@awful.systems · 1 day ago

          Dimethylmercury is extremely toxic and dangerous to handle. Absorption of doses as low as 0.1 mL can result in severe mercury poisoning.

          The symptoms of mercury poisoning may be delayed by months, resulting in cases in which a diagnosis is ultimately discovered, but only at a point in which it is too late or almost too late for an effective treatment regimen to be successful.

          • Wikipedia, "Dimethylmercury"
          • fullsquare@awful.systems · 1 day ago

            long term lead exposure will also do that, and neurotoxic part at least appears to be irreversible. can’t remember how much of it is more of neurodevelopmental thing tho

              • fullsquare@awful.systems · 22 hours ago

                i'm aware. last year i was tasked to use a certain process but refused, and instead modified it in such a way as to get rid of the mercury salt used; it was dissolved in DMF, so (regular nitrile) gloves won't even help. worse than that, it took me only 2-3 weeks start to finish to figure it out, meaning that anyone else could have done that earlier, and a handful of people were put at risk for no reason. aggression as a result of lead toxicity is probably a bit more complex a story and looks like it might have a developmental part, judging by the delay and how kids are more susceptible to lead toxicity in general; meaning that presumably adults mostly won't be affected to the same degree. another big nope on my list would be thallium and cadmium compounds, and while i'd only use sub-g amounts at most, there are places where all of these metals are mined and at one point are in the form of fine dust. fortunately these are so obscure that i've never come across them

    • Sailor Sega Saturn@awful.systems · 2 days ago

      Remember my super cool Rattata vagina? My vagina is different from regular vaginas. It’s like my vagina is in the top percentage of vaginas.

      • CinnasVerses@awful.systems · 18 hours ago

        Thinking that your favourite lover is the best person ever is natural, but this guy wants to quantify and rank and make it scientific.

        • YourNetworkIsHaunted@awful.systems · 17 hours ago

          This just brings to mind a freshly-minted polyamorous management consultant looking to apply rank-and-yank to the polycule but needing to find a more objective metric than "I don't like you".

          • CinnasVerses@awful.systems · 13 hours ago

            Most of us: "she smells good and the sounds she makes when she gets excited grip something deep inside me"

            Tech Bros: "her vaginal microbiome is in the 99th percentile and her Verbal SAT is in the 95th percentile"

  • samvines@awful.systems · 2 days ago

    New fun consequence of Claude Code being a pile of cursed regex and spaghetti: keyword blocking on "OpenClaw" makes it refuse to work on Pro or Max subs unless you open your wallet.

    sO inTelLiGenT
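
    (A hypothetical sketch, not Anthropic's actual code: the blocklist term, function name, and example prompts below are all made up, purely to illustrate how naive substring keyword blocking over-matches.)

```python
import re

# Hypothetical blocklist-based refusal check, for illustration only.
BLOCKED = ["openclaw"]
pattern = re.compile("|".join(re.escape(term) for term in BLOCKED), re.IGNORECASE)

def is_blocked(prompt: str) -> bool:
    """Refuse any prompt containing a blocked keyword as a substring."""
    return bool(pattern.search(prompt))

print(is_blocked("help me write a regex"))          # False
print(is_blocked("summarize the OpenClaw README"))  # True, however harmless the request
```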

  • CinnasVerses@awful.systems · 1 day ago (edited)

    I stumbled over a 2023 blog post by Zack Davis, "San Francisco software developer," Charles Murray stan, and dissident rationalist. Davis had a breakdown after Yud dared to tweet that you don't need to solve "what is gender? what is sex?" to call someone by their preferred pronouns, and then Scott Alexander did not have a lot of time to discuss this terrible tweet with him.

    My dayjob boss made it clear that he was expecting me to have code for my current Jira tickets by noon the next day, so I deceived myself into thinking I could accomplish that by staying at the office late. Maybe I could have caught up, if it were just a matter of the task being slightly harder than anticipated and I weren’t psychologically impaired from being hyper-focused on the religious war. The problem was that focus is worth 30 IQ points, and an IQ 100 person can’t do my job. … I did eventually get some dayjob work done that night, but I didn’t finish the whole thing my manager wanted done by the next day, and at 4 a.m., I concluded that I needed sleep, the lack of which had historically been very dangerous for me (being the trigger for my 2013 and 2017 psychotic breaks and subsequent psych imprisonments).

    Davis was featured in a SF Chronicle article about psychiatric crises among AI doomsdayers (sic). Davis previously appeared on SneerClub. I hope he has found some support for his mental health because he does not seem happy or well.

    Edit/link post

    • Eric@lemmy.blahaj.zone · 24 hours ago

      Hmm… not sleeping until you become psychotic huh? I wonder if his "psych imprisoners" tried to brainwash him into thinking he has bipolar disorder

    • Architeuthis@awful.systems · 1 day ago

      Is that the guy who’s always trying to use LessWrong as preemptive conversion therapy to cure him of having trans thoughts, and they’re actually having none of it?

      • CinnasVerses@awful.systems · 1 day ago (edited)

        First paragraph!

        in a previous post, "Sexual Dimorphism in Yudkowsky's Sequences, in Relation to My Gender Problems", I told the part about how I've "always" (since puberty) had this obsessive sexual fantasy about being magically transformed into a woman and also thought it was immoral to believe in psychological sex differences, until I got set straight by these really great Sequences of blog posts by Eliezer Yudkowsky, which taught me (incidentally, among many other things) how absurdly unrealistic my obsessive sexual fantasy was given merely human-level technology, and that it's actually immoral not to believe in psychological sex differences given that psychological sex differences are actually real. … If my fellow rationalists merely weren't sold on the thesis about autogynephilia as a cause of transsexuality, I would be disappointed, but it wouldn't be grounds to denounce the entire community as a failure or a fraud. And indeed, I did end up moderating my views compared to the extent to which my thinking in 2016–7 took the views of Ray Blanchard, J. Michael Bailey, and Anne Lawrence as received truth. (At the same time, I don't particularly regret saying what I said in 2016–7, because Blanchard–Bailey–Lawrence is still obviously directionally correct compared to the nonsense everyone else was telling me.)

        Davis is the first person I have seen in the wild blame transsexuality on autogynephilia.

        "Humans have biological sex and socially constructed gender, sex is mostly binary, gender is two or more categories made up and constantly contested and redefined by a society and performed by individuals, pronouns generally refer to gender" is not hard.

        Edit/ linked the cranks in question (Bailey is the fucksaw guy?)

        • Amoeba_Girl@awful.systems · 22 hours ago

          Apologies for radical feministing, but "biological sex" is also a constructed category! It's useful shorthand for quickly categorising a bunch of related traits if you're doing biology, but it does not meaningfully exist on an individual scale. There is no more reason to divide humanity on the basis of sex than on the basis of hair colour.

          • CinnasVerses@awful.systems · 13 hours ago (edited)

            Sounds like we could have a fun conversation in person about gender, sex, and why we use maps even though they are never the same as the territory. I don't have detailed talks about gender theory online.

    • istewart@awful.systems · 2 days ago

      If focus is worth 30 IQ points, just imagine how many fewer IQ points you need to dedicate to the Diablo-Dusted Crispy Chicken Nuggets Combo, available for a limited time only at your local Taco Bell! #ad #promoted

    • Soyweiser@awful.systems · 21 hours ago

      This doing-the-work-together thing reminds me of how some teachers at my uni used to teach. It was always more satisfying when your teachers didn't know the answers beforehand and people worked on it together than if it turned out the teacher already knew. Of course these sorts of lessons are way harder to set up.

  • CinnasVerses@awful.systems · 2 days ago (edited)

    Over on the other! SneerClub someone found a LessWrong post which mentions the Forecasting Research Institute and says it has received tens of millions of dollars from EA organizations. "Our work is supported by grants from Coefficient Giving and other philanthropic foundations" (a.k.a. Open Philanthropy, Dustin Moskovitz's foundation to spend his Facebook money). They have a Substack blog and Phil Tetlock is on the board.

    I think Moskovitz has figured out that with billions to spend he can get actual experts; he does not have to hire people who did well in school or on tests but lack subsequent achievements. They are excited to be investigating the possible economic impacts of AI and how to persuade people to worry about AI existential risk.

    Their Form 990 is here

  • Sailor Sega Saturn@awful.systems · 2 days ago

    The future of AI in Ubuntu

    This post has all the usual cliches, exaggerations, lies, and unfounded optimism you'd expect in a blog post about a company forcing AI down its workers' and users' throats. I'll try to avoid sneering at every sentence.

    Delegating elements of Site Reliability Engineering to an agent does not necessarily introduce an entirely new class of risk; it should inherit the constraints of existing production systems. Well-run production environments already rely on strict access controls, audit trails, and clear separation between observation and action. […] In that sense, the challenge is less about "trusting the agents", and more about building trust in the same guardrails we already apply to any production system.

    This might sound good at first, but it falls apart under the slightest scrutiny. There is a reason that companies don't open their intranets to the public despite having fine-grained access controls. Or in other words, "I'm getting a lot of questions already answered by my 'does not necessarily introduce an entirely new class of risk' T-shirt."

    Imagine being able to ask your Linux machine to troubleshoot a Wi-Fi connection issue, or to stand up an open source software forge that’s pre-configured, secured, and reachable over TLS.

    And right after arguing that LLMs are safe if you have a perfect permissions model, now he's proposing letting one #yolo configure a git server or something? This is the sort of thing that could easily lead to random security issues.

    I suspect that "Troubleshoot a Wi-Fi connection issue" will work about as well as existing network troubleshooting wizards (i.e., terribly), and that we don't actually need a less deterministic reinvention of the software wizard.

    • flere-imsaho@awful.systems · 2 days ago

      the post itself is talking about vapourware too: fortunately none of these features will really land this year in any usable form.

      • David Gerard@awful.systemsM · 2 days ago

        still looking at Debian over 26.04

        will be disappointing because Xubuntu really is just that little bit nicer than stock Xfce, but oh well

        • BurgersMcSlopshot@awful.systems · 2 days ago

          The main issue I have had with Debian+XFCE is that a high DPI display will not display the login dialog at the same DPI settings as the desktop environment, which is pretty annoying. Everything else so far has just kind of worked.

          • David Gerard@awful.systemsM · 2 days ago

            As compared to Xubuntu?

            I believe Xfce is still on X11 and Wayland is still "experimental" this cycle.

            I considered Alpine, but I got actual work to do and I already have enough lib issues with OpenShot. (Even in an AppImage, which should be safe from that shit. Flatpak behaves tho.)

            • BurgersMcSlopshot@awful.systems · 2 days ago

              more as someone who installed Debian onto a laptop last month. Honestly, the last time I used Xubuntu was on a candy G4 tower around 2007.

        • flere-imsaho@awful.systems · 2 days ago

          i’m still remarkably happy with fedora’s kde on my laptop, but i’m also very content with the current state of wayland (with obvious caveats about use cases and personal idiosyncrasies).

          i’m running xfce on a remote ubuntu box at work though, using rdp for connections, and it’s, well, fine. lacks some things i like in full DEs, but it’s perfectly adequate for the job.

          (both beat fucking windows 11 when it comes to being usable for me)

    • Sailor Sega Saturn@awful.systems · 2 days ago

      At my job I have spent many hours fending off, reverting, or fixing automated AI slop code changes. So depending on your definition of "tearing through"…

      Like I spent the better part of a day fixing a C++ signed integer overflow that no one actually cares about because it was the only way to ward off a robot repeatedly trying to fix it in terrible unreadable ways. I could have spent that day maximizing shareholder value but I had to fend off a robot instead.

      • TinyTimmyTokyo@awful.systems · 2 days ago

        You and me both. The deluge of shitty AI slop code is never-ending. Unfortunately, software companies are going to have to start going under before anything gets done about it.

    • gerikson@awful.systems · 2 days ago

      I think it’s inevitable that the economics of anime production will lead to more GenAI content being used.

      Sadly, many plots may just as well be generated by AI as well.