Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 9 June 2024 - eviltoast

Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid!

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut'n'paste it into its own post - there's no quota for posting and the bar really isn't that high.

The post-Xitter web has spawned soo many "esoteric" right wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality-challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn't be surgeons because they didn't believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up, and if I can't escape them, I would love to sneer at them.

  • ebu@awful.systems · 7 months ago

    i really, really don't get how so many people are making the leaps from "neural nets are effective at text prediction" to "the machine learns like a human does" to "we're going to be intellectually outclassed by Microsoft Clippy in ten years".

    like it's multiple modes of failing to even understand the question happening at once. i'm no philosopher; i have no coherent definition of "intelligence", but it's also pretty obvious that all LLMs are doing is statistical extrapolation on language. i'm just baffled at how many so-called enthusiasts and skeptics alike just… completely fail at the first step of asking "so what exactly is the program doing?"
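    To make the "statistical extrapolation on language" point concrete, here's a deliberately crude sketch: a bigram model that predicts the next word purely from counts of what followed it before. (Real LLMs use transformers over token embeddings, not word-count tables - this is just the simplest possible instance of "predict the next token from statistics of the training text".)

    ```python
    from collections import Counter, defaultdict

    # "Train" on a tiny corpus: count which word follows which.
    corpus = "the cat sat on the mat the cat ate the fish".split()
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def predict(word):
        """Return the statistically most likely next word seen in training."""
        return following[word].most_common(1)[0][0]

    print(predict("the"))  # "cat" - it followed "the" in 2 of 4 occurrences
    ```

    There is no model of cats or mats anywhere in this program, only frequencies - which is the commenter's point, scaled down by many orders of magnitude.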

    • BigMuffin69@awful.systems · 7 months ago

      The y-axis is absolute eye bleach. Also implying that an "AI researcher" has the effective compute of 10^6 smart high schoolers. What the fuck are these chodes smoking?

        • Soyweiser@awful.systems · 7 months ago

          Either sexy voice or the voice used in commercials for women and children. (I noticed a while back that they use the same tone of voice and that tone of voice now lowkey annoys me every time I hear it).

    • froztbyte@awful.systems · 7 months ago

      this article/dynamic comes to mind for me in this, along with a toot I saw the other day but don't currently have the link for. the toot detailed a story of some teacher somewhere speaking about ai hype, making a pencil or something personable with googly eyes and making it "speak", then breaking it in half the moment people were even slightly "engaged" with the idea of a person'd pencil - the point of it was that people are remarkably good at seeing personhood/consciousness/etc in things where it just outright isn't there

      (combined with a bit of en vogue hype wave fuckery, where genpop follows and uses this stuff, but they're not quite the drivers of the itsintelligent.gif crowd)

        • blakestacey@awful.systems (OP) · 7 months ago

          Transcript: a post by Greg Stolze on Bluesky.

          I heard some professor put googly eyes on a pencil and waved it at his class saying "Hi! I'm Tim the pencil! I love helping children with their homework but my favorite is drawing pictures!"

          Then, without warning, he snapped the pencil in half.

          When half his college students gasped, he said "THAT'S where all this AI hype comes from. We're not good at programming consciousness. But we're GREAT at imagining non-conscious things are people."

        • froztbyte@awful.systems · 7 months ago

          yeah, was that. not sure it happened either, but it's a good concise story for the point nonetheless :)

    • Soyweiser@awful.systems · 7 months ago

      Same with when they added some features to the UI of GPT with the GPT-4o chatbot thing. Don't get me wrong, the tech to do real-time audio processing etc. is impressive (but has nothing to do with LLMs; it was a different technique), but it certainly is very much smoke and mirrors.

      I recall when they taught developers to be careful about shipping small UI changes without backend changes, because to non-insiders those feel like a massive change while the backend still needs a lot of work (so the client thinks you are 90% done while only 10% is done) - but now half the tech people get tricked by the same problem.

      • ebu@awful.systems · 7 months ago

        i suppose there is something more "magical" about having the computer respond in realtime, and maybe it's that "magical" feeling that's getting so many people to just kinda shut off their brains when creators/fans start wildly speculating on what it can/will be able to do.

        how that manages to override people's perceptions of their own experiences happening right in front of them still boggles my mind. they'll watch a person point out that it gets basic facts wrong or speaks incoherently, and assume the fault lies with the person for not having the true vision or what have you.

        (and if i were to channel my inner 2010s reddit atheist for just a moment, it feels distinctly like the way people talk about the Christian Rapture, where flaws and issues you're pointing out in the system get spun as personal flaws. you aren't observing basic facts about the system making errors; you are actively in ego-preserving denial about the "inevitability of ai")