Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 23 June 2024 - eviltoast

Need to make a primal scream without gathering footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid!

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut'n'paste it into its own post — there's no quota for posting and the bar really isn't that high.

The post-Xitter web has spawned soo many "esoteric" right wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality-challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn't be surgeons because they didn't believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up, and if I can't escape them, I would love to sneer at them.

  • skillissuer@discuss.tchncs.de · 24 points · 6 months ago

    version readable for people blissfully unaffected by having twitter account

    "Over the past year, the talk of the town has shifted from $10 billion compute clusters to $100 billion clusters to trillion-dollar clusters. Every six months another zero is added to the boardroom plans."

    yeah ez just lemme build dc worth 1% of global gdp and run exclusively wisdom woodchipper on this

    "Behind the scenes, there's a fierce scramble to secure every power contract still available for the rest of the decade, every voltage transformer that can possibly be procured. American big business is gearing up to pour trillions of dollars into a long-unseen mobilization of American industrial might."

    power grid equipment manufacture has always had long lead times, and now there's a country in eastern europe that has something like 9GW of generating capacity knocked out, you big dumb bitch, maybe that has some relation to all the packaged substations disappearing

    They are going to summon a god. And we can't do anything to stop it. Because if we do, the power will slip into the hands of the CCP.

    i see that besides 50s aesthetics they like mccarthyism

    "As the race to AGI intensifies, the national security state will get involved. The USG will wake from its slumber, and by 27/28 we'll get some form of government AGI project. No startup can handle superintelligence. Somewhere in a SCIF, the endgame will be on."

    how cute, they think that their startup gets nationalized before it dies from terminal hype starvation

    "I make the following claim: it is strikingly plausible that by 2027, models will be able to do the work of an AI researcher/engineer. That doesn't require believing in sci-fi; it just requires believing in straight lines on a graph."

    "We don't need to automate everything—just AI research"

    "Once we get AGI, we'll turn the crank one more time—or two or three more times—and AI systems will become superhuman—vastly superhuman. They will become qualitatively smarter than you or I, much smarter, perhaps similar to how you or I are qualitatively smarter than an elementary schooler."

    just needs tiny increase of six orders of magnitude, pinky swear, and it'll all work out

    it weakly reminds me of how Edward Teller got the idea for a primitive thermonuclear weapon, then some of his subordinates ran the numbers and decided that it would never work. his solution? Just Make It Bigger, it has to start working at some point (it was deemed unfeasible and tossed into the trashcan of history where it belongs; nobody needs gigaton-range nukes, even if his scheme had worked). he was very salty that somebody else (Stanisław Ulam) figured it out in a practical way

    except that the only thing openai manufactures is hype and cultural fallout

    "We'd be able to run millions of copies (and soon at 10x+ human speed) of the automated AI researchers." "…given inference fleets in 2027, we should be able to generate an entire internet's worth of tokens, every single day."

    what's "model collapse"

    "What does it feel like to stand here?"

    beyond parody

    • zogwarg@awful.systems · 19 points · 6 months ago

      "Once we get AGI, we'll turn the crank one more time—or two or three more times—and AI systems will become superhuman—vastly superhuman. They will become qualitatively smarter than you or I, much smarter, perhaps similar to how you or I are qualitatively smarter than an elementary schooler."

      Also this doesn't give enough credit to gradeschoolers. I certainly don't think I am much smarter (if at all) than when I was a kid. Don't these people remember being children? Do they think intelligence is limited to speaking fancy, and/or having the tools to solve specific problems? I'm not sure if I'm the weird one, but to me growing up is not about becoming smarter; it's about gaining perspective. Perspective is vital, but actual intelligence/personhood is a prerequisite for it.

      • Mii@awful.systems · 18 points · 6 months ago

        Do they think intelligence is limited to speaking fancy, and/or having the tools to solve specific problems?

        Yes. They literally think that. I mean, why else would they assume a spicy text extruder with a built-in thesaurus is so smart?

    • V0ldek@awful.systems · 16 points · 6 months ago

      To engage with the content:

      That doesn't require believing in sci-fi; it just requires believing in straight lines on a graph.

      I see this is becoming their version of "to the moon", and it's even dumber.

      To engage with the form:

      wisdom woodchipper

      Amazing, 10/10 no notes.

      • skillissuer@discuss.tchncs.de · 11 points · 6 months ago

        I see this is becoming their version of "to the moon", and it's even dumber.

        it only makes sense after crypto scammers, familiar and unfamiliar, pivoted at sound-barrier-breaking speed to the new shiny thing, starting with big boss sam altman

      • skillissuer@discuss.tchncs.de · 8 points · 6 months ago

        wisdom woodchipper

        i think i first used that around the time a sneer came out about some lazy bitches who tried and failed to use chatgpt output as meaningful filler in a peer-reviewed article. of course it worked, and not only at MDPI, because i doubt anyone seriously cares about the prestige of International Journal of SEO-bait Hypecentrics, impact factor 0.62, least of all its reviewers

    • Soyweiser@awful.systems · 13 points · 6 months ago

      They are going to summon a god. And we can't do anything to stop it. Because if we do, the power will slip into the hands of the CCP.

      Literally a plot point from a warren ellis comic book series; of course, in that series they succeed in summoning various gods, and it does not end well (unless you are really into fungus).

    • skillissuer@discuss.tchncs.de · 11 points · 6 months ago

      source of that image is also bad: hxxps://waitbutwhy[.]com/2015/01/artificial-intelligence-revolution-1.html i think i've seen it listed on lessonline? can't remember

      not only do they seem like true believers, they have been for a decade at this point

      In 2013, Vincent C. Müller and Nick Bostrom conducted a survey that asked hundreds of AI experts at a series of conferences the following question: "For the purposes of this question, assume that human scientific activity continues without major negative disruption. By what year would you see a (10% / 50% / 90%) probability for such HLMI to exist?" It asked them to name an optimistic year (one in which they believe there's a 10% chance we'll have AGI), a realistic guess (a year they believe there's a 50% chance of AGI—i.e. after that year they think it's more likely than not that we'll have AGI), and a safe guess (the earliest year by which they can say with 90% certainty we'll have AGI). Gathered together as one data set, here were the results:

      Median optimistic year (10% likelihood): 2022

      Median realistic year (50% likelihood): 2040

      Median pessimistic year (90% likelihood): 2075

      just like fusion, it's gonna happen in the next decade guys, trust me

      • 200fifty@awful.systems · 11 points · 6 months ago

        I believe waitbutwhy came up before on old sneerclub, though in that case we were making fun of them for bad political philosophy rather than bad ai takes

        • skillissuer@discuss.tchncs.de · 11 points · 6 months ago

          there's a lot of bad everything; it looks like a failed attempt at rat-scented xkcd. and yeah, they were invited to lessonline but didn't show up

    • o7___o7@awful.systems · 8 points · 6 months ago

      "Over the past year, the talk of the town has shifted from $10 billion compute clusters to $100 billion clusters to trillion-dollar clusters. Every six months another zero is added to the boardroom plans."

      They are going to summon a god. And we can't do anything to stop it.

      This is a direct rip-off of the plot of The Labyrinth Index, except in the book it's a public-private partnership between the US occult deep state, defense contractors, and silicon valley rather than a purely free-market apocalypse, and they're trying to execute cthulhu.exe rather than implement the Acausal Robot God.