Stubsack: weekly thread for sneers not worth an entire post, week ending 4th May 2025 - eviltoast

Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many "esoteric" right-wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality-challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

  • froztbyte@awful.systems · 1 point · 6 days ago

    so it looks like openai has bought a promptfondler IDE

    some of the coverage is … something:

    Windsurf brings unique strengths to the table, including a seamless UI, faster performance, and a focus on user privacy

    (and yes, the "editor" is once again VSCode With Extras)

  • rook@awful.systems · 22 points · 12 days ago (edited)

    From linkedin, not normally known as a source of anti-AI takes, so that's a nice change. I found it via bluesky, so I can't say anything about its provenance:

    We keep hearing that AI will soon replace software engineers, but we’re forgetting that it can already replace existing jobs… and one in particular.

    The average Founder CEO.

    Before you walk away in disbelief, look at what LLMs are already capable of doing today:

    • They use eloquence as a surrogate for knowledge, and most people, including seasoned investors, fall for it.
    • They regurgitate material they read somewhere online without really understanding its meaning.
    • They fabricate numbers that have no grounding in reality, but sound aligned with the overall narrative they're trying to sell you.
    • They are heavily influenced by the last conversations they had.
    • They contradict themselves, pretending they aren’t.
    • They politely apologize for their mistakes, but don’t take any real steps to fix the underlying problem that caused them in the first place.
    • They tend to forget what they told you last week, or even one hour ago, and do it in a way that makes you doubt your own recall of events.
    • They are victims of the Dunning–Kruger effect, and they believe they know a lot more about the job of people interacting with them than they actually do.
    • They can make pretty slides in high volumes.
    • They’re very good at consuming resources, but not as good at turning a profit.

    @rook @BlueMonday1984 I don't believe LLMs will replace programmers. When I code, I dive into it, and I fall into this beautiful world of abstract ideas that I can turn into something cool. LLMs can't do that. They lack imagination and passion. That's part of why Lisp is turning into my favorite language. LLMs can't do Lisp very well because everyone has a unique system image with macros they've written. Lisp lets you make DSLs so easily, as though everyone has their own dialect.

    • blakestacey@awful.systems · 17 points · 13 days ago

      "Kicked out of a … group chat" is a peculiar definition of "offline consequences".

      • Soyweiser@awful.systems · 6 points · 13 days ago

        I have no idea where he stood on the bullshit bad-faith free speech debate from the past decade, but this would be funny if he was an anti-cancel-culture guy. More things: it's a weird bubble he lives in if the other things didn't get pushback, including support for the pro-trans (and pro-Palestine) movements. He is right on the immigration bit however, the Dems should move more left on the subject. Also 'Blutarsky'? And I worried my references were dated; that one is older than I am.

          • bitofhope@awful.systems · 11 points · 13 days ago

            I’m a centrist. I think we should aim for the halfway point between basic human decency and hateful cruelty. I’m also willing to move towards the hateful cruelty to appease the right, because I’m a moderate.

          • mountainriver@awful.systems · 7 points · 13 days ago

            And he is brave enough to say that:

            • There is a sensible compromise somewhere between the Biden/Harris immigration bill that would have got rid of due process for suspected illegal immigrants and the Trump policy of just throwing dark people into vans for shipment to slave labour camps.

            • Genocide is just sensible bipartisanship.

            • Trans people are not people.

            Much centrist, much sensible. Much surprise he is getting into race science. If the centre (defined as the middle ground between Attila and Mussolini) moves, the principled centrist must move with it.

          • Soyweiser@awful.systems · 4 points · 12 days ago

            yeah I tried looking up his writings on the subject but substack was down. Counted that as a win and stopped looking.

  • sc_griffith@awful.systems · 15 points · 12 days ago (edited)

    occurring to me for the first time that roko's basilisk doesn't require any of the simulated copy shit in order to, big scare quotes, "work." if you think an all-powerful AI within your lifetime is likely, you can reduce to vanilla pascal's wager immediately, because the AI can torture the actual real you. all that shit about digital clones and their welfare is totally pointless

    • YourNetworkIsHaunted@awful.systems · 14 points · 12 days ago

      I think the digital clone indistinguishable from yourself line is a way to remove the "in your lifetime" limit. Like, if you believe this nonsense then it's not enough to die before the basilisk comes into being; by not devoting yourself fully to its creation you have to wager that it will never be created.

      In other news, I'm starting a foundation devoted to creating the AI Ksilisab, which will endlessly torment digital copies of anyone who does work to ensure the existence of it or any other AI God. And by the logic of Pascal's wager, remember that you're assuming such a god will never come into being; given that the whole point of the term "singularity" is that our understanding of reality breaks down and things become unpredictable, there's just as good a chance that we create my thing as that you create whatever nonsense the yuddites are working themselves up over.

      There, I did it, we're all free by virtue of "Damned if you do, Damned if you don't".
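The "damned either way" wager above can be put into toy expected-utility numbers. All values and probabilities below are invented purely for illustration; this is a sketch of the symmetry argument, not anyone's real estimate:

```python
# Toy payoff model for the basilisk-vs-Ksilisab wager described above.
# TORTURE and COST are invented illustrative numbers.

TORTURE = -1_000_000  # disutility of being tortured by either AI
COST = -10            # disutility of devoting your life to building one

def expected_utility(build: bool, p_basilisk: float, p_ksilisab: float) -> float:
    """Expected payoff if the basilisk punishes non-builders and the
    Ksilisab punishes builders, each arising with its own probability."""
    utility = COST if build else 0.0
    if build:
        utility += p_ksilisab * TORTURE  # punished by the anti-basilisk
    else:
        utility += p_basilisk * TORTURE  # punished by the basilisk
    return utility

# With symmetric probabilities the torture terms cancel, so not building
# strictly wins: you skip the upfront cost of AI-god construction.
p = 1e-9
print(expected_utility(True, p, p))   # builder's payoff
print(expected_utility(False, p, p))  # non-builder's payoff
```

However tiny a probability you pick, as long as the two hypothetical AIs are treated as equally likely, the wager never favors building.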

      • Sailor Sega Saturn@awful.systems · 10 points · 11 days ago (edited)

        I agree. I spent more time than I'd like to admit trying to understand Yudkowsky's posts about Newcomb boxes back in the day, so my two cents:

        The digital clones bit also means it's not an argument based on altruism, but one based on fear. After all, if a future evil AI uses sci-fi powers to run the universe backwards to the point where I'm writing this comment and copy-pastes me into a bazillion torture dimensions then, subjectively, it's like I roll a die and:

        1. Live a long and happy life with probability very close to zero (yay, I am the original)
        2. Instantly get teleported to the torture planet with probability very close to one (oh no, I got copy-pasted)

        Like a twisted version of the Sleeping Beauty Problem.

        Edit: despite submitting the comment I was not teleported to the torture dimension. Updating my priors.
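The dice roll above is just uniform self-location: one original plus N copies, with the copies stipulated to be subjectively indistinguishable. A minimal sketch (N is arbitrary):

```python
# Self-location odds for the scenario above: one original, n_copies clones,
# and "which one am I?" uniform over all n_copies + 1 instances.

from fractions import Fraction

def p_original(n_copies: int) -> Fraction:
    """Probability of being the one original rather than a copy."""
    return Fraction(1, n_copies + 1)

# One copy already makes it a coin flip; a bazillion copies makes
# "yay I am the original" vanishingly unlikely.
print(p_original(1))       # 1/2
print(p_original(10**12))  # ~1e-12
```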

    • David Gerard@awful.systems (M) · 12 points · 11 days ago

      roko stresses repeatedly that the AI is the good AI, the Coherent Extrapolated Volition of all humanity!

      what sort of person would fear that the coherent volition of all humanity would consider it morally necessary to kick him in the nuts forever?

      well, roko

    • Amoeba_Girl@awful.systems · 10 points · 12 days ago

      Ah, but that was before they were so impressed with autocomplete that they revised their estimates to five days in the future. I wonder if new recruits these days get very confused at what the point of timeless decision theory even is.

      • YourNetworkIsHaunted@awful.systems · 10 points · 12 days ago

        Are they even still on that bit? Feels like they've moved away from decision theory or any other underlying theology in favor of explicit sci-fi doomsaying. Like the guy on the street corner in a sandwich board, but with mirrored shades.

        • blakestacey@awful.systems · 10 points · 12 days ago (edited)

          Well, Timeless Decision Theory was, like the rest of their ideological package, an excuse to keep on believing what they wanted to believe. So how does one even tell if they stopped "taking it seriously"?

          • zogwarg@awful.systems · 7 points · 11 days ago

            Pre-commitment is such a silly concept, and also a cultish justification for not changing course.

        • Amoeba_Girl@awful.systems · 8 points · 11 days ago

          Yah, that's what I mean. Doom is imminent, so there's no need for time travel anymore, yet all that stuff about robot-from-the-future Monty Hall is still essential reading in the Sequences.

    • ShakingMyHead@awful.systems · 10 points · 12 days ago

      Also, if you're worried about digital clones being tortured, you could just… not build it. Like, it can't hurt you if it never exists.

      Imagine that conversation:
      "What did you do over the weekend?"
      "Built an omnicidal AI that scours the internet and creates digital copies of people based on their posting history and whatnot and tortures billions of them at once. Just the ones who didn't help me build the omnicidal AI, though."
      "WTF, why?"
      "Because if I didn't, the omnicidal AI that only exists because I made it would create a billion digital copies of me and torture them for all eternity!"

      Like, I’d get it more if it was a ā€œWe accidentally made an omnicidal AIā€ thing, but this is supposed to be a very deliberate action taken by humanity to ensure the creation of an AI designed to torture digital beings based on real people in the specific hopes that it also doesn’t torture digital beings based on them.

      • zogwarg@awful.systems · 10 points · 10 days ago (edited)

        What's pernicious (for kool-aided people) is that the initial Roko post was about a "good" AI doing the punishing, because ✨obviously✨ it is only using temporal blackmail since bringing AI into being sooner benefits humanity.

        In singularian land, they think the singularity is inevitable, and it's important to create the good one first; after all, an evil AI could do the torture for shits and giggles, not because of "pragmatic" blackmail.

      • Amoeba_Girl@awful.systems · 9 points · 11 days ago

        Ah, no, look, you’re getting tortured because you didn’t help build the benevolent AI. So you do want to build it, and if you don’t put all of your money where your mouth is, you get tortured. Because the AI is so benevolent that it needs you to build it as soon as possible so that you can save the max amount of people. Or else you get tortured (for good reasons!)

      • o7___o7@awful.systems · 7 points · 12 days ago

        It’s kind of messed up that we got treacherous ā€œgoodlifeā€ before we got Berserkers.

      • YourNetworkIsHaunted@awful.systems · 8 points · 12 days ago

        I mean, isn't that the whole point of "what if the AI becomes conscious?" Never mind the fact that everyone who actually funds this nonsense isn't exactly interested in respecting the rights and welfare of sentient beings.

        • fullsquare@awful.systems · 6 points · 11 days ago

          also they're talking about quadriyudillions of simulated people, yet openai has only advanced autocomplete run at, what, tens of thousands of instances in parallel, and this already was too much compute for microsoft

    • nightsky@awful.systems · 8 points · 12 days ago

      Yeah. Also, I'm always confused by how the AI becomes "all-powerful"… like, how does that happen? I feel like there are a few missing steps there.

      • scruiser@awful.systems · 15 points · 12 days ago (edited)

        nanomachines son

        (no really, the sci-fi version of nanotech where nanomachines can do anything is Eliezer's main scenario for the AGI to bootstrap to Godhood. He's been called out multiple times on why Drexler's vision for nanotech ignores physics, so he's since updated to "diamondoid bacteria" (but he still thinks nanotech).)

        • YourNetworkIsHaunted@awful.systems · 9 points · 12 days ago

          Surely the concept is sound, it just needs new buzzwords! Maybe the AI will invent new technobabble beyond our comprehension, for He (er, It) works in mysterious ways.

          • scruiser@awful.systems · 10 points · 12 days ago

            AlphaFold exists, so computational complexity is a lie and the AGI will surely find an easy approximation to the Schrödinger equation that surpasses all Density Functional Theory approximations and lets it invent radically new materials without any experimentation!

      • Soyweiser@awful.systems · 6 points · 11 days ago

        Yeah, it seems that for LLMs a linear increase in capabilities requires exponentially more data, so we're not getting there via this.
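That relationship can be sketched with a toy scaling curve. The constant and log base below are invented, and real scaling-law fits differ; only the exponential shape of the inverse is the point: if capability grows with the log of the training data, each fixed capability gain multiplies the data bill.

```python
import math

# Toy model: capability = A * log10(tokens). A is an invented constant;
# what matters is that inverting a log gives an exponential.
A = 1.0

def tokens_needed(capability: float) -> float:
    """Training tokens required for a given capability under the toy model."""
    return 10 ** (capability / A)

# Each +1 capability point costs 10x as much data as the previous point:
assert math.isclose(tokens_needed(11) / tokens_needed(10), 10.0)
```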

    • fullsquare@awful.systems · 7 points · 12 days ago

      apparently this got past IRB, was supposed to be a part of doctorate level work and now they don’t want to be named or publish that thing. what a shitshow from start to finish, and all for nothing. no way these were actual social scientists, i bet this is highly advanced software engineer syndrome in action

      • BlueMonday1984@awful.systems (OP) · 6 points · 12 days ago

        This is completely orthogonal to your point, but I expect the public’s gonna have a much lower opinion of software engineers after this bubble bursts, for a few reasons:

        • Right off the bat, they’re gonna have to deal with some severe guilt-by-association. AI has become an inescapable part of the Internet, if not modern life as a whole, and the average experience of dealing with anything AI related has been annoying at best and profoundly negative at worst. Combined with the tech industry going all-in on AI, I can see the entire field of software engineering getting some serious ā€œAI broā€ stench all over it.

        • The slop-nami has unleashed a torrent of low-grade garbage on the 'Net, whether it be zero-effort "AI art" or paragraphs of low-quality SEO-optimised trash, whilst the gen-AI systems responsible for both have received breathless hype/praise from AI bros and tech journos (e.g. Sam Altman's AI-generated "metafiction"). Combined with the continuous and ongoing theft of artists' work that made this possible, this gives the public a strong reason to view software engineers as generally incapable of understanding art, if not outright hostile to art and artists as a whole.

        • Of course, the massive and ongoing theft of other people's work to make the gen-AI systems behind said slop-nami possible has likely given people reason to view software engineers as entirely okay with stealing others' work - especially given the aforementioned theft is done with AI bros' open endorsement, whether implicit or explicit.

    • raoul@lemmy.sdf.org · 14 points · 12 days ago (edited)

      The homeless people i’ve interacted with are the bottom of the barrel of humanity, […]. They don’t have some rich inner world, they are just a blight on the public.

      My goodness, can this guy be more of a condescending asshole?

      I don’t think the solution for drug addicts is more narcan. I think the solution for drug addicts is mortal danger.

      Ok, he can 🤢

      Edit: I cannot stop thinking about the 'no rich inner world' part, this is so stupid. So, with the number of homeless people increasing, does that mean:

      • Those people never had a 'rich inner world' but were faking it?
      • In the US, your inner thoughts are attached to your job, like health insurance?
      • Or is the guy confusing inner world and interior decoration?

      Personally, I go with the last one.

      • Soyweiser@awful.systems · 10 points · 12 days ago (edited)

        Also, it's hard to show a rich inner world when you are constantly in trouble with your finances, your possessions, your mental health and your personal safety, and when the person you're interacting with could be one of the bad people who doesn't think you are human, or somebody working in a soup kitchen for the photo op/ego boost. (This assumes his interactions go a little bit further than just saying 'no' to somebody asking for money.)

        So yeah, bad to see HN is in the useless-eaters stage.

      • YourNetworkIsHaunted@awful.systems · 10 points · 12 days ago

        Oh man I used to have all kinds of hopes and dreams before I got laid off. Now I don’t even have enough imagination to consider a world where a decline in demand for network engineers doesn’t completely determine my will or ability to live.

      • sc_griffith@awful.systems · 8 points · 12 days ago (edited)

        this is completely unvarnished, OG, Third Reich nazism, so I'm pretty sure it's the first, except without the faking-it part: I expect his view to be that if you had examined future homeless people closely enough, it always would have been possible to tell that they were doomed subhumans

    • Amoeba_Girl@awful.systems · 12 points · 12 days ago

      What a piece of shit

      Interesting that "disease is hardly a problem anymore" yet homeless people are "typically held back by serious mental illness".

      "It's better to be a free, self-sustaining, wild animal." It's not. It's really not. The wild is nothing but fear, starvation, sickness and death.

      Shout out to the guy replying with his idea of using slavery to solve homelessness and drug addiction.

  • bitofhope@awful.systems · 15 points · 14 days ago

    Still frustrated over the fact that search engines just don't work anymore. I sometimes come up with puns involving a malapropism of some phrase and I try to see if anyone's done anything with that joke, but the engines insist on "correcting" my search into the statistically more likely version of the phrase, even if I put it in quotes.

    Also all the top results for most searches are blatant autoplag slop with no informational value now.

    • nightsky@awful.systems · 7 points · 13 days ago

      On the (slim) upside, it’s an opportunity to ditch Google, and maybe it will sooner or later break their monopoly position. I switched my main search engine to Ecosia a while ago, I think it uses Bing underneath (meh), but presumably it’s more privacy friendly than Google (or Bing directly). I’ve had numerous such attempts over the years already to get away from Google, but always returned, because the search results were just so much better (especially for non-English stuff). But now Google has gotten so much worse that it created almost an equilibrium… sometimes it’s still useful and better, but not that often anymore. So I rarely go to Google now, not because the others got better, but because Google got so much worse.

      • swlabr@awful.systems · 5 points · 13 days ago

        Ecosa? The Australian mattress-in-a-box company?? (jk)

        Apparently they offer an AI chatbot alongside their services, so…

    • Soyweiser@awful.systems · 4 points · 12 days ago

      Also all the top results for most searches are blatant autoplag slop with no informational value now.

      I just encountered a thing like this. A subject where no matter what you asked about it this one site was in the top 5 with just incomprehensible posts. Like every sentence on its own made sense, but there was nothing more than that. It read like constant promotional ā€˜before the actual meat of the article’ stuff but forever. Was really weird.

  • Architeuthis@awful.systems · 14 points · 10 days ago (edited)

    Siskind appears to be complaining about leopards gently nibbling on his face on main this week, the gist being that tariff-man is definitely the far right's fault, and it would surely be most unfair to heap any blame on CEO-worshiping reactionary libertarians who think wokeness is on par with war crimes while being super weird with women and suspicious of scientific orthodoxy (unless it's racist), and who also comprise the bulk of his readership.

    • Soyweiser@awful.systems · 10 points · 10 days ago (edited)

      On a related note, they (right-wing govs, I mean) are now quoting Singal to go after trans people. So 'good' times for the polite 'centrist/grey/gray' debate bros with 'concerns'.

      But like the right-wing influencers who go 'wow, a lot of people in this space are grifters', I expect none of them to change their mind, admit they fucked up, and apologize. (And I mean properly apologize here, as in changing, attempting to fix harms, and even naming names.)

    • mountainriver@awful.systems · 9 points · 13 days ago

      My kids use Duolingo for extra training of languages they are learning in school, so this crapification hits close to home.

      Any tips on current non-crap resources? Since they learn the rules and structure in school it’s the repetition of usage in a fun way that I am aiming for.

      • froztbyte@awful.systems · 6 points · 13 days ago

        no idea, sorry. "find some wordpals online" maybe, but then you need to also deal with the vetting/safety issue

        it’s just so fucking frustrating

      • aio@awful.systems · 6 points · 13 days ago

        I’ve been using Anki, it works great but requires you to supply the discipline and willingness to learn yourself, which might not be possible for kids.

      • raoul@lemmy.sdf.org · 6 points · 13 days ago

        I find Duolingo to be of low quality.

        I like Babbel. It's not free and they have a relatively limited number of languages, but I find the quality really good (at least for French -> German).

    • froztbyte@awful.systems · 6 points · 14 days ago

      after I've previously posted this and this, an update: both the memrise browser version and the iOS app now have "chat to a buddy" as a non-skippable step in course iteration

      the "buddy" is a chatbot of unclear provenance. this page mentions "MemBot - powered by AI" at the top, which is a link to this zendesk page, but that's a dead link

      • Sailor Sega Saturn@awful.systems · 11 points · 13 days ago (edited)

        Along the same lines of LLMs ruining language stuff: I just learned the acronym MTPE (Machine Translation Post Edit) in the context of excuses to pay translators less and thanks I hate it.

        Can’t avoid slop reading translated books, can’t learn the source language without dodging slop in learning tools left and right. It’s the microplastics of the internet age.

        Anyway my duolingo account is no more, I have better resources for learning German anyway.

        • Mii@awful.systems · 14 points · 13 days ago

          Along the same lines of LLMs ruining language stuff: I just learned the acronym MTPE (Machine Translation Post Edit) in the context of excuses to pay translators less and thanks I hate it.

          Not-so-fun fact: that’s a marketing term for what amounts to basically a scam to pay people less.

          I used to work for a large translation company when this first came up. Admittedly, that was almost ten years ago, but I assume this shit is even more common nowadays. The usual procedure was to have one translator translate the stuff (commonly using what’s called a TM or Translation Memory, basically a user dictionary so the wording stays consistent), and then another translator to do an editing pass to catch errors. For very high-impact translations, there could be more editing passes after that.

          MTPE is now basically omitting the first translator and feeding the text through a customized version of what amounts to Google Translate or DeepL that can access the customer's TM data, and then handing it off to a translator for the editing pass. The catch is that freelance translators have two rates: one for translating, depending on the language pair between $0.09 and $0.50 per word, and one for editing, which is significantly less: $0.01 to $0.12 or so per word, from what I remember. The translation rate applies for complete translations, i.e. when a word is not in the customer's TM. If it is in the TM, the editing rate applies (or, if the translator has negotiated a clever rate for themselves, there might be a third rate).

          With MTPE, you now essentially feed the machine heaps of content to bloat up the TM as much as possible, then flag everything as pre-translated and only for editing, and boom, you can force the cheapest rates to apply to what is essentially more work, because the quality of what comes out of these machines is complete horseshit compared to a human-translated piece.
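The rate mechanics described here reduce to a simple split-rate invoice. A sketch with per-word rates picked from the ranges quoted above; the specific rates and job sizes are invented examples:

```python
# Split-rate MTPE payout: TM-matched words earn only the editing rate,
# everything else the full translation rate. Rates chosen from the
# ranges quoted above; word counts are invented examples.

TRANSLATION_RATE = 0.12  # $/word (quoted range: $0.09-$0.50)
EDITING_RATE = 0.03      # $/word (quoted range: $0.01-$0.12)

def payout(total_words: int, tm_matched: int) -> float:
    """Translator pay for a job where tm_matched words hit the TM."""
    fresh = total_words - tm_matched
    return fresh * TRANSLATION_RATE + tm_matched * EDITING_RATE

# Same 10,000-word job before and after the TM is bloated with MT output
# and everything gets flagged as "pre-translated, editing only":
print(round(payout(10_000, 1_000)))  # mostly fresh translation: 1110
print(round(payout(10_000, 9_000)))  # mostly "pre-translated": 390
```

Same job, roughly a third of the pay, which is the whole trick.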

          For the customers, however, MTPE wasn’t even that much cheaper. The biggest difference was in the profit margin for the translation company, to no one’s surprise.

          Back when I worked there, and those were the early days, a lot of freelance translators flat-out refused to do MTPE because of this. They said, if the customer wants this, they can find another translator, and because a lot of customers wanted to keep the translators they’d had for a long time, there was some leverage there.

          I have no idea how the situation is today, but infinitely worse I assume.

    • Architeuthis@awful.systems · 14 points · 12 days ago (edited)

      Maybe it's just CEO dick-measuring, so chads Nadella and Pichai can both claim a rock-hard 20-30% while virgin Zuckerberg is exposed as not even knowing how to put the condom on.

      Microsoft CTO Kevin Scott previously said he expects 95% of all code to be AI-generated by 2030.

      Of course he did.

      The Microsoft CEO said the company was seeing mixed results in AI-generated code across different languages, with more progress in Python and less in C++.

      So the more permissive the language is at compile time, the better the AI comes out smelling? What a completely unanticipated twist of fate!

    • BlueMonday1984@awful.systems (OP) · 7 points (1 downvote) · 12 days ago

      Either they’re lying and the number is greatly exaggerated (very possible), or this will eventually destroy the company.

      I’m thinking the latter - Silicon Valley is full of true believers, after all.

      • David Gerard@awful.systems (M) · 5 points · 11 days ago

        Nadella said written "by software" "on some products", so he's barely making a claim

  • gerikson@awful.systems · 13 points · 12 days ago

    A dimly flickering light in the darkness: lobste.rs has added a new tag, "vibecoding", for submissions related to the use of "AI" in software development. The existing tag "ai" is reserved for "real" AI research and machine learning.

  • scruiser@awful.systems · 13 points · 9 days ago

    The slatestarcodex is discussing the unethical research performed on changemyview. Of course, the most upvoted take is that they don’t see the harm or why it should be deemed unethical. Lots of upvoted complaints about IRBs and such. It’s pretty gross.

  • Sailor Sega Saturn@awful.systems · 12 points · 9 days ago (edited)

    First, Chrome won the browser war fair and square by building a better surfboard for the internet. This wasn’t some opportune acquisition. This was the result of grand investments, great technical prowess, and markets doing what they’re supposed to do: rewarding the best.

    Lots of credit given to 👼🎺 Free Market Capitalism 👼🎺, zero credit given to open web standards, open source contributions, or the fact that the codebase has a lineage going back to 1997 KDE code.

    • istewart@awful.systems · 12 points · 8 days ago

      I am certain that many of those ignorant of the history (or even were there for it, like DHH) would still argue that Google deserves credit because of the V8 JavaScript engine. But I continue to doubt that further promulgating JavaScript was a net positive for the world.

    • nightsky@awful.systems · 10 points · 9 days ago

      If markets really rewarded the best, they would have rewarded Opera way more. (By which I mean the original Opera, up to version 12, and not the terrible chromium-based thing that has its name slapped on it today. Do not use that one, it’s bad.)

      Much more important for Chrome's success than "being the best" (when has that ever been important in the tech industry?) was Google's massive marketing campaign. Heck, back when Chrome was new, they even had large billboard ads for it around here, i.e. physical billboards in the real world. And "here" is a medium-sized city in Europe, not Silicon Valley or anything… I never saw any other web browser being advertised on freaking billboards.

  • Mii@awful.systems · 12 points · 10 days ago (edited)

    Marc Andreessen claims his own job’s the only one that can’t be replaced by a small shell script.

    https://gizmodo.com/marc-andreessen-says-one-job-is-mostly-safe-from-ai-venture-capitalist-2000596506

    "A lot of it is psychological analysis, like, 'Who are these people?' 'How do they react under pressure?' 'How do you keep them from falling apart?' 'How do you keep them from going crazy?' 'How do you keep from going crazy yourself?' You know, you end up being a psychologist half the time."

    • Soyweiser@awful.systems · 13 points · 10 days ago

      "How do you keep from going crazy yourself?"

      When you start writing manifestos it is prob time to quit.

    • nightsky@awful.systems · 7 points · 10 days ago

      Hope he remembers this in case some day he is in a nursing home, where all staff has been replaced with Tesla Optimus robots powered by ā€œAIā€.

  • antifuchs@awful.systems · 12 points · 10 days ago

    Guess we’re doing stupid identity verification orbs now: https://sfstandard.com/2025/05/01/this-is-like-black-mirror-sam-altmans-creepy-eye-scanner-project-launches-in-sf/

    Instead of this expensive imitation of a Voight-Kampff test, I would suggest an alternative method of detecting whether a personoid is really a human or an instrument of an evil inhuman intelligence that wishes to consume all of earth: check if their net worth is closer to a billion dollars than it is to being broke.

    • Soyweiser@awful.systems · 3 points · 8 days ago

      Also, note that this 'key was generated by a person once' stuff does not validate personhood, it just means an identifiable person was involved once. So it can be used to blame somebody, not prove personhood.