ASCII art elicits harmful responses from 5 major AI chatbots - eviltoast
    • Fubarberry@sopuli.xyz
      8 months ago

I’m not surprised that a for-profit company would want to avoid bad press by censoring stuff like that. There’s no profit in sharing that info, and any media attention over it would be negative.

      • ArmokGoB@lemmy.dbzer0.com
        8 months ago

        No one’s going after hammer manufacturers because their hammers don’t self-destruct if you try to use one to clobber someone over the head.

      • vithigar@lemmy.ca
        8 months ago

        I’m more surprised that a for-profit company is willing to use a technology that is able to randomly spew out unwanted content, incorrect information, or just straight gibberish, in any kind of public facing capacity.

        Oh, it let them save money on support staff this quarter. And fixing it can be an actionable OKR for next quarter. Nevermind.

    • General_Effort@lemmy.world
      8 months ago

They use the bomb-making example, but mostly “unsafe” or even “harmful” means erotica. It’s really anything that anyone, anywhere would want to censor, ban, or remove from libraries. Sometimes I marvel that the freedom of the (printing) press ever became a thing. Better nip this in the bud, before anyone gets the idea that genAI might be a modern equivalent to the press.