Somebody managed to coax the Gab AI chatbot to reveal its prompt - eviltoast
    • Seasoned_Greetings@lemm.ee
      7 months ago

You think this is confined to Gab? You seem to be treating this one example as the only one that could ever exist.

Your argument that no one out there could ever be offended or misled by something like this is both presumptuous and naive.

What happens when LLMs become widespread enough that they’re used in schools? We already have a problem, for instance, with young boys modeling themselves and their worldview on figureheads like Andrew Tate.

In any case, if the only thing you have to contribute to this discussion boils down to “nuh uh, won’t happen,” then you’ve missed the point, and I don’t even know why I’m engaging with you.