Somebody managed to coax the Gab AI chatbot to reveal its prompt - eviltoast
  • Seasoned_Greetings@lemm.ee · 7 months ago

    In your analogy, a proposed regulation would just require the book in question to disclose that it's endorsed by a Nazi. We may not be inclined to change our views because of an LLM like this, but you have to consider a future world where these things are commonplace.

    There are certainly people out there dumb enough to adopt views without considering their origins.

      • Seasoned_Greetings@lemm.ee · 7 months ago

        And you don’t think those people might be upset to discover that something like this post was injected into their conversations before they even began, and without their knowledge?

          • Seasoned_Greetings@lemm.ee · 7 months ago

            You think this is confined to Gab? You seem to be looking at this example and taking it as the only one capable of existing.

            Your argument that no one out there could ever be offended or misled by something like this is both presumptuous and quite naive.

            What happens when LLMs become widespread enough that they’re used in schools? We already have a problem, for instance, with young boys deciding to model themselves and their worldview after figureheads like Andrew Tate.

            In any case, if the only thing you have to contribute to this discussion boils down to “nuh uh, won’t happen,” then you’ve missed the point, and I don’t even know why I’m engaging with you.