Pentagon-Funded Study Uses AI to Detect 'Violations of Social Norms' - eviltoast

I don’t have a quip, just a sorrowful head shake that I somehow got in the shitty timeline.

  • chaogomu@kbin.social
    1 year ago

    The worst part is that ChatGPT cannot generate anything genuinely new. It's pre-trained; that's the P in GPT.

    It can only recombine its training data into forms that roughly resemble that data. So if the training data is garbage, the output is more garbage.

    And this garbage-in, garbage-out process is going to be used to harm real people.

    In addition, ChatGPT lies. It hallucinates shit that is provably false, because that's what its generated text needs to look like to match the training data.

    So it will likely lead to a bunch of false positives, because a positive response is what best matches the training data.
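
    To see why the false-positive worry matters at scale, here is a rough base-rate sketch. Every number below is hypothetical, chosen only to illustrate the arithmetic; none of them come from the study.

        # Hypothetical base-rate arithmetic: even a fairly accurate
        # "norm violation" detector mostly flags innocent people when
        # real violations are rare in the screened population.

        population = 1_000_000        # people/messages screened (assumed)
        violation_rate = 0.001        # assume 0.1% are real violations
        true_positive_rate = 0.90     # detector catches 90% of real ones
        false_positive_rate = 0.05    # and wrongly flags 5% of the rest

        violators = population * violation_rate
        innocents = population - violators

        true_positives = violators * true_positive_rate      # ~900
        false_positives = innocents * false_positive_rate    # ~49,950

        precision = true_positives / (true_positives + false_positives)
        print(f"Share of flags that are real violations: {precision:.1%}")
        # -> about 1.8%; the vast majority of flags are false positives

    Under these assumed rates, a detector that looks "90% accurate" still buries the real cases under tens of thousands of wrongly flagged people.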