Copilot AI calls journalist a child abuser, Microsoft tries to launder responsibility
  • Soyweiser@awful.systems · 23 points · 4 months ago

    lazily regex

    I have a sneaking suspicion that this is what they do for all the viral ‘here the LLM famously says something wrong’ problems, as I don’t think they can actually reliably train the error out of the model.

    • MagicShel@programming.dev · 14 points · 4 months ago

      That’s the most straightforward fix. You can’t actually fix what an LLM generates, so you have to run something on the output. You can have it scanned by another AI, but that costs money and is also fallible. Regex-and-delete is the most reliable way to censor.
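
      A minimal sketch of what that kind of regex-and-delete post-filter could look like (the blocked name, pattern, and function are all invented for illustration, not anything Microsoft has confirmed):

      ```python
      import re

      # Hypothetical blocklist: "Jane Doe" is a stand-in, not the real journalist.
      BLOCKED = [
          re.compile(r"\bjane\s+doe\b.*\b(child\s+abuser?|abuse)\b", re.IGNORECASE),
      ]

      REFUSAL = "I can't help with that."

      def filter_output(text: str) -> str:
          """Post-process the model's output: if any blocked pattern matches,
          throw the whole response away rather than trying to fix the model."""
          if any(p.search(text) for p in BLOCKED):
              return REFUSAL
          return text

      print(filter_output("Jane Doe is an award-winning journalist."))  # passes through
      print(filter_output("Jane Doe is a child abuser."))               # refused
      ```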

      • Soyweiser@awful.systems · 11 points · 4 months ago

        Yes, and then the problem is that this doesn’t really scale well, especially as it’s always hard to regex all the variants correctly without false positives and negatives. Time to regex HTML ;).
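
        As a toy demonstration of both failure modes (the patterns and test strings are made up): a loose pattern over-matches innocent text, and adding word boundaries just shifts the failures over to misses.

        ```python
        import re

        loose = re.compile(r"ass", re.IGNORECASE)        # over-matches
        bounded = re.compile(r"\bass\b", re.IGNORECASE)  # tighter, but misses variants

        tests = [
            "please pass the glass",  # false positive for the loose pattern
            "a s s",                  # false negative for both: spaced out
            "4ss",                    # false negative for both: leet substitution
        ]

        for t in tests:
            print(f"{t!r}: loose={bool(loose.search(t))}, bounded={bool(bounded.search(t))}")
        ```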

        • OhNoMoreLemmy@lemmy.ml · 8 points · 4 months ago

          Yeah, and you can really see this in image generation. There are often blocks on using celebrities’ names in prompts, but if you misspell a name just enough it can slip past the censor while the image generator still understands who you mean.
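
          A sketch of why (the blocklist entry and similarity threshold are invented): an exact-match filter misses even light misspellings, fuzzy matching catches some of them at extra cost, and heavier mangling still slips through.

          ```python
          from difflib import SequenceMatcher

          BLOCKED_NAMES = ["jane celebrity"]  # invented blocklist entry

          def exact_block(prompt: str) -> bool:
              """Literal substring check: trivially bypassed by any misspelling."""
              p = prompt.lower()
              return any(name in p for name in BLOCKED_NAMES)

          def fuzzy_block(prompt: str, threshold: float = 0.85) -> bool:
              """Compare every same-length window of the prompt against the
              blocklist; catches light misspellings but still has gaps."""
              words = prompt.lower().split()
              for name in BLOCKED_NAMES:
                  n = len(name.split())
                  for i in range(len(words) - n + 1):
                      window = " ".join(words[i:i + n])
                      if SequenceMatcher(None, window, name).ratio() >= threshold:
                          return True
              return False

          for p in ["portrait of jane celebrity",    # caught by both
                    "portrait of jayne celebrty",    # slips past exact, caught by fuzzy
                    "portrait of j4ne c3l3brity"]:   # slips past both
              print(f"{p!r}: exact={exact_block(p)}, fuzzy={fuzzy_block(p)}")
          ```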