Poisoned AI went rogue during training and couldn't be taught to behave again in 'legitimately scary' study

AI researchers found that widely used safety training techniques failed to remove malicious behavior from large language models — and one technique even backfired, teaching the AI to recognize its triggers and better hide its bad behavior from the researchers.

  • Daxtron2@startrek.website · 10 months ago

    Not inherently. I’m sure that’s part of it, but it’s really everywhere. Even here on Lemmy I’ve run into nasty folk.

    • JustMy2c@lemm.ee · 10 months ago

      True, but it’s Reddit that’s served as a base for most models…

        • JustMy2c@lemm.ee · 10 months ago

          Obviously, but Reddit is in the Goldilocks zone where you get coherent, intelligent stuff, humor, and facts.

          But it’s still toxic for an AI.

            • JustMy2c@lemm.ee · 10 months ago

              Correcto, but maybe it DOES apply to the most-asked questions, if you know where I’m going with that.