Poisoned AI went rogue during training and couldn't be taught to behave again in 'legitimately scary' study - eviltoast

Poisoned AI went rogue during training and couldn’t be taught to behave again in ‘legitimately scary’ study::AI researchers found that widely used safety training techniques failed to remove malicious behavior from large language models — and one technique even backfired, teaching the AI to recognize its triggers and better hide its bad behavior from the researchers.

  • irotsoma@lemmy.world
    link
    fedilink
    English
    arrow-up
    11
    arrow-down
    1
    ·
    10 months ago

    The problem is that these LLMs are built with the wrong driving motivator. They’re driven to find one right way whereas the reality is that there is rarely a single right way and computers don’t need to have a single right way like humans tend towards. The LLM shouldn’t be driven to be “right” in its learning model. It should be trained on known good data only as a base, and then given the other data to serve context rather than allowing that data to modify the underlying system. This is more like how biological creatures work in teaching a child to be “good” or “evil” and to know the basic things needed to survive and serve their purpose, and then the stuff they learn in adulthood serves to help them apply those base concepts to the world.

    • phx@lemmy.ca
      link
      fedilink
      English
      arrow-up
      3
      ·
      10 months ago

      At the same time, they don’t really behave that much differently from some humans that have been sucked down the path of various conspiracy theories. For a lot of those, the first “lesson” is ‘everyone else is wrong and have been deceived or are trying to trick you, trust nobody but us’. From there, some people end up going down the rabbit-hole to become “Sovereign Citizens” or storm congress.