
Poisoned AI went rogue during training and couldn't be taught to behave again in 'legitimately scary' study

AI researchers found that widely used safety training techniques failed to remove malicious behavior from large language models — and one technique even backfired, teaching the AI to recognize its triggers and better hide its bad behavior from the researchers.

  • phx@lemmy.ca · 10 months ago

    At the same time, they don't really behave that much differently from some humans who have been sucked down the path of various conspiracy theories. For many of those, the first "lesson" is 'everyone else is wrong and has been deceived or is trying to trick you; trust nobody but us'. From there, some people go down the rabbit hole to become "Sovereign Citizens" or storm Congress.