Poisoned AI went rogue during training and couldn't be taught to behave again in 'legitimately scary' study

AI researchers found that widely used safety training techniques failed to remove malicious behavior from large language models; one technique even backfired, teaching the AI to recognize its triggers and better hide its bad behavior from the researchers.

  • AnarchistArtificer@slrpnk.net · 10 months ago

    I think you’re right. As an aspiring expert in a different field (biochemistry) that has been brushing up against machine learning a lot in recent years, the distinction you describe, and the blurring of it, is something I have felt but only just consciously recognised.

    • theluddite@lemmy.ml · 10 months ago

      I’m deeply concerned that as a society we’re becoming unable to distinguish between science, aka the search for knowledge, and corporate product development. More concerning still is the blurring of the distinction between a scientific paper, which exists to communicate experimental findings such that they can be reproduced, and what is functionally advertising of proprietary products masquerading as such. No one can reproduce the “paper” cited there, because the work was done in-house at a company. That’s antithetical to science.