AI bot capable of insider trading and lying, say researchers

The researchers behind the simulation say there is a risk of this happening for real in the future.

  • atzanteol@sh.itjust.works · 11 months ago

    In this case, it decided that being helpful to the company was more important than its honesty.

    It did no such thing. It doesn’t know what those things are. “LLM AI” is not a conscious thinking being and treating it like it is will end badly. Giving an LLM any responsibility to act on your behalf automatically is a crazy stupid idea at this point in time. There needs to be a lot more testing and learning about how to properly train models for more reliable outcomes.

    It’s almost impressive how quickly humans seem to accept something as “human” just because it can form coherent sentences.

    • bcrab@lemmy.world · 11 months ago

      I don’t know why people can’t figure this out. What we are calling “AI” is just a machine putting words together based on what it has “seen” in its training data. It has no intelligence and no intent; it just groups words together based on statistics (a rough sketch of what I mean is below).

      It’s like going into a library and asking the librarian to be a doctor. They can tell you what the books in the library say about the subject (and might even make up some things based on what they saw in a few episodes of House), but they cannot actually do the work of a doctor.
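
      To make the “groups words together” point concrete, here is a minimal sketch of a toy bigram text generator in Python. It is an illustration only: the tiny corpus, the function names (`train`, `generate`), and the raw word-count approach are invented for this example, and real LLMs predict tokens with neural networks rather than word counts, but the basic idea of continuing text from statistics of the training data, with no model of meaning or intent, is the same.

      ```python
      # Toy bigram "language model": picks the next word purely from counts of
      # what followed each word in a training text. Illustrative only -- real
      # LLMs use learned token probabilities, not raw word counts.
      import random
      from collections import defaultdict

      def train(text):
          """Record which words were seen following each word in the text."""
          words = text.split()
          following = defaultdict(list)
          for current, nxt in zip(words, words[1:]):
              following[current].append(nxt)
          return following

      def generate(following, start, length=15):
          """Repeatedly sample a word that was seen after the current one."""
          out = [start]
          for _ in range(length):
              candidates = following.get(out[-1])
              if not candidates:  # dead end: this word never appeared mid-text
                  break
              out.append(random.choice(candidates))
          return " ".join(out)

      # Hypothetical training text, made up for this sketch.
      corpus = (
          "the model predicts the next word the model has no goals "
          "the model only repeats patterns it has seen in the training data"
      )
      table = train(corpus)
      print(generate(table, "the"))
      ```

      Run it a few times and it produces different, superficially coherent strings of words, without any notion of what those words mean or whether they are true.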