Something Bizarre Is Happening to People Who Use ChatGPT a Lot - eviltoast
  • Shanmugha@lemmy.world
    5 days ago

    I’ll take the bait. Let’s think:

    • there are three humans who are 98% right about what they say, and where they know they might be wrong, they say so

    • now there is an LLM (fuck capitalization, I hate how much they are shoved everywhere) trained on their output

    • now the LLM is asked about the topic and computes an answer string

    By definition, that answer string can contain all the probably-wrong claims without the proper hedges (“might”, “under such and such circumstances”, etc.)

    If you want to claim that a 40%-wrong LLM implies 40%-wrong sources, prove me wrong
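
    A back-of-the-envelope sketch of why those two error rates need not match (my own illustration, not from the thread, and it assumes the claims in an answer are independent): even if every single claim the model reproduces is 98% accurate, an answer that stitches together many such claims, with the hedges stripped, contains at least one error far more often than 2% of the time.

    ```python
    # Illustrative only: per-claim accuracy of 0.98 is taken from the
    # "98% right" humans above; independence of claims is an assumption.

    def p_any_error(per_claim_accuracy: float, n_claims: int) -> float:
        """Probability that at least one of n independent claims is wrong."""
        return 1.0 - per_claim_accuracy ** n_claims

    for n in (1, 5, 25):
        print(f"{n:2d} claims -> {p_any_error(0.98, n):.1%} chance of an error")
    ```

    With 25 independent claims per answer, the chance of at least one error is already close to 40%, even though no individual source claim is less than 98% reliable.
    
    
    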

    • LovableSidekick@lemmy.world
      5 days ago

      It’s more on you to prove that a hypothetical edge case you dreamed up is more likely than what happens under a normal bell curve. Given the size of typical LLM training data this seems futile, but if that’s how you want to spend your time, hey, knock yourself out.