LLMs can’t reason — they just crib reasoning-like steps from their training data
  • DarkThoughts@fedia.io · 6 points · 1 month ago

    My best guess is it generates several possible replies and then does some sort of token match to determine which one may potentially be the most accurate. Not sure I’d call that “reasoning”, but I guess it could potentially improve results in some cases. With OpenAI not being so open, it’s hard to tell though. They’ve been overpromising a lot already, so it may well be just complete bullshit.
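
    If that guess is right, the mechanism would be something like best-of-N sampling with an agreement score: sample several candidate replies, then keep the one that overlaps most with the rest. A minimal sketch of what I mean (the model call is a stand-in and the scoring heuristic is my assumption, not anything OpenAI has confirmed):

    ```python
    # Pure guesswork: sample N candidate replies, then pick the one that
    # shares the most tokens with the other candidates (a crude
    # self-consistency / majority-vote heuristic). generate_reply() is a
    # stand-in for a real model call; nothing here is OpenAI's method.
    import random
    from collections import Counter

    def generate_reply(prompt: str) -> str:
        # Placeholder sampler (assumption): a real version would call an
        # LLM with nonzero temperature to get varied candidates.
        canned = [
            "The answer is 42.",
            "Based on the context, the answer is 42.",
            "It depends on what you are asking.",
        ]
        return random.choice(canned)

    def token_overlap(a: str, b: str) -> int:
        # Size of the multiset intersection of the two replies' tokens.
        ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
        return sum((ca & cb).values())

    def best_of_n(prompt: str, n: int = 5) -> str:
        candidates = [generate_reply(prompt) for _ in range(n)]
        # Score each candidate by its total overlap with all the others;
        # the reply the group "agrees on" most wins.
        scores = [
            sum(token_overlap(candidates[i], candidates[j])
                for j in range(n) if j != i)
            for i in range(n)
        ]
        return candidates[scores.index(max(scores))]

    print(best_of_n("What is the answer?"))
    ```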

    • lunarul@lemmy.world · 4 points · 1 month ago

      > My best guess is it generates several possible replies and then does some sort of token match to determine which one may potentially be the most accurate.

      Didn’t the previous models already do this?