LLMs can’t reason — they just crib reasoning-like steps from their training data
  • lunarul@lemmy.world · 1 month ago

    My best guess is that it generates several candidate replies and then does some sort of token match to decide which one is likely the most accurate (see the sketch below).


    Didn’t the previous models already do this?
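
    That guess sounds a lot like best-of-N sampling with self-consistency voting: sample several replies, then keep the answer that shows up most often. Here is a minimal Python sketch of that idea, assuming a hypothetical generate_candidate() stand-in for the model; nothing here is confirmed about how any particular product actually works.

    ```python
    import random
    from collections import Counter

    def generate_candidate(prompt: str) -> str:
        # Hypothetical stand-in for one sampled reply from the model;
        # a real system would call the LLM with temperature > 0.
        return random.choice(["4", "4", "5"])

    def best_of_n(prompt: str, n: int = 8) -> str:
        # Sample n candidate replies and keep the answer that appears
        # most often (self-consistency style majority voting).
        candidates = [generate_candidate(prompt) for _ in range(n)]
        answer, _count = Counter(candidates).most_common(1)[0]
        return answer

    if __name__ == "__main__":
        print(best_of_n("What is 2 + 2?"))
    ```

    Whether newer models do this at inference time, or something fancier, isn’t public, so treat this as an illustration of the general pattern rather than a description of any specific model.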