LLMs can’t reason — they just crib reasoning-like steps from their training data
  • lunarul@lemmy.world
    edited · 1 month ago

    Perhaps the AI bros “think” by guessing the next word and hoping it’s convincing

    Perhaps? Isn’t that the definition of LLMs?

    Edit: Oh, I just realized it’s not talking about the LLMs, but about their apologists.