LLMs can’t reason — they just crib reasoning-like steps from their training data
  • ebu@awful.systems · 2 months ago

    > because it encodes semantics.

    if it really did encode semantics, performance wouldn’t swing up or down when you change the syntactic or symbolic elements of a problem. the only information encoded is language-statistical.
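
    a minimal sketch of the kind of perturbation test being described, in Python: keep the underlying arithmetic identical, swap out surface details like names and numbers, and see whether accuracy holds steady. `ask_model` is a hypothetical stand-in for whatever LLM call you'd actually make; the template and name list are made up for illustration.

    ```python
    # perturbation test sketch: same arithmetic, different surface form.
    # a model that truly encoded the semantics should score the same
    # regardless of which names/numbers get sampled.
    import random

    NAMES = ["Alice", "Bob", "Priya", "Wei", "Fatima"]

    def make_problem(name: str, total: int, given: int) -> tuple[str, int]:
        """Build a word problem from a fixed template; return (prompt, answer)."""
        prompt = f"{name} has {total} apples and gives away {given}. How many are left?"
        return prompt, total - given

    def ask_model(prompt: str) -> int:
        # hypothetical: replace with a real LLM call that returns an integer answer
        raise NotImplementedError("plug in your model query here")

    def perturbation_accuracy(trials: int = 50) -> float:
        correct = 0
        for _ in range(trials):
            name = random.choice(NAMES)
            total = random.randint(10, 99)
            given = random.randint(1, 9)
            prompt, answer = make_problem(name, total, given)
            if ask_model(prompt) == answer:
                correct += 1
        return correct / trials
    ```

    if accuracy from a test like this shifts just because the sampled names or numbers changed, that's the "swing" being pointed at: the surface form is doing the work, not the meaning.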