LLMs can’t reason — they just crib reasoning-like steps from their training data - eviltoast
  • Soyweiser@awful.systems · 2 months ago

    It’s a lot easier to do hype when you pretend the previous iterations didn’t exist (and still do exist, and actually have more content).