LLMs can’t reason — they just crib reasoning-like steps from their training data - eviltoast
  • froztbyte@awful.systems · 28 points · 2 months ago

    there are a lot of people (especially here, but not only here) who have had the insight to see that this is the case, but there have also been a lot of boosters and promptfondlers (i.e. people with a vested interest) putting out claims that their precious word-vomit machines are actually thinking

    so while this may confirm a known doubt, rigorous scientific testing (and disproving) of the claims is nonetheless a good thing

    • Soyweiser@awful.systems · 12 points · 2 months ago

      No, they do not, I'm afraid. Hell, I didn't even know until a few years ago that ELIZA caused people to think it could reason (and that this worried its creator).