LLMs can’t reason — they just crib reasoning-like steps from their training data - eviltoast
  • David Gerard@awful.systems · 1 month ago

    XML works fine for what it is; it’s just a bit verbose. Not sure it’d be my first choice for a new thing, but it’s not a toxic waste dump if you’re allowed to do it properly.
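
    To illustrate the point: a minimal sketch using Python's standard-library ElementTree, parsing a small made-up config document. The document and field names are invented for the example; it just shows XML doing its job fine, if verbosely.

    ```python
    # Parse a small (hypothetical) XML config with the standard library.
    # XML is wordy, but reading it properly is straightforward.
    import xml.etree.ElementTree as ET

    doc = """
    <config>
      <server>
        <host>example.org</host>
        <port>8080</port>
      </server>
    </config>
    """

    root = ET.fromstring(doc)
    host = root.findtext("server/host")
    port = int(root.findtext("server/port"))
    print(host, port)  # example.org 8080
    ```

    The same data in a terser format would be shorter, but the XML version is unambiguous and parses with no third-party dependencies.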