Anthropic has developed an AI 'brain scanner' to understand how LLMs work and it turns out the reason why chatbots are terrible at simple math and hallucinate is weirder than you thought
  • MudMan@fedia.io
    15 days ago

    That’s what’s fascinating about how it does language in general.

    The article is interesting in both the ways in which things are similar and the ways they're different. The rough-approximation thing isn't that weird, but obviously any human would be aware of how they did it and wouldn't accidentally lie about the method, especially when both methods yield the same result. It's a weirdly effective, if accidental, example of human-like reasoning versus human-like intelligence.
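
    For anyone who hasn't read the article: the finding is that Claude adds numbers via two parallel paths, a rough magnitude estimate plus an exact ones-digit computation, but when asked, it claims it used the standard carry algorithm. Here's a toy sketch of why both routes land on the same answer (purely illustrative; the function names and the fixed "fuzz" are my own, not anything from the paper):

    ```python
    def two_path_add(a: int, b: int) -> int:
        """Toy analogue of the two-path strategy the article describes."""
        estimate = a + b + 3           # stand-in for a fuzzy approximate-path result
        ones = (a % 10 + b % 10) % 10  # exact last-digit path
        # snap the estimate to the nearest number ending in that digit
        base = estimate - estimate % 10 + ones
        return min((base - 10, base, base + 10), key=lambda c: abs(c - estimate))

    def carry_add(a: int, b: int) -> int:
        """The textbook right-to-left carry method the model *claims* to use."""
        result, carry, place = 0, 0, 1
        while a or b or carry:
            d = a % 10 + b % 10 + carry
            result += (d % 10) * place
            carry, place = d // 10, place * 10
            a, b = a // 10, b // 10
        return result

    print(two_path_add(36, 59), carry_add(36, 59))  # both print 95
    ```

    The point being: as long as the rough estimate is within a few units of the true sum, the exact ones digit pins down the answer, so the "wrong" method is indistinguishable from the carry algorithm by its outputs alone.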

    And, incidentally, of why AGI and/or ASI are probably much further away than the shills keep claiming.