Are modern LLMs closer to AGI or to next-word predictors? Where do they fall on this graph, with 10 on the x-axis being human intelligence?

Wondering whether modern LLMs like GPT-4, Claude Sonnet, and Llama 3 are closer to human intelligence or to a next-word predictor. Also not sure if this graph is the right way to visualize it.

  • Pumpkin Escobar@lemmy.world · 2 months ago

    I’ll preface by saying I think LLMs are useful, and over the next couple of years there will be some interesting new uses while existing ones get streamlined…

    But they’re just next-word predictors. The best you could say about intelligence is that they have an impressive ability to encode knowledge pretty efficiently (the storage density, not the execution of the LLM), but there’s no logic or reasoning in how they execute or in how you interact with them. It’s one of the reasons they’re so terrible at math.
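
    To make the “next-word predictor” point concrete, here’s a minimal sketch of greedy next-token decoding (assuming PyTorch and the Hugging Face transformers library with the small GPT-2 checkpoint, chosen purely for illustration, not as how any particular product runs): the model scores every possible next token, the most likely one gets appended, and the loop repeats. Nothing in the loop does arithmetic; a prompt like “2 + 2 =” is just continued with whatever tokens are statistically likely.

    ```python
    # Illustrative sketch of a greedy next-token loop.
    # Requires: pip install torch transformers
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")   # small model, for illustration only
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    prompt = "2 + 2 ="
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids

    with torch.no_grad():
        for _ in range(10):                               # generate 10 tokens, one at a time
            logits = model(input_ids).logits              # shape: (1, seq_len, vocab_size)
            next_id = logits[0, -1].argmax()              # pick the single most probable next token
            input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

    print(tokenizer.decode(input_ids[0]))                 # prompt plus the predicted continuation
    ```

    Fancier decoding (temperature sampling, beam search, etc.) is still just different ways of picking from that same next-token distribution.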