Generative AI boom "could come to a fairly swift end"
  • BetaDoggo_@lemmy.world · 1 year ago

    LLMs only predict the next token. Sometimes those predictions are correct, sometimes they’re incorrect. Larger models trained on more examples make better predictions, but they are still just predictions. This is why incorrect responses often sound plausible even when they make no logical sense.
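
    A minimal sketch of what this means in practice, using the Hugging Face transformers library and GPT-2 (both are my choices for illustration; the comment names no specific model or library): the model's entire output for a prompt is a probability distribution over possible next tokens, and a "plausible but wrong" answer is just a high-probability continuation.

    ```python
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # GPT-2 is an arbitrary small example model, chosen only because it
    # runs locally; nothing in the comment above refers to it.
    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    inputs = tokenizer("The capital of France is", return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

    # The model's only output is a ranked distribution over the next
    # token; "correctness" is not represented anywhere in it.
    probs = torch.softmax(logits[0, -1], dim=-1)
    top = torch.topk(probs, k=5)
    for p, token_id in zip(top.values, top.indices):
        print(f"{tokenizer.decode(token_id.item())!r}  p={p.item():.3f}")
    ```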

    “Fixing” hallucinations is therefore about decreasing the rate of inaccurate predictions rather than repairing some actual defect in the model itself.