Google Search Is Now a Giant Hallucination

Google rolled out AI Overviews across the United States this month, exposing its flagship product to the hallucinations of large language models.

  • atrielienz@lemmy.world
    5 months ago

    I understand the gist, but I don’t mean that it’s actively looking up facts. I mean that it is using bad information to give a result (as in, the information it was trained on says 1+1=5, so it gives that result because that’s what the training data contained). The hallucinations, as they are called by the people studying them, aren’t that. They happen when the training data doesn’t have an answer for 1+1, so the LLM can’t predict that the next likely word is 2. It doesn’t have a result at all, but it is programmed to give a result, so it gives nonsense.
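
    Here’s a toy sketch in Python of that last point (the numbers are invented and this is nowhere near a real LLM, just an illustration): greedy decoding always picks whatever token comes out on top, because there is no “I don’t know” branch.

        import math

        def softmax(logits):
            # Turn raw scores into a probability distribution over tokens.
            exps = [math.exp(x) for x in logits]
            total = sum(exps)
            return [e / total for e in exps]

        tokens = ["2", "5", "fish", "banana"]

        # Case 1: the training data strongly supports one continuation.
        confident = softmax([8.0, 1.0, 0.5, 0.2])   # sharply peaked on "2"
        # Case 2: the training data has no real answer, so nothing stands out.
        uncertain = softmax([0.9, 1.0, 1.1, 1.0])   # nearly flat

        for name, probs in [("confident", confident), ("uncertain", uncertain)]:
            best = max(range(len(tokens)), key=lambda i: probs[i])
            # The decoder emits the top token either way; it never refuses.
            print(f"{name}: emits {tokens[best]!r} with p = {probs[best]:.2f}")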

    • Balder@lemmy.world
      5 months ago

      Yeah, I think the problem is really that language is ambiguous and the LLMs can get confused about certain features of it.

      For example, I often ask different models when the Go programming language was created, just to compare them (a sketch of the kind of loop I mean is below). Some say 2007 most of the time and some say 2009, which isn’t all that wrong, since 2009 is when it was officially announced.

      This gives me a hint that LLMs can mix up things that are “close enough” to the concept we’re looking for.
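
      The comparison loop I mentioned above is just something like this (ask_model and the model names are placeholders, not any real API; wire it up to whatever clients you actually use):

          # Hypothetical sketch: ask several models the same question and compare.
          QUESTION = "When was the Go programming language created?"
          MODELS = ["model-a", "model-b", "model-c"]  # placeholder names

          def ask_model(model: str, prompt: str) -> str:
              # Stand-in: replace with a real call to each model's client.
              return f"<{model}'s answer to {prompt!r} goes here>"

          for model in MODELS:
              print(f"{model}: {ask_model(model, QUESTION)}")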