Google Search Is Now a Giant Hallucination - eviltoast

Google rolled out AI Overviews across the United States this month, exposing its flagship product to the hallucinations of large language models.

  • afraid_of_zombies@lemmy.world · 5 months ago

    > including their own previous responses lol, as input/context so the bot autocompletes the conversation. It literally can’t remember a single word of what you said on its own.

    ChatGPT has had memory of previous conversations for about a month now, and its context window is no longer fixed. Additionally, it can assign sentences to memory on its own, so if it “thinks” something you said is important, it saves it.
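    To make the mechanics concrete, here is a minimal sketch of the loop the quoted comment describes, assuming a generic chat-completions-style API: the client re-sends the accumulated transcript to the model every turn. `call_model` below is a hypothetical placeholder, not a real library call.

    ```python
    # Minimal sketch of a stateless chat loop; `call_model` is a hypothetical
    # placeholder for an LLM completion API, not a real library function.
    from typing import Dict, List

    def call_model(messages: List[Dict[str, str]]) -> str:
        """Stand-in for a chat-completions call; returns the assistant's reply."""
        raise NotImplementedError  # swap in a real API client here

    def chat() -> None:
        history: List[Dict[str, str]] = []  # the conversation "memory" lives client-side
        while True:
            user_text = input("you> ")
            history.append({"role": "user", "content": user_text})
            reply = call_model(history)      # the full transcript is re-sent as input
            history.append({"role": "assistant", "content": reply})
            print("bot>", reply)
    ```

    The point of the sketch is only that the per-turn input is assembled on the client side; the context window bounds how much of that text can be carried along.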

    • trollbearpig@lemmy.world · edited · 5 months ago

      Can you point me to the paper/article/whatever where this is being discussed, please? I’m actually interested in learning about it. Even if I don’t like the way they are using the technology, I’m still a programmer at heart and would love to read about this.

      To the point of the conversation: honestly, man, that was just an example of the many problems I see with this. But you have to understand that people like you keep asking us for proof that LLMs are not smart. Come on, man, you are the ones claiming you solved the hard problem of mind, on the first try no less hahaha. You are the ones with the burden of proof here, and you have provided nothing of the sort. Do better, people, or stop trying to confuse us with rhetoric.

      • afraid_of_zombies@lemmy.world · 5 months ago

        I mean, it’s just the release notes. Go to their website. I have used the memory feature myself on the app, so I know it’s working, and as for the context window, it can actually tell you what it is for each session.

        > But you have to understand that people like you keep asking us for proof that LLMs are not smart.

        Where? Where have I asked that? Don’t strawman me; I am not your punching bag and won’t defend something I didn’t say. You can “come on man” all you want, but it won’t change my answer. I have made zero claims about whether this thing is smart, nor have I asked anyone to weigh in on the issue either way.

        I pointed out two features it has now, and I don’t think anyone can dispute that it has them: a larger context window and memory that it can update. That is all I said, a very small claim that you can verify for yourself in under five minutes by going to their website.

        • trollbearpig@lemmy.world · edited · 5 months ago

          Oh, you are talking about this: https://help.openai.com/en/articles/8590148-memory-faq hahahaha. I’m sorry, man, but you are either a moron or arguing in bad faith. That’s yet another feature where they inject even more shit into the context/input to make it feel like the thing has memory. That’s literally yet another example of what I was pointing out, so thanks for confirming my suspicions. Seriously, dude, do better if you really want to have a conversation. Your response made me waste my time, and on top of that you insult me hahaha.
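          To illustrate the pattern being described, the sketch below treats saved “memories” as plain text prepended to the prompt before each request. It is an illustration of that idea only, not OpenAI’s actual implementation; the names `saved_memories`, `remember`, and `build_messages` are hypothetical.

          ```python
          # Illustrative sketch only: "memory" implemented as extra text injected
          # into the model's input. Not OpenAI's code; all names are hypothetical.
          from typing import Dict, List

          saved_memories: List[str] = []  # notes persisted between conversations

          def remember(note: str) -> None:
              """Store a note the assistant decided was worth keeping."""
              saved_memories.append(note)  # e.g. "user prefers metric units"

          def build_messages(history: List[Dict[str, str]]) -> List[Dict[str, str]]:
              """Prepend the saved notes so they ride along as ordinary input text."""
              memory_block = "\n".join(f"- {m}" for m in saved_memories)
              system = {"role": "system",
                        "content": "Known facts about the user:\n" + memory_block}
              return [system] + history
          ```

          Under this reading, whatever gets remembered still has to fit into, and compete for, the same context window as the rest of the conversation.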