An Analysis of DeepMind's 'Language Modeling Is Compression' Paper
  • abhi9u@lemmy.world (OP)

    Interesting. I’m just thinking aloud to understand this.

    In this case, the models look at a short sequence of bytes in their context and can predict the next byte(s) with good accuracy, which allows efficient encoding. Most of our memories are associative, i.e. we associate them with some concept/name/idea. So do you mean our brain uses the concept to predict a token, which then gets decoded in the form of a memory?
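
    Rough sketch of how I'm picturing the encoding side (assuming an arithmetic-coding-style accounting where each byte costs about -log2 of the probability the model assigns it; predict_next_byte_probs is just a hypothetical stand-in for the model, not anything from the paper):

    ```python
    import math

    def compressed_size_bits(data: bytes, predict_next_byte_probs) -> float:
        """Estimate the ideal compressed size of `data` in bits, given a model
        that returns a probability for each of the 256 possible next bytes."""
        total_bits = 0.0
        for i, byte in enumerate(data):
            probs = predict_next_byte_probs(data[:i])  # model sees the preceding bytes
            total_bits += -math.log2(probs[byte])      # ideal code length for this byte
        return total_bits

    # A uniform model (no prediction) costs 8 bits per byte, i.e. no compression;
    # a model that puts high probability on the true next byte costs far fewer.
    uniform = lambda context: [1 / 256] * 256
    print(compressed_size_bits(b"hello world", uniform))  # ~88 bits
    ```

    So the better the model's next-byte predictions, the shorter the encoding.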

    • AbouBenAdhem@lemmy.world

      Firstly—maybe what we consider an “association” is actually an indicator that our brains are using the same internal tokens to store/compress the memories.

      But what I was thinking of specifically is narrative memories: our brains don’t store them frame-by-frame like video, but rather, they probably store only key elements and use their predictive ability to extrapolate the omitted elements on demand.

      • InvertedParallax@lemm.ee

        No, because our brains also use hierarchical activation for association, which is why, if we're talking about bugs and I say "I got a B", you assume it's a stinging insect, not a passing grade.

        If it were simple word2vec, we wouldn't have that additional means of noise suppression.
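
        Toy illustration of what I mean (hypothetical vectors, just to contrast a context-blind lookup with a context-conditioned one):

        ```python
        # word2vec-style lookup: "B" gets one fixed vector no matter the topic.
        static_embedding = {"B": [0.2, 0.7]}

        def contextual_embedding(word, context):
            """Hypothetical context-conditioned representation: the same token
            ends up near different meanings depending on the surrounding topic."""
            if "bugs" in context:
                return [0.9, 0.1]   # pulled toward "bee, the insect"
            return [0.1, 0.9]       # pulled toward "B, the grade"

        print(static_embedding["B"])                          # identical in every sentence
        print(contextual_embedding("B", ["bugs", "sting"]))   # disambiguated by context
        print(contextual_embedding("B", ["exam", "score"]))
        ```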