A.I.’s un-learning problem: Researchers say it’s virtually impossible to make an A.I. model ‘forget’ the things it learns from private user data

I’m rather curious to see how the EU’s privacy laws are going to handle this.

(Original article is from Fortune, but Yahoo Finance doesn’t have a paywall)

  • SatanicNotMessianic@lemmy.ml · 1 year ago

    Could you outline what you think a human cognitive model of “cat” looks like without referring to anything non-cat?

        • Veraticus@lib.lgbt · 1 year ago

          You can’t! It’s like describing fire to someone who has never experienced it.

          This is the root of experience and memory, and it’s why humans are different from LLMs, which, again, can never understand or experience a cat or fire. But the difference is more fundamental than that. To an LLM, there is no difference between fire and cat: both are simply words with frequencies attached that lead to other words. Their only difference is the positions they occupy in a mathematical model, which sometimes outputs one instead of the other. Nothing more.

          Unless you’re arguing my inability to express a mental construct to you completely means I myself don’t experience it. Which I think you would agree is absurd?

            • Veraticus@lib.lgbt · 1 year ago

              How is that germane to this question? Do you agree humans can experience mental phenomena? Like, do you think I have any mental models at all?

              If so, then that is the difference between me and an LLM.

              • SatanicNotMessianic@lemmy.ml · 1 year ago

                I think you have a mental model and that it is analogous to the model created in an LLM in that it is representable by a semantic graph/n-dimensional matrix relating concepts that are realized via terms.

                You have never in your life encountered a dodo, yet you know what a dodo is (I’m using the present tense because I’m talking about a concept). It is a bird, so it relates evolutionarily and ecologically to “bird.” It’s flightless, so it relates to “ostrich” and “emu.” It is extinct, so it relates to all of the species-extinction ideas you have. Humans perhaps contributed to the extinction, so it links to human-caused ecological change, which in turn links to human-caused climate change. Human-introduced invasive species caused ecological change on Mauritius, and that may have been a major factor in driving the dodo to extinction. People ate them, so maybe in your head it has a relation to wild turkeys. And so on. That’s how minds work. That’s how the human cognitive model of the world works. And that’s how LLMs work.
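
                One way to picture that association web is as a literal graph. Here’s a minimal sketch in Python; the nodes and links are invented for illustration, not taken from any real model or dataset:

                ```python
                # Toy semantic graph: concepts as nodes, associations as edges.
                # The specific links are illustrative only.
                concept_graph = {
                    "dodo": {"bird", "flightless", "extinct", "food"},
                    "flightless": {"dodo", "ostrich", "emu"},
                    "extinct": {"dodo", "human-caused ecological change"},
                    "human-caused ecological change": {"extinct", "climate change"},
                    "food": {"dodo", "wild turkey"},
                }

                def neighbors(concept, depth=1):
                    """Concepts reachable from `concept` within `depth` hops."""
                    frontier, seen = {concept}, {concept}
                    for _ in range(depth):
                        frontier = {n for c in frontier
                                    for n in concept_graph.get(c, ())} - seen
                        seen |= frontier
                    return seen - {concept}

                print(neighbors("dodo", depth=2))
                # Two hops from "dodo" already reach "ostrich", "emu",
                # and "wild turkey".
                ```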

                Visualize an n-dimensional space in which these semantic topics are embedded. The interpretation of the dimensions doesn’t matter; we’re only worried about the distances between concepts. Dodo is closer to turkey than it is to snake. Dodo is closer to snake than it is to rock. Dodo is closer to rock than it is to the feeling of melancholy I get when listening to Tori Amos. We can grasp this intuitively, and we can mathematize it by formally placing the various concepts in a metric space.
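
                To make the distance idea concrete, here’s a toy sketch with made-up three-dimensional vectors (real embeddings are learned and have hundreds or thousands of dimensions, but the comparison works the same way):

                ```python
                import math

                # Invented 3-d "embeddings" -- purely illustrative numbers.
                embeddings = {
                    "dodo":   [0.9, 0.8, 0.1],
                    "turkey": [0.8, 0.7, 0.2],
                    "snake":  [0.5, 0.2, 0.3],
                    "rock":   [0.1, 0.0, 0.9],
                }

                def distance(a, b):
                    """Euclidean distance between two concept vectors."""
                    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

                for word in ("turkey", "snake", "rock"):
                    d = distance(embeddings["dodo"], embeddings[word])
                    print(f"dodo -> {word}: {d:.3f}")
                # Distances increase in exactly the order described above:
                # turkey < snake < rock.
                ```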

                There’s a lot more to unpack, from neural correlates of consciousness to cognitive linguistics and embodied learning using metaphorical reasoning, but that’s kind of the gist of it boiled down to an overly long post.

                • Veraticus@lib.lgbt · 1 year ago

                  That’s how LLMs work.

                  This is not how LLMs work. LLMs do not have complex webs of thought correlating concepts like birds, flightlessness, extinction, and food. That is how humans work.

                  An LLM assembles a mathematical model of which word should follow any other word by analyzing terabytes of data. If in its training corpus the word nearest to “dodo” is “attractive,” the LLM will almost always tell you that dodos are attractive. This is not because those concepts are actually related to the LLM, or because the LLM is attracted to dodos, or because LLMs have any thoughts at all. It is simply the output of a bunch of math based on word proximity.
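
                  As a caricature of that point, here is a bigram model: count which word follows which in a (tiny, invented) corpus and always emit the most frequent successor. Real LLMs use learned weights over long contexts rather than raw counts, but the output is still math over word proximity:

                  ```python
                  from collections import Counter, defaultdict

                  # Tiny invented corpus, purely for illustration.
                  corpus = ("the dodo is extinct . "
                            "the dodo is attractive . "
                            "the dodo is attractive").split()

                  # Count how often each word follows each other word.
                  successors = defaultdict(Counter)
                  for word, nxt in zip(corpus, corpus[1:]):
                      successors[word][nxt] += 1

                  def next_word(word):
                      """Most frequent successor of `word` in the corpus."""
                      return successors[word].most_common(1)[0][0]

                  print(next_word("is"))
                  # "attractive" -- not because the model finds dodos
                  # attractive, but because that pairing won the count.
                  ```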

                  Humans have cognition and mental models. LLMs have frequencies and word weights. You have correctly identified that both of these things can be represented as n-dimensional matrices, but those same tools can also describe electrical currents or the movement of stars, and those things contain no more thought, and host no more mental phenomena, than LLMs do.