A.I.’s un-learning problem: Researchers say it’s virtually impossible to make an A.I. model ‘forget’ the things it learns from private user data

I’m rather curious to see how the EU’s privacy laws are going to handle this.

(Original article is from Fortune, but Yahoo Finance doesn’t have a paywall)

  • Veraticus@lib.lgbt
    1 year ago

    “That’s how LLMs work.”

    This is not how LLMs work. LLMs do not have complex thought webs correlating concepts like birds, flightlessness, extinction, food, and so on. That is how humans work.

    An LLM assembles a mathematical model of what word should follow any other word by analyzing terabytes of data. If in its training corpus the nearest word to “dodo” is “attractive,” the LLM will almost always tell you that dodos are attractive. This is not because those concepts are actually related as far as the LLM is concerned, because the LLM is attracted to dodos, or because LLMs have any thoughts at all. It is simply the output of a bunch of math based on word proximity.
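
    For what it’s worth, that “bunch of math based on word proximity” can be sketched as a toy next-word counter. This is a deliberately crude stand-in, not how a real LLM is implemented (real models learn neural-network weights over token embeddings rather than raw counts), but the prediction target is the same:

    ```python
    from collections import Counter, defaultdict

    # Toy stand-in for "word proximity" math: count which word follows which
    # in a tiny corpus, then always emit the most frequent successor.
    corpus = "the dodo is extinct . the dodo was a flightless bird .".split()

    successors = defaultdict(Counter)
    for current, nxt in zip(corpus, corpus[1:]):
        successors[current][nxt] += 1

    def predict_next(word: str) -> str:
        """Return the word most often observed after `word` in the corpus."""
        counts = successors.get(word)
        return counts.most_common(1)[0][0] if counts else "<unknown>"

    print(predict_next("dodo"))  # whichever successor was counted most often
    ```

    A real model replaces the counting with gradient descent over billions of parameters, but the output is still “the statistically likely next token,” not a belief about dodos.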

    Humans have cognition and mental models. LLMs have frequency and word weights. While you have correctly identified that both of these things can be portrayed as n-dimensional matrices, you can also use those tools to describe electrical currents or the movement of stars. But those systems contain no more thought, and have no more mental phenomena occurring in them, than LLMs do.

      • Veraticus@lib.lgbt
        1 year ago

        No, they embed word weights in metric spaces. Human thought is more like semantic concepts in a metric space (though I don’t think that’s entirely settled; human thought is not very well understood). Even if the spaces are similar, what’s in them is definitely not.
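
        To make “word weights in a metric space” concrete, here is a minimal sketch using made-up 3-dimensional vectors and cosine similarity as the nearness measure (real embeddings are learned during training and have hundreds or thousands of dimensions):

        ```python
        import math

        # Hypothetical embeddings, invented purely for illustration: each word
        # is a point in a small vector space, and "nearness" is cosine similarity.
        embeddings = {
            "dodo":    [0.9, 0.1, 0.3],
            "pigeon":  [0.8, 0.2, 0.4],
            "granite": [0.1, 0.9, 0.7],
        }

        def cosine_similarity(a: list[float], b: list[float]) -> float:
            dot = sum(x * y for x, y in zip(a, b))
            norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
            return dot / norms

        print(cosine_similarity(embeddings["dodo"], embeddings["pigeon"]))   # close to 1: "near"
        print(cosine_similarity(embeddings["dodo"], embeddings["granite"]))  # much lower: "far"
        ```

        The dispute is over what those vectors mean: for the model they are just weights that make next-word prediction accurate, whereas the claim about human thought is that the points stand for semantic concepts.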