A.I.’s un-learning problem: Researchers say it’s virtually impossible to make an A.I. model ‘forget’ the things it learns from private user data - eviltoast

I’m rather curious to see how the EU’s privacy laws are going to handle this.

(Original article is from Fortune, but Yahoo Finance doesn’t have a paywall)

  • Veraticus@lib.lgbt · 1 year ago

    This is a somewhat sensationalist and frankly uninteresting way to describe neural networks. Obviously it would take years of analysis to understand the weights of each individual node and what they accomplish (if that is even understandable in a way that would make sense to people without very advanced math degrees). But that doesn’t mean we don’t understand the model or what it does. We can and we do.

    You have misunderstood this article if what you took from it is this:

    It’s also very similar in the way that nobody actually can tell precisely how it works, for some reason it just does.

    We do understand how it works – as an overall system. Inspecting individual nodes is as irrelevant to understanding an LLM as cataloguing the trees in a forest is to learning the name of the city next to it.