answer = sum(n) / len(n) - eviltoast
  • magic_lobster_party@kbin.run
    4 months ago

    This whole “we can’t explain how it works” is bullshit

    Mostly it’s just millions of combinations of y = k*x + m, with y = max(0, x) applied in between. You don’t need more than high school algebra to understand the building blocks.
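
    A minimal sketch of that claim (the weights here are made up for illustration): stacking affine maps y = k*x + m with max(0, x) between them is structurally all a feed-forward net is.

    ```python
    import numpy as np

    def relu(x):
        # the y = max(0, x) nonlinearity between layers
        return np.maximum(0, x)

    def layer(x, k, m):
        # one building block: affine map y = k*x + m, then ReLU
        return relu(k @ x + m)

    # two tiny layers with hypothetical weights
    x = np.array([1.0, -2.0])
    k1 = np.array([[0.5, -1.0], [2.0, 0.0]])
    m1 = np.array([0.1, -0.3])
    k2 = np.array([[1.0, 1.0]])
    m2 = np.array([0.0])

    h = layer(x, k1, m1)
    y = k2 @ h + m2  # final layer is usually left linear
    ```

    Real networks just repeat this block millions of times with learned weights.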

    What we can’t explain is why it works so well. It’s difficult to understand how the information is propagated through all the different pathways. There are some ideas, but it’s not fully understood.

    • Match!!@pawb.social
      4 months ago

      ??? it works well because we expect the problem space we’re searching to be continuous and differentiable and the targeted variable to be dependent on the features given, why wouldn’t it work

      • magic_lobster_party@kbin.run
        4 months ago

        The explanation is not that simple. Some model configurations work well. Others don’t. Not all continuous and differentiable models cut it.

        It’s not a given that a model can generalize the problem so well. It can just memorize the training data, yet completely fail on any new data it hasn’t seen.
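
        A toy illustration of that point (my own example, not tied to neural nets specifically): a model flexible enough to memorize can fit its training points essentially exactly and still be badly wrong on points it never saw.

        ```python
        import numpy as np

        rng = np.random.default_rng(0)

        # 8 noisy training samples from an underlying sine function
        x_train = np.linspace(0, 1, 8)
        y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.1, 8)

        # degree-7 polynomial: 8 parameters for 8 points -> pure memorization
        coeffs = np.polyfit(x_train, y_train, 7)
        train_err = np.max(np.abs(np.polyval(coeffs, x_train) - y_train))

        # fresh points the model never saw
        x_test = np.linspace(0.05, 0.95, 8)
        y_test = np.sin(2 * np.pi * x_test)
        test_err = np.max(np.abs(np.polyval(coeffs, x_test) - y_test))
        ```

        train_err comes out near zero while test_err does not, which is exactly the memorize-but-don’t-generalize failure mode. Why trained neural nets usually land on the generalizing solution instead is the open question.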

        What makes a model able to see a picture of a cat it has never seen before, and respond with “ah yes, that’s a cat”? What kind of “cat-like” features has it managed to generalize? Why do these features work well?

        When I ask ChatGPT to translate a script from Java to Python, how is it able to interpret the instruction and execute it? What features has it managed to generalize to be able to perform this task?

        Just saying “why wouldn’t it work” isn’t a valid explanation.