Make illegally trained LLMs public domain as punishment - eviltoast

It’s all made from our data, anyway, so it should be ours to use as we want

  • ClamDrinker@lemmy.world · 8 hours ago

    Although I’m a firm believer that most AI models should be public domain or open source by default, the premise of “illegally trained LLMs” is flawed, because there really is no assurance that the LLMs currently in use were illegally trained to begin with. These things are still being argued in court, but the AI companies have a pretty good defense in the fact that analyzing publicly viewable information is a deep-rooted freedom that provides a lot of positives to the world.

    The idea of… well, ideas, being copyrightable should shake the boots of anyone in this discussion. Especially since, when the laws on the books around these kinds of things become an active topic of change, they rarely shift in the direction of more freedom for the exact people we want to give it to. See: Copyright and Disney.

    The underlying technology simply has more than enough good uses that banning it would just cause it to flourish elsewhere that does not ban it, which means, as usual, that everyone but the multinational companies loses out. The same would happen with stricter copyright, as only the big companies have the means to build their own models with their own data. As it currently stands, the general public is set up for a lose-lose against these companies. Only by requiring the models to be made available to the public do we ensure that the playing field doesn’t tip further in their favor, to the point that AI technology exists only to benefit them.

    If the model is built on the corpus of humanity, then humanity should benefit.

    • patatahooligan@lemmy.world · 28 minutes ago

      the AI companies have a pretty good defense in the fact analyzing publicly viewable information is a pretty deep rooted freedom that provides a lot of positives to the world

      They are not “analyzing” the data. They are feeding it into a regurgitating mechanism. There’s a big difference. Their defense is only “good” because AI is being misrepresented and misunderstood.

      I agree that we shouldn’t strive for more strict copyright. We should fight for a much more liberal system. But as long as everyone else has to live by the current copyright laws, we should not let AI companies get away with what they’re doing.

      • gazter@aussie.zone · 2 minutes ago

        I’ve never really delved into the AI copyright debate before, so forgive my ignorance on the matter.

        I don’t understand how an AI reading a bunch of books and rearranging some of those words into a new story is different from a human author reading a bunch of books and rearranging those words into a new story.

        Most AI art I’ve seen has been… unique, to say the least. To me, it tends to be different enough from the art it was trained on not to be a direct ripoff, so personally I don’t see the issue.

    • Echo Dot@feddit.uk · 2 hours ago (edited)

      Banning AI is out of the question. Even the EU accepts that, and they tend to be pretty ban-heavy, unlike the US.

      But it’s important that we have these discussions about how copyright applies to AI so that we can actually get an answer and move on. Right now it’s a legal quagmire that no one really wants to get involved in except the big companies. If a small group of university students wanted to build an AI right now, they couldn’t, because of the legal nightmare that is acquiring training data in the current twilight zone of law.