The thing is these are two separate arguments.
One is whether or not training is infringement.
The other is whether or not there needs to be stricter filters on output to avoid copyright.
The second one is easy to both argue for and implement; it just means funneling money toward a fine-tuned RAG model that detects infringement before the system spits out content. I'd expect we'll be seeing that in the near future. It's similar to the arguments that YouTube was doomed at its acquisition because of rampant copyright infringement, but they just created a tagging system, and now people complain about over-zealous DMCA enforcement. Generative AI will end up in the same place, with the same results, for cloud-based models.
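To make the "filter before output" idea concrete, here's a minimal sketch of what such a check might look like. This is purely illustrative: the function names, the 50-character n-gram window, and the 5% threshold are all made up, and a real system would presumably use a retrieval index over embeddings (the RAG approach mentioned above) rather than brute-force n-gram comparison.

```python
# Hypothetical output-side infringement filter (illustrative only):
# before returning generated text, check it for long verbatim overlaps
# against a corpus of protected works and suppress near-copies,
# roughly analogous to YouTube's Content ID matching.

def char_ngrams(text: str, n: int = 50) -> set[str]:
    """Sliding character n-grams; a long window catches verbatim
    copying rather than mere topical similarity."""
    text = " ".join(text.split()).lower()
    return {text[i:i + n] for i in range(max(len(text) - n + 1, 1))}

def overlap_score(candidate: str, protected: str, n: int = 50) -> float:
    """Fraction of the candidate's n-grams appearing verbatim in a protected work."""
    cand = char_ngrams(candidate, n)
    return len(cand & char_ngrams(protected, n)) / len(cand) if cand else 0.0

def filter_output(generated: str, corpus: list[str], threshold: float = 0.05) -> str:
    """Release the model's output only if no protected work matches above the threshold."""
    for work in corpus:
        if overlap_score(generated, work) > threshold:
            return "[blocked: output too close to protected material]"
    return generated

if __name__ == "__main__":
    corpus = ["It was the best of times, it was the worst of times."]
    print(filter_output("It was the best of times, it was the worst of times.", corpus))
    print(filter_output("A completely original sentence about language models.", corpus))
```

The interesting design question is where the threshold sits: too strict and you get exactly the over-zealous enforcement complaints described above; too loose and verbatim passages slip through.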
The first is much murkier, and I'm extremely skeptical that the suits regarding it will be successful, given the degree of transformation and the relative scope of the material in any given suit compared to the total training set. Moreover, the purpose of copyright law in the first place was to encourage creation, and setting back arguably the most powerful creative tool in history (particularly when that likely means being eclipsed by other nation states with different attitudes toward IP) doesn't seem all that encouraging.
If I were putting money on it: we'll see multiple rulings against treating training as infringement, which will settle the issue, but we'll see "copyright detection as a service" models pretty much everywhere for a short period, until the use of generative AI by creatives is so widespread that the uncopyrightability of its output shifts business models from media as a product to media as a service.
There is clearly value in a trained AI that an untrained model lacks; otherwise you could sell the two as a product or service at the same price. That training has value, and the price difference between a trained and an untrained model is that value.
Because training has value, the training material has value as well. You can't commercially extract value from someone's product to make and sell your own product unless you buy their product wholesale or license it.
And if they argue that paying would be financially prohibitive to training, they're admitting that the training material has financial value. It'd be cheap if the training material weren't valuable.
I see two likely paths forward, presuming the court rules in favor of the NYT. The first is that AI companies work out deals with publishers and media companies to use their work without breaking the bank. The second is that AI companies don't change the training process but change their financial model: if the AI is free to the public, they aren't making money off of anyone's work. They'd have to sell ads or something.
Spaceballs extracts almost all of its value from Star Wars without paying for it.
You absolutely can extract value from things when the way in which you do it is fair use.
Which is typically understood to mean use that is transformative enough not to be simply derivative, or that is in the public interest.
And I think you'd have a very difficult time showing an LLM's general use to be derivative of any specific part of the training data.
We’ll see soon, as these court cases resolve.
And if the cases find in favor of the plaintiffs, "not charging" isn't going to work out. You can't copy material, give it away for free, and get away with it. If there's precedent that training is infringement, it's unlikely the decision will be worded so narrowly that similar cases against companies that don't charge will be found not to be infringement.
Keep in mind one of the pending cases is against Meta, whose model is completely free to access and use.
Just want to say this is great food for thought. It's going to take me some time to mull it over.
I agree. Both your comments were exciting views to read. Thanks!