Suing Writers Seethe at OpenAI's Excuses in Court
    • makeasnek@lemmy.ml

      No, that’s not how it works. It stores learned statistics, like “word x is more likely to follow word y than word z” or “people from country x are more likely to eat food a than food b”. That is what is distributed when the AI model is shared. To learn that, it just reads books zillions of times and updates its table of likelihoods, just like an artist might listen to a Lil Wayne album hundreds of times, learning a little more each time about his rhyme style or how his beats work or whatever. It’s more complicated than that, but that’s a layperson’s explanation of how it works. The book isn’t stored in there somewhere, and the book’s contents aren’t transferred to other parties.
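
      A minimal sketch of that “table of likelihoods” idea, as a toy bigram counter in Python (the corpus and names are made up for illustration; real LLMs learn neural-network weights, not a literal table):

      ```python
      from collections import Counter, defaultdict

      # Count how often each word follows each other word in a tiny corpus.
      # This is a toy stand-in for "a table of likelihoods"; a real model
      # learns dense neural-network weights instead of a literal table.
      corpus = "the cat sat on the mat the cat ate".split()

      following = defaultdict(Counter)
      for prev, nxt in zip(corpus, corpus[1:]):
          following[prev][nxt] += 1

      # What gets "stored" is derived statistics, not the original text.
      total = sum(following["the"].values())
      for word, count in following["the"].items():
          print(f"P({word!r} | 'the') = {count / total:.2f}")
      ```

      At scale, the counts don’t preserve the books themselves; only the statistics remain.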

      • Madison_rogue@kbin.social

        The learning model is artificial, whereas a human is sentient. If a human learns from a piece of work, it’s fine for them to emulate its style in their own work. But sample that work directly, and the original artist is owed compensation. This was a huge deal in the late ’80s, when electronic music sampled earlier recordings, and there are several copyright cases that back the original owners’ claims to royalties.

        The lawsuits allege that the models used copyrighted work to learn. If that is so, writers are due compensation for their copyrighted work.

        This isn’t litigation against the technology. It’s litigation over what a machine can freely use in its learning model. Had ChatGPT, Meta, etc., used only works in the public domain, this wouldn’t be an issue. Yet it looks as if they did not.

        EDIT

        And before someone mentions that the books may have been bought and then used in the model: it may not matter. The Birthday Song is a perfect example; its copyright pushed several restaurant chains to use other tunes up until the claim was invalidated in 2016. Every time the AI uses the copied work in its output, it may be subject to copyright.

        • Heratiki@lemmy.ml

          The creator of ChatGPT is sentient. Why couldn’t it be said that this is their expression of the learned works?

            • Heratiki@lemmy.ml

              I’ve glanced at these a few times now, and there are a lot of ifs, ands, and buts in there.

              I’m not understanding how an AI itself infringes on copyright when, at this point, it has to be directed in its creation (GPT specifically). How is that any different from me using a program that finds a specific piece of text and copies it into my own document? In that case the document would be presented by me, and thus I would be infringing, not the software. AI (for the time being) is simply software and incapable of infringement. And suing a company that makes the AI simply because it used data to train its software isn’t infringement either, since the works are not copied verbatim from their original source unless the user specifically requests it. That would put the infringement on the user.

              • Phanatik@kbin.social

                There’s a bit more nuance to your example. The company is liable for building a tool that allows plagiarism to happen. That’s not down to how people are using it; that’s just what the tool does.

                • Heratiki@lemmy.ml

                  So a company that makes lock-picking tools is liable when a burglar uses them to steal? Or a car manufacturer is liable when someone uses their car to kill? How about knives, guns, tools, chemicals, restraints, belts, rope? I could go on and name nearly every word in the English language, yet none of those manufacturers can be sued when someone misuses their products. You’d have to show malicious intent, which I just don’t see being possible in the context they’re arguing.

                  • Phanatik@kbin.social

                    The reason GPT is different from those examples (not all of them, but I’m not going into that) is that there the malicious action is on the part of the user. With GPT, the tool itself gives you an output it has plagiarised. The user can take that output and submit it as their own, which is further plagiarism, but that doesn’t absolve GPT. The problem is that GPT doesn’t cite its sources, which would be very helpful for understanding where its information comes from and for fact-checking it.

        • LemmysMum@lemmy.world

          I can read a copyrighted work and create a new work from the experience and knowledge gained. At what point is what I’m doing any different from what the AI does?

          • mkhoury@lemmy.ca

            For one thing: when you do it, you’re the only one who can express that experience and knowledge. When the AI does it, everyone can express that experience and knowledge. It’s kind of like the difference between artisanal and industrial: a difference of scale that has a great impact on the livelihoods of creators.

            • LemmysMum@lemmy.world

              Yes, it’s wonderful. Knowledge might finally become free with the advent of AI tools, and we might finally see the death of the copyright system. Oh, how we can dream.

              • Phanatik@kbin.social

                I’m not sure what you mean by this. Information has always been free if you look hard enough. With the advent of the internet, you’re able to connect with people who possess this information and you’re likely to find it for free on YouTube or other websites.

                Copyright exists to protect against plagiarism and theft (in an ideal world). I understand the frustration that comes with archaic laws, and that updates to them move at a glacial pace; however, the death of copyright would harm more people than you’re expecting.

                Piracy has existed as long as the internet has. Companies have complained ceaselessly about lost profits, but once LLMs came along, they were suddenly fine with piracy as long as it’s masked behind a glorified search algorithm. They’re fine with cutting jobs and replacing them with an LLM that produces lower-quality output at significantly cheaper rates.

                • LemmysMum@lemmy.world

                  Information has always been free if you look hard enough. With the advent of the internet, you’re able to connect with people who possess this information and you’re likely to find it for free on YouTube or other websites.

                  And with the advent of AI we no longer have to look hard.

          • Phanatik@kbin.social

            For one thing, you can do the task completely unprompted; the LLM has to be told what to do. On that front, you have an idea in your head of the task you want to achieve and how you want to go about it, and the output is unique because it’s shaped by your perceptions. The LLM doesn’t really have perceptions; it has probabilities. It has broken the outputs of human creativity down into numbers and is attempting to replicate them.
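
            A toy illustration of “probabilities, not perceptions” (the distribution below is invented for the example; a real model computes one over tens of thousands of tokens):

            ```python
            import random

            # An LLM's "choice" of the next token is a weighted draw from a
            # learned probability distribution. These numbers are invented
            # for the example; real models score a huge vocabulary.
            next_token_probs = {"cat": 0.5, "mat": 0.3, "hat": 0.2}

            tokens = list(next_token_probs)
            weights = list(next_token_probs.values())
            print(random.choices(tokens, weights=weights, k=1)[0])
            ```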

            • LemmysMum@lemmy.world

              The AI does have perceptions, fed into it by us as inputs. I give the AI my perceptions, it creates a facsimile, and I adjust the perceptions I feed it until I receive an output that meets my requirements. It’s no different from doing it myself, except I didn’t need to read all the books and learn all the lessons myself. I still tailor the end product, just not at the micro scale that was traditionally needed.

              • Phanatik@kbin.social

                You can’t feed it perceptions any more than you can feed me your perceptions. You give it text, and the quality of the output is determined by how the LLM has been trained to understand that text. If by feeding it perceptions you mean what it’s trained on, I have to remind you that the reality GPT is trained on is the one dictated by the internet, with all of its biases. The internet is not a reflection of reality; it’s how many people escape from reality and share information, and it’s highly subject to survivorship bias. If information doesn’t appear on the internet, GPT is unaware of it.

                To give an example, if GPT gives you a bad output and you tell it so, it will apologise. That seems smart, but it isn’t really: it doesn’t actually feel remorse; it’s giving a predetermined response based on what it’s understood from your text.

                • LemmysMum@lemmy.world

                  We’re not talking about perceptions as in making an AI literally perceive anything. I can feed you prompts and ideas of my own and get an output no different than if I were using AI tools; the difference is that AI tools have already gathered the collective knowledge you’d get from, say, doing a course in Photoshop, taking an art class, reading an encyclopaedia or a novel, going to school for music theory, and so on.

                  • Phanatik@kbin.social

                    I get that part, but I think what gets taken more seriously is how “human” the responses seem, which is a testament to how good the LLM is. But that’s set dressing when GPT has been known to give incorrect, outdated, or contradictory answers. Not always, but unless you know what kind of answer to expect, you have to verify what it’s telling you, which means you’ll spend half your time fact-checking the LLM.

          • BraveSirZaphod@kbin.social

            There is a practical difference in the time required and the sheer scale of output in the AI context, one that makes a very material difference to the actual societal impact, so it’s not unreasonable to consider treating it differently.

            Set up a lemonade stand on a random street corner and you’ll probably be left alone unless you have a particularly Karen-dominated municipal government. Try to set up a thousand lemonade stands in every American city, and you’re probably going to start to attract some negative attention. The scale of an activity is a relevant factor in how society views it.

        • Kichae@kbin.social

          It’s litigation around what a machine can freely use in its learning model.

          No, it’s not that either. It’s litigation around what resources a person can exploit to develop a product without paying for that right.

          The machine is doing nothing wrong. It’s not feeding itself.

    • Dudewitbow@lemmy.ml

      It’s less about copying the work and more about looking at the patterns that appear in it.

      To take a very rudimentary example: if I wanted a word and the first letter was Q, what would the second letter be?

      Statistically, of course, the next letter is u, and it’s not common for words starting with Q to have a different letter after it. ML/AI is like taking these small situations but using a ridiculous number of parameters to come up with something based on several internal models. These parameters generally carry some context.
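
      A minimal sketch of that Q example (the word list is made up for illustration):

      ```python
      from collections import Counter

      # Count which letter follows an initial "q" in a tiny word list.
      # The list is made up for illustration; a real model estimates
      # such statistics from a massive corpus.
      words = ["queen", "quick", "quote", "quilt", "qatari", "qi"]

      second = Counter(w[1] for w in words if w.startswith("q") and len(w) > 1)
      total = sum(second.values())
      for letter, count in second.most_common():
          print(f"P({letter!r} after 'q') = {count / total:.2f}")
      ```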

      It’s like being told to read a book thoroughly and then, afterwards, being told to reproduce it. You probably cannot make it 1:1, but you could probably get the general gist of the story. The difference between you and the machine is that the machine has read a lot of books and contextually knows the patterns, so it can generate something similar faster and more accurately, but not an exact one-for-one copy of the original.

    • Aria@lemmygrad.ml

      When you download Vicuna or Stable Diffusion XL, they’re a handful of gigabytes. But when you go to download LAION-5B, it’s 240 TB. So where did all that data go if it’s being copy/pasted and regurgitated in its entirety?
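
      A rough back-of-the-envelope version of that argument (the ~7 GB checkpoint size and ~5.85 billion image-text pairs are approximations):

      ```python
      # If the model "contained" its training set, how many bytes could
      # it store per training example? All sizes are approximate.
      model_bytes = 7e9        # ~7 GB, roughly an SDXL-class checkpoint
      dataset_bytes = 240e12   # the ~240 TB LAION-5B figure cited above
      num_examples = 5.85e9    # ~5.85 billion image-text pairs in LAION-5B

      print(f"apparent compression: {dataset_bytes / model_bytes:,.0f}x")
      print(f"bytes per example:   {model_bytes / num_examples:.2f}")
      ```

      At roughly a byte per example, the weights cannot hold the dataset verbatim; at best they hold distilled statistics.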

      • andruid@lemmy.ml

        Exactly! If it were just outputting exact data, they wouldn’t bother making new works and would just pivot to being the world’s greatest compression method.

        Though there is some work where researchers have heavily modified these models to overfit and do exactly that.