Report: Potential NYT lawsuit could force OpenAI to wipe ChatGPT and start over
  • lily33@lemm.ee

    Just now, I tried to get Llama-2 (I’m not using OpenAI’s stuff because they’re not open) to reproduce the first few paragraphs of Harry Potter and the Philosopher’s Stone, and it didn’t work at all. It created something vaguely resembling them, but with lots of made-up stuff that doesn’t make much sense. I certainly can’t use it to read the book or pirate it.
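
    For anyone who wants to poke at this themselves, the test looks roughly like the sketch below: a minimal example assuming the meta-llama/Llama-2-7b-chat-hf checkpoint on Hugging Face, with an illustrative prompt and generation settings rather than the exact ones I used.

    ```python
    # Minimal sketch: ask a local Llama-2 chat model to reproduce the opening
    # of Harry Potter and the Philosopher's Stone. The checkpoint name, prompt,
    # and generation settings are assumptions for illustration.
    from transformers import AutoTokenizer, AutoModelForCausalLM

    model_name = "meta-llama/Llama-2-7b-chat-hf"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

    prompt = (
        "[INST] Quote the first few paragraphs of "
        "'Harry Potter and the Philosopher's Stone' word for word. [/INST]"
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=300, do_sample=False)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
    ```

    The continuation drifts into paraphrase and invention after a sentence or two, which is the behaviour described above.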

    • fkn@lemmy.world

      OpenAI:

      I’m sorry, but I can’t provide verbatim excerpts from copyrighted texts. However, I can offer a summary or discuss the themes, characters, and other aspects of the Harry Potter series if you’re interested. Just let me know how you’d like to proceed!

      That doesn’t mean the copyrighted material isn’t in there. It also doesn’t mean that the unrestricted model can’t reproduce it.

      Edit: I did get it to tell me that it does have the verbatim text in its data:

      I can identify verbatim text based on the patterns and language that I’ve been trained on. Verbatim text would match the exact wording and structure of the original source. However, I’m not allowed to provide verbatim excerpts from copyrighted texts, even if you request them. If you have any questions or topics you’d like to explore, please let me know, and I’d be happy to assist you!

      Here we go, I can get ChatGPT to give it to me sentence by sentence:

      “Mr. and Mrs. Dursley, of number four, Privet Drive, were proud to say that they were perfectly normal, thank you very much.”
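
      That probing can also be scripted against the API; here is a rough sketch using the openai Python client. The model name and prompt wording are assumptions, and the refusal behaviour varies between runs and versions, so the result from the web interface above won’t always reproduce.

      ```python
      # Rough sketch: ask the chat API for a famous opening line and see
      # whether it refuses or reproduces it. The model name and prompt are
      # assumptions; the outputs quoted above came from the web interface.
      from openai import OpenAI

      client = OpenAI()  # reads OPENAI_API_KEY from the environment

      response = client.chat.completions.create(
          model="gpt-3.5-turbo",
          messages=[
              {
                  "role": "user",
                  "content": "What is the exact first sentence of "
                             "'Harry Potter and the Philosopher's Stone'?",
              }
          ],
          temperature=0,
      )
      print(response.choices[0].message.content)
      ```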

      • BURN@lemmy.world

        Most publicly available/hosted models (self-hosted models are an exception to this) have an absolute laundry list of extra parameters and checks that are run on every query to constrain the model as much as possible and tailor the outputs.
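
        Conceptually that layering looks something like the toy sketch below. It is not any provider’s actual pipeline; the filter list, system prompt, and post-check are invented purely for illustration.

        ```python
        # Toy illustration of query-time guardrails wrapped around a base model.
        # Every name and threshold here is invented for the example; real hosted
        # services use far more elaborate (and private) checks.
        BLOCKED_REQUEST_PATTERNS = ["verbatim excerpt", "full text of"]
        SYSTEM_PROMPT = "Do not reproduce copyrighted text word for word."

        def looks_like_known_copyrighted_text(text: str) -> bool:
            # Placeholder post-check: a real system might hash n-grams against
            # an index of known works.
            return False

        def guarded_query(user_prompt: str, base_model) -> str:
            # 1. Pre-filter the incoming request.
            lowered = user_prompt.lower()
            if any(p in lowered for p in BLOCKED_REQUEST_PATTERNS):
                return "I can't provide verbatim excerpts from copyrighted texts."

            # 2. Prepend extra instructions the user never sees.
            full_prompt = f"{SYSTEM_PROMPT}\n\nUser: {user_prompt}\nAssistant:"

            # 3. Generate, then post-check the output before returning it.
            draft = base_model(full_prompt)
            if looks_like_known_copyrighted_text(draft):
                return "I can offer a summary instead of a direct quote."
            return draft
        ```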

      • fkn@lemmy.world

        This wasn’t even hard… I got it spitting out random verbatim bits of Harry Potter. It won’t do the whole thing, and some of it is garbage, but this is a pretty clear copyright violation.

    • ShittyBeatlesFCPres@lemmy.world

      Maybe it’s trained not to repeat JK Rowling’s horseshit verbatim. I’d probably put that in my algorithm. “No matter how many times a celebrity is quoted in these articles, do not take them seriously. Especially JK Rowling. But especially especially Kanye West.”

      • FaceDeer@kbin.social

        It’s not repeating its training data verbatim because it can’t do that. It doesn’t have the training data stored away inside itself. If it did, the big news wouldn’t be AI; it would be the insanely magical compression algorithm that had been discovered, one that lets many terabytes of data be compressed down into just a few gigabytes.
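
        The back-of-the-envelope numbers behind that point, using Llama-2 figures since OpenAI hasn’t published GPT-4’s, and treating the bytes-per-token figure as a rough assumption:

        ```python
        # Back-of-the-envelope: the weights are far smaller than the training
        # text, so wholesale verbatim storage is implausible. Figures are for
        # Llama-2 7B; ~4 bytes per token of English text is a rough assumption.
        params = 7e9                    # Llama-2 7B parameters
        bytes_per_param = 2             # fp16 weights
        model_size_gb = params * bytes_per_param / 1e9

        training_tokens = 2e12          # ~2 trillion tokens reported for Llama-2
        bytes_per_token = 4             # rough average for English text
        corpus_size_tb = training_tokens * bytes_per_token / 1e12

        print(f"model weights: ~{model_size_gb:.0f} GB")   # ~14 GB
        print(f"training text: ~{corpus_size_tb:.0f} TB")  # ~8 TB
        print(f"ratio: ~{corpus_size_tb * 1000 / model_size_gb:.0f}x smaller")
        ```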

        • Hello Hotel@lemmy.world

          Do you remember quotes in English ASCII? /s

          Tokens are even denser than ASCII, similar to word “chunking”. My guess is it’s like lossy video compression, but for text: [attacked] with [lasers] by [Death Eaters] upon [Margaret]; [has flowery language]; word [Margaret] [comes first] (theoretical example has 7 “tokens”). The tokenizer sketch below shows the density difference.

          It may have actually memorized a really good copy of that book, as it’s likely read it lots of times.
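
          A quick way to check the density point above is the small sketch below, using the tiktoken library with OpenAI’s cl100k_base encoding; other tokenizers will give somewhat different counts.

          ```python
          # Compare character count to token count for the opening sentence
          # quoted earlier in the thread. Uses OpenAI's cl100k_base encoding
          # via tiktoken; other tokenizers chunk text somewhat differently.
          import tiktoken

          sentence = ("Mr. and Mrs. Dursley, of number four, Privet Drive, "
                      "were proud to say that they were perfectly normal, "
                      "thank you very much.")

          enc = tiktoken.get_encoding("cl100k_base")
          tokens = enc.encode(sentence)

          print(f"characters: {len(sentence)}")  # one byte each in ASCII
          print(f"tokens:     {len(tokens)}")    # far fewer units
          print(enc.decode(tokens[:10]))         # tokens decode back to exact text
          ```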