OpenAI strikes Reddit deal to train its AI on your posts - eviltoast
  • mint_tamas@lemmy.world · 6 months ago

    That paper is yet to be peer reviewed or released. I think you are jumping to conclusions with that statement. How much can you dilute the data before it breaks again?

    • barsoap@lemm.ee · 6 months ago

      That paper is yet to be peer reviewed or released.

      Never doing either (“release” as in submit to a journal) isn’t uncommon in maths, physics, and CS. That’s not to say it won’t be released, but it’s not a proper standard to measure papers by.

      I think you are jumping to conclusions with that statement. How much can you dilute the data before it breaks again?

      Quoth:

      If each linear model is instead fit to the generated targets of all the preceding linear models, i.e., data accumulate, then the test squared error has a finite upper bound, independent of the number of iterations. This suggests that data accumulation might be a robust solution for mitigating model collapse.

      Emphasis on “finite upper bound, independent of the number of iterations”, achieved by doing nothing more than keeping the non-synthetic data around each time you ingest new synthetic data. This is an empirical study, so of course it’s not proof; you’ll have to wait for the theorists to have their turn on that one. But it’s darn convincing, and it should henceforth be the null hypothesis.
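      This is also easy to play with yourself. Below is a minimal toy sketch (my own construction, not the paper’s code; all names and constants are made up for illustration) of that iterated linear-regression setup: each generation either fits only the previous generation’s outputs (“replace”) or fits everything accumulated so far, original real data included (“accumulate”).

      ```python
      import numpy as np

      rng = np.random.default_rng(0)
      d, n, sigma, iters = 20, 200, 0.5, 30   # toy sizes, illustrative only

      w_true = rng.normal(size=d)             # ground-truth linear model
      X_test = rng.normal(size=(1000, d))     # fixed held-out test set
      y_test = X_test @ w_true

      def fit(X, y):
          # Each "generation" is just a least-squares fit.
          return np.linalg.lstsq(X, y, rcond=None)[0]

      def test_mse(w):
          return np.mean((X_test @ w - y_test) ** 2)

      # Generation 0: real (non-synthetic) data.
      X0 = rng.normal(size=(n, d))
      y0 = X0 @ w_true + sigma * rng.normal(size=n)
      w_replace = w_accum = fit(X0, y0)
      acc_X, acc_y = [X0], [y0]

      for _ in range(iters):
          X_new = rng.normal(size=(n, d))

          # REPLACE: train only on targets generated by the previous model.
          y_rep = X_new @ w_replace + sigma * rng.normal(size=n)
          w_replace = fit(X_new, y_rep)

          # ACCUMULATE: generate synthetic targets too, but keep all earlier
          # data (including the original real data) in the training set.
          y_acc = X_new @ w_accum + sigma * rng.normal(size=n)
          acc_X.append(X_new)
          acc_y.append(y_acc)
          w_accum = fit(np.vstack(acc_X), np.concatenate(acc_y))

      print(f"replace:    test MSE = {test_mse(w_replace):.3f}")
      print(f"accumulate: test MSE = {test_mse(w_accum):.3f}")
      ```

      Crank up iters and the gap widens: the “replace” error keeps climbing while “accumulate” plateaus, which is exactly the finite-upper-bound behaviour the quoted passage describes.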

      Btw, did you know that no one ever proved (or at least hadn’t, last I checked) that reversing, determinising, reversing, and determinising a DFA again minimises it? Not proven, yet widely accepted as true. Crazy, isn’t it? But wait, no, people actually proved it on a napkin. It’s just not interesting enough to write a paper about.
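      For the curious, here’s roughly what that construction (Brzozowski’s algorithm) looks like in code. A self-contained Python sketch, with my own toy representation of automata as plain dicts and a made-up example DFA: reverse the automaton, determinise it, then do both again.

      ```python
      from collections import deque

      def reverse(trans, starts, accepts):
          # Reverse an NFA: flip every edge, swap the start/accept sets.
          # trans maps (state, symbol) -> set of successor states.
          rtrans = {}
          for (q, a), dests in trans.items():
              for r in dests:
                  rtrans.setdefault((r, a), set()).add(q)
          return rtrans, set(accepts), set(starts)

      def determinize(trans, starts, accepts, alphabet):
          # Subset construction; only reachable subset-states are kept.
          start = frozenset(starts)
          dtrans, daccepts = {}, set()
          seen, todo = {start}, deque([start])
          while todo:
              S = todo.popleft()
              if S & accepts:
                  daccepts.add(S)
              for a in alphabet:
                  T = frozenset(r for q in S for r in trans.get((q, a), ()))
                  dtrans[(S, a)] = {T}   # deterministic: a single successor
                  if T not in seen:
                      seen.add(T)
                      todo.append(T)
          return dtrans, {start}, daccepts

      def brzozowski(trans, starts, accepts, alphabet):
          # Minimise: reverse + determinise, twice.
          for _ in range(2):
              trans, starts, accepts = determinize(*reverse(trans, starts, accepts), alphabet)
          return trans, starts, accepts

      # Example: a 3-state DFA for "strings over {a, b} ending in a".
      # States 1 and 2 are equivalent, so the minimal DFA has 2 states.
      trans = {
          (0, 'a'): {1}, (0, 'b'): {0},
          (1, 'a'): {2}, (1, 'b'): {0},
          (2, 'a'): {1}, (2, 'b'): {0},
      }
      mtrans, mstarts, maccepts = brzozowski(trans, {0}, {1, 2}, {'a', 'b'})
      print(len({S for (S, _) in mtrans}))   # -> 2
      ```

      Nice side effect: because the first pass determinises the reversed automaton anyway, this happily takes an NFA as input, which Hopcroft’s algorithm won’t.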

      • mint_tamas@lemmy.world · 6 months ago

        Peer review, for all its flaws, is a good minimum bar before a paper is worth taking seriously.

        In your original comment you said that model collapse can be easily avoided with this technique, which is notably different from it being mitigated. I’m not saying these findings aren’t useful, just that you are overselling them a bit with that wording.

        • barsoap@lemm.ee · 6 months ago

          That was someone else. There’s a chance the authors got some claim wrong because their maths and/or methodology is shoddy, but it’s a large and diverse set of authors, so that’s unlikely. Outright fraud in CS empirics is generally unheard of; what are you going to do when challenged, claim the dog ate the program you ran to generate the data? There are shenanigans akin to p-hacking, especially in papers from commercial actors trying to sell something, but that’s not the case here either.

          CS academics generally submit papers to journals more because of publish-or-perish than for the additional value formal peer review offers; the paper is on the internet, after all. By all means, if you spot something wrong in it, go be right on the internet.