Reproducibility trial: 246 biologists get different results from same data sets
  • Tar_Alcaran@sh.itjust.works · 1 year ago

    Here’s the paper (https://doi.org/10.32942/X2GG62). Opening it and seeing 2 pages of authors is pretty weird.

    The issue here isn’t that different analytical methods give different results; they’re different methods, so different results are expected. If I go to the store by car, I get a different trip than when I walk. That’s completely expected.
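
    A minimal sketch of what I mean (hypothetical data and Python, nothing from the paper): two perfectly defensible modelling choices, a raw-scale and a log-scale regression, give different effect estimates on the same data set.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(1, 10, 200)                       # predictor, e.g. brood size
    y = np.exp(0.1 * x) * rng.lognormal(0, 0.5, 200)  # skewed, multiplicative response

    # Team A: ordinary least squares on the raw response
    slope_raw = np.polyfit(x, y, 1)[0]

    # Team B: OLS on the log response, i.e. a multiplicative model
    slope_log = np.polyfit(x, np.log(y), 1)[0]

    print(f"raw-scale slope: {slope_raw:.3f}")  # absolute change in y per unit x
    print(f"log-scale slope: {slope_log:.3f}")  # roughly a 10% change in y per unit x
    ```

    Both teams took a sensible route to the store and got different trips; neither of them is on the pogo stick yet.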

    My big question is why some teams picked methods that produce very obvious outliers (apart from them not knowing they’re outliers, of course). Why did they go to the store by pogo stick? Is it because these teams are simply bad at their job? Do they have a special reason why their method is the only accurate one in this case? Do they always use that method and simply didn’t consider alternatives? The question “Why did you analyse your data with [method X] and not [method Y]?” is one of the worst questions you can get during peer review or a thesis defence, and it’s also one of the most important ones. I’d love to know why the outlier teams made that choice.

    I’ve got a gut feeling there’s a lot of “We didn’t care and this was fast” involved, simply because I see that so much in practice.

    • The_v@lemmy.world · 1 year ago

      “We didn’t get the results we wanted, so we tried another method” is the most common justification, but it’s rarely admitted.