Reproducibility trial: 246 biologists get different results from same data sets - eviltoast
  • Boozilla@sh.itjust.works · 1 year ago

    I worked in medical research for a while. I was just a lowly technical assistant with a bachelor’s degree, not a doctor or PhD.

    But wow, it was eye-opening. In the hunt for grant money, folks with letters after their names will massage the data and “reframe” the questions in myriad ways, chasing desired outcomes.

    Fortunately, peer review tears some of that bullshit apart, so science works when done correctly. But the replication crisis looms large, and I am skeptical of a lot of research papers and science journalism to this day.

    • InfiniteStruggle@sh.itjust.works · 1 year ago

      What sort of solution would work for this though?

      Can we merit-restrict access to academia until the need for and availability of funding match?

      Maybe have a separate verifying authority for experimental observations, one that confirms experimental data before inferences can be drawn from it?

      I’m surprised that even doctors would need to depend on such a flawed system for funding. I suppose when the stakes are high enough, everyone starts loosening up on principles, doctor or no doctor.

  • Tar_Alcaran@sh.itjust.works · 1 year ago

    Here’s the paper: https://doi.org/10.32942/X2GG62 Opening it and seeing 2 pages of authors is pretty weird.

    The issue here isn’t getting different results from different analytical methods; they’re different methods, so different results are expected. If I go to the store by car, I get a different trip than when I walk. That’s completely expected.

    My big question here is why some teams pick methods that produce very obvious outliers (apart from them not knowing they’re outliers, of course). Why did they go to the store by pogo stick? Is it because these teams are simply bad at their job? Do they have a special reason why their method is the only accurate one in this case? Do they always use that method and simply didn’t consider alternatives? The question “Why did you analyse your data with [method X] and not [method Y]?” is one of the worst questions you can get during peer review or a thesis defence, and it’s also one of the most important ones. I’d love to know why the outlier teams made that choice.

    I’ve got a gut feeling there’s a lot of “We didn’t care and this was fast” involved, simply because I see that so much in practice.
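    The point that method choice alone can flip a conclusion is easy to demonstrate with a toy example (hypothetical numbers, not from the paper; both summaries below are standard, defensible choices):

```python
from statistics import mean, median

# Hypothetical measurements: one extreme value in the treatment group
control = [5.0, 5.0, 5.0, 5.0, 5.0]
treatment = [1.0, 2.0, 3.0, 4.0, 100.0]

# Method A: compare group means -> treatment looks far better
print(mean(treatment) - mean(control))      # 17.0, large positive "effect"

# Method B: compare group medians -> treatment looks worse
print(median(treatment) - median(control))  # -2.0, negative "effect"
```

    Neither summary is “wrong” on its face, which is exactly why “Why method X and not method Y?” is such an important question to be able to answer.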

    • The_v@lemmy.world · 1 year ago

      “We didn’t get the results we wanted, so we tried another method.” is the most common justification, but rarely admitted.