• Hazmatastic@lemm.ee · 1 point · 1 year ago

      Looks like they put off the science fair project for too long and had to throw this little number together the weekend before. Been there. I still remember mine: what genre of music will cats like? Hypothesis: classical. Result: hard rock. Sampled 4 cats over 5 genres, took an hour. Methodology was crap. Sample size was crap. It was a non-experiment that scraped by with a “you tried” grade.

  • Gork@lemm.ee · 38 points · 1 year ago

    There should be more value placed on publishing things that didn’t work as hypothesized. That way, scientists in the future can know that a particular approach just doesn’t work.

    Something like this, but completely normalized in the scientific world, where it’s ok to publish attempts, whether they succeed or not.

    • iAvicenna@lemmy.world · 6 points · edited · 1 year ago

      Yeah, unfortunately publishing science (at certain levels) now involves 50% razzmatazz, 30% having some well-established coauthor, and 20% overselling. It has turned into a weird ecosystem that feeds on resource (job) scarcity in academia and makes insane profits for publishers.

      Not surprised it attracted all kinds of vultures that feed on the scraps (predatory publishers). You can really smell the decay and pus from a mile away.

    • jeffhykin@lemm.ee · 1 point · 1 year ago

      I think we can agree “good research” is in the how-it’s-done. I wish journals would choose/require/verify the how-it’s-done (time frame, resources, hypothesis, method, etc.) but after that be contractually required to publish whatever conclusion the team/project they picked and verified arrives at.

  • otp@sh.itjust.works · 5 points · 1 year ago

    Often, it’s about not proving your idea wrong, but about proving wrong the idea that your idea is wrong.

  • jeffhykin@lemm.ee · 1 point · edited · 1 year ago

    This is why my field (reinforcement learning) is unfortunately not science.

    (Can’t really publish “hey I tried this algorithm and it didn’t work”)

      • jeffhykin@lemm.ee · 3 points · edited · 1 year ago

        I guess I should’ve clarified: in reinforcement learning, “I was wrong in numerous ways” almost always translates to “unpublishable, try not to be wrong next time”. Nobody cares if a reinforcement learning hypothesis didn’t work; it’s only worth publishing if it worked well.

        • overcast5348@lemmy.world · 4 points · 1 year ago

          Gotcha.

          I thought that was the norm in all academia these days? Can a physicist (or anyone from another field) publish results that didn’t go as expected and save future scientists some time?

          • jeffhykin@lemm.ee · 1 point · 1 year ago

            I know a good chunk of the microbiology, psychology, and medical trial fields can. But that’s about the limit of my “other fields” knowledge.