• ALoafOfBread@lemmy.ml

    Now make mammograms not $500 and not have a 6 month waiting time and make them available for women under 40. Then this’ll be a useful breakthrough

      • ALoafOfBread@lemmy.ml

        Oh for sure. I only meant in the US where MIT is located. But it’s already a useful breakthrough for everyone in civilized countries

    • Mouselemming@sh.itjust.works

      Better yet, give us something better to do about the cancer than slash, burn, poison. Something that’s less traumatic on the rest of the person, especially in light of the possibility of false positives.

  • cecinestpasunbot@lemmy.ml

    Unfortunately, AI models like this one often never make it to the clinic. Even if the model were impressive enough to identify 100% of the cases that will develop breast cancer, a false positive rate of, say, 5% could mean its use creates more harm than it prevents.

    • Vigge93@lemmy.world

      That’s why these systems should never be used as the sole decision makers, but instead work as a tool to help the professionals make better decisions.

      Keep the human in the loop!

    • CptOblivius@lemmy.world

      Breast imaging already relies on a high false positive rate. False positives are far better than false negatives in this case.

      • cecinestpasunbot@lemmy.ml

        That’s just not generally true. Mammograms are usually only recommended to women over 40. That’s because the rates of breast cancer in women under 40 are low enough that testing them would cause more harm than good thanks in part to the problem of false positives.

        • CptOblivius@lemmy.world

          Nearly 4 out of 5 mammograms that progress to biopsy turn out to be benign, and nearly four times that many are called back for additional evaluation. The false positive rate is quite high compared to other imaging. It is designed that way, to decrease the chances of a false negative.

          • cecinestpasunbot@lemmy.ml

            The false negative rate is also quite high: mammography misses about 1 in 5 women with cancer. The reality is that mammography just isn’t all that powerful as a screening tool. That’s why the criteria for who gets screened, and how often, have been tailored to try to ensure the benefits outweigh the risks, though exactly what those criteria should be remains an ongoing debate in the medical community.

    • ???@lemmy.world

      How would a false positive create more harm? Isn’t it better to cast a wide net and detect more possible cases? False negatives are the ones that worry me the most.

      • cecinestpasunbot@lemmy.ml

        It’s a common problem in diagnostics and it’s why mammograms aren’t recommended to women under 40.

        Let’s say you have 10,000 patients. 10 have cancer or a precancerous lesion. Your test may be able to identify all 10 of those patients. However, if it has a false positive rate of 5% that’s around 500 patients who will now get biopsies and potentially surgery that they don’t actually need. Those follow up procedures carry their own risks and harms for those 500 patients. In total, that harm may outweigh the benefit of an earlier diagnosis in those 10 patients who have cancer.
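        The arithmetic above can be checked directly. A minimal sketch (the cohort size, prevalence, perfect sensitivity, and 5% false positive rate are the hypothetical numbers from this comment, not real screening statistics):

```python
# Hypothetical screening cohort from the comment above.
population = 10_000
true_cases = 10          # patients with cancer or a precancerous lesion
sensitivity = 1.0        # assume the model catches every true case
false_positive_rate = 0.05

true_positives = int(true_cases * sensitivity)
false_positives = int((population - true_cases) * false_positive_rate)

# Positive predictive value: of everyone flagged, how many have cancer?
ppv = true_positives / (true_positives + false_positives)

print(false_positives)   # roughly 500 healthy patients sent for follow-up
print(round(ppv, 3))     # only about 2% of flagged patients actually have cancer
```

        Even with a perfect detection rate, the low prevalence means the flagged group is overwhelmingly healthy people, which is exactly the harm the comment describes.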

  • yesman@lemmy.world

    The most beneficial application of AI like this is to reverse-engineer the neural network to figure out how the AI works. In this way we may discover a new technique or procedure, or we might find out the AI’s methods are bullshit. Under no circumstances should we accept a “black box” explanation.

    • CheeseNoodle@lemmy.world

      iirc it recently turned out that the whole black box thing was actually a bullshit excuse to evade liability, at least for certain kinds of model.

        • CheeseNoodle@lemmy.world

          This one’s from 2019: Link
          I was a bit off the mark: it’s not that the models they use aren’t black boxes, it’s that they could have been made interpretable from the beginning and weren’t, likely due to liability.

      • Tryptaminev@lemm.ee

        It depends on the algorithms used. The lazy approach is to throw neural networks at everything, wasting immense computational resources and producing results that are difficult to interpret. There are much more efficient algorithms that work well for many problems and give you interpretable decisions.
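        As a toy illustration of that point (this is not one of the clinical models under discussion, and the data is fabricated), a single decision stump is fully interpretable: the entire learned model is one rule a human can read and audit, unlike a neural network’s weight matrix:

```python
# Toy interpretable classifier: a one-feature decision stump.
# The "training data" below is fabricated for illustration only.

def fit_stump(xs, ys):
    """Pick the threshold on a single feature that minimizes
    misclassifications of the boolean labels."""
    best_threshold = None
    best_errors = len(ys) + 1
    for t in sorted(set(xs)):
        errors = sum((x >= t) != y for x, y in zip(xs, ys))
        if errors < best_errors:
            best_errors = errors
            best_threshold = t
    return best_threshold, best_errors

# Fabricated feature (say, a lesion-size measurement) and labels.
sizes = [0.2, 0.4, 0.5, 1.1, 1.3, 1.8]
labels = [False, False, False, True, True, True]

threshold, errors = fit_stump(sizes, labels)
# The whole "model" is one human-readable rule:
print(f"flag if size >= {threshold} (training errors: {errors})")
```

        A real interpretable model (a shallow decision tree, a scoring rule) is richer than this, but the property is the same: the decision logic can be printed and challenged.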

  • Wilzax@lemmy.world

    If it has just as low of a false negative rate as human-read mammograms, I see no issue. Feed it through the AI first before having a human check the positive results only. Save doctors’ time when the scan is so clean that even the AI doesn’t see anything fishy.

    Alternatively, if it has a lower false positive rate, have doctors check the negative results only. If the AI sees something then it’s DEFINITELY worth a biopsy. Then have a human doctor check the negative readings just to make sure they don’t let anything that’s worth looking into go unnoticed.

    Either way, as long as it isn’t worse than humans in both kinds of failures, it’s useful at saving medical resources.
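    The first routing scheme described above can be sketched as a simple function; the threshold, score values, and labels are invented for illustration and not taken from any real clinical system:

```python
# Sketch of the "AI first, human second" triage described above.
# The threshold and scores are stand-ins, not a real clinical API.

def triage(scan_score, threshold=0.5):
    """Route a scan: clean AI reads skip the queue, and anything
    the model flags goes to a human radiologist for confirmation."""
    if scan_score < threshold:
        return "cleared by AI"         # doctor time saved on clean scans
    return "flagged for human review"  # human confirms or rejects

# Fabricated model scores for a batch of scans.
scores = [0.05, 0.92, 0.40, 0.73]
decisions = [triage(s) for s in scores]
print(decisions)
```

    The second scheme (human checks only the AI negatives) is the same routing with the branches swapped; which one saves resources depends on whether the model’s false negative or false positive rate is the stronger one.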

    • Railing5132@lemmy.world

      This is exactly what is being done. My eldest child is in a Ph.D. program for human-robot interaction and medical intervention, and has worked on image analysis systems in this field. Their intended use is exactly that: a “first look” and a “second look”. A first look to help catch the small, easily overlooked pre-tumors and tentatively mark clear ones; a second look to be a safety net for tired, overworked, or outdated eyes.

    • UNY0N@lemmy.world

      Nice comment. I like the detail.

      For me, though, the main takeaway doesn’t have anything to do with the details; it’s about the true usefulness of AI. The details of the implementation aren’t important; the general use case is the main point.

  • Snapz@lemmy.world

    And if we weren’t a big, broken mess of late stage capitalist hellscape, you or someone you know could have actually benefited from this.

    • unconsciousvoidling@sh.itjust.works

      Yea, none of us are going to see the benefits. I’m tired of seeing articles about scientific advancements that I know will never trickle down to us peasants.

      • Telodzrum@lemmy.world

        Our clinics are already using AI to clean up MRI images for easier, higher-quality reads. We use AI on our cath lab table to provide a less noisy image at a much lower rad dose.

          • Telodzrum@lemmy.world

            It’s not diagnosing, which is good IMHO. It’s just being used to remove noise and artifacts from the scan images. That means the MRI is clearer for the reading physician and the ordering surgeon, and the cardiologist can use less radiation during the procedure yet get the same quality image in the lab.

            I’m still wary of using it to diagnose in basically any scenario because of the danger that both false negatives and false positives pose.

    • MuchPineapples@lemmy.world

      I’m involved in multiple projects where stuff like this will be made available in very accessible ways, hopefully in 2-3 years, so don’t get too pessimistic.

  • gmtom@lemmy.world

    This is similar to what I did for my masters, except it was lung cancer.

    Stuff like this is actually relatively easy to do, but the regulations you need to conform to and the testing you have to do first are extremely stringent. We had something that worked for about 95% of cases within a couple of months, but it was almost 2 years later before they got to run their first actual trial.

  • wheeldawg@sh.itjust.works

    Yes, this is “how it was supposed to be used for”.

    The sentence construction quality these days is in freefall.

  • bluefishcanteen@sh.itjust.works

    This is a great use of tech. With that said I find that the lines are blurred between “AI” and Machine Learning.

    Real Question: Other than the specific tuning of the recognition model, how is this really different from something like Facebook automatically tagging images of you and your friends? Instead of saying “Here’s a picture of Billy (maybe)”, it’s saying “Here’s a picture of some precancerous masses (maybe)”.

    That tech has been around for a while (at least 15 years). I remember Picasa doing something similar as a desktop program on Windows.

    • AdrianTheFrog@lemmy.world

      I’ve been looking at the paper, some things about it:

      • the paper and article are from 2021
      • the model needs to be able to use optional data from age, family history, etc, but not be reliant on it
      • it needs to combine information from multiple views
      • it predicts risk for each year in the next 5 years
      • it has to produce consistent results with different sensors and diverse patients
      • it’s not the first model to do this, and it is more accurate than previous methods
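      One way to read the “optional data” bullet above: the model must yield a risk estimate whether or not metadata such as age or family history is available. A hypothetical sketch of that interface (the function, weights, and thresholds are all invented; none of this comes from the paper):

```python
# Hypothetical sketch: combine an image-derived risk score with
# optional clinical metadata, degrading gracefully when fields are
# missing. All names and weights here are invented for illustration.

def combined_risk(image_score, age=None, family_history=None):
    """Start from the imaging score and nudge it with whatever
    optional metadata happens to be present."""
    risk = image_score
    if age is not None and age >= 50:
        risk += 0.05   # made-up adjustment for age
    if family_history:
        risk += 0.10   # made-up adjustment for family history
    return min(risk, 1.0)  # clamp to a valid probability

# Works with imaging alone, or with any subset of metadata.
print(combined_risk(0.30))
print(combined_risk(0.30, age=62, family_history=True))
```

      The actual model learns this combination end to end rather than using hand-set weights, but the contract is the same: missing inputs must not break the prediction.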
      • Comment105@lemm.ee

        I don’t care about mean, but I would call it inaccurate. Billy is already cancerous; he’s mostly cancer. He’s a very dense, sour boy.

    • pete_the_cat@lemmy.world

      It’s because AI is the new buzzword that has replaced “machine learning” and “large language models”; it sounds a lot more sexy and futuristic.

    • Lets_Eat_Grandma@lemm.ee

      Everything machine learning will be called “ai” from now until forever.

      It’s like how all RC helicopters and planes are now “drones”.

      People en masse just can’t handle the nuance of language. They need a dumb word for everything that is remotely similar.

  • TCB13@lemmy.world

    AI should be used for this, yes, however advertisement is more profitable.

    • ricecake@sh.itjust.works

      It’s worse than that.

      This is a different type of AI that doesn’t have as many consumer facing qualities.

      The ones being pushed now are the first types of AI to have an actually discernible consumer-facing attribute or behavior, and so they’re being pushed because no one wants to miss the boat.

      For the most part they’re not more profitable, or better, or actually doing anything anyone wants; they’re just being used wherever they can be fitted in.

      • Hackworth@lemmy.world

        This type of segmentation is of declining practical value. Modern AI implementations are usually hybrids of several categories of constructed intelligence.

  • earmuff@lemmy.dbzer0.com

    Serious question: is there a way to get access to medical imagery as a non-student? I would love to do some machine learning with it myself, as I see lots of potential in image analysis in general. Five years ago I created a model that was able to spot certain types of ships based only on satellite imagery; they were not easily detectable by eye, and in any case one human cannot scan 15k images in an hour. Similar use case with medical imagery: seeing the things that are not yet detectable by human eyes.

    • 𝓔𝓶𝓶𝓲𝓮@lemm.ee

      Honestly, with all respect, that is a really shitty joke. It’s god damn breast cancer, the opposite of hot.

      I usually just skip these mouldy jokes, but come on, that is beyond the scale of cringe.

      • PlantDadManGuy@lemmy.world

        Terrible things happen to people you love, you have two choices in this life. You can laugh about it or you can cry about it. You can do one and then the other if you choose. I prefer to laugh about most things and hope others will do the same. Cheers.

        • 𝓔𝓶𝓶𝓲𝓮@lemm.ee

          I mean, do whatever you want, but it just comes off as repulsive, like a stain of shit on new shoes. This is a public space after all, not the bois locker room, so that might be embarrassing for you.

          And you know you can always count on me to point stuff out so you can avoid humiliation in the future

  • MonkderVierte@lemmy.ml

    Btw, my dentist used AI to identify potential problems in a radiograph. The result was pretty impressive. Have to get a filling tho.

  • elrik@lemmy.world

    Ductal carcinoma in situ (DCIS) is a type of preinvasive tumor that sometimes progresses to a highly deadly form of breast cancer. It accounts for about 25 percent of all breast cancer diagnoses.

    Because it is difficult for clinicians to determine the type and stage of DCIS, patients with DCIS are often overtreated. To address this, an interdisciplinary team of researchers from MIT and ETH Zurich developed an AI model that can identify the different stages of DCIS from a cheap and easy-to-obtain breast tissue image. Their model shows that both the state and arrangement of cells in a tissue sample are important for determining the stage of DCIS.

    https://news.mit.edu/2024/ai-model-identifies-certain-breast-tumor-stages-0722