• ALoafOfBread@lemmy.ml

    Now make mammograms not $500, without a 6-month waiting time, and available to women under 40. Then this’ll be a useful breakthrough.

      • ALoafOfBread@lemmy.ml

        Oh for sure. I only meant in the US where MIT is located. But it’s already a useful breakthrough for everyone in civilized countries

    • Mouselemming@sh.itjust.works

      Better yet, give us something better to do about the cancer than slash, burn, poison. Something that’s less traumatic on the rest of the person, especially in light of the possibility of false positives.

  • cecinestpasunbot@lemmy.ml

    Unfortunately, AI models like this one often never make it to the clinic. The model could be impressive enough to identify 100% of cases that will develop breast cancer. However, if it has a false positive rate of, say, 5%, its use may actually create more harm than it prevents.

    • Vigge93@lemmy.world

      That’s why these systems should never be used as the sole decision makers, but instead work as a tool to help the professionals make better decisions.

      Keep the human in the loop!

    • CptOblivius@lemmy.world

      Breast imaging already relies on a high false positive rate. False positives are way better than false negatives in this case.

      • cecinestpasunbot@lemmy.ml

        That’s just not generally true. Mammograms are usually only recommended to women over 40. That’s because the rates of breast cancer in women under 40 are low enough that testing them would cause more harm than good thanks in part to the problem of false positives.

        • CptOblivius@lemmy.world

          Nearly 4 out of 5 cases that progress to biopsy are benign, and nearly four times that number are called back for additional evaluation. The false positive rate is quite high compared to other imaging. It is designed that way, to decrease the chance of a false negative.

          • cecinestpasunbot@lemmy.ml

            The false negative rate is also quite high. It will miss about 1 in 5 women with cancer. The reality is that mammography is just not all that powerful as a screening tool. That’s why the criteria for who gets screened, and how often, have been tailored to try to ensure the benefits outweigh the risks, although it is an ongoing debate in the medical community exactly what those criteria should be.

    • ???@lemmy.world

      How would a false positive create more harm? Isn’t it better to cast a wide net and detect more possible cases? It’s the false negatives that worry me the most.

      • cecinestpasunbot@lemmy.ml

        It’s a common problem in diagnostics and it’s why mammograms aren’t recommended to women under 40.

        Let’s say you have 10,000 patients, 10 of whom have cancer or a precancerous lesion. Your test may be able to identify all 10 of those patients. However, if it has a false positive rate of 5%, that’s around 500 patients who will now get biopsies and potentially surgery that they don’t actually need. Those follow-up procedures carry their own risks and harms for those 500 patients. In total, that harm may outweigh the benefit of an earlier diagnosis in those 10 patients who have cancer.
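
        To put rough numbers on that, here’s a small sketch using the same hypothetical figures from above (10 real cases in 10,000 patients, perfect sensitivity, 5% false positive rate); the numbers are illustrative, not from the study:

        ```python
        # Hypothetical screening numbers from the comment above (not from the paper).
        population = 10_000
        true_cases = 10
        sensitivity = 1.0           # assume the model catches every real case
        false_positive_rate = 0.05  # 5% of healthy patients get flagged anyway

        true_positives = round(true_cases * sensitivity)
        false_positives = round((population - true_cases) * false_positive_rate)

        # Of everyone flagged, how many actually have cancer?
        precision = true_positives / (true_positives + false_positives)
        print(f"{true_positives} real cases flagged, {false_positives} healthy patients flagged")
        print(f"chance a positive result is a real case: {precision:.1%}")  # roughly 2%
        ```

        So even with a perfect detection rate, about 98% of the people sent for follow-up in this scenario would not have needed it.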

    • Maven (famous)@lemmy.zip

      Another big thing to note: we recently had a different but VERY similar headline about an AI that spotted typhoid early and could point it out more accurately than doctors could.

      But when they examined the AI to see what it was doing, it turned out it was weighing the specs of the machine being used to do the scan… An older machine means the area was likely poorer, and therefore more likely to have typhoid. The AI wasn’t detecting whether someone had typhoid; it was just telling you whether they were in a rich area or not.

      • Tja@programming.dev

        Still, it’s quite a statement that it had a better detection rate than doctors.

        What is more important, saving lives or not offending people?

        • Maven (famous)@lemmy.zip

          The thing is, though… it had a better detection rate ON THE SAMPLES THEY HAD, but because it wasn’t actually detecting anything other than wealth, there was no way for them to trust it would stay accurate.

          • Tja@programming.dev

            Citation needed.

            Usually detection rates are given on a new set of samples; on the samples they used for training, the detection rate would be 100% by definition.

            • 0ops@lemm.ee

              Right, there are typically separate “training” and “validation” sets for a model to train, validate, and iterate on, and then a totally separate “test” dataset that measures how effective the model is on similar data it wasn’t trained on.

              If the model gets good results on the validation dataset but worse results on the test dataset, that typically means it’s “overfit”. Essentially, the model started memorizing frivolous details specific to the validation set: details that improve evaluation results on that particular dataset but do nothing, or even hurt the results, on the test set and other data that weren’t part of training. Basically, the model failed to abstract what it’s supposed to detect, only managing good results in validation through brute memorization.

              I’m not sure that’s quite what’s happening in Maven’s description though. If it’s real, my initial thoughts are an unrepresentative dataset plus failing to reach high accuracy to begin with. I buy that there’s a correlation between machine specs and positive cases, but I’m sure it’s not a perfect correlation; like Maven said, old areas get new machines sometimes. If the model’s accuracy was never high to begin with, that correlation may just be the model’s best guess. Even though I’m sure it would always take machine specs into account as long as they’re part of the dataset, if actual symptoms correlate more strongly with positive diagnoses than machine specs do, then I’d expect the model to evaluate primarily on symptoms, and thus be more accurate. Sorry this got longer than I wanted.
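
              To make that split concrete, here’s a minimal sketch (synthetic data, scikit-learn; nothing here comes from the study itself) of comparing validation and test accuracy to spot that kind of overfitting:

              ```python
              # Illustrative train/validation/test split on synthetic data.
              from sklearn.datasets import make_classification
              from sklearn.ensemble import RandomForestClassifier
              from sklearn.metrics import accuracy_score
              from sklearn.model_selection import train_test_split

              X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

              # Hold out a test set first, then carve a validation set out of the remainder.
              X_trainval, X_test, y_trainval, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
              X_train, X_val, y_train, y_val = train_test_split(X_trainval, y_trainval, test_size=0.25, random_state=0)

              model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

              # A big gap between validation and test accuracy is one hint that the model
              # has latched onto quirks of the data it was tuned on rather than the task.
              print("validation accuracy:", accuracy_score(y_val, model.predict(X_val)))
              print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
              ```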

              • Tja@programming.dev

                It’s no problem to have a longer description if you want to get nuance. I think that’s a good description and fair assumptions. Reality is rarely as black and white as reddit/lemmy wants it to be.

  • yesman@lemmy.world

    The most beneficial application of AI like this is to reverse-engineer the neural network to figure out how the AI works. In this way we may discover a new technique or procedure, or we might find out the AI’s methods are bullshit. Under no circumstance should we accept a “black box” explanation.

    • CheeseNoodle@lemmy.world

      iirc it recently turned out that the whole black box thing was actually a bullshit excuse to evade liability, at least for certain kinds of model.

        • CheeseNoodle@lemmy.world

          This one’s from 2019: Link
          I was a bit off the mark; it’s not that the models they use aren’t black boxes, it’s just that they could have made them interpretable from the beginning and chose not to, likely due to liability.

      • Tryptaminev@lemm.ee

        It depends on the algorithms used. The lazy approach now is to just throw neural networks at everything and waste immense computational resources. Of course you then get results that are difficult to interpret. There are much more efficient algorithms that work well for many problems and give you interpretable decisions.
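
        As a toy illustration of what “interpretable by construction” can look like (using scikit-learn’s bundled tabular breast-cancer dataset, which is not the imaging data from the article), a small decision tree exposes its learned rules directly:

        ```python
        # A shallow decision tree: every prediction traces to explicit thresholds
        # on named features, so the decision logic can be printed and audited.
        from sklearn.datasets import load_breast_cancer
        from sklearn.tree import DecisionTreeClassifier, export_text

        data = load_breast_cancer()
        tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)

        print(export_text(tree, feature_names=list(data.feature_names)))
        ```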

    • CheesyFox@lemmy.sdf.org

      Good luck reverse-engineering millions, if not billions, of seemingly random floating-point numbers. It’s like visualizing a graph in your mind by reading an array of numbers, except in this case the graph has as many dimensions as the neural network has inputs, which is the number of pixels in the input image.

      Under no circumstance should we accept a “black box” explanation.

      Go learn at least the basic principles of neural networks, because this sentence of yours alone makes me want to slap you.

      • thecodeboss@lemmy.world

        Don’t worry, researchers will just get an AI to interpret all those floating point numbers and come up with a human-readable explanation! What could go wrong? /s

      • petrol_sniff_king@lemmy.blahaj.zone

        Hey look, this took me like 5 minutes to find.

        Censius guide to AI interpretability tools

        Here’s a good thing to wonder: if you don’t know how your black-box model works, how do you know it isn’t racist?

        Here’s what looks like a university paper on interpretability tools:

        As a practical example, new regulations by the European Union proposed that individuals affected by algorithmic decisions have a right to an explanation. To allow this, algorithmic decisions must be explainable, contestable, and modifiable in the case that they are incorrect.

        Oh yeah. I forgot about that. I hope your model is understandable enough that it doesn’t get you in trouble with the EU.

        Oh look, here you can actually see one particular interpretability tool being used to interpret one particular model. Funny that, people actually caring what their models are using to make decisions.

        Look, maybe you were having a bad day, or maybe slapping people is literally your favorite thing to do, who am I to take away mankind’s finer pleasures, but this attitude of yours is profoundly stupid. It’s weak. You don’t want to know? It doesn’t make you curious? Why are you comfortable not knowing things? That’s not how science is propelled forward.
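
        For a concrete (if generic) example of what such a tool does, here’s a minimal sketch of permutation importance, which asks how much a trained model’s score drops when each input feature is shuffled. It uses scikit-learn’s bundled tabular dataset purely as an illustration, not the tooling or data from the article:

        ```python
        # Post-hoc interpretability: rank features by how much shuffling them hurts accuracy.
        from sklearn.datasets import load_breast_cancer
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.inspection import permutation_importance
        from sklearn.model_selection import train_test_split

        X, y = load_breast_cancer(return_X_y=True, as_frame=True)
        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

        model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
        result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

        # Features whose shuffling costs the most accuracy are what the model relies on.
        ranked = sorted(zip(X.columns, result.importances_mean), key=lambda p: p[1], reverse=True)
        for name, score in ranked[:5]:
            print(f"{name:25s} {score:.3f}")
        ```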

        • Tja@programming.dev

          “Enough” is doing a fucking ton of heavy lifting there. You cannot explain a terabyte of floating point numbers. Same way you cannot guarantee a specific doctor or MRI technician isn’t racist.

          • petrol_sniff_king@lemmy.blahaj.zone

            A single drop of water contains billions of molecules, and yet, we can explain a river. Maybe you should try applying yourself. The field of hydrology awaits you.

            • Tja@programming.dev

              No, we cannot explain a river, or the atmosphere. Hence weather forecasts are only good for a few days, and even after massive computer simulations, aircraft/cars/ships still need tunnel testing and real-life testing, because our models can only approximate the real thing.

              • petrol_sniff_king@lemmy.blahaj.zone

                You can’t explain a river? It goes downhill.

                I understand that complicated things frighten you, Tja, but I don’t understand what any of this has to do with being unsatisfied when an insurance company denies your claim and all they have to say is “the big robot said no… uh… leave now?”

  • Wilzax@lemmy.world

    If it has just as low of a false negative rate as human-read mammograms, I see no issue. Feed it through the AI first before having a human check the positive results only. Save doctors’ time when the scan is so clean that even the AI doesn’t see anything fishy.

    Alternatively, if it has a lower false positive rate, have doctors check the negative results only. If the AI sees something then it’s DEFINITELY worth a biopsy. Then have a human doctor check the negative readings just to make sure they don’t let anything that’s worth looking into go unnoticed.

    Either way, as long as it isn’t worse than humans in both kinds of failures, it’s useful at saving medical resources.
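
    As a hypothetical sketch of that triage logic (the function and field names are made up, not from any real system), the routing is essentially:

    ```python
    from dataclasses import dataclass

    @dataclass
    class Scan:
        patient_id: str
        ai_flagged: bool  # output of the (assumed) screening model

    def scans_for_human_review(scans, review_positives=True):
        """Return the subset of scans a radiologist should read."""
        if review_positives:
            # Low false-negative model: trust it to rule cases out; humans confirm every AI positive.
            return [s for s in scans if s.ai_flagged]
        # Low false-positive model: trust its positives; humans double-check every AI negative.
        return [s for s in scans if not s.ai_flagged]

    scans = [Scan("a", True), Scan("b", False), Scan("c", False)]
    print([s.patient_id for s in scans_for_human_review(scans)])                          # ['a']
    print([s.patient_id for s in scans_for_human_review(scans, review_positives=False)])  # ['b', 'c']
    ```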

    • Railing5132@lemmy.world

      This is exactly what is being done. My eldest child is in a Ph.D. program for human-robot interaction and medical intervention, and has worked on image analysis systems in this field. Their intended use is exactly that: a “first look” and a “second look”. A first look to help catch the small, easily overlooked pre-tumors and tentatively mark clear ones; a second look to be a safety net for tired, overworked, or outdated eyes.

    • UNY0N@lemmy.world

      Nice comment. I like the detail.

      For me, though, the main takeaway doesn’t have anything to do with the details; it’s about the true usefulness of AI. The details of the implementation aren’t important, the general use case is the main point.

  • Snapz@lemmy.world

    And if we weren’t a big, broken mess of late stage capitalist hellscape, you or someone you know could have actually benefited from this.

    • unconsciousvoidling@sh.itjust.works

      Yea none of us are going to see the benefits. Tired of seeing articles of scientific advancement that I know will never trickle down to us peasants.

      • Telodzrum@lemmy.world

        Our clinics are already using AI to clean up MRI images for easier and higher-quality reads. We use AI on our cath lab table to provide a less noisy image at a much lower radiation dose.

          • Telodzrum@lemmy.world

            It’s not diagnosing, which is good IMHO. It’s just being used to remove noise and artifacts from the images on the scan. This means the MRI is clearer for the reading physician and the ordering surgeon, and the cardiologist can use less radiation during the procedure yet get the same quality image in the lab.

            I’m still wary of using it to diagnose in basically any scenario, because of how serious the consequences of both false negatives and false positives can be.

    • MuchPineapples@lemmy.world

      I’m involved in multiple projects where stuff like this will be used in very accessible manners, hopefully in 2-3 years, so don’t get too pessimistic.

  • gmtom@lemmy.world

    This is similar to what I did for my master’s, except it was for lung cancer.

    Stuff like this is actually relatively easy to do, but the regulations you need to conform to and the testing you have to do first are extremely stringent. We had something that worked for like 95% of cases within a couple of months, but it wasn’t until almost 2 years later that they got to do their first actual trial.

  • wheeldawg@sh.itjust.works

    Yes, this is “how it was supposed to be used for”.

    The quality of sentence construction these days is in freefall.

  • bluefishcanteen@sh.itjust.works

    This is a great use of tech. With that said, I find that the lines are blurred between “AI” and machine learning.

    Real question: other than the specific tuning of the recognition model, how is this really different from something like Facebook automatically tagging images of you and your friends? Instead of saying “Here’s a picture of Billy (maybe)”, it’s saying, “Here’s a picture of some precancerous masses (maybe)”.

    That tech has been around for a while (at least 15 years). I remember Picasa doing something similar as a desktop program on Windows.

    • AdrianTheFrog@lemmy.world

      I’ve been looking at the paper, some things about it:

      • the paper and article are from 2021
      • the model needs to be able to use optional data from age, family history, etc, but not be reliant on it
      • it needs to combine information from multiple views
      • it predicts risk for each year in the next 5 years
      • it has to produce consistent results with different sensors and diverse patients
      • it’s not the first model to do this, and it is more accurate than previous methods
    • pete_the_cat@lemmy.world

      It’s because “AI” is the new buzzword that has replaced “machine learning” and “large language models”; it sounds a lot sexier and more futuristic.

      • Comment105@lemm.ee

        I don’t care about mean but I would call it inaccurate. Billy is already cancerous, He’s mostly cancer. He’s a very dense, sour boy.

    • Lets_Eat_Grandma@lemm.ee

      Everything machine learning will be called “ai” from now until forever.

      It’s like how all RC helicopters and planes are now “drones”.

      People en masse just can’t handle the nuance of language. They need a dumb word for everything that is remotely similar.

  • TCB13@lemmy.world

    AI should be used for this, yes, however advertisement is more profitable.

    • ricecake@sh.itjust.works

      It’s worse than that.

      This is a different type of AI that doesn’t have as many consumer facing qualities.

      The ones being pushed now are the first types of AI to have an actually discernible consumer-facing attribute or behavior, so they’re being pushed because no one wants to miss the boat.

      They’re not more profitable, or better, or actually doing anything anyone wants for the most part; they’re just being used wherever they can be fit in.

      • Hackworth@lemmy.world

        This type of segmentation is of declining practical value. Modern AI implementations are usually hybrids of several categories of constructed intelligence.

    • 𝓔𝓶𝓶𝓲𝓮@lemm.ee

      Honestly, with all respect, that is a really shitty joke. It’s goddamn breast cancer, the opposite of hot.

      I usually just skip these mouldy jokes, but c’mon, that is beyond the scale of cringe.

      • PlantDadManGuy@lemmy.world

        Terrible things happen to people you love, you have two choices in this life. You can laugh about it or you can cry about it. You can do one and then the other if you choose. I prefer to laugh about most things and hope others will do the same. Cheers.

        • 𝓔𝓶𝓶𝓲𝓮@lemm.ee

          I mean, do whatever you want, but it just comes off as repulsive, like a stain of shit on new shoes. This is a public space after all, not the boys’ locker room, so that might be embarrassing for you.

          And you know you can always count on me to point stuff out, so you can avoid humiliation in the future.

  • earmuff@lemmy.dbzer0.com

    Serious question: is there a way to get access to medical imagery as a non-student? I would love to do some machine learning with it myself, as I see lots of potential in image analysis in general. 5 years ago I created a model that was able to spot certain types of ships based only on satellite imagery, ships which were not easily detectable by eye, never mind the fact that one human cannot scan 15k images in an hour. Similar use case with medical imagery: seeing the things that are not yet detectable by human eyes.

  • MonkderVierte@lemmy.ml

    Btw, my dentist used AI to identify potential problems in a radiograph. The result was pretty impressive. Have to get a filling tho.

  • elrik@lemmy.world

    Ductal carcinoma in situ (DCIS) is a type of preinvasive tumor that sometimes progresses to a highly deadly form of breast cancer. It accounts for about 25 percent of all breast cancer diagnoses.

    Because it is difficult for clinicians to determine the type and stage of DCIS, patients with DCIS are often overtreated. To address this, an interdisciplinary team of researchers from MIT and ETH Zurich developed an AI model that can identify the different stages of DCIS from a cheap and easy-to-obtain breast tissue image. Their model shows that both the state and arrangement of cells in a tissue sample are important for determining the stage of DCIS.

    https://news.mit.edu/2024/ai-model-identifies-certain-breast-tumor-stages-0722