ChatGPT generates cancer treatment plans that are full of errors — Study finds that ChatGPT provided false information when asked to design cancer treatment plans. Researchers at Brigham and Women’s Hospital found that cancer treatment plans generated by OpenAI’s chatbot were full of errors.

  • Obinice@lemmy.world · 19 points · 1 year ago

    Well, it’s a good thing absolutely no clinician is using it to figure out how to treat their patient’s cancer… then?

    I imagine it also struggles when asked to go to the kitchen and make a cup of tea. Thankfully, nobody asks this, because it’s outside of the scope of the application.

    • clutch@lemmy.ml · 9 points · 1 year ago

      The fear is that hospital administrators, equipped with their MBA degrees, will think about using it to replace expensive, experienced physicians and diagnosticians.

      • Obinice@lemmy.world · 2 points · 1 year ago

        If that were legal, I’d absolutely be worried — you make a good point.

        Even doctors need special additional qualifications to do things like diagnose illnesses via radiographic imagery, etc. Specialised AI is making good progress in aiding these sorts of tasks, but a generalised and very poor AI like ChatGPT will never be legally certified to do this sort of thing.

        Once we have a much more effective generalised AI, things will get more interesting. It’ll have to prove itself thoroughly before being certified, though, so even after it appears it’ll still be a few years before we see it used in clinical applications.