A Florida man is facing 20 counts of obscenity for allegedly creating and distributing AI-generated child pornography, highlighting how readily generative AI can be put to nefarious use.

Phillip Michael McCorkle was arrested last week while working at a movie theater in Vero Beach, Florida, according to TV station CBS 12 News. A crew from the station captured the arrest, and the footage was dramatic: officers led McCorkle, still in his work uniform, out of the theater in handcuffs.

  • MataVatnik@lemmy.world

    Pretty sure the training data sets contain CSAM.

    Edit, to those downvoting me without reading the article:

    A 2023 study from Stanford University also revealed that hundreds of child sex abuse images were found in widely-used generative AI image data sets.

    “The content that we’ve seen, we believe is actually being generated using open source software, which has been downloaded and run locally on people’s computers and then modified,” Internet Watch Foundation chief technology officer Dan Sexton told The Guardian last year. “And that is a much harder problem to fix.”

    • Cryophilia@lemmy.world

      If that’s the basis for making it illegal, then all AI is illegal.

      Which… eh, maybe that’s not such a bad idea.

    • ContrarianTrail@lemm.ee

      One doesn’t need to browse AI-generated images for more than five seconds to realize they can depict plenty of things you can know with absolute certainty weren’t in the training data. I don’t get why people insist on the narrative that it can only output copies of what it has already seen. What’s generative about that?

      • MataVatnik@lemmy.world

        If you took a minute to read the article:

        A 2023 study from Stanford University also revealed that hundreds of child sex abuse images were found in widely-used generative AI image data sets.

        “The content that we’ve seen, we believe is actually being generated using open source software, which has been downloaded and run locally on people’s computers and then modified,” Internet Watch Foundation chief technology officer Dan Sexton told The Guardian last year. “And that is a much harder problem to fix.”

        So not only do the online models have CSAM in their training data, but people are downloading open-source software, and I’d be very surprised if they weren’t feeding it CSAM.

        • ContrarianTrail@lemm.ee

          That doesn’t dispute my argument: generative AI can create images that are not in the training data. The model doesn’t need to know what something looks like, as long as the person using it does and can write the right prompt. The corn dog I posted below is a good example: you can be sure that wasn’t in the training data, yet the model was still able to generate it.

        • Blaster M@lemmy.world

          Since that discovery, online models have scrubbed the offending sources and retrained, and have added safeguards to try to prevent it.

    • Blaster M@lemmy.world

      Since that study, every legit AI model has removed those images from its dataset, and models trained afterward no longer include knowledge of the source images.

      I know of one AI model that specifically excludes photos of underage people altogether, to minimize the possibility of this happening even by accident. Making CSAM from an AI model is something anyone determined and patient enough can do with a good model trainer and a dataset of source images with the features they want, even if the underage images themselves are completely clean.

      Making CSAM with an AI model is a deliberate act in almost every case… and in this case, he was arrested for distributing these images, which is super illegal for obvious reasons.