• not_that_guy05@lemmy.world · +42/-1 · 10 months ago

    Fuck that guy first of all.

What makes me think, though: what about all that cartoon porn showing cartoon kids? What about hentai showing younger kids? What’s the difference, if all of it is fake and being distributed online as well?

    Not defending him.

  • 0x0001@sh.itjust.works · +33/-2 · 10 months ago

One thing to consider: if this turned out to be accepted, it would make it much harder to prosecute actual CSAM, since distributors could claim “AI generated” for real images.

    • theherk@lemmy.world · +23/-1 · 10 months ago

      I get this position, truly, but I struggle to reconcile it with the feeling that artwork of something and photos of it aren’t equal. In a binary way they are, but with more precision they’re pretty far apart. But I’m not arguing against it, I’m just not super clear how I feel about it yet.

      • JovialMicrobial@lemm.ee · +3 · 10 months ago

I’m a professional artist and have no issue banning AI generated CSAM. People can call it self expression if they want, but that doesn’t change the real world consequences of it.

Allowing AI generated CSAM basically creates camouflage for real CSAM. As AI gets more advanced, it will become harder to tell the difference. The scum making real CSAM will be emboldened to make even more, because they can hide it amongst the increasing amounts of AI generated versions, or simply tag it as AI generated. Now authorities will have to sift through all of it, trying to decipher what’s artificial and what isn’t.

Identifying, tracing, and convicting child abusers will become even more difficult as more and more of that material is generated and uploaded to various sites with real CSAM mixed in.

        Even with hyper realistic paintings you can still tell it’s a painting. Anime loli stuff can never be mistaken for real CSAM. Do I find that sort of art distasteful? Yep. But it’s not creating an environment where real abusers can distribute CSAM and have a higher possibility of getting away with it.

      • Madison420@lemmy.world · +3/-1 · 10 months ago

So long as the generation doesn’t use actual minors as model examples, there’s nothing technically illegal about having sexual material of what appears to be a child. You’d then have a mens rea question and a content question: what actually defines, in a visual sense, a child? Could those same criteria equally describe a person of smaller stature? And finally, could someone like Tiny Texie be charged with producing CSAM, since by all appearances, out of context, she looks to be a child?

        • Fungah@lemmy.world · +1/-1 · 10 months ago

It is illegal in Canada to have sexual depictions of a child, whether it’s a real image or you’ve just sat down and drawn it yourself. The rationale being that the behavior escalates, and looking at images leads to wanting more.

It borders on thought crime, which I feel kind of iffy about, but only pedophiles suffer, which I feel great about. There’s no legitimate reason to have sexualized images of a child, whether computer generated, hand drawn, or whatever.

          • Madison420@lemmy.world · +2/-1 · 10 months ago

This article isn’t about Canada, homeboy.

Also, that theory is not provable and never will be. Morality crime is thought crime, and thought crime is horseshit. We criminalize criminal acts, not criminal thoughts.

            Similarly, you didn’t actually offer a counterpoint to any of my points.

      • Corkyskog@sh.itjust.works · +3/-5 · 10 months ago

        It’s not a difficult test. If a person can’t reasonably distinguish it from an actual child, then it’s CSAM.

        • Madison420@lemmy.world · +1/-1 · 10 months ago

This would also outlaw “teen” porn, since those performers are explicitly trying to look more childlike, as well as models who only appear to be minors.

          I get the reason people think it’s a good thing but all censorship has to be narrowly tailored to content lest it be too vague or overly broad.

          • Corkyskog@sh.itjust.works · +1 · 10 months ago

            And nothing was lost…

            But in seriousness, as you said they are models who are in the industry, verified, etc. It’s not impossible to have a white-list of actors, and if anything there should be more scrutiny on the unknown “actresses” portraying teenagers…

            • Madison420@lemmy.world · +1/-1 · 10 months ago

Except for the jobs, dude. You may not like their work, but it’s work. That rule ignores verified age, which is a not-insignificant part of my point…

  • eating3645@lemmy.world · +19/-2 · 10 months ago

I find it interesting that the relabeling of CP to CSAM weakens their argument here. “CP generated by AI is still CP” makes sense, but if there’s no abusee, it’s just CSM. Makes me wonder if they would not have rebranded had they known about the proliferation of AI pornography.

    • Stovetop@lemmy.world · +19/-5 · 10 months ago

The problem is that it abets the distribution of real CSAM more easily. If a government declares “these types of images are okay if they’re fake”, you’ve given plausible deniability to real CSAM distributors, who can now claim that the material is AI generated, placing the burden on the legal system to prove the contrary. The end result will be a lot of real material flying under the radar because of weak evidence, and continued abuse of children.

      Better to just blanket ban the entire concept and save us all the trouble, in my opinion. Back before it was so easy to generate photorealistic images, it was easier to overlook victimless CP because illustrations are easy to tell apart from reality, but times have changed, and so should the laws.

      • kromem@lemmy.world · +8/-3 · 10 months ago

        Not necessarily. There’s been a lot of advances in watermarking AI outputs.

        As well, there’s the opposite argument.

        Right now, pedophile rings have very high price points to access CSAM or require users to upload original CSAM content, adding a significant motivator to actually harm children.

        The same way rule 34 artists were very upset with AI being able to create what they were getting commissions to create, AI generated CSAM would be a significant dilution of the market.

        Is the average user really going to risk prison, pay a huge amount of money or harm a child with an even greater prison risk when effectively identical material is available for free?

        Pretty much overnight the CSAM dark markets would lose the vast majority of their market value and the only remaining offerings would be ones that could demonstrate they weren’t artificial to justify the higher price point, which would undermine the notion of plausible deniability.

        Legalization of AI generated CSAM would decimate the existing CSAM markets.

That said, the real question that needs to be answered from a social responsibility perspective is what net effect CSAM access by pedophiles has on their proclivity to offend. If there’s a negative effect, then it’s an open and shut case that it should be legalized. If it’s a positive effect, then we should probably keep it very much illegal, even if that continues to enable dark markets for the real thing.

        • solrize@lemmy.world · +4/-1 · edited · 10 months ago

          Not necessarily. There’s been a lot of advances in watermarking AI outputs.

          That presumes that the image generation is being done by some corporation or government entity that adds the watermarks to AI outputs and doesn’t add them to non-AI outputs. I’m not thrilled that AI of this sort exists at all, but given that it does, I’d rather not have it controlled by such entities. We’re heading towards a world where we can all run that stuff on our own computers and control the watermarks ourselves. Is that good or bad? Probably bad, but having it under the exclusive control of megacorps has to be even worse.

          • Grandwolf319@sh.itjust.works · +1 · 10 months ago

How about: any photorealistic image without a watermark is illegal? And the watermark has to be traceable back to the author, so you can’t just add it to real CP?

              • Grandwolf319@sh.itjust.works · +1/-1 · 10 months ago

                Well the watermark would be a kind of signature that leads back to a registered artist.

                I think it makes sense to enforce this for all AI art, basically label it in a way that can be traced back to who produced it.

                And if you don’t want people to know you produced it, then you probably shouldn’t share it
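
                Just to sketch what I mean, a toy version of the tagging scheme (everything here is made up: the registry, artist ID, and keys; a real system would use public-key signatures and a watermark embedded in the pixels, not a tag sitting next to the file):

```python
import hashlib
import hmac

# Hypothetical registry of signing keys for registered artists.
# (Made-up data; a real registry would hold public keys, not shared secrets.)
REGISTRY = {
    "artist_42": b"artist_42-secret-key",
}

def sign(image_bytes: bytes, artist_id: str) -> str:
    """Produce a provenance tag tying these exact bytes to a registered artist."""
    key = REGISTRY[artist_id]
    mac = hmac.new(key, image_bytes, hashlib.sha256).hexdigest()
    return f"{artist_id}:{mac}"

def verify(image_bytes: bytes, tag: str) -> bool:
    """A tag only validates for the artist and the bytes it was issued for."""
    artist_id, _, mac = tag.partition(":")
    key = REGISTRY.get(artist_id)
    if key is None:
        return False  # unregistered author: treat as unverified
    expected = hmac.new(key, image_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(mac, expected)

img = b"\x89PNG...pretend image bytes..."
tag = sign(img, "artist_42")
print(verify(img, tag))         # True: traces back to artist_42
print(verify(img + b"x", tag))  # False: the tag cannot be moved onto other content
```

                The point being: copying a valid tag onto different image bytes fails verification, which is the “can’t just add it to real CP” property.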

        • HereToLurk@lemmy.world · +2 · 10 months ago

          Is the average user really going to risk prison, pay a huge amount of money or harm a child with an even greater prison risk when effectively identical material is available for free?

Average users aren’t pedophiles, and it would appear that yes, they would, considering he did exactly that. He had access to tools that generated the material for free, which he then used to entice boys.

      • Thorny_Insight@lemm.ee · +7/-3 · 10 months ago

        placing the burden on the legal system to prove it to the contrary.

        That’s how it should be. Everyone is innocent until proven otherwise.

        • Stovetop@lemmy.world · +1/-2 · 10 months ago

          Right, but what I am suggesting is that laws should be worded to criminalize any sexualized depiction of children, not just ones with a real victim. It is no longer as simple to prove a photograph or video is actual CSAM with a real victim, making it easier for real abuse to avoid detection.

          • Thorny_Insight@lemm.ee · +4/-3 · 10 months ago

This same “think about the children” argument is used when advocating for things like banning encryption as well, which in its current form enables the easy spreading of such content, AI generated or not. I do not agree with that. It’s a slippery slope despite the good intentions. We’re not criminalizing fictional depictions of violence either, and I don’t see how this is any different. I don’t care what people are jerking off to as long as they’re not hurting anyone, and I don’t think you should either. Banning it hasn’t gotten rid of actual CSAM content, and it sure won’t work for AI generated stuff either. No one benefits from the police running after people creating/sharing fictional content.

            • Stovetop@lemmy.world · +2/-2 · 10 months ago

              I think you’re painting a false equivalency. This isn’t about surveillance or incitement or any other pre-crime hypotheticals, but simply adjusting what material is considered infringing in light of new developments which can prevent justice from being carried out on actual cases of abuse.

              How do you prove what is fictional versus what is real? Unless there is some way to determine with near 100% certainty that a given image or video is AI generated and not real, or even that an AI generated image wasn’t trained on real images of abuse, you invite scenarios where real images of abuse get passed off as “fictional content” and make it easier for predators to victimize more children.

      • Grandwolf319@sh.itjust.works · +5/-2 · 10 months ago

        Better to just blanket ban the entire concept and save us all the trouble, in my opinion.

        That’s the issue though, blindly banning things that can be victimless crimes never ends, like prohibition.

        • Stovetop@lemmy.world · +2/-2 · 10 months ago

Well, you don’t hear many people decrying the places that already have. Canada, many US states, and parts of Europe have outlawed sexual imagery of children, real or fake.

          I am just proposing that that should be the standard approach going forward, for the sole fact that the fake stuff is identical to the real stuff and real stuff can be used to make more convincing “fake” stuff.

          • Grandwolf319@sh.itjust.works · +3/-1 · edited · 10 months ago

            Isn’t Canada’s law based on age and not if they “look like children”, so all they have to say is that the subject isn’t human and is over 18 years of age?

My entire point was that things like this become a game of whack-a-mole.

            I don’t think that’s a good standard, reminds me of 0 tolerance policies and war on drugs.

    • CaptPretentious@lemmy.world · +2/-2 · 10 months ago

Have to agree, because I have no clue what CSAM is. My first glance at the title made me think it was CSPAN (the TV channel)… So CP is the better identifier, as I at least recognize that initialism.

      If we could stop turning everything, and especially important things, into acronyms and initialisms that’d be great.

    • xmunk@sh.itjust.works · +1/-12 · 10 months ago

      A generative AI could not generate CSAM without access to CSAM training data. Abuse was a necessary step in the generation.

  • prettydarknwild@lemmy.world · +15 · edited · 10 months ago

oh man, i love the future. we haven’t solved world hunger or reduced carbon emissions to 0, and we’re on the brink of a world war, but now we have AIs that can generate CSAM and fake footage on the fly 💀

    • Dasus@lemmy.world · +21/-1 · 10 months ago

      Technically we’ve solved world hunger. We’ve just not fixed it, as the greedy fucks who hoard most of the resources of this world don’t see immediate capital gains from just helping people.

      Pretty much the only real problem is billionaires being in control.

      • ArchRecord@lemm.ee · +3 · 10 months ago

        True that. We have the means to fix so many problems, we just have a very very very small few that reeeeally don’t like to do anything good with their money, and instead choose to hoard it, at the expense of everyone else.

        • myliltoehurts@lemm.ee · +3 · 10 months ago

Oh c’mon, they don’t hoard the money. They use it to pay each other/politicians to make sure the status quo remains.

    • TheObviousSolution@lemm.ee · +3 · 10 months ago

      Honestly not as bad as I would have thought it would be by now with fake propaganda videos, but the quality isn’t there yet I suppose.

  • ocassionallyaduck@lemmy.world · +6/-1 · 10 months ago

The cat’s out of the bag on this. Trying to ban it is enforceable for now, maybe, because the models are mostly hosted online and compute-intensive.

    In 2028 though, when you can train your own model and generate your own local images without burning a server farm? This has to happen for ML to keep growing and catch on.

    welp. Then there is infinite fake child porn. Because you cannot police every device and model.

    Because of how tech companies have handled this technology, this is not an if scenario. This is guaranteed now.

    • TheObviousSolution@lemm.ee · +1/-2 · 10 months ago

      I remember when they tried to do the same with CRISPR. Glad that didn’t take off and remained largely limited to the industry and academia. But then again, Wuhan …

    • wetsoggybread@lemmy.world · +3 · edited · 10 months ago

I read that it’s more accurate to say “child sexual abuse material” than child porn, because it carries the message of just how bad the stuff is better than just calling it porn, and it sounds more professional.

      • stoly@lemmy.world · +1 · 10 months ago

        And I suppose it’s also saying that the form it’s in doesn’t matter. Any type of material is the same.

  • xmunk@sh.itjust.works · +9/-17 · 10 months ago

    It is amazing how Lemmy can usually be such a well informed audience but for some reason when it comes to AI people simply refuse to acknowledge that it was trained on CSAM https://cyber.fsi.stanford.edu/news/investigation-finds-ai-image-generation-models-trained-child-abuse

    And don’t understand how generative AI combines existing concepts to synthesize images - it doesn’t have the ability to create novel concepts.

    • BluesF@lemmy.world · +17/-1 · 10 months ago

      AI models don’t resynthesize their training data. They use their training data to determine parameters which enable them to predict a response to an input.

Consider a simple model (too simple to be called AI, but really the underlying concepts are very similar): a linear regression. In linear regression we produce a model which follows a straight line through the “middle” of our training data. We can then use this to predict values outside the range of the original data, albeit with less certainty about the likely error.
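
      To make that concrete, here’s a toy least-squares fit (made-up numbers, no libraries) that answers a question far outside its “training data”:

```python
# Tiny "training set", roughly y = 2x.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 8.0, 9.8]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Ordinary least squares: slope = cov(x, y) / var(x).
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum(
    (x - mean_x) ** 2 for x in xs
)
intercept = mean_y - slope * mean_x

def predict(x: float) -> float:
    return intercept + slope * x

# x = 100 appears nowhere in the data, yet the model answers anyway.
print(round(predict(100.0), 2))  # ~195.15
```

      The fitted line happily extrapolates to x = 100 without ever having “seen” it; it’s the parameters, not the data, doing the answering, and the further out you go, the less the fit justifies your confidence.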

In the same way, an LLM can give answers to questions that were never asked in its training data. It’s not taking that data and shuffling it around; it’s synthesising an answer by predicting tokens. Also similarly, it does this less well the further outside the training data you go. Feed one the right gibberish and it doesn’t know how to respond. ChatGPT is very good at dealing with nonsense, but if you’ve ever worked with simpler LLMs you’ll know that typos can throw them off notably… They still respond OK, but things get weirder as they go.

Now it’s certainly true that (at least some) models were trained on CSAM, but it’s also definitely possible that a model that wasn’t could still produce sexual content featuring children. Its training set need only contain enough disparate elements for it to correctly predict what the prompt is asking for. For example, if the training set contained images of children it will “know” what children look like, and if it contains pornography it will “know” what pornography looks like; conceivably it could mix these two together to produce generated CSAM. It will probably look odd, if I had to guess? Like LLMs struggling with typos, and regression models being unreliable outside their training range, image generation of something totally outside the training set is going to be a bit weird, but it will still work.

      None of this is to defend generating AI CSAM, to be clear, just to say that it is possible to generate things that a model hasn’t “seen”.

    • grue@lemmy.world · +8 · 10 months ago

      it was trained on CSAM

      In that case, why haven’t the people who made the AI models been arrested?

      • xmunk@sh.itjust.works · +2/-5 · 10 months ago

Dunno. Probably because they didn’t knowingly train it on CSAM; maybe because it’s difficult to prove what actually goes into neural network configuration, so it’s unclear how strongly weighted it is… and lastly, maybe because this stuff is so cloaked in obscurity and proprietariness that nobody is confident how such a case would go.

      • GBU_28@lemm.ee · +9/-1 · 10 months ago

      Not all models use the same training sets, and not all future models would either.

      Generating images of humans of different ages doesn’t require having images of that type for humans of all ages.

      Like, no one is arguing your link. Some models definitely used training data with that, but your claim that the type of image discussed is “novel” simply isn’t accurate to how these models can combine concepts

    • solrize@lemmy.world · +5 · edited · 10 months ago

      And don’t understand how generative AI combines existing concepts to synthesize images - it doesn’t have the ability to create novel concepts.

      Imagine someone asks you to shoop up some pr0n showing Donald Duck and Darth Vader. You’ve probably never seen that combination in your “training set” (past experience) but it doesn’t exactly take creating novel concepts to fulfill the request. It’s just combining existing ones. Web search on “how stable diffusion works” finds some promising looking articles. I read one a while back and found it understandable. Stable Diffusion was the first of these synthesis programs but the newer ones are just bigger and fancier versions of the same thing.

      Of course idk what the big models out there are actually trained on (basically everything they can get, probably not checked too carefully) but just because some combination can be generated in the output doesn’t mean it must have existed in the input. You can test that yourself easily enough, by giving weird and random enough queries.

      • xmunk@sh.itjust.works · +2/-5 · 10 months ago

        No, you’re quite right that the combination didn’t need to exist in the input for an output to be generated - this shit is so interesting because you can throw stuff like “A medieval castle but with Iranian architecture with a samurai standing on the ramparts” at it and get something neat out. I’ve leveraged AI image generation for visual D&D references and it’s excellent at combining comprehended concepts… but it can’t innovate a new thing - it excels at mixing things but it isn’t creative or novel. So I don’t disagree with anything you’ve said - but I’d reaffirm that it currently can make CSAM because it’s trained on CSAM and, in my opinion, it would be unable to generate CSAM (at least to the quality level that would decrease demand for CSAM among pedos) without having CSAM in the training set.

        • solrize@lemmy.world · +2 · 10 months ago

          it currently can make CSAM because it’s trained on CSAM

          That is a non sequitur. I don’t see any reason to believe such a cause and effect relationship. The claim is at least falsifiable in principle though. Remove whatever CSAM found its way into the training set, re-run the training to make a new model, and put the same queries in again. I think you are saying that the new model should be incapable of producing CSAM images, but I’m extremely skeptical, as your medieval castle example shows. If you’re now saying the quality of the images might be subtly different, that’s the no true Scotsman fallacy and I’m not impressed. Synthetic images in general look impressive but not exactly real. So I have no idea how realistic the stuff this person was arrested for was.

  • over_clox@lemmy.world · +9/-33 · 10 months ago

    Then we should be able to charge AI (the developers moreso) for the same disgusting crime, and shut AI down.