A Florida man is facing 20 counts of obscenity for allegedly creating and distributing AI-generated child pornography, highlighting the danger and ubiquity of generative AI being used for nefarious purposes.
Phillip Michael McCorkle was arrested last week while working at a movie theater in Vero Beach, Florida, according to TV station CBS 12 News. A crew from the station captured the arrest, which made for dramatic footage as officers led McCorkle, still in his work uniform, out of the theater in handcuffs.
It’s hard to have a nuanced discussion because the article is so vague. It’s not clear what he’s specifically been charged with (beyond “obscenity,” not a specific child abuse statute?). Because any simulated CSAM laws have been, to my knowledge, all struck down when challenged.
I completely get the “lock them all up and throw away the key” visceral reaction - I feel that too, for sure - but this is a much more difficult question. There are porn actors over 18 who look younger, do the laws outlaw them from work that would be legal for others who just look older? If AI was trained exclusively on those over-18 people, would outputs then not be CSAM even if the images produced features that looked under 18?
I’m at least all for a “fruit of the poisoned tree” theory - if AI model training data sets include actual CSAM then they can and should be made illegal. Deepfaking intentionally real under 18 people is also not black and white (looking again to the harm factor), but also I think it can be justifiably prohibited. I also think distribution of completely fake CSAM can be arguably outlawed (the situation here), since it’s going to be impossible to tell AI from real imagery soon and allowing that would undermine enforcement of vital anti-real-CSAM laws.
The real hard case is producing and retaining imagery of fully fake people, without real CSAM in the training data, solely locally (possession crimes). That’s really tough. Not only does it not directly hurt anyone in its creation, there’s a possible benefit in that it diminishes the market for real CSAM (potentially saving unrelated children from the abuse flowing from that demand), and it could also divert the producer’s impulse away from preying on children around them out of unfulfilled desire.
Could, because I don’t think there are studies that answer whether those things are true.
I mostly agree with you, but a counterpoint:
Downloading and possession of CSAM seems to be a common first step in a person initiating communication with a minor with the intent to meet up and abuse them. I’ve read many articles over the years about men getting arrested for trying to meet up with minors, and one thing that shows up pretty often in these articles is the perpetrator admitting to downloading CSAM for years until deciding the fantasy wasn’t enough anymore. They become comfortable enough with it that it loses its taboo and they feel emboldened to take the next step.
CSAM possession is illegal because possession directly supports creation, and creation is inherently abusive and exploitative of real people. Generating it from a model that was trained on non-abusive content probably isn’t exploitative, but there’s a legitimate question as to whether we as a society decide it’s associated closely enough with real world harms that it should be banned.
Not an easy question for sure, and it’s one that deserves to be answered using empirical data, but I imagine the vast majority of Americans would flatly reject a nuanced view on this issue.
CSAM possession is illegal because possession directly supports creation
To expound on this: prior to this point, the creation of CSAM required that children be sexually exploited. You could not have CSAM without children being harmed. But what about when no direct harms have occurred? Is lolicon hentai ‘obscene’? Well, according to the law and case law, yes, but it’s not usually enforced. If we agree that drawings of children engaged in sexual acts aren’t causing direct harm–that is, children are not being sexually abused in order to create the drawings–then how much different is a computer-generated image that isn’t based on any specific person or event? It seems to me that whether or not a pedophile might eventually decide they want more than AI-generated images is not relevant. Treating a future possibility as a foregone conclusion is exactly the rationale behind Reefer Madness and the idea of ‘gateway’ drugs.
Allow me to float a second possibility that will certainly be less popular.
Start with two premises: first, pedophilia is a characteristic that appears to be an orientation. That is, a true pedophile–a person exclusively sexually attracted to pre-pubescent children–does not choose to be a pedophile, any more than a person chooses to be gay. (My understanding is that very few pedophiles are exclusively pedophilic though, and that many child molesters are opportunistic sexual predators rather than being pedophiles.) Secondly, the rates of sexual assault appear to have decreased as pornography availability has increased. So the question I would have is, would wide availability of LLM-generated CSAM–CSAM that didn’t cause any real, direct harm to children–actually decrease rates of child sexual assault?
With regards to your last paragraph: pedophiles can indeed be straight, gay, or bi. Pedophiles may also never become molesters, and molesters of children may not be pedophilic at all. It seems you understand this. I mentioned ITT that I read a newspaper article many years ago about a study commissioned to show that access to CP would increase child abuse; it seemed to show the opposite.
If persons could use AI to generate their own porn of their own personal fantasies (whatever those might be) and NOT share that content, what then? Canada allows this for text (maybe certain visuals? Audio? IDK). I don’t know about current ‘obscenity’ laws in the USA; however, I do recall reading about an art exhibit in NY which featured an upside-down urinal that was deemed obscene, then later deemed a work of art. I also recall seeing (via an internet image) a sculpture of what seemed to be a circle of children with penises as noses. Porn? Art? Comedy?

My understanding was that ‘pure’ pedophiles–ones that have no interest at all in post-pubescent children or any adults whatsoever–tend to be less concerned with sex/gender, particularly because children don’t have defined secondary sex characteristics. I don’t know if this is actually correct though. I’m not even sure how you could ethically research that kind of thing and end up with valid results.
And honestly, not being able to do solid research with valid results makes it really fuckin’ hard to find solutions that prevent as many children from being harmed as possible. In the US at least, research about sex and sexuality in general–much less deviant sexualities–seems to be taboo, and very difficult to get funding for.
Downloading and possession of CSAM seems to be a common first step in a person initiating communication with a minor with the intent to meet up and abuse them.
But this is like the arguments used to say that weed is a “gateway drug” by talking about how people strung out on harder drugs almost always have done weed as well, ignoring everyone who uses only weed. But this is even hazier because we literally have no real idea how many people consume that stuff but don’t ‘escalate’.
I remember reading once about some research out of Japan finding that child molesters consume less porn overall than the average citizen. That seems counter-intuitive, but it may not be, if you consider the possibility that the material (in this case, primarily manga with anime-style drawings of kids in sexual situations) is actually curbing the incidence of the ‘real thing’, since the ones actually touching kids in the real world are reading those mangas less.
I’m also reminded of people talking about sex dolls that look like kids, and if that’s a possible ‘solution’ for pedophiles, or if it would ‘egg on’ actual molestation.
I think I lean on the side of ‘satiation’, from the limited bits of idle research I’ve done here and there. And if that IS in fact the case, then regardless of if it grosses me out, I can’t in good conscience oppose something that actually reduces the number of children who actually get abused, you know?
It’s less that these materials are like a “gateway” drug and more like these materials could be considered akin to advertising. We already have laws about advertising because it’s so effective, including around cigarettes and prescriptions.
Second, the role that CP plays in most countries is difficult. It is used for blackmail. It is also used to generate money for countries. And it’s used as advertising for actual human trafficking organizations. And similar organizations exist for snuff and gore btw. And ofc animals. And any combination of those 3. Or did you all forget about those monkey torture videos, or the orangutan who was being sex trafficked? Or Daisy’s Destruction and Peter Scully?
So it’s important to not allow these advertisers to combine their most famous monkey torture video with enough AI that they can say it’s AI generated, but it’s really just an ad for their monkey torture productions. And they do that with CP, rape, gore, etc, too.
People, please don’t just downvote with no comment. Why is this being downvoted? The comparisons to advertisements have validity. And if you disagree, be productive and tell us why.
Because a huge percentage of Lemmy is sexist and I am openly a woman. You’ll know because this comment will get nuked also.
Fellow female here. I support your right to contribute on Lemmy.
but there’s a legitimate question as to whether we as a society decide it’s associated closely enough with real world harms that it should be banned.
Why should that be a question at all? If it causes harm, ban it. If not, don’t. Being “associated with” should never be grounds for a legal statute.
generally a very good point, however i feel it’s important to point out some context here:
the pedophiles you’re talking about in your comment are almost always members of tight-knit communities that share CSAM, organize distribution, share sources, and most importantly, indulge their fantasies/desires together.
i would think that the correlation leading to molestation is not primarily driven by the CSAM itself, but rather by the community around it.
we clearly see this happening in other similarly structured and similarly isolated communities: nazis, incels, mass shooters, religious fanatics, etc.
the common factor in radicalization and the development of extreme views in all these groups is always isolation and the community they end up joining as a result, forming a sort of parallel society with its own rules and ideals, separate from general society. over time, people in these parallel societies get used to seeing the world in a way that aligns with the ideals of the group.
nazis start to see anyone not part of their group as enemies, incels start to see “females” instead of women, religious fanatics see sinners…and pedophiles see objects that exist solely for their gratification instead of kids…
I don’t see why molesters should be any different in this aspect, and would therefore argue that it’s the communal aspect that should probably be the target of the law, i.e.: distribution and organization (forums, chatrooms, etc.)
the harder it is for them to organize, the less likely these groups are to produce predators that cause real harm!
if on top of that there is a legally available outlet where they can indulge themselves in a safe manner without harming anyone, i’d expect rates of child molestation to drop significantly, because, again, there’s precedent from similar situations (harm-reduction programs for overdoses among drug addicts, for example)
i think it is a potentially fatal mistake to think of pedophiles as “special” cases, rather than just another group of outcasts, because in nearly all cases of such pariahs the solutions that prove to work best in the real world are the ones that make these groups feel less like outcasts, which limits avenues of radicalization.
i thought these parallels are something worth pointing out.
I don’t know if it’s still a thing, but I’m reminded of some law or regulation that was passed a while back in Australia, iirc, that barred women with A-cup busts from working in porn, the “reasoning” being that their flatter chests made them look too similar to prepubescent girls, lol…
Not only stupid but also quite insulting to women, imo.
Because any simulated CSAM laws have been, to my knowledge, all struck down when challenged.
To the best of my knowledge, calling drawn works obscene has been upheld in courts, most often because the artist(s) lack the financial ability to fight the charges effectively. The artist for the underground comic “Boiled Angel” had his conviction for obscenity upheld–most CSAM work falls under obscenity laws–and ended up giving up the fight to clear his name.
Oh, for sure. I’m talking about laws specifically targeted to minors. “Obscenity” is a catch-all that is well-established, but if you are trying to protect children from abuse, it’s a very blunt instrument and not as effective as targeted abuse and trafficking statutes. The statutory schemes used to outlaw virtual CSAM have failed to my knowledge.
For example: https://en.wikipedia.org/wiki/Ashcroft_v._Free_Speech_Coalition
That case was statutorily superseded in part by the PROTECT Act, which attempted to differentiate itself by…relying on an obscenity standard. So it’s a bit illusory that it does anything new.
The PROTECT Act has been, so far, found to be constitutional, since it relies on the obscenity standard in regards to lolicon hentai. Which is quite worrisome. It seems like it’s a circular argument/tautology; it’s obscene for drawn art to depict child sexual abuse because drawings of child sexual abuse are obscene.
simulated CSAM
When I used this phrase, someone told me it described a nonexistent concept, and that the CSAM term existed in part to differentiate between content where children were harmed to make it versus not. I didn’t wanna muddy any waters but do you have an opposing perspective?
Deepfaking intentionally real under 18 people is also not black and white
Interesting. Sounds real bad. See what you mean about harm factor though.
I’m at least all for a “fruit of the poisoned tree” theory - if AI model training data sets include actual CSAM then they can and should be made illegal.
Now all AI is illegal. It’s trained via scraping the internet, which will include CP as well as every other image.
There’s “copyright illegal,” and then there’s “CP illegal.” Those are two very different things.
I don’t understand the relevance of your statement.
Could this be considered a harm reduction strategy?
Not that I think CSAM is good in any way, but if it saves a child would it be worthwhile? Like if these pedos were to use AI images instead of actual CSAM would that be any better?
I’ve read that CSAM sites on the dark web number into the hundreds of thousands. I just wonder if it would be a less harmful thing since it’s such a problem.
Many years ago (about 25) I read an article in a newspaper (idk the name, but it may have been The Computer Paper, which is archived online someplace). The article noted that a study had been commissioned to show that CP access increases child abuse. The study seemed to show the opposite.
Here’s the problem with even AI-generated CP: it might lower abuse in the beginning, but with increased access it would ‘normalise’ the perception of such conduct. This would likely increase abuse over time, even involving persons who may not have been so inclined otherwise.
This is all very complex. A solution isn’t simple. Shunning things in any way won’t help though, and that seems to be the currently most popular way to deal with the issue.
“Normalized” violent media doesn’t seem to have increased the prevalence of real world violence.
That makes sense. I don’t know what a better answer is, just thinking out loud.
You would think so, but you’re basically making a patchwork version of the actual illicit media, so it’s a dark, dark gray area for sure.
Hmm ok. I don’t know much about AI.
I guess my question is does access to regular porn make people not want to have real sex with another person? Does it ‘scratch the itch’ so to speak? Could they go the rest of their life with only porn to satisfy them?
It depends on the person. I feel like most people would be unsatisfied with only porn, but that’s just anecdotal.
I honestly don’t think AI-generated CSAM is something the world needs produced. It’s not contributing to society in any meaningful way, and pedophiles who don’t offend or hurt children need therapy, while the ones who do need jail time (and therapy, but I’m in the US so that’s a whole other thing). They don’t ‘need’ porn.
My own personal take is that giving pedophiles csam that’s AI generated is like showing alcohol ads to alcoholics. Or going to the strip club if you’re a sex addict. It’s probably not going to lead to good outcomes.
deleted by creator
You definitely have a good point. I was just thinking hopefully to reduce harm but obviously I don’t want it to be legal.
“Because fuck that” is not a great argument.
deleted by creator
by the same metric, i wonder why not let convicted murderers and psychopaths work at slaughterhouses
On the other hand, are people who work at slaughterhouses more likely to be murderers and psychopaths?
perhaps, but I said convicted.
Do we know that AI child porn is bad? I could believe it would get them in the mood for the real thing and make them do it more, and I could believe it would make them go “ok, itch scratched”, and tank the demand for the real stuff.
Depending on which way it goes, it could be massively helpful for protecting kids. I just don’t have a sense for what the effect would be, and I’ve never seen any experts weigh in.
Do we know that AI child porn is bad? I could believe it would get them in the mood for the real thing and make them do it more, and I could believe it would make them go “ok, itch scratched”, and tank the demand for the real stuff.
From bits/articles I’ve seen here and there over the years about other things that are kind of in the same category (porn comics with child characters in them, child-shaped sex dolls), the latter seems to be more the case.
I’m reminded of when people were arguing that when Internet porn became widespread, the incidence of rape would go through the roof. And then literally the opposite happened. So…that pushes me toward hypothesizing that the latter is more likely to be the case, as well.
In Australia cartoon child porn is enforced in the same way as actual child porn. Not that it answers your question but it’s interesting.
I’d imagine for your question “it depends”: some people who would have acted on their urges may get their jollies from AI child porn, while others who had never considered being pedophiles might find AI child porn (assuming it were legal) and realise it’s something they were into.
I guess it may lower the production of real child porn which feels like a good thing. I’d hazard a guess that there are way more child porn viewers than child abusers.
I seem to remember Sweden did a study on this, but I don’t really want to google around to find it for you. Good luck!
Real question: “do we care if AI child porn is bad?” Based on most countries’ laws, no.
I’d like to know what psychologists think about it. My assumption is the former, it escalates their fantasizing about it and makes them more likely to attack a child.
There seems to be no way to conduct that experiment ethically, though.
deleted by creator
deleted by creator
In Canada even animated cp is treated as the real deal
There’s like a lot of layers to it.
- For some, it might actually work in the opposite direction, especially if paired with the wrong kind of community around it. I used to moderate anime communities, and the number of loli fans wanting to lower the age of consent to 12 or even lower was way too high, yet they called anyone opposed to loli the “real predators”, because they liked their middle-school-tier arguments (which just further polarized the fandom when the culture wars started).
- Even worse, more realistic depictions might actually work against that goal, while with (most) loli stuff, at least it’s obviously drawn.
- An often overlooked issue is data laundering. Just call your real CP AI-generated, or add some GenAI artifacts to your collection. Hungary bans overly realistic drawings and paintings of that kind, because people did this even with traditional means, creating tracings as realistic as possible (calling CP “artistic nudes” didn’t work out here, at least).
Wikipedia seems to suggest research is inconclusive whether consuming CSAM increases the likelihood of committing abuse.
deleted by creator
You’re missing the point. They don’t care what’s more or less effective for helping kids. They want to punish people who are different. In this case nobody is really going to step up to defend the guy for obvious reasons. But the motivating concept is the same for conservatives.
Depending on which way it goes, it could be massively helpful for protecting kids
Weeeelll, only until the AI model needs more training material…
That’s not how it works. The “generative” in “generative AI” is there for a reason.
You need more training material to train a new AI. Once the AI is there, it can produce as many pictures as you want. And you can get good results even with models that can be run locally on a regular computer.
I’m not sure if that is how it would work? But this is exactly the kind of thinking we need. Effects: intended plus unintended equals ???
Hey, remember that terrible thing everyone said would happen?
It’s happening.
Lolicon fans in absolute shambles.
CANNED IN BANADA
If this thread (and others like it) have taught me anything, it’s that facts be damned, people are opinionated either way. Nuance means nothing, and it’s basically impossible to have a proper discussion when it comes to wedge issues or anything that can be used to divide people. Even if every study 100% said AI-generated CSAM always led to a reduction in actual child harm, reduced recidivism, and never needed any real children as training material, the comments would still look pretty much the same. If the studies showed the exact opposite, the comments would also be the same. Welcome to the internet. I hope you brought aspirin.
My man. Go touch some grass. This place is no good. Not trying to insult you but it’s for your mental health. These Redditors aren’t worth it.
Actually. I needed that. Thanks. Enough internet for me today.
A lot of the places I’ve been to start conversations have been hostile and painful. If there is one thing that stands out as holding Lemmy back, it’s the shitty culture this place can breed.
I’m convinced that a lot can be inferred from the type of reactions and the level of hostility one might receive when trying to present a calm and nuanced argument on a wedge topic, even if it’s not always enjoyable. At the very least, it shows others that they may not be interacting with rational actors when one gets their opponents to go full mask-off.
Agreed. And I’ve had my share of “being a dick” on the Internet here. But by the end of the interaction I try to at least jest. Or find a middle ground…I commented on a Hexbear instance by accident once…
Lol, my secondary account is a hexbear account, but I won’t hold it against you.
This time.
a single tear falls from my eye I don’t understand this place lol
I was hoping to comment on this post multiple times today after I initially lost track of it, and now I see you’ve covered about 75% of what I wanted to say. I’ll post the rest elsewhere out of politeness. Thank you.
To be clear, I am happy to see a pedo contained and isolated from society.
At the same time, this direction of law is something that I don’t feel I have the sophistication to truly weigh in on, even though it invokes so many thoughts for me.
I hope we as a society get this one right.
We never do.
He wasn’t arrested for creating it, but for distribution.
If dude just made it and kept it privately, he’d be fine.
I’m not defending child porn with this comment.
deleted by creator
Now I’m imagining you making child porn
Show me multiple (let’s say 3+) small-scale independent academic studies, or 1-2 comprehensive large academic studies, that support one side or another, and I may be swayed. Otherwise, I think all that is being accomplished is that one guy’s life is getting completely ruined, for now and potentially forever, over some fabrications; as a result he may or may not get help, but I doubt he’ll be better off.
My understanding was that CSAM has its legal status specifically because there are victims hurt by these crimes, and possession supports a broader market that facilitates said harm to those victims. It’s not as easy to make a morality argument (especially a good one) for laws that affect everybody when there are no known victims.
Dude is gonna get fucked, but someone had to be the test case. Hopefully this gets some legal clarity.
Are you stupid? Something has to be in the training model for any generation to be possible. This is just a new way to victimize kids.
So are you suggesting they can get an unaltered facial ID of the kids in the images? Because that makes it regular CSAM with a specific victim (as mentioned), not an AI-generated illustration.
No, I am telling you csam images can’t be generated by an algorithm that hasn’t trained on csam
That’s patently false.
I’m not going to continue to entertain this discussion; instead I’ll just direct you to the multiple other people who have already effectively disproven this argument and similar arguments elsewhere in this post’s discussion. Enjoy.
Also, if you’d like to see how the corn dog comment is absurd and wrong, go look up my comment.
Sure thing bud. Sure thing 🙄
Not necessarily, AI can do wild things with combined attributes.
That said, I do feel very uncomfortable with the amount of defense this guy is getting; he was distributing this to people. If he were just generating fake images of fake people using legal training data in his own house for his own viewing, that would be a different story. The number of people jumping in front of the bullet for this guy when we don’t really know the details is the larger problem.
I must admit, the number of comments defending AI images as not child porn is truly shocking.
In my book, sexual images of children are not okay, AI generated or otherwise. Pedophiles need help, counseling and therapy. Not images that enable something I think is not acceptable in society.
I truly do believe that AI images should be subject to same standards as regular images in what content we deem appropriate or not.
Yes, this can be used to wrongfully prosecute innocent people, but it does not mean that we should freely allow AI-CP.
I generally think if something is not causing harm to others, it shouldn’t be illegal. I don’t know if “generated” CSAM causes harm to others though. I looked it up and it appears the research on whether CSAM consumption increases the likelihood of a person committing child abuse is inconclusive.
You’re not kidding.
The only possible way I could see a defense is if it were something like “AI CSAM results in a proven reduction of actual CSAM”.
But. The defenses aren’t even that!
They’re literally saying that CSAM is okay. I’m guessing a lot of these same comments would argue that deepfakes are okay as well. Just a completely fucked up perspective.
Cant speak for others but I agree that AI-CP should be illegal.
The question is how do we define the crime with our current laws? It does seem like we need a new law to address AI images. Both for things like AI-CP, revenge porn, and slanderous/misleading photos. (The Communist Harris and Trump with black people photos)
Where do we draw the line?
How do we regulate it?
Forced watermarks/labels on all tools?
Jail time? Fines?
Forced correction notices? (Doesn’t work for the news!)

This is all a slippery slope, but what I can say is I hope this goes to court. He loses. Appeals. Then it goes all the way up to the federal level so we can have a standard to point to.
The shit’s wrong.
Step one in fixing shit.

Iirc he was prosecuted under a very broad “obscenity” law, which should terrify everyone.
the number of people willing to go to bat for this on Lemmy is truly disturbing. what do they think these AI models are trained on?
Not necessarily trained on CP; it could be trained on images of children (already fucked up: who gave them that permission?) and pornography.
The article pointed out that stable diffusion was trained using a dataset containing CSAM
If no children were involved in the production of porn, how is it pedophilic? That’s like claiming a picture of water has the same properties as water.
It’s always the florida men
Futurism - News Source Context
Information for Futurism:
MBFC: Pro-Science - Credibility: High - Factual Reporting: Mostly Factual - United States of America
Good. I do not think society owes pedos a legal means to create CSAM.
Pretty sure the training data sets are CSAM.
Edit, to those downvoting me and not reading the article:
A 2023 study from Stanford University also revealed that hundreds of child sex abuse images were found in widely-used generative AI image data sets.
“The content that we’ve seen, we believe is actually being generated using open source software, which has been downloaded and run locally on people’s computers and then modified,” Internet Watch Foundation chief technology officer Dan Sexton told The Guardian last year. “And that is a much harder problem to fix.”
One doesn’t need to browse AI-generated images for longer than 5 seconds to realize it can generate a ton of stuff that you can know for absolute certain wasn’t in the training data. I don’t get why people insist on the narrative that it can only output copies of what it has already seen. What’s generative about that?
If you took a minute to read the article:
A 2023 study from Stanford University also revealed that hundreds of child sex abuse images were found in widely-used generative AI image data sets.
“The content that we’ve seen, we believe is actually being generated using open source software, which has been downloaded and run locally on people’s computers and then modified,” Internet Watch Foundation chief technology officer Dan Sexton told The Guardian last year. “And that is a much harder problem to fix.”
So not only do the online models have CSAM, but people are downloading open source software and I’d be very surprised if they weren’t feeding it CSAM
That doesn’t dispute my argument; generative AI can create images that are not in the training data. It doesn’t need to know what something looks like as long as the person using it does and can write the correct prompt for it. The corn dog I posted below is a good example. You can be sure that wasn’t in the training data yet it was still able to generate it.
Online models since that discovery have scrubbed the offending sources and retrained, as well as added safeguards to their models to try and prevent it.
If that’s the basis for making it illegal, then all AI is illegal.
Which…eh maybe that’s not such a bad idea
Since that study, every legit AI model has removed said images from their datasets and all models trained afterwards no longer include knowledge about those source images.
I know at least one AI model specifically excluded photos of underage people altogether, to minimize the possibility that this can happen even by accident. Making CSAM from an AI model is something anyone determined and patient enough can do with a good model trainer and a dataset of source images that have the features they want, even if the images of minors are completely innocent.
Making CSAM with an AI model is a deliberate act in almost every case… and in this case, he was arrested for distributing these images, which is super illegal for obvious reasons.