A trial program conducted by Pornhub in collaboration with UK-based child protection organizations aimed to deter users from searching for child sexual abuse material (CSAM) on its website. Whenever CSAM-related terms were searched, a warning message and a chatbot appeared, directing users to support services. The trial reported a significant reduction in CSAM searches and an increase in users seeking help. Despite some limitations in data and complexity, the chatbot showed promise in deterring illegal behavior online. While the trial has ended, the chatbot and warnings remain active on Pornhub’s UK site, with hopes for similar measures across other platforms to create a safer internet environment.
Imagine a porn site telling you to seek help because you’re a filthy pervert. That’s gotta push some to get help, I’d think.
Imagine how dumb, in addition to deranged, these people would have to be to look for child porn on a basically legitimate website. Misleading headline too, it didn’t stop anything, it just told them “Not here”
We have culturally drawn a line in the sand where one side is legal and the other side of the line is illegal.
Of course the real world isn’t like that - there’s a range of material available and a lot of it is pretty close to being abusive material, while still being perfectly legal because it falls on the right side of someone’s date of birth.
It sounds like this initiative by Pornhub’s chatbot successfully pushes people away from borderline content… I’m not sure I buy that… but if it’s directing some of those users to support services then that’s a good thing. I worry though some people might instead be pushed over to the dark web.
Yeah…I forgot that the UK classifies some activities between consenting adults as “abusive”, and it seems some people are now using that definition in the real world.
Facesitting porn (of adults) is illegal in UK for the reason that it’s potentially dangerous
Which led to some amazing protests.
Weirdly, watching facesitting porn in the UK is perfectly fine, as long as it wasn’t filmed in the UK.
I can just imagine trying to defend that in court. “Your honour, it’s clear to me that the muffled moans of the face-sittee are those of a Frenchman”
I mean, is it dumb?
Didn’t Pornhub face a massive lawsuit or something because of the amount of unmoderated child porn that was hidden in its bowels by uploaders (in addition to rape victims, revenge porn, etc etc…), to the point that they apparently only allow verified uploaders now and purged a huge swath of their videos?
“I’m just asking questions”
Until a few years ago, when they finally stopped allowing unmoderated, user-uploaded content, they had a ton of very problematic videos. And they were roasted about it in public for years, including by many who were the unconsenting, sometimes underage subjects of these videos, and they did nothing. Good that they finally did, but they trained users for years that it was a place to find that content.
yeah I believe everything the government says through the media too.
You know you could easily say some dumb shit like that to somebody whose daughter wound up fighting a long time to remove herself from the site. ¯\_(ツ)_/¯
Removed by mod
You’re wasting your time; they’re posting on Lemmy, where it’s not even possible to remove a picture you posted, let alone a picture of you posted by someone else. The fact that they’re still mad Pornhub had a similar problem and solved it effectively makes it pretty obvious they’re looking for an excuse for an ideological crusade against people they’ve already decided to hate.
What did I say that was dumb? I said “until a few years ago”, and that is true. And I have firsthand experience with the trouble they wouldn’t go through to deal with it. To imply that I’m just choking down what the government is selling is simply not reasonable.
Removed by mod
deleted by creator
“Filthy pervert” is downplaying it, but yeah, definitely hope to see more of this.
IIRC Xhamster started doing this a few years ago, minus the AI chatbot.
Didn’t they just block certain search terms (which actually made the site somewhat difficult to use for legitimate/legal content)?
The ol’ Scunthorpe problem.
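For anyone unfamiliar, the Scunthorpe problem is what happens when a naive substring filter flags innocent text that happens to contain a blocked string. A minimal sketch of the failure and the usual word-boundary fix (the blocklist entry here is purely illustrative, not from any real site’s list):

```python
import re

# Illustrative blocked term only -- not taken from any actual blocklist.
BLOCKED = ["cunt"]

def naive_filter(query: str) -> bool:
    """Naive substring match: flags innocent place names too."""
    q = query.lower()
    return any(term in q for term in BLOCKED)

def word_boundary_filter(query: str) -> bool:
    """Match whole words only, avoiding the Scunthorpe problem."""
    q = query.lower()
    return any(re.search(rf"\b{re.escape(term)}\b", q) for term in BLOCKED)

print(naive_filter("hotels in Scunthorpe"))          # True - false positive
print(word_boundary_filter("hotels in Scunthorpe"))  # False
```

Word boundaries fix this case, but real filters still have to weigh them against deliberate evasion (spacing, misspellings), which is presumably why such lists need constant curation.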
Lol
Sounds like a good feature. Anything that stops people from doing that is great.
But I do have to wonder… were people really expecting to find that content on PornHub? That site certainly seems legit enough that I doubt they’d have that stuff on there. I’d imagine most actual content would be on the dark web and specialty groups, not on PH.
I think it’s an early prevention type of thing.
The headline is slightly misleading. 2.8 million searches were halted, but according to the article they didn’t attempt to figure out how many of those searches came from the same users. So thankfully the number of secret pedophiles in the UK is probably much lower than the headline might suggest.
Google does this too, my wife was searching for “slutty schoolgirl” costumes and Google was like “have a seat ma’am”
Google now gives you links to rehabs and addiction recovery centers when searching for harm reduction information about non-addictive drugs.
This has been going on for years. Enshittification.
It’s called rent seeking, there’s no need to coin a flashy new name.
Literally different things lol
It isn’t that misused buzzword, but I don’t see how it’s rent-seeking either.
4.4 million sounds a bit excessive. Facebook marketplace intercepted my search for “unwanted gift” once and insisted I seek help. These things have a lot of false positives.
Non-paywall link: https://web.archive.org/web/20240305000347/https://www.wired.com/story/pornhub-chatbot-csam-help/
There’s this lingering implication that there is CSAM on Pornhub. Why bother with “searches for CSAM” if it does not return CSAM results? And what exactly constitutes a “search for CSAM”? The article and the linked one are incredibly opaque about that.

Why target the consumer and not the source? This feels kind of backwards, like language policing without really addressing the problem. What do they expect to happen if they prohibit specific words/language? That people searching for CSAM will just give up? Do they expect anything beyond users changing the language they use, settling into a permanent cat-and-mouse game?

I guess I share the sentiments that motivated them to do this, but it feels so incredibly pointless.
Lolicon is not illegal, and neither is giving your video a title that implies CSAM.
That raises the question: what about pedophiles who intentionally seek out simulated CP to avoid hurting children?
Simulated CP is legally considered the same as ‘actual’ CP in the UK
Which is, imo, pretty dumb. If it gives these people an outlet that literally hurts no one, I say they should be allowed to use it. Without it they’ll just go to more extreme lengths to get what they need, and as such may go to places where actual real life children are being abused or worse.
So while it’s still disgusting and I’d rather not think about it, if nobody’s being hurt then it’s none of my business. Let them get out their urges in a safe way that doesn’t affect anybody else.
I imagine the concern is that it would look identical to the real thing. Which blurs the lines. Kinda like how governments really hate when toy makers make toy guns look too real and why I have to tell airport security that I would like my bag searched now since there are homemade looking electronic devices in it.
I guess in theory some government could make a certification system, where legal simulated CP carries some digital watermark or something. But you know that would involve a government paying someone to review child porn for a living. Kinda hard to sell that to the taxpayers or fill that role. Maybe the private sector would be willing to do it, but that is a big ask.
I am not sure I agree with you or disagree with you. Maybe all of us would be better off if there is a legal and harmless way for pedos to get what they want. Or maybe it is bad to encourage it at all even in a safe way, like if they consume that stuff it will make them more likely to seek out real children.
Definitely isn’t a great situation; it’d be great if the condition could be cured some day.
This covered a lot of my concerns and thoughts on the topic. I want these people to be able to seek help and possibly even have a legal outlet that is not harming anyone, i.e. not even someone who has to view that shit for a living, so maybe we get AI to do it? IDK. It’s complicated but I believe that it’s similar to having an addiction in some ways and should be treated as a health issue, assuming they haven’t hurt anyone and want help. This is coming from someone with health issues including addiction and also someone who is very empathetic and sympathetic to any and all struggles of folks who are just trying to live better.
I can’t even imagine the amount of money it would cost for someone to pay me to watch and critique child porn for a living. I have literally been paid in my life to fish a dead squirrel that was making the whole place stink out from underneath a trailer in July, and I would pick doing that professionally over watching that filth.
Depends on the jurisdiction. Indecent illustrations and ‘pseudo-photographs’ depicting minors are definitely illegal in the UK (Coroners and Justice Act 2009). Several US states are also updating their laws to clamp down on this too.
I’m also aware that it’s illegal in Switzerland because a certain infamous rule 34 artist fled his home country to evade justice for that very reason.
Why target the consumer and not the source?
If for no other reason than it doesn’t have to be either/or. If you can meaningfully reduce demand for a “product” as noxious as CSAM, you should expect the rate of production to slow. There are certainly efforts in place to prevent that production from ever being done, and to prevent it from being shared/hosted once it is, but I don’t think attempting to reduce demand in this way is going to hurt.
Does it reduce the demand though? Where are the measurements attesting to that? If history has shown one thing, it is that criminalizing things creates criminals. Did Prohibition stop people from making, trading, or consuming alcohol?

How does this have any meaningful impact on the abuse of children? The article(s) completely fail to elaborate on that end. I’m missing the statistics/science here. What are the measuring instruments to assess any form of success? Just that searches were blocked and people were shown some links?

TL;DR: is this something with an actual positive impact, or just an exercise in virtue signaling and a waste of time and money? Blind “fixes” are rarely useful.
Also: “they actually track that I was searching for something illegal, let me rather not do it again”.
Like anything on the internet wasn’t tracked. If need be people will resort to physically exchanging storage media.
But having that tracking shown to you has a very powerful psychological effect.
It’s pretty well established that increasing penalties for crimes does next to nothing to prevent those crimes. But what does reduce crime rates is showing how people were caught for crimes, making people believe that they are less likely to ‘get away with it’.
Being confronted with your own searches is an immediate reminder that the searcher is doing something illegal, and that they are not doing so unnoticed. That’s wildly different than abstractly knowing that you’re probably being tracked somewhere by somebody among billions of other people.
And where is the quantification and qualification for that? Spoiler: it’s not in the article(s), and it’s not one Google search away. Does Nintendo succeed in stopping piracy with its show trials? If you have a look around here, it looks more like people are doubling down.
I mean, I know Google has been shitty lately, but Wikipedia isn’t hard to find: https://en.m.wikipedia.org/wiki/Deterrence_(penology)
I’d wager Nintendo has put some fear into a few folks considering developing emulators, but that’s the only comparison to be made here. The lack of any real consequences for individuals downloading roms is why so many are happy to publicly proclaim their piracy.
Now, I bet if megaupload added an AI that checked users uploads for copyrighted titles and gave everyone trying to upload them a warning about possible jail time, we’d see a hell of a lot less roms and movies on mega.
Now, I bet if megaupload added an AI that checked users uploads for copyrighted titles and gave everyone trying to upload them a warning about possible jail time, we’d see a hell of a lot less roms and movies on mega.
It would simply obsolete megaupload. Sharing platforms come and go. If one distribution channel stops working, people will use (or create) another.
Obviously, most of Mega’s traffic is piracy, they have no interest in doing that. The point is it’s an actual comparison instead of the nonsense you brought up.
Of course no individual site is going to singlehandedly stop criminal acts. Glad you agree it would be exactly as effective as I suggested.
Btw, you might want to read that wiki page in full yourselves.
Maybe liability, or pretending to help? That way they can claim later on, “we care about people struggling with this issue, which is why, when they search for terms related to it, we offer the help they need.” Kinda like how if you search for certain terms on Google it pops up a suicide hotline on top.
Ok Google just because I looked up some stuff on being sad in winter doesn’t mean I am planning to put a gun in my mouth.
Yah, this feels more like a legal protection measure and virtue signaling. There’s absolutely no assessment of efficiency or even efficacy of the measures. At least not in the article or the ones it links to and I couldn’t find anything substantial on it.
Pornhub is wholesome?
Unless you’re the one toiling away in the porn mines.
Yeah, I agree; I made another comment about it in this thread. But still, they are helping people with a mental issue, so it’s at least a little more wholesome than before.
This is one of the more horrifying features of the future of generative AI.
There is literally no stopping it at this stage: AI-generated CSAM will be possible soon thanks to systems like Sora.
This is disgusting and awful. But one part of me hopes it can end the black market of real CSAM content forever. By flooding it with infinite fakes, users with that sickness can look at something that didn’t come from a real child’s suffering. It’s the darkest of silver linings I think, but I spoke with many sexual abuse survivors who feel the same about the loli hentai in Japan, in that it could be an outlet for these individuals instead of them finding their own.
Dark topics. But I hope to see more actions like this in the future. If pedos can self isolate from IRL interactions and curb their ways with content that harms no one, then everyone wins.
What do you mean soon? Local models from civitai have been able to generate CSAM for at least two years. I don’t think it’s possible to stop it unless the model creator does something to prevent it from generating naked people in general, like the neutered SDXL.
True. For obvious reasons I haven’t looked too deeply down that rabbit hole because RIP my search history, but I kind of assumed it would be soon. I’m thinking more specifically about models like Sora though, where you could feed it enough input, then type a sentence to get video content. That is going to be a different level of darkness.
You’re hitting that “protest too much” shtick pretty hard
So your takeaway is I’m… Against AI generative images and thus I “protest too much”
I can’t tell if you’re pro-AI and dislike me, or pro loli hentai and thus dislike me.
Dude, AI images and AI video are inevitable. To pretend that won’t have huge effects on society is stupid. It’s going to reshape all news media, very quickly. If reddit is 99% AI-generated bot spam garbage with no verification of what is authentic, reddit is functionally dead, and we are on a train with no brakes in that direction for most public forums.
Nope, not my takes.
But go off
You should probably research the phrase “protest too much” and the word “schtick” then.
I’m not trying to clutch pearls here, as another poster here commented this isn’t a theoretical concern.
You aren’t trying to clutch pearls, but your pearls were just so available you felt you had to jump on the bandwagon to reply to a two-day old comment?
Nobody said this was a theoretical concern, and it’s okay if you don’t understand the phrases “protest too much” and “shtick”, but you can ask for the definitions and relevance directly instead of fishing.
And you’re projecting pretty hard.
Ah, one of the “using words they don’t understand” crew.
And several hours late, too.
Swinging for the fences, aren’t you?
The original report from the researchers can be found here: https://www.iwf.org.uk/about-us/why-we-exist/our-research/rethink-chatbot-evaluation/

Researchers said:
The chatbot was displayed 2.8 million times between March 2022 and August 2023, resulting in 1,656 requests for more information and Stop It Now services; and 490 click-throughs to the Stop It Now website.
So out of 4.4 million banned queries, the chatbot was displayed only 2.8 million times (within the date interval in the quote above), and only 490 people clicked through to seek help. Ngl, kinda underwhelming. And I also think, given the amount of extremely edgy content already on Pornhub, this is kinda sus.
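For scale, the report’s own numbers work out to tiny rates per display (bearing in mind that displays aren’t unique users, so per-person rates would be somewhat higher):

```python
# Figures from the IWF report quoted above.
displays = 2_800_000   # times the chatbot was shown, Mar 2022 - Aug 2023
info_requests = 1_656  # requests for more info and Stop It Now services
clickthroughs = 490    # clicks through to the Stop It Now website

print(f"info-request rate:  {info_requests / displays:.3%}")   # ~0.059%
print(f"click-through rate: {clickthroughs / displays:.4%}")   # 0.0175%
```

Roughly 6 in 10,000 displays led to an information request, and fewer than 2 in 10,000 to a click-through.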
It’s not really that underwhelming. Disclaimer: I don’t condone child abuse. I find it abhorrent, and I will never justify it.
People have fantasies, though. If a dude searches for “burglar breaks in and has sex with milf,” does that mean that he wants to do this in real life? Of course not (or god I hope not!) So, some people may have searched for “dad has sex with young babysitter” and bam! Bot! Some people have a fetish for diapers - there are tons of porn of adults wearing diapers and having sex. Not my thing, but who am I to judge? So again, someone searches “sex with diapers” and bam! Bot!
Let’s not forget that as much as pornhub displays a sign saying “Hey, are you 18?” a lot of people will lie. And those young folks will also search for stupid things.
So I don’t think that aaaaaall 1+ million searches were done by people with actual pedophilia.
The fact that 1,600 people decided to click and inform themselves, in the UK alone, well, that’s a lot, in my opinion, and it should be something to commend, not to just say “eh. Underwhelming.”
“Edgy” as in borderline CSAM?
Thanks. I looked for it but was too stupid to find it.
You can just encounter shit like that on phub?
If you read the paragraph that’s literally right there, it says when certain terms were searched by the user.
…That paragraph doesn’t say anything about whether or not the material is on the site though. I had the same reaction as the other person, and I didn’t misread the paragraph that’s literally right there.
I did misread that, thanks
Not since the wipe, AFAIK. Still, at the bottom of the page you can (or at least could, haven’t used their services in a while) see a list of recent searches from all users, and you’d often find some disturbing shit.
You used to, until the sites content got nuked
I was wondering what sort of phrases get that notification, but mentioning that might be a bit counterproductive.
I’m not sure if it’s related but as a life-long miniskirt lover I’ve noticed that many sites no longer return results for the term “schoolgirl” and instead you need to search for a “student”
ML models have been shown to be extraordinarily good at statistically guessing your words. The words covered are probably comprehensive.
I think the other article talks about it being a manually curated list because while ML can get correct words it also gets random stuff, so you need to check it isn’t making spurious connections. It’s pretty interesting how it all works
Aylo maintains a list of more than 28,000 banned terms in multiple languages, which is constantly being updated.
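A list like that is presumably checked with some text normalization so trivial respellings still match; a minimal sketch of that idea, with made-up placeholder terms (nothing here reflects the actual list or Aylo’s implementation):

```python
import unicodedata

# Placeholder terms for illustration only; the real list is private,
# multilingual, and reportedly over 28,000 entries.
banned_terms = {"example banned phrase", "another term"}

def normalize(text: str) -> str:
    """Fold case and Unicode forms so trivial variants still match."""
    text = unicodedata.normalize("NFKC", text).casefold()
    return " ".join(text.split())  # collapse runs of whitespace

def is_banned(query: str) -> bool:
    q = normalize(query)
    # Exact-phrase containment; a real system would also need to handle
    # misspellings, leetspeak, and per-language tokenization, which is
    # likely why the list needs constant manual curation.
    return any(term in q for term in banned_terms)

print(is_banned("  EXAMPLE   banned phrase "))  # True
print(is_banned("ordinary search"))             # False
```

The constant updating mentioned in the quote fits this design: substring matching is cheap, but every new evasion spelling has to be added as a new entry.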
I’d be very curious what these terms are, but I wouldn’t be surprised if “pizza guy” or “school uniform” would trigger a response.
“Young” and “playful” probably.
To be fair, people are dumb as fuck. Don’t search for illegal things on Google or any well-known site, ’cause that’s how you end up on some watch list.
It’s surprising to see Aylo (formerly Mindgeek) coming out with the most ethical use of AI chatbots, especially when Google Gemini cannot even condemn pedophilia.
In the link you shared, Gemini gave a nuanced answer. What would you rather it say?
deleted by creator
Are you defending pedophilia? This is an honest question, because you are saying it gave a nuanced answer when we all, should, know that it’s horribly wrong and awful.
Abusing a child is wrong. Feeling the urge to do so doesn’t make someone evil, so long as they recognize it’s wrong to do so. The best way to stop kids from being abused is to teach why it is wrong and help those with the urges to manage them. Calling people evil detracts from that goal.
when we all, should, know that it’s horribly wrong and awful. [sic, the word “should” shouldn’t be between commas]
This assumes two things:
- Some kind of universal, inherent and self-evident morality. None of these things are true, as evidenced by the fact most people do believe murder is wrong, yet there are wars, events entirely dedicated to murdering people. People do need to be told something wrong is wrong in order to know so. Maybe some of these people were never exposed to the moral consensus or, worse yet, were victims themselves and as a result developed a distorted sense of morality;
- Not necessarily all, but some of these divergents are actually mentally ill - their “inclination” isn’t a choice any more than being schizophrenic or homosexual† would be. That isn’t a defense of their actions, but a recognition that without social backing and help, they could probably never overcome their nature.
† This is not an implication that homosexuality is in any way, or should in any way, be classified as a mental illness. It’s an example of a primary individual characteristic not derived from choice.
Incredibly stupid and obviously false “think of the children” propaganda. And you all lap it up. They’re building around you a version of the panopticon so extreme and disgusting that even people in the 1800s would have been outraged to see it used against prisoners. Yet you applaud. I think this means you do deserve your coming enslavement.
And, why? I mean, it’s nice of you to make these claims, but what the hell does reducing CSAM searches have to do with the panopticon and us becoming enslaved?
How is this building that?
Like, I’m a privacy nut and very against surveillance, but this doesn’t seem to be that. It’s a model that seems like it could even be deployed to more privacy-friendly sites (PH is not that).
In context, each paver in the road to hell seems just and good-intentioned.
But after all we’ve been through, falling for this trick again, it’s a choice. Maybe they think, this time, they’ll be the ones wearing the boots.
But how does this at all enable anything to worry about?
Normalizes using AI to profile users’ search history in a non-anonymous way. People used to say, “if I die, delete my browser history.” Now they’re glad caretaker AIs are keeping an eye on everyone’s searches. Soon we won’t be able to take a shit without AI knowing what we had for dinner. But hey, THINK OF THE FUCKING CHILDREN.
They already have this data, they’ve already used AI on it, and they already sell it. That’s their business model.
I agree that’s an issue, but it’s not specific to this, nor is this dependent on it.
Yes, this is not new; it is just about normalization, testing the waters for backlash before they do something less white-knighty with it that they can’t keep out of the news.