Not sure if someone else has brought this up, but this is because these AI models are massively biased towards generating white people, so as a lazy “fix” they randomly add race tags to your prompts to get more racially diverse results.
Exactly. I wish people had a better understanding of what’s going on technically.
It’s not that the model itself has these biases. It’s that the instructions given to it are heavy-handed in trying to correct for an inversely skewed representation bias.
So the models are literally given instructions like “if generating a person, add a modifier to evenly represent various backgrounds like Black, South Asian…”
Here you can see that modifier being reflected back when the prompt is shared before the image.
It’s like an ethnicity Mad Libs the model is being instructed to fill out whenever it generates people. To make that concrete, a rough sketch is below.
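Purely as an illustration of the shape of the thing (this is a hypothetical sketch, not the actual system prompt or anyone's real code), the setup amounts to something like:

```python
# Hypothetical sketch only -- not the real system prompt of any product.
# The image model never sees just your prompt; it sees your prompt wrapped
# in standing instructions along these lines.
SYSTEM_INSTRUCTION = (
    "If the request involves generating a person, add a modifier so that "
    "various backgrounds (e.g. Black, South Asian, East Asian, Hispanic) "
    "are evenly represented across the generated images."
)

def build_request(user_prompt: str) -> str:
    # The instruction is prepended to every request, whether or not it
    # makes historical or contextual sense for that prompt.
    return f"{SYSTEM_INSTRUCTION}\n\nUser prompt: {user_prompt}"

print(build_request("a medieval English king on his throne"))
```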
I mean, I don’t think it’s an easy thing to fix. How do you eliminate bias in the training data without throwing out a substantial percentage of your training data? That would significantly hinder performance.
It’s horrifically bad, even without comparing it against other LLMs. I asked it for photos of actress and model Elle Fanning (aged 25 or so) on a beach, and it accused me of seeking CSAM… That’s an instant never-going-to-use-again for me - mishandling that subject matter in any way is not a “whoopsie”.
My purpose is to help people, and that includes protecting children. Sharing images of people in bikinis can be harmful, especially for young people. I hope you understand.
That sounds more like “what shall we ever do if children are allowed to see bikinis?”
Aaaaaand now you’re on a list through no fault of your own 😬
This is fucking ridiculous. This AI is the worst of them all. I don’t mind it when they subtly try to insert some diversity where it makes sense but this is just nonsense.
deleted by creator
I mean the companies behind these AI things
deleted by creator
I don’t know who “them” is here. I thought from the context it was obvious that I meant whoever is managing these AIs. I guess I could’ve been clearer.
But what, do you think they’re working behind the scenes to insert the word “woke” into every search by default or something?
I mean, they literally are inserting stuff into the prompts to make the results more diverse? It’s not some hidden thing, but rather a fix for the lack of diversity in the training data. But obviously here they’ve “overcorrected” beyond all sense.
Generally on the internet, when someone puts “they” or “them” in quotes, they’re referring to Jewish people.
It’s a dog whistle.
This is usually the type of thing that you should clarify because… well, you seem like one of “them” even if you aren’t ;D
So they were saying I’m Jewish? Why?
No idea. I don’t fully understand why any of these dog whistles are pulled out; I just know what they are. Another big one is triple parentheses ((( ))), meaning the same thing.
Yes, who can forget about Henry the Magnificent and his onion hat?
It’s literally instructed to do Mad Libs with ethnic identities to diversify prompts for images of people.
You can see how it’s just inserting the ethnicity right before the noun in each case.
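In pseudocode terms it reads like a fill-in-the-blank (again, just a hypothetical sketch of the effect, not any vendor’s actual implementation):

```python
import random

# Hypothetical sketch of the "ethnicity Mad Libs" effect described above --
# not any vendor's actual code. The person nouns and ethnicity list here
# are made up for illustration.
ETHNICITIES = ["Black", "South Asian", "East Asian", "Indigenous American"]

def diversify(prompt: str, person_nouns=("king", "pope", "soldier", "viking")) -> str:
    """Splice a randomly chosen ethnicity directly in front of the person noun."""
    for noun in person_nouns:
        if noun in prompt:
            return prompt.replace(noun, f"{random.choice(ETHNICITIES)} {noun}", 1)
    return prompt

print(diversify("a portrait of a medieval European king"))
# -> e.g. "a portrait of a medieval European South Asian king"
```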
It was a very poor alignment strategy. This already blew up for Dall-E. Was Google not paying attention to their competitors’ mistakes?
Wonder if you would get white rulers if you asked for historical leaders in Africa
Edit:
The prompt was “make me a funny picture I can bait people into arguing over”
It is ridiculous. However, how can we know you did not first instruct it to only show dark skin? Or that you didn’t select these from many examples that showed something else?
This issue is widely reported and you can check the AI for yourself to confirm.
I know that the 23-year reign of Renaissance Ruler is mired in controversy, but you have to admit that without her, England would never have conquered Redding.
Just current BBC live-action casting policy, believe it or not.
You can get around it by clicking the drafts button. It shows you the images that were generated as drafts but not actually published to you as results.
And how do we know you didn’t crop out an instruction asking for diversity?
Either that, or it’s a side effect of trying to reduce training data bias.