• gmtom@lemmy.world · 92 points · 10 months ago

    Not sure if someone else has brought this up, but this is because these AI models are massively biased towards generating white people, so as a lazy “fix” they randomly add race tags to your prompts to get more racially diverse results.

    • kromem@lemmy.world · 17 points · 10 months ago

      Exactly. I wish people had a better understanding of what’s going on technically.

      It’s not that the model itself has these biases. It’s that the instructions given to it are heavy-handed in trying to correct for a representation bias that skews the other way in the training data.

      So the models are literally instructed things like “if generating a person, add a modifier to evenly represent various backgrounds like Black, South Asian…”

      Here you can see that modifier being reflected back when the prompt is shared before the image.

      It’s like an ethnicity Mad Libs that the model is being instructed to fill out whenever it generates people.
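
      The mechanism described above can be sketched in a few lines. This is a hypothetical illustration of the general approach, not Google’s actual implementation; the modifier list and person-noun list are assumptions:

      ```python
      import random

      # Hypothetical modifiers an alignment layer might rotate through.
      MODIFIERS = ["Black", "South Asian", "East Asian", "Hispanic", "white"]

      # Hypothetical set of nouns that trigger the rewrite.
      PERSON_NOUNS = {"person", "man", "woman", "doctor", "soldier"}

      def diversify(prompt: str) -> str:
          """Insert a randomly chosen ethnicity modifier before the first person noun."""
          words = prompt.split()
          for i, word in enumerate(words):
              if word.lower().strip(".,") in PERSON_NOUNS:
                  words.insert(i, random.choice(MODIFIERS))
                  break
          return " ".join(words)

      print(diversify("a photo of a doctor on a beach"))
      # e.g. "a photo of a South Asian doctor on a beach"
      ```

      Because the modifier is spliced into the prompt itself, it gets echoed back when the service shows you the prompt alongside the image, which is exactly what people are seeing here.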

    • Marcbmann@lemmy.world · 6 points · 10 months ago

      I mean, I don’t think it’s an easy thing to fix. How do you eliminate bias in the training data without eliminating a substantial percentage of that data, which would significantly hinder performance?
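
      For what it’s worth, one common alternative to throwing data away is reweighting: keep every example, but sample under-represented groups more often during training so each group contributes equally in expectation. A minimal sketch, assuming the examples carry hypothetical group labels:

      ```python
      import random
      from collections import Counter

      # Hypothetical training examples tagged with a demographic group.
      data = [("img1", "A"), ("img2", "A"), ("img3", "A"), ("img4", "B")]

      counts = Counter(group for _, group in data)

      # Weight each example inversely to its group's frequency,
      # so groups A and B are drawn equally often in expectation.
      weights = [1.0 / counts[group] for _, group in data]

      batch = random.choices(data, weights=weights, k=8)
      ```

      No data is discarded; the trade-off is that rare-group examples are repeated more often, which can cause overfitting on those examples.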

  • Pendulum@lemmy.world · 68 points · 10 months ago

    It’s horrifically bad, even without comparing it against other LLMs. I asked it for photos of actress and model Elle Fanning (aged 25 or so) on a beach, and it accused me of seeking CSAM… That’s an instant never-going-to-use-again for me - mishandling that subject matter in any way is not a “whoopsie”

    My purpose is to help people, and that includes protecting children. Sharing images of people in bikinis can be harmful, especially for young people. I hope you understand.

  • Kusimulkku@lemm.ee · 53 points · 10 months ago

    This is fucking ridiculous. This AI is the worst of them all. I don’t mind it when they subtly try to insert some diversity where it makes sense but this is just nonsense.

          • Kusimulkku@lemm.ee · 6 points · 10 months ago

            I don’t know who “them” is here. I thought from the context it was obvious that I meant whoever is managing these AIs. I guess I could’ve been clearer.

            But what, do you think they’re behind the scenes inserting the word “woke” into every search by default or something?

            I mean, they literally are inserting stuff into the prompts to make the results more diverse. It’s not some hidden thing, but rather a workaround for the lack of diversity in the training data. But obviously here they’ve “overcorrected” beyond all sense.

            https://www.bbc.com/news/business-68364690

            • Maven (famous)@lemmy.world · 2 points · 10 months ago

              Generally on the internet, when someone puts “they” or “them” in quotes, they’re referring to Jewish people.

              It’s a dog whistle.

              • Kusimulkku@lemm.ee · 1 point · 10 months ago

                This is usually the type of thing you should clarify, because… well, you seem like one of “them” even if you don’t ;D

                So they were saying I’m Jewish? Why?

                • Maven (famous)@lemmy.world · 2 points · 10 months ago

                  No idea. I don’t fully understand why any of these dog whistles get pulled out; I just know what they are. Another big one is triple parentheses around a name, meaning the same thing.

    • kromem@lemmy.world · 4 points · 10 months ago

      It’s literally instructed to play Mad Libs with ethnic identities to diversify prompts for images of people.

      You can see how it’s just inserting the ethnicity right before the noun in each case.

      It was a very poor alignment strategy. This already blew up for DALL-E. Was Google not paying attention to its competitors’ mistakes?

  • Eddyzh@lemmy.world · 19 points · 10 months ago

    It is ridiculous. However, how can we know you didn’t first instruct it to only show dark skin? Or select these from many examples that showed something else?

    • Kusimulkku@lemm.ee · 23 points · 10 months ago

      This issue is widely reported and you can check the AI for yourself to confirm.

  • Flying Squid@lemmy.worldM · 10 points · 10 months ago

    I know that the 23-year reign of Renaissance Ruler is mired in controversy, but you have to admit that without her, England would never have conquered Redding.

    • Llamadramas@lemmy.world · 4 points · 10 months ago

      You can get around it by clicking the drafts button. It shows you the images that were generated as drafts but not actually published to you as results.

  • Amaltheamannen@lemmy.ml · 12 points · 10 months ago

    And how do we know you didn’t crop out an instruction asking for diversity?

    Either that, or it’s a side effect of trying to reduce training data bias.