• Clbull@lemmy.world · 1 year ago

      So they paid Kenyan workers $2 an hour to sift through some of the darkest shit on the internet.

      Ugh.

    • GenesisJones@lemmy.world · 1 year ago

      This reminds me of an NPR podcast from 5 or 6 years ago about the people who get paid by Facebook to moderate the worst of the worst. They had a former employee giving an interview about the manual review of images that were CP and rape-related shit, iirc. Terrible stuff.

        • SacrificedBeans@lemmy.world · 1 year ago

          I’m sure there’s some loophole there, maybe between countries’ laws. And if there isn’t, hey! We’ll make one!

        • Meowoem@sh.itjust.works · 1 year ago

          They could be working with the governments of relevant countries to develop filters and detection systems.

        • Clbull@lemmy.world · 1 year ago

          Isn’t CSAM classed as images and videos which depict child sexual abuse? Last time I checked, written descriptions alone didn’t count, unless the workers were being made to review AI-generated image prompts of such acts?

        • aidan@lemmy.world · 1 year ago

          IIRC there are a few legitimate and legal reasons to seek CSAM, such as journalism, and definitely developing methods to prevent its spread.

        • smooth_tea@lemmy.world · 1 year ago

          I really find this a bit alarmist and exaggerated. Consider the motive and the alternative. Do you really think companies like that have any option other than to deal with those things?

          • barsoap@lemm.ee · 1 year ago

            Very much yes: police authorities have CSAM databases. If what you want to do with them really is above board and sensible, they’ll let you access that stuff.

            I don’t doubt that anything OpenAI could do with that stuff can be above board, but sensible is another question: any model that can detect something can be used to train a model that can generate it. As such, those models are under lock and key just like their training sets, at the (social) media platforms which have a use for these things and the resources to run them, under the watchful eye of the authorities. Think faceboogle. OpenAI could, in principle, try to get into the business of selling models to companies at that scale, but those companies can, and have, trained such models themselves, so I don’t really see that making sense from a business POV, either.
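
            To make that first point concrete, here is a minimal, purely hypothetical sketch in PyTorch of how a frozen detector becomes a training signal for a generator. Everything in it is a stand-in: the detector is a randomly initialized binary classifier representing "any pretrained detection model", and the data is generic 28x28 images; no real system, model, or dataset is implied.

            ```python
            # Hypothetical sketch: a frozen "detector" used as the training
            # signal for a generator. The detector is a stand-in for any
            # pretrained binary image classifier.
            import torch
            import torch.nn as nn

            class Generator(nn.Module):
                """Maps random noise vectors to 28x28 single-channel images."""
                def __init__(self, z_dim: int = 64):
                    super().__init__()
                    self.net = nn.Sequential(
                        nn.Linear(z_dim, 256), nn.ReLU(),
                        nn.Linear(256, 28 * 28), nn.Tanh(),
                    )

                def forward(self, z: torch.Tensor) -> torch.Tensor:
                    return self.net(z).view(-1, 1, 28, 28)

            # Frozen detector: outputs a logit for "is this flagged content?"
            detector = nn.Sequential(
                nn.Flatten(),
                nn.Linear(28 * 28, 128), nn.ReLU(),
                nn.Linear(128, 1),
            )
            for p in detector.parameters():
                p.requires_grad_(False)  # only queried, never updated

            gen = Generator()
            opt = torch.optim.Adam(gen.parameters(), lr=2e-4)
            loss_fn = nn.BCEWithLogitsLoss()

            for step in range(1_000):
                z = torch.randn(32, 64)
                fake = gen(z)
                # Push the generator toward images the detector scores as
                # positive, i.e. exactly the content it exists to catch.
                loss = loss_fn(detector(fake), torch.ones(32, 1))
                opt.zero_grad()
                loss.backward()
                opt.step()
            ```

            That’s just the discriminator half of a GAN with the discriminator frozen: gradients flow through the fixed classifier into the generator, which learns to produce whatever the detector was built to flag. Which is exactly why such classifiers and their training sets stay locked down.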