• Zak@lemmy.world · 1 year ago

    I think designing media products around maximally addictive, individually targeted algorithms, combined with content the platform neither controls nor takes responsibility for, is dangerous. Such an algorithm will find the people most susceptible to everything from racist conspiracy theories to eating disorder content and show them more of it. Attempts to moderate away the worst examples just result in people making variations that don’t technically violate the rules.

    With that said, laws made and legal precedents set in response to tragedies are often ill-considered, and I don’t like this case. I especially don’t like that it includes Reddit, which was not using that type of individualized algorithm to my knowledge.

    • rambaroo@lemmynsfw.com · 1 year ago

      Reddit is the same thing. They intentionally enable and cultivate hostility and bullying to drive up engagement.

    • deweydecibel@lemmy.world · 1 year ago

      Attempts to moderate away the worst examples of it just result in people making variations that don’t technically violate the rules.

      The problem then becomes: if clearly defined rules aren’t enough, the people who run these sites have to start making individual judgment calls based on, well, their gut, really. And that creates a lot of issues if the site in question can be held accountable for making a poor call or overlooking something.

      The threat of legal repercussions hanging over them is going to make them default to the strictest action, and that’s a problem if there isn’t a clear definition of what needs to be actioned against.

      • rambaroo@lemmynsfw.com · 1 year ago

        Bullshit. There’s no slippery slope here. You act like these social media companies just stumbled onto their algorithms. They didn’t; they designed them intentionally to drive engagement up.

        Demanding that they change their algorithms to stop intentionally driving negativity and extremism isn’t dystopian at all, and it’s very frustrating that you think it is. If we choose to do nothing about this issue, I promise you we’ll be living in a fascist nation within 10 years, and it won’t be an accident.

      • VirtualOdour@sh.itjust.works · 1 year ago

        It’s the chilling effect they use in China: don’t make it clear what will get you in trouble, and people become too scared to say anything.

        Just another group looking to control expression through the back door.

        • rambaroo@lemmynsfw.com · 1 year ago (edited)

          There’s nothing ambiguous about this. Give me a break. We’re demanding that social media companies stop deliberately driving negativity and extremism to get clicks. This has fuck all to do with free speech. What they’re doing isn’t “free speech”, it’s mass manipulation, and it’s very deliberate. And it isn’t disclosed to users at any point, which also makes it fraudulent.

          It’s incredibly ironic that you’re accusing people of an effort to control expression when that’s literally what social media has been doing since the beginning. They’re the ones trying to turn the world into a dystopia, not the other way around.