This could be a tool that works across the entire internet, but in this case I’m mostly thinking about platforms like Lemmy, Reddit, Twitter, Instagram etc. I’m not necessarily advocating for such a thing; I’m mostly just thinking out loud.

What I’m imagining is something like a truly competent AI assistant that filters out content based on your preferences. Content filtering by keywords and blocking users/communities is quite a blunt weapon; this would be the surgical alternative that lets you be extremely specific about what you want filtered out.

Some examples of the kinds of filters you could set:

  • No political threads. Applies only to threads, not comments. Filters out memes as well, based on the content of the media.
  • No political content whatsoever. Also hides political comments in non-political threads.
  • No right/left-wing politics. Self-explanatory.
  • No right/left-wing politics, with the exception of good-faith arguments. Filters out trolls and provocateurs but still exposes you to good-faith arguments from the other side.
  • No mean, hateful or snide comments. Self-explanatory.
  • No karma-fishing comments. Filters out comments with no real content.
  • No content from users who have said/done (something) in the past. Analyzes their post history and acts accordingly; for example, hides posts from people who have said mean things in the past.
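As a rough sketch of how per-rule filtering like this could be wired up, here is a minimal Python mock-up. The keyword heuristic is only a stand-in for the hypothetical “truly competent AI” classifier; all names, keywords, and sample posts are illustrative.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    is_thread: bool  # True for a thread/submission, False for a comment

# Placeholder for the hypothetical AI classifier: a crude keyword check.
POLITICAL_KEYWORDS = {"election", "senate", "left wing", "right wing"}

def looks_political(text: str) -> bool:
    lowered = text.lower()
    return any(kw in lowered for kw in POLITICAL_KEYWORDS)

def no_political_threads(post: Post) -> bool:
    # "No political threads": hide political submissions, keep comments.
    return post.is_thread and looks_political(post.text)

def no_political_content(post: Post) -> bool:
    # "No political content whatsoever": hide threads AND comments.
    return looks_political(post.text)

def filter_feed(posts, rules):
    """Keep only posts that no active rule flags."""
    return [p for p in posts if not any(rule(p) for rule in rules)]

feed = [
    Post("a", "Senate passes new bill", is_thread=True),
    Post("b", "My cat learned a new trick", is_thread=True),
    Post("c", "Typical right wing take...", is_thread=False),
]

print([p.author for p in filter_feed(feed, [no_political_threads])])  # ['b', 'c']
print([p.author for p in filter_feed(feed, [no_political_content])])  # ['b']
```

The point of the structure is that each filter from the list above maps to one small predicate, so rules can be toggled independently.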

Now, obviously, with a tool like this you could build yourself the perfect echo chamber where you’re never exposed to new ideas, which is probably not optimal. But it’s also not obvious to me why this would be a bad thing if it’s something you want. There’s far too much content for you to pay attention to all of it anyway, so why not optimize your feed to contain only the stuff you’re interested in? With a tool like this you could take a platform that’s an absolute dumpster fire, like Twitter or Reddit, clean it up, and all of a sudden it’s usable again. It could also discourage certain types of behaviour online, because trolls, for example, could no longer reach the people they want to troll.

  • 9point6@lemmy.world · 10 months ago

    The problem with filtering political content is that people are pretty bad at identifying their blind spots. It’s a common trap: what some people want as “non-political” conversation is actually just conversation that doesn’t challenge their political views.

    The same people will then conclude that the tool is making “political” choices in what it’s hiding from them.

    You’re also focusing a lot on left vs right. What about “third way” or centrist politics? What about fringe groups that people don’t really consider left or right? How does this work with different countries having different ideas of what’s left and what’s right (Overton window)? For example, I’d say the US doesn’t have a left and right, it has a centre-right and a far-right party.

    Finally, plenty of people are happy in their echo chambers, despite them being terrible for a person. Challenging and reflecting on the way you fundamentally think about the world (which is basically what politics boils down to) is hard and sometimes unpleasant. It’s easy to see how many people take the easy road: doubling down on existing opinions and seeking out echo chambers.

    • Thorny_Insight@lemm.ee (OP) · 10 months ago

      I used the term “truly competent AI” because something like “no politics” is obviously quite a broad guideline, and the AI then has to figure out what you actually mean by it. An incompetent AI would also filter out discussions about things like vegan food, which is obviously not what you meant. This is just a thought experiment about what it would be like if it actually worked as intended.

      What I’m imagining is something that also studies your own behaviour on the platform to learn what you’re actually into, and if it’s not sure, it could either ask you or do some sort of A/B testing to see what you engage with and what that engagement was like. This would make it possible to have a platform where the unfiltered experience is a true wild west, but which you then optimize to your liking yourself.
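      A minimal sketch of that A/B idea: occasionally show a post the filter would normally hide (a “probe”), then nudge a per-topic preference score based on whether the user engaged. The topic labels, probe rate, and scoring scheme below are all assumptions for illustration, not a real implementation.

```python
import random

class PreferenceLearner:
    def __init__(self, probe_rate=0.1):
        self.scores = {}            # topic -> running engagement score
        self.probe_rate = probe_rate

    def should_show(self, topic):
        if random.random() < self.probe_rate:
            return True             # probe: occasionally test a hidden topic
        return self.scores.get(topic, 0.0) >= 0.0

    def record(self, topic, engaged):
        # Exponential moving average: +1 for engagement, -1 for ignoring.
        prev = self.scores.get(topic, 0.0)
        self.scores[topic] = 0.8 * prev + 0.2 * (1.0 if engaged else -1.0)

# probe_rate=0.0 makes the demo deterministic.
learner = PreferenceLearner(probe_rate=0.0)
for _ in range(5):
    learner.record("politics", engaged=False)
    learner.record("gardening", engaged=True)

print(learner.should_show("gardening"))  # True
print(learner.should_show("politics"))   # False
```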

      • The Stoned Hacker@lemmy.world · 10 months ago

        I think the idea that we need to be more efficient at consuming content is quite dystopic. I agree we should be trying to reduce not only echo chambers but content consumption as a whole. As a chronically online person in cybersecurity, I do not see a tenable future where humans continue to consume content at the rate they are. There needs to be a reduction in internet integration and online consumption. You’re right that there’s too much content for one person to reasonably sift through; the reasonable decision, then, is to reduce the amount of content rather than try to create a sieve. The amount of information we try to consume on the internet is dangerous and harmful to us, and is destroying the foundations of society.

        I’m not some traditionalist nut or conspiracy theorist; it’s just easy to see that the benefits we get from globalized information sharing are very heavily offset by the constant influx of shit. I think people should have easy and free access to information and knowledge; I also think the current hierarchy of the internet was a mistake and that the majority of people do not need, and in fact should not have, computers.

        Also what you’re asking for is an incredibly invasive AI that is used for massive data collection and aggregation to track and serve you the content that is most addictive for you. I see no reasonable world where that is a good thing. It is only a good idea in our current world, which I do not believe is reasonable.

        • Thorny_Insight@lemm.ee (OP) · 10 months ago

          Personally, the way I think about it is: since I’m going to spend a certain amount of time online anyway, why not at least enjoy that time? For example, I like discussing and debating ideas on platforms like Lemmy and Reddit, but too often I find myself wasting time with someone who’s not doing it in good faith; they’re not open to having their mind changed, and they’re not putting any effort into trying to change my mind either. They just want to dunk on what they deem a stupid idea, and more often than not they’re performing for an imagined audience. It would probably be better for us both if I didn’t engage with people like that to begin with. I really don’t need more than one decent person to have an interesting discussion. If there are 20 others shouting insults into the void because my content filtering has blocked them, I think that’s better than relying on sheer willpower to resist the urge to reply to them.

    • perpetually_fried@lemmy.world · 10 months ago

      I don’t know why people expect viewers to want their ideas challenged when they just want a general idea of what’s happening in the world.

      Like, I want to know when the Houthis have hit another ship, causing an environmental disaster in the Red Sea (fertilizer). I DO NOT want to know about some bullshit law being passed in a no-name jurisdiction by some no-name judge in a no-name state that will get overturned in a month. It doesn’t affect me.

      attabit.com is a good example of this AI summary done right.

      But let me make myself clear: nobody wants to be subjected to your ideology, and echo chambers are fine. It’s not my responsibility to open up my attention to whatever you think is socially important at the time.

  • Lvxferre@mander.xyz · 10 months ago

    My reasons are probably atypical, but there’s no way that I’d use it.

    My issue is not competence, or the fact that it’s AI; it’s transparency. I want to know exactly which rules are being used to curate my posts and comments, and I don’t trust other people or a filtering algorithm to do it (except if I’m the one creating said filtering algorithm out of simple rules).
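    A filter built from simple, inspectable rules, as the commenter describes, could be sketched as follows. The sample rules and field names are illustrative; the point is that the tool can report exactly which rule hid which item.

```python
# Each rule is a (name, predicate) pair, so every hidden item can be
# traced back to the specific user-written rule that fired.
RULES = [
    ("blocked user", lambda item: item["author"] in {"troll42"}),
    ("keyword: crypto", lambda item: "crypto" in item["text"].lower()),
]

def curate(items, rules):
    kept, hidden = [], []
    for item in items:
        fired = [name for name, pred in rules if pred(item)]
        (hidden if fired else kept).append((item, fired))
    return kept, hidden

items = [
    {"author": "alice", "text": "New crypto scam going around"},
    {"author": "troll42", "text": "hello"},
    {"author": "bob", "text": "Nice sunset photo"},
]

kept, hidden = curate(items, RULES)
for item, fired in hidden:
    print(f"hid post by {item['author']}: {', '.join(fired)}")
# hid post by alice: keyword: crypto
# hid post by troll42: blocked user
```

Unlike an opaque model, this audit trail makes the curation fully explainable to the user.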

    • z00s@lemmy.world · 10 months ago

      The AI would definitely develop an implicit bias, as it has in many implementations already.

      Plus, while I understand the motivation, it’s good to be exposed to dissenting opinions now and then.

      We should be working to decrease echo chambers, not facilitate them.

      • Lvxferre@mander.xyz · 10 months ago

        OP is talking on hypothetical grounds of a “competent AI”. As such, let’s say that “competence” includes the ability to avoid and/or offset biases.

        • z00s@lemmy.world · 10 months ago

          Assuming that was possible, I would probably still train mine to remove only extremist political views on both sides, but leave in dissenting but reasonable material.

          But if I’m training it, how is it any different than me just skipping articles I don’t want to read?

          • Lvxferre@mander.xyz · 10 months ago

            Even if said hypothetical AI required training instead of simply being told what to remove, it would still be useful, because you could train it once and use it forever. (I’d still not use it.)

  • Presi300@lemmy.world · 10 months ago

    While it does sound like a good idea, I feel like most people would use it to make an echo chamber.

  • otp@sh.itjust.works · 10 months ago

    Once enough people started using adblockers, companies started blurring the line between ads and content.

    I think the line between memes and content has already been blurred quite a bit. Politics and content, too.

  • Markimus@lemmy.world · 10 months ago

    Consider another market: businesses looking to identify current topics of interest / discussions that are relevant to what they are doing.

    The AI could summarise the posts and offer suggestions on what to post, when to post, where to post, etc., with references to the posts / threads that they’re basing this information on.

    This is all bundled as an online marketing tool, targeted towards small businesses focused on growth.

  • theywilleatthestars@lemmy.world · 10 months ago

    AI doesn’t do anything perfectly; it’s all based on statistical trends. The most control we could have over our feeds would be chronological displays of the stuff we choose to follow.

  • perpetually_fried@lemmy.world · 10 months ago

    Sounds good. Go build it.

    “No mean, hateful, or snide comments”

    I think your idea actually sucks. Too bad my comment would be removed on your fairy-tale website.