• weedazz@lemmy.world · 1 year ago

    My mind immediately went to a Horizon Zero Dawn-style dystopia where the Mozilla AI is the only thing left protecting humans from various malevolent AIs bent on consuming the human race.

    • clanginator@lemmy.world · 1 year ago

      Imagining the Mozilla AI as a personified Firefox and Thunderbird fighting off Cortana, some Bard (sorry), and a bunch of generic evil corporate AIs just makes me excited that Mozilla would be the one fending everyone off.

  • kingthrillgore@lemmy.ml · 1 year ago

    I want to give them the benefit of the doubt. I really do. I am going to watch this with a critical eye, however.

  • Weeby_Wabbit@lemmy.world · 1 year ago

    I’ll believe it when I see it.

    I’m so goddamn tired of “open source” turning into subscription models restricting use cases because the company wants to appease conservative investors.

    • blind3rdeye@lemm.ee · 1 year ago

      Mozilla has a very strong track record, though. They’ve been around for a very long time and have stuck to free, open-source principles the whole time.

    • BetaDoggo_@lemmy.world · 1 year ago

      That’s basically only OpenAI, and maybe some obscure startups as well. Mozilla is far too old and niche to get away with that anyway.

  • 👁️👄👁️@lemm.ee · 1 year ago

    As much as I love Mozilla, I know they’re going to censor the hell out of it (sorry, the word is “alignment” now) to fit their perceived values. Luckily, if it’s open source, people will be able to train uncensored models.

    • DigitalJacobin@lemmy.ml · 1 year ago

      What in the world would an “uncensored” model even imply? And give me a break: private platforms choosing not to platform something or someone isn’t “censorship”; you don’t have a right to another’s platform. Mozilla has always been a principled organization, and they have never pretended to be apathetic fence-sitters.

      • Doug7070@lemmy.world · 1 year ago

        This is something I think a lot of people don’t get about all the current ML hype. Even if you disregard all the other huge ethics issues surrounding sourcing training data, what does anybody think is going to happen if you take the modern web, a huge sea of extremist social media posts, SEO-optimized scams and malware, and just general data toxic waste, and then train a model on it without rigorously pushing it away from being deranged? There’s a reason all the current AI chatbots have had countless hours of human moderation adjustment to make them remotely acceptable to deploy publicly, and even then there are plenty of infamous examples of them running off the rails and saying deranged things.

        Talking about an “uncensored” LLM basically just comes down to saying you’d like the unfiltered experience of a robot that will casually regurgitate all the worst parts of the internet at you, so unless you’re actively trying to produce a model to do illegal or unethical things I don’t quite see the point of contention or what “censorship” could actually mean in this context.

        • underisk@lemmy.ml · 1 year ago

          It means they can’t make porn images of celebs or anime waifus, usually.

        • 👁️👄👁️@lemm.ee · 1 year ago

          That’s not at all what an uncensored LLM is. That sounds like an untrained model. Have you actually tried an uncensored model? It’s the same thing as a regular one, but it doesn’t block itself from saying stupid stuff like “I cannot generate a scenario where Obama and Jesus battle because that would be deemed offensive to cultures.” It’s literally just removing the safeguards.

        • Spzi@lemm.ee · 1 year ago

          I’m from your camp, but I’ve noticed I used ChatGPT and the like less and less over the past months. I feel they became less useful and more generic. In February or March they were my go-to tools for many tasks. I reverted to old-fashioned search engines and other methods, because it just became too tedious to dance around the ethics landmines, to ignore the verbose disclaimers, and to convince the model my request is a legit use case. Also, the error rate went up by a lot. It may be a tame lapdog, but it also lacks bite now.

          • Doug7070@lemmy.world · 1 year ago

            I’ve found a very simple expedient to avoid any such issues: just don’t use things like ChatGPT in the first place. While they’re an interesting gadget, I have been extremely critical of the massively over-hyped pitches of how useful LLMs actually are in practice, and have regarded them with the same scrutiny and distrust as the people trying to sell me expensive monkey pictures during the crypto boom. Just as I came out better off because I didn’t add NFTs to my financial assets during the crypto boom, I suspect that not integrating ChatGPT or its competitors into my workflow now will end up being a solid bet, given that the current landscape of LLM-based tools is pretty much exclusively a corporate-dominated minefield surrounded by countless dubious ethics questions and doubts about what these tools are even ultimately good for.

      • 👁️👄👁️@lemm.ee · 1 year ago

        Anything that prevents it from answering my query. If I ask it how to make a bomb, I don’t want it to be censored. It’s gathering this from public data they don’t own, after all. I agree with Mozilla’s principles, but LLMs are tools and should be treated as such.

          • 👁️👄👁️@lemm.ee · 1 year ago

            Do gun manufacturers get in trouble when someone shoots somebody?

            Do car manufacturers get in trouble when someone runs somebody over?

            Do search engines get in trouble if they accidentally link to harmful sites?

            What about social media sites getting in trouble for users uploading illegal content?

            Mozilla doesn’t need to host an uncensored model, but their open-source AI should be able to be trained to be uncensored. So I’m not asking them to host this themselves, which is an important distinction I should have made.

            Uncensored LLMs already exist anyway, so any damage they can cause is already possible.

            • Spzi@lemm.ee · 1 year ago

              Do car manufacturers get in trouble when someone runs somebody over?

              Yes, if it can be shown the accident was partly caused by the manufacturer’s negligence: if a safety measure was not in place or did not work properly, or if it happens suspiciously more often with models from that brand. Apart from solid legal trouble, they can get into PR trouble if many people start to think that way, whether or not it’s true.

                • Spzi@lemm.ee · 1 year ago

                  Then let me spell it out: If ChatGPT convinces a child to wash their hands with self-made bleach, be sure to expect lawsuits and a shit storm coming for OpenAI.

                  If that occurs, but no liability can be found on the side of ChatGPT, be sure to expect petitions and a shit storm coming for legislators.

                  We generally expect individuals and companies to act in society with peace and safety in mind, including the safety of strangers and minors.

                  Liabilities and regulations exist for these reasons.

        • Doug7070@lemmy.world · 1 year ago

          My brother in Christ, building a bomb and committing terrorism are not forms of protected speech, and an overwrought search engine with a poorly attached ability to hold a conversation refusing to give you bomb-making information is not censorship.

    • VonCesaw@lemmy.world · 1 year ago

      If ‘censored’ means that underpaid workers in developing countries don’t need to sift through millions of images of gore, violence, etc., then I’m for it.

  • CaptKoala@lemmy.ml · 1 year ago

    Couldn’t give a fuck; there’s already far too much bad blood regarding any form of AI for me.

    It’s been shoved in my face, my phone, and my computer for some time now. The best AI is one that doesn’t exist. AGI can suck my left nut too; I don’t fuckin’ care.

    Give me livable wages or give me death, I care not for anything else at this point.

    Edit: I care far more about this for privacy reasons than for the benefits the tech provides.

    The fact that these models reached “production ready” status so quickly is beyond concerning. I suspect the companies are hoping to harvest as much usable data as possible before being regulated into (best case) oblivion. It no longer seems that I can learn my way out of this, as I’ve been doing since the beginning, because the technology is advancing too quickly for users, let alone regulators, to keep it in check.

  • bahmanm@lemmy.ml · 1 year ago

    Something that I’ll definitely keep an eye on. Thanks for sharing!

    • blind3rdeye@lemm.ee · 1 year ago

      No? The code for the model can be open-source - and that’s pretty valuable. The training data can be made openly available too - and that’s perhaps even more valuable. And the post-training weights for the model can be made open too.

      Each of those things is very meaningful and useful. If those things are open, then the AI can be used and adjusted for different contexts. It can be run offline; it can be retrained or tweaked; it can be embedded into other software; and so on. It is definitely meaningful to open-source that stuff.
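
      For instance, here is a minimal sketch (my own illustration, not something from Mozilla or this thread) of what open weights enable: loading a model and running it entirely offline with the Hugging Face transformers library. The model id is a placeholder, not a real Mozilla release.

      ```python
      # Minimal sketch: run an openly licensed model locally, with no remote API.
      # "some-org/some-open-model" is a placeholder id, not an actual checkpoint.
      from transformers import AutoModelForCausalLM, AutoTokenizer

      model_id = "some-org/some-open-model"

      tokenizer = AutoTokenizer.from_pretrained(model_id)      # load the tokenizer
      model = AutoModelForCausalLM.from_pretrained(model_id)   # load the open weights

      prompt = "Why does open training data matter?"
      inputs = tokenizer(prompt, return_tensors="pt")
      outputs = model.generate(**inputs, max_new_tokens=50)    # plain greedy decoding
      print(tokenizer.decode(outputs[0], skip_special_tokens=True))
      ```

      Because the weights live on your own machine, the same model object can then be fine-tuned, tweaked, or embedded in other software.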

  • Boring@lemmy.ml · 1 year ago

    Coming from a company that preaches about privacy and rates privacy-respecting businesses, while collecting telemetry and accepting $500M/year from Google to promote their search engine… I’ll take this as the puff piece that it is.

    • DigitalJacobin@lemmy.ml · 1 year ago
      1. The very little, basic telemetry Firefox collects can be easily disabled[1]; a rough sketch of one way to do it is below.
      2. What alternative do you suggest for Mozilla? Reject the $500M and blow up everything they’ve worked so hard for decades to build? I feel like users having to click, at most, a whole 5 times to change their search engine (if they want) isn’t that big of a sacrifice to have a major privacy-oriented, non-profit player in the tech sphere.

      1. https://support.mozilla.org/en-US/kb/telemetry-clientid ↩︎
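
      A rough sketch (my own illustration, not taken from Mozilla’s docs) of doing it from a script by appending the standard telemetry prefs to a profile’s user.js; the profile path is a placeholder you would have to adjust:

      ```python
      # Rough sketch: append telemetry-disabling prefs to a Firefox profile's user.js.
      # The profile directory is a placeholder; find yours under about:profiles.
      from pathlib import Path

      profile = Path.home() / ".mozilla/firefox/xxxxxxxx.default-release"

      prefs = {
          "toolkit.telemetry.enabled": "false",
          "datareporting.healthreport.uploadEnabled": "false",
          "datareporting.policy.dataSubmissionEnabled": "false",
      }

      with open(profile / "user.js", "a", encoding="utf-8") as f:
          for name, value in prefs.items():
              f.write(f'user_pref("{name}", {value});\n')  # Firefox reads these at startup
      ```

      The same prefs can also just be flipped by hand in about:config or through the Privacy & Security settings.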

      • Boring@lemmy.ml · 1 year ago

        It’s more about the principle. Many people who download Firefox are doing so to escape Google, and if they aren’t cybersecurity experts they may download Firefox and carry on with no real improvement to their privacy.

        Secondly, the main thing you should look at is where a company gets its funding. If Mozilla gets almost 100% of its funding from Google, how much do you really expect them to push back against the data collection on their userbase?

        I rank Mozilla with the likes of ExpressVPN, NordVPN, etc. They preach privacy and security against surveillance… but it’s just theatre to make money in specific demographics.

        • DigitalJacobin@lemmy.ml · 1 year ago

          It is extremely simple and easy to change your search engine and disable telemetry in Firefox. I would agree if Mozilla showed any favoritism towards Google, but they don’t. Maintaining and developing an entirely independent browser is not cheap.

          I really hope you’re not about to suggest Brave as an alternative, when 100% of its funds come from a dying crypto scam, it’s for-profit, and it’s owned by a far-right, anti-gay reactionary. Not to mention that Brave’s browser is entirely reliant on Chromium code from Google.

          Perfect is the enemy of good.

        • blind3rdeye@lemm.ee · 1 year ago

          Mozilla’s Firefox is essentially the only competitor to Google’s Chrome. So to say that Mozilla is pro-Google is kind of weird. Almost every other browser uses Chrome’s engine, and thus enforces Google’s view of the internet. Firefox and Safari are the only significant holdouts. (And Safari is obviously backed by one of the largest companies in the world, with its own reasons.)

          • Boring@lemmy.ml · 1 year ago

            Yea, it does seem weird… but money doesn’t lie. It’s very easy to search online how Mozilla has enough money to pay for all their weird projects.

            They even cut costs on their nonprofit products like Firefox and Thunderbird so they have more money to burn on other hobbies.

            They’re like a giant corporate MLM where users are encouraged to sell “privacy” to their friends, and the profits siphon up to Mozilla, which cashes out to Google.