• finitebanjo@piefed.world · 5 days ago · +3/−12

    Unfortunately, an LLM is wrong roughly 1 time in 5 to 1 time in 10 (80% to 90% accuracy), and OpenAI and DeepMind research papers argue this is a hard limit: even with infinite power and resources it would never approach human language accuracy. On top of that, the model is trained on human inputs which are themselves flawed, so the errors compound: you multiply the model's error rate on top of an average person's rate of being wrong.
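    The "multiply the error rates" arithmetic can be sketched with made-up numbers (the 90% human figure and 85% model figure below are illustrative assumptions, not from any paper):

    ```python
    # Toy sketch of compounding error rates. All numbers are invented
    # for illustration; nothing here is a measured result.

    human_accuracy = 0.90   # assume an average person is right 90% of the time
    model_accuracy = 0.85   # assume the model faithfully reproduces its
                            # (human-written) training signal 85% of the time

    # If the model's output is limited both by its own errors and by the
    # errors already baked into its human training data, the accuracies
    # multiply, so the combined rate is lower than either one alone:
    combined_accuracy = human_accuracy * model_accuracy
    print(f"combined accuracy: {combined_accuracy:.3f}")  # close to 0.765
    ```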

    In other words, you’re better off browsing forums and asking people, or finding books on the subject, because the AI is full of shit and you’re going to be one of those idiot sloppers everybody makes fun of: you won’t know jack shit and you’ll be confidently incorrect.

    • architect@thelemmy.club · 6 hours ago · +1/−1

      No way the vast majority of people are getting things right more than 80% of the time. On their own trained tasks, sure, but random knowledge? Nope. The AI holds a more intelligent conversation than most of humanity. It says a lot about humanity.

      • finitebanjo@piefed.world · 1 hour ago · +1

        You literally don’t understand.

        The human statements are the baseline, right or wrong, and the AI struggles to stay above 80% of that baseline.

        Take however often a person is wrong and multiply it: that’s AI. They like to call it “hallucination,” and it will never, ever go away. In fact, it will get worse: the models have already polluted their own datasets, which they will keep pulling from, producing ever-worse output, like noise coming from an amp in a feedback loop.
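        The feedback-loop worry can be sketched as a toy simulation (the starting accuracy and the per-generation retention factor below are invented purely for illustration; real model-collapse dynamics are far more complicated):

        ```python
        # Toy model of training on your own polluted output: assume each
        # new generation retains only a fraction of the previous
        # generation's accuracy. Both numbers are made up.

        accuracy = 0.90    # assumed accuracy of generation 0
        retention = 0.95   # assumed fraction of accuracy kept per generation

        for generation in range(1, 6):
            accuracy *= retention
            print(f"generation {generation}: {accuracy:.3f}")
        # Under these assumptions, accuracy decays geometrically,
        # never recovering, much like feedback noise building in an amp.
        ```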

        • Cabbage_Pout61@lemmy.world · edited · 4 days ago · +8/−2

          How would I search for something when I don’t know what it’s called? As I explained, the AI is just responsible for telling me “hey, this thing X exists,” and after that I go look for it on my own.

          Why am I a moron? Isn’t it the same as asking another person and then doing the heavy lifting yourself?

          ^(edit: typo)

          • finitebanjo@piefed.world · edited · 4 days ago · +2/−7

            “Hey AI, I want to do this very specific thing but I don’t really know what it is called, can you help me?”

            That was your previous example. You had a very specific thing in mind, meaning you knew what to search for from reputable sources. There are tons of ways to discover new, previously unknown things, all of which are better than being a filthy stupid slopper.

            “Hey AI, can you please think for me? Please? I need it, idk what to do.”

              • finitebanjo@piefed.world · 1 hour ago · +1

                If you think a 2:7 ratio after insulting a bunch of net-negative slopper subhumans is enough to change my mind, then welcome to the internet, my friend. That’s a figure of speech, btw; I am not some dirty slopper’s friend.

            • Cabbage_Pout61@lemmy.world · 4 days ago · +8/−2

              Jesus, I don’t know who hurt you or why you are so salty.

              I’ll stop the conversation here. Funnily enough, the way you keep repeating the same thing in different words is just like AI.

              My best wishes to you.