• Zron@lemmy.world · 17 points · 1 day ago

    Tech journalists don’t know a damn thing. They’re people who liked computers and could also bullshit an essay in college. That doesn’t make them an expert on anything.

        • TimewornTraveler@lemmy.dbzer0.com · 5 points · 17 hours ago

          That is such a ridiculous idea. Just because you see hate for it in the media doesn’t mean it originated there. I’ll have you know that I have embarrassed myself by screaming at robot phone receptionists for years now. Stupid fuckers, pretending to be people but not knowing shit. I was born ready to hate LLMs, and I’m not gonna have you claim that CNN made me do it.

          • Melvin_Ferd@lemmy.world · +1/-6 · 16 hours ago

            Search “AI” on Lemmy and check out every article on it. It definitely is the media spreading all the hate. And articles like this one are often just money-driven yellow journalism.

            • TimewornTraveler@lemmy.dbzer0.com · 3 points · 10 hours ago

              All that proves is that Lemmy users post those articles. You’re skirting around psychotic territory here: seeing patterns where there are none, reading between the lines to find the cover-up that you are already certain is there, with nothing to convince you otherwise.

              If you want to be objective and rigorous about it, you’d have to start by looking at all media publications and comparing their relative bias.

              Then you’d have to consider their reasons for bias, because it could just be that things actually suck. (In other words, if only 90% of media reports that something sucks when 99% of humanity agrees it sucks, maybe that 90% is actually too low, not too high.)

              This is all way more complicated than media brainwashing.

            • Log in | Sign up@lemmy.world · 3 points · 11 hours ago

              I think it’s lemmy users. I see a lot more LLM skepticism here than in the news feeds.

              In my experience, LLMs are like the laziest, shittiest know-nothing bozo forced to complete a task with zero attention to detail and zero care about whether it’s crap, just doing enough to sound convincing.

              • someacnt@sh.itjust.works · 1 point · 6 hours ago

                Wdym, I have seen researchers using it to aid their research significantly. You just need to verify some stuff it says.

                • Log in | Sign up@lemmy.world · 1 point · 5 hours ago

                  Verify every single bloody line of output. Top three to five are good, then it starts guessing the rest based on the pattern so far. If I wanted to make shit up randomly, I would do it myself.

                  People who trust LLMs to tell them things that are right rather than things that sound right have fundamentally misunderstood what an LLM is and how it works.

                  • someacnt@sh.itjust.works · 1 point · 5 hours ago

                    It’s not that bad; the output isn’t random. From time to time it can produce novel stuff, like new equations for engineering. Also, verification doesn’t take that much effort. At least according to my colleagues, it’s great. It also works well for coding well-known stuff!

              • Melvin_Ferd@lemmy.world · +1/-1 · edited · 9 hours ago

                😆 I can’t believe how absolutely silly a lot of you sound with this.

                An LLM is a tool. Its output is dependent on the input. If that’s the quality of answer you’re getting, then it’s user error. I guarantee you that LLM answers for many problems are perfectly adequate.

                It’s like if a carpenter said the cabinets turned out shit because his hammer only produces crap.

                Also, another person commented that seeing the pattern you also see means we’re psychotic.

                All I’m trying to suggest is that Lemmy is getting seriously manipulated by the media’s attitude towards LLMs, and I feel these comments really highlight that.

                • Log in | Sign up@lemmy.world · +1/-1 · edited · 8 hours ago

                  “If that’s the quality of answer you’re getting, then it’s a user error”

                  No, I know the data I gave it and I know how hard I tried to get it to use it truthfully.

                  You have an irrational and wildly inaccurate belief in the infallibility of LLMs.

                  You’re also denying the evidence of my own experience. What on earth made you think I would believe you over what I saw with my own eyes?

                  • Melvin_Ferd@lemmy.world · 1 point · edited · 3 hours ago

                    Why are you giving it data? It’s a chat and language tool; it’s not data-based. You need something trained for that specific use. I think Wolfram Alpha has better tools for that.

                    I wouldn’t trust it to calculate how many patio stones I need for a project. But I trust it to tell me where a good source is on a topic, or whether a quote was said by whoever, or when I need to remember something but only have vague pieces, like an old-timey historical witch-burning factoid about villagers who pulled people through a hole in the church wall, or the name of the skeptical princess who sent her scientists to villages to try to calm superstitious panic.

                    Other uses are things like digging around my computer and seeing what processes do what, or how concepts work regarding the thing I’m currently learning. So many excellent uses. But I fucking wouldn’t trust it to do any kind of calculation.