• SpaceNoodle@lemmy.world · 7 months ago

    … aren’t representative of most people’s experiences.

    Every AI “answer” I’ve gotten from Google is factually incorrect, often ludicrously so.

    • shiiiiiiiiiiiiiiiiit@sh.itjust.works · 7 months ago

      Yep, same here. Whereas ChatGPT and Perplexity would tell me they didn’t know the answer to my question, Bard/Gemini would confidently hallucinate some bullshit.

      • catloaf@lemm.ee · 7 months ago

        Really? Like what? I’ve always had ChatGPT give confident answers. I haven’t tried to stump it with anything really technical though.

          • DominusOfMegadeus@sh.itjust.works · 7 months ago (edited)

            I’ve asked moderately technical questions and was confidently given wrong information. That said, it’s right far more often than Copilot. I haven’t used Google for quite some time.

        • best_username_ever@sh.itjust.works · 7 months ago

          I try ChatGPT and others once a month to see if they improve my programming experience. Yesterday I got fake functions that don’t exist, again. I’ll try again next month.

          • Ohi@lemmy.world · 7 months ago

            You’re doing it wrong, IMO. ChatGPT 4.0 is freakin’ amazing at helping with coding tasks; you just need to learn what to ignore and how to adjust the prompt when you’re not getting the results you want. Much like the skill of googling for programming solutions (or any solution), it gets easier with practice.

            • JustAPenguin@lemmy.world · 7 months ago

              I hate to say it, but I have to agree. GPT-4 is a significant improvement over GPT-3. I needed to use a Python library for something that was meant to be a small, simple CLI app. It turned into something bigger and accumulated technical debt. Eventually, I was running into problems that were niche and hard to trace, even with logging and all the other usual approaches.

              I eventually said fuck it, threw a shit tonne of my code into it, and explained what I was doing, how I was doing it, why I wasn’t doing it another way, and what I expected versus the actual result. Sometimes it suggests something that is on the right path or entirely spot on. Other times it thinks it knows better than you; you tell yourself it doesn’t, because you tried all its suggestions, and then you realise something that would technically let GPT say, “I told you so”, but out of spite you just close the tab until the next issue.

              For practical tasks, GPT has come pretty far. For technical ones, it is hit or miss, but it can give you some sound advice in place of a solution, sometimes.

              I had another issue involving Matplotlib, converting to and from coordinate systems, and plots that had artifacts because something wasn’t quite right. The atan2 function catches many people out, but I’m experienced enough to know better… well, normally. In this particular case the situation was complex and I could not reason out why the result was distorted. Spending hours with GPT-4 led me in circles. Sometimes it would tell me to do things I had just said I did, or that I had said don’t work. Then I asked it, “what if we represent this system of parametric equations as a single complex-valued function, instead of dealing with Cartesian-to-polar conversions?”. It zipped up a whole lot of math related to my problem. The damn thing handed me a solution and a half. In theory, it was a great solution. In practice, my code is illiterate, so it doesn’t care.
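              The atan2 trap mentioned above is easy to demonstrate. A minimal Python sketch (not the commenter’s actual code) showing why atan2 preserves quadrant information where plain atan doesn’t, and how a complex-number representation sidesteps the manual Cartesian-to-polar conversion entirely:

```python
import math
import cmath

# Plain atan(y/x) loses the quadrant: (1, 1) and (-1, -1) have the
# same y/x ratio, so atan maps both points to the same angle.
assert math.atan(1 / 1) == math.atan(-1 / -1)

# atan2(y, x) keeps the quadrant, so the two points get distinct angles.
assert math.isclose(math.atan2(1, 1), math.pi / 4)
assert math.isclose(math.atan2(-1, -1), -3 * math.pi / 4)

# Treating a point as a complex number makes the conversion implicit:
# cmath.phase() is defined as atan2(z.imag, z.real), and abs() is the radius.
z = complex(-1, -1)
assert math.isclose(cmath.phase(z), math.atan2(-1, -1))
assert math.isclose(abs(z), math.hypot(-1, -1))
```

              This is the kind of distortion that shows up as mirrored or flipped artifacts in plots when the angle silently lands in the wrong quadrant.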

              All in all, while it failed to help me solve my issue, it was able to reason about and provide feedback on a wide range of challenges. Sometimes it needed prompting to change the trajectory it intended to follow, and that is the part you need to learn as a skill, at least until these LLMs are more capable of thinking for themselves. Give it time.

        • shiiiiiiiiiiiiiiiiit@sh.itjust.works · 7 months ago

          I asked about a plot point that I didn’t understand in a TV series old enough to be in an LLM’s training data. ChatGPT and Perplexity both said they couldn’t find any discussions or explanations online for my particular question.

          Bard/Gemini gave several explanations, all of them featuring characters, locations, and situations from the show, but confidently bullshit and definitely impossible in the story’s world.

    • CosmoNova@lemmy.world · 7 months ago

      First I was surprised they had rolled it out already, then by how bad it was. I knew of Google’s AI blunders from their faked reveals, but I didn’t think they’d actually ship it in this state. They really just want to turn the internet into the next TV, where you don’t really get to choose when you see what, and they’re willing to crash and burn by doing so if they must. Insanity.