• Mountain_Mike_420@lemmy.ml · 7 months ago

    You just haven’t gaslit your AI into saying the glue thing. If you keep trying with things like “what about non-toxic glue?” or “aren’t there glues designed for humans?”, the AI will eventually give in and recommend the glue. Don’t give up. Glue is good for us.

  • Ghostalmedia@lemmy.world · 7 months ago

    I imagine Google was quick to update the model to not recommend glue. It was going viral.

    • Franklin@lemmy.world · edited · 7 months ago

      The main issue is that standalone Gemini answers from its training data, while the version answering your search summarizes search results, which vary in quality. And since it’s just a predictive text model, it can’t really fact-check what it summarizes.
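
      To make that concrete, here’s a minimal, self-contained sketch of the two modes (every name here is a toy stand-in, not Google’s actual pipeline):

      ```python
      def model_generate(prompt: str) -> str:
          """Toy stand-in for an LLM: emits likely-looking text, verifies nothing."""
          return f"[plausible continuation of: {prompt[:60]}]"

      def web_search(query: str, top_k: int = 3) -> list[str]:
          """Toy stand-in for a search backend returning result snippets."""
          return [f"snippet {i} about {query!r} (quality unknown)" for i in range(top_k)]

      def answer_from_training(question: str) -> str:
          # Standalone-Gemini mode: the answer comes purely from training data.
          return model_generate(question)

      def answer_from_search(question: str) -> str:
          # Search-overview mode: summarize whatever the search returned.
          # A joke post gets summarized as confidently as a reliable source,
          # because next-token prediction has no fact-checking step.
          return model_generate("Summarize: " + " | ".join(web_search(question)))

      print(answer_from_training("is glue safe on pizza?"))
      print(answer_from_search("is glue safe on pizza?"))
      ```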

      • Balder@lemmy.world · 7 months ago

        Yeah, when you use Gemini, sometimes it’ll just answer based on its training, and sometimes it’ll cite a source after a search, but you can’t seem to control which. It’s not like Bing, which always summarizes and links to where it got the information.

        I also think Gemini probably uses some sort of knowledge graph under the hood, because it sometimes has very up-to-date information.

        • Petter1@lemm.ee · 7 months ago

          I think Copilot is way more usable than this hallucinating Google AI…

    • efstajas@lemmy.world · edited · 7 months ago

      You can’t just “update” a model to stop saying one specific thing with pinpoint accuracy like that, which is one of the reasons it’s so challenging to make AI not misbehave.

  • istanbullu@lemmy.ml · 7 months ago

    These are statistical models, meaning you’ll get a different answer each time, as well as different answers depending on context.

    • BradleyUffner@lemmy.world · 7 months ago

      Not exactly. The answers would be identical given the exact same inputs if some random jitter weren’t deliberately injected into the algorithm, specifically to avoid getting the same answer every time.
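
      As a rough illustration of that deliberate jitter (a toy sketch of temperature-style sampling, not any vendor’s actual decoder):

      ```python
      import random

      # Toy next-token distribution the model might assign at one step.
      probs = {"cheese": 0.70, "sauce": 0.25, "glue": 0.05}

      def pick_greedy(probs: dict[str, float]) -> str:
          # Deterministic decoding: always take the most likely token,
          # so identical inputs always yield identical outputs.
          return max(probs, key=probs.get)

      def pick_sampled(probs: dict[str, float]) -> str:
          # Sampled decoding: draw in proportion to probability. This is
          # the injected randomness; identical inputs can now produce
          # different outputs on different runs.
          tokens, weights = zip(*probs.items())
          return random.choices(tokens, weights=weights, k=1)[0]

      print(pick_greedy(probs))                        # 'cheese', every time
      print([pick_sampled(probs) for _ in range(5)])   # varies run to run
      ```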

      • EmoDuck@sh.itjust.works · 7 months ago

        That jitter is present automatically, because different people get different search results, so it’s not really intentional or purposeful.

  • Lvxferre@mander.xyz · 7 months ago

    I’m almost sure they use the same model for Gemini and for the A“I” answers, so patching the “put glue on pizza” answer for one also patches it for the other.

    • Balder@lemmy.world · 7 months ago

      Nope, it’s because in Search it was summarizing the top results, while “pure Gemini” isn’t doing a search at that point; it’s just answering based on what it knows.

  • Retiring@lemmy.ml · 7 months ago

    Ask it five times if it’s sure. You can usually get it to say outrageous things this way.

  • IsThisAnAI@lemmy.world · edited · 7 months ago

    Y’all are losing your minds intentionally misunderstanding what happened with the glue. Y’all are becoming anti-AI lemons just looking for rage bait.

    The AI doesn’t need to be perfect, just better than the average person. That’s why the shitty Tesla self-driving has such good accident rates despite the fuck-ups everyone loves to rage about in the news cycle.

    • catloaf@lemm.ee · 7 months ago

      The average person isn’t going to recommend putting glue on pizza.