• MotoAsh@lemmy.world · +9/-1 · 10 months ago

      The more I talk to people, the more I realize how low that bar is. If AI doesn’t take over soon, we’ll kill ourselves anyway.

    • Dasus@lemmy.world · +1 · 10 months ago

      I mean, I could argue that it learned not to piss off stupid people by showing them math the stoopids didn’t understand.

  • Limeey@lemmy.world · +47/-3 · 10 months ago

    It all comes down to the fact that LLMs are not AGI: they have no clue what they’re saying, or why, or to whom. They have no concept of “context” and as a result have no ability to “know” whether they’re giving the right info or just hallucinating.

  • UnRelatedBurner@sh.itjust.works · +29/-10 · 10 months ago

    Kind of a clickbait title

    “In March, GPT-4 correctly identified the number 17077 as a prime number in 97.6% of the cases. Surprisingly, just three months later, this accuracy plunged dramatically to a mere 2.4%. Conversely, the GPT-3.5 model showed contrasting results. The March version only managed to answer the same question correctly 7.4% of the time, while the June version exhibited a remarkable improvement, achieving an 86.8% accuracy rate.”

    source: https://techstartups.com/2023/07/20/chatgpts-accuracy-in-solving-basic-math-declined-drastically-dropping-from-98-to-2-within-a-few-months-study-finds/
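
    For reference, 17077 is indeed prime, which is easy to verify directly. Here’s a quick trial-division check in Python (an illustrative sketch, not code from the study):

        # Trial division up to sqrt(n) -- plenty for a five-digit number like 17077.
        def is_prime(n: int) -> bool:
            if n < 2:
                return False
            if n % 2 == 0:
                return n == 2  # 2 is the only even prime
            d = 3
            while d * d <= n:
                if n % d == 0:
                    return False
                d += 2  # even divisors were already ruled out
            return True

        print(is_prime(17077))  # prints: True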

    • angrymouse@lemmy.world · +35/-3 · 10 months ago

      Not everything is clickbait. Your explanation is great, but the title isn’t lying; it’s just a simplification. Titles can’t contain every detail of the news, they’re still titles, and what this one says is confirmed by your explanation. The only thing I’d have done differently is specify that it was a GPT-4 issue.

      Clickbait would be “ChatGPT is dying” or something like that.

  • shiroininja@lemmy.world · +15/-2 · 10 months ago

    Originally, it was people answering the questions. Now it’s the actual tech doing it, lmao.

    • Omega_Haxors@lemmy.ml · +4 · 10 months ago (edited)

      AI fudging is notoriously common. Just ask anyone who has lived in the third world what working in their country was like, and they’ll come alive with stories of how many times they were approached by big tech companies to role-play as an AI.

  • helpImTrappedOnline@lemmy.world · +11 · 10 months ago

    Perhaps this AI thing is just a sham and there are tiny gnomes in the servers answering all the questions as fast as they can. Unfortunately, there are not enough qualified tiny gnomes to handle the increased workload. They have begun to outsource to the leprechauns who run the random text generators.

    Luckily the artistic hypersonic orcs seem to be doing fine…for the most part

  • Omega_Haxors@lemmy.ml · +6/-2 · 10 months ago

    This is a result of what is known as oversampling. When you zoom in really close and make one part of a wave look good, it makes the rest of the wave go crazy. That’s what you’re seeing here: the team at OpenAI tried super hard to make a good first impression and nailed that, but once some time passed, things quickly started to fall apart.

  • EarMaster@lemmy.world · +4/-2 · 10 months ago

    I am wondering why it adds up to exactly 100% (97.6% + 2.4%). There has to have been some creative data handling with these numbers.

    • Gabu@lemmy.world · +2/-1 · 10 months ago (edited)

      People like you are why Mt. Everest had two feet added to its actual height, so as not to seem too perfect.