Amazing, it’s getting closer to human intelligence all the time!
The more I talk to people the more I realize how low that bar is. If AI doesn’t take over soon, we’ll kill ourselves anyways.
I mean, I could argue that it learned not to piss off stupid people by showing them math the stoopids didn’t understand.
It all comes down to the fact that LLMs are not AGI - they have no clue what they’re saying or why or to whom. They have no concept of “context” and as a result have no ability to “know” if they’re giving right info or just hallucinating.
Hey, but if Sam says it might be AGI he might get a trillion dollars so shut it /s
ChatGPT went from high school student to boomer brain in record time.
Kind of a clickbait title
“In March, GPT-4 correctly identified the number 17077 as a prime number in 97.6% of the cases. Surprisingly, just three months later, this accuracy plunged dramatically to a mere 2.4%. Conversely, the GPT-3.5 model showed contrasting results. The March version only managed to answer the same question correctly 7.4% of the time, while the June version exhibited a remarkable improvement, achieving an 86.8% accuracy rate.”
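For what it’s worth, the quoted claim that 17077 is prime is easy to verify yourself. A minimal sketch using plain trial division (no dependencies; the `is_prime` helper is just for illustration):

```python
def is_prime(n: int) -> bool:
    """Return True if n is prime, via trial division up to sqrt(n)."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2  # only odd candidates need checking
    return True

print(is_prime(17077))  # prints True: 17077 is indeed prime
```

So the question the study asked the models has an unambiguous right answer, which is what makes the March-to-June accuracy swing so striking.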
Not everything is clickbait. Your explanation is great, but the tittle is not lying, it’s just a simplification. Titles can’t contain every detail of the news, they’re still tittles, and what the tittle says can be confirmed in your explanation. The only thing I would’ve done differently is specify that it was a GPT-4 issue.
Click bait would be “chat gpt is dying” or so.
Tittles are the little dots above i and j, that’s why you weren’t autocorrected. You’re looking for “title” though.
Thanks for pointing out, I actually learned something.
I think that’s title not tittle
Mmmmm, titt les
Oversimplified to the point of lying you could say
Originally, it was people answering the questions. Now it’s the actual tech doing it Lmao
AI fudging is notoriously common. Just ask anyone who lived in the 3rd world what work was like in their country and they’ll come alive with stories of how many times they were approached by big tech companies to roleplay as an AI.
It’s often still people in developing countries answering the questions.
Perhaps this AI thing is just a sham and there are tiny gnomes in the servers answering all the questions as fast as they can. Unfortunately, there are not enough qualified tiny gnomes to handle the increased workload. They have begun to outsource to the leprechauns who run the random text generators.
Luckily the artistic hypersonic orcs seem to be doing fine…for the most part
This is a result of what is known as oversampling. When you zoom in really close and make one part of a wave look good, it makes the rest of the wave go crazy. This is what you’re seeing; the team at OpenAI tried super hard to make a good first impression and nailed that, but then once some time started to pass things started to quickly fall apart.
The AI feels good, much slower than before
I am wondering why it adds up to exactly 100%. There has to have been some creative data handling with these numbers.
Maybe the article/stats were generated using another LLM?
People like you are why Mt. Everest had two feet added to its actual height so as to not seem too perfect.
No I’m not. Why would I use feet to measure a mountain’s height?
Peak XV (measured in feet) was calculated to be exactly 29,000 ft (8,839.2 m) high, but was publicly declared to be 29,002 ft (8,839.8 m) in order to avoid the impression that an exact height of 29,000 feet (8,839.2 m) was nothing more than a rounded estimate.