Some AI models get more accurate at maths if you ask them to respond as if they are a Star Trek character, ML engineers say::Researchers asking a chatbot to optimize its own prompts found it was best at solving grade-school math when acting like it was on Star Trek.
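For context, the setup the headline describes boils down to: prepend a persona instruction to a grade-school math word problem and check whether the answer changes. Here is a minimal sketch of that comparison, assuming a hypothetical `ask_llm(prompt)` helper standing in for whatever chat-model API you actually use; the personas, question, and grading are illustrative, not the researchers' actual prompts or benchmark.

```python
# Sketch: compare "persona" system prompts on a grade-school math question.
# ask_llm() is a placeholder for whatever chat-completion API you actually call.

def ask_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to your chat model and return its reply.
    Replace the canned string below with a real API call."""
    return "48 tribbles, Captain."

PERSONAS = {
    "plain":     "Answer the question.",
    "star_trek": "You are a Starfleet officer on the bridge of the Enterprise. "
                 "Answer the question, Captain.",
    "pirate":    "Ye be a pirate. Answer the question, matey.",
}

QUESTION = (
    "A shuttle carries 4 crates with 12 tribbles each. "
    "How many tribbles are aboard? Give only the final number."
)

EXPECTED = "48"

def run_comparison() -> None:
    # Ask the same question under each persona and do a crude substring check
    # on the expected final number.
    for name, persona in PERSONAS.items():
        reply = ask_llm(f"{persona}\n\n{QUESTION}")
        correct = EXPECTED in reply
        print(f"{name:>10}: {('correct' if correct else 'wrong'):7} | {reply!r}")

if __name__ == "__main__":
    run_comparison()
```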
AI models get more dishonest if you ask them to respond as if they were a pirate: https://www.circusscientist.com/2023/11/13/i-hired-a-pirate-to-take-orders-for-my-entertainment-business/
“Answer as if you’re a tribble.”
But then all it can do is multiply.
That’s troubling.
Reverse the polarity!
It is only logical that an algorithm trained in the ways of a Vulcan is precise and accurate in its operation and communication. Vastly more fascinating are the results when you ask it to behave like a human.
Doh. This says to have the AI write the prompt for you, but it doesn’t give any examples of doing that.
I don’t want to go down a rabbit hole looking up examples from the wider internet, but the gist of that step is sketched below.
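For the curious: the "have the AI write the prompt" step is usually just a meta-prompt. You show the model the task, the current prompt, and its score, ask it to propose a better prompt, then re-score and keep the winner. A minimal sketch of that loop follows; `ask_llm` and `score_prompt` are hypothetical placeholders (you would wire them to your own model API and a small math eval set), not anything taken from the article.

```python
# Sketch: let the model rewrite its own system prompt, keep whichever scores best.
# ask_llm() and score_prompt() are placeholders; swap in your own API and eval set.

def ask_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to your chat model and return its reply."""
    return "You are a Starfleet science officer. Show your work, then the answer."

def score_prompt(system_prompt: str) -> float:
    """Placeholder: run the prompt over a small math eval set, return accuracy 0..1."""
    return 0.0

def optimize_prompt(seed_prompt: str, rounds: int = 5) -> str:
    best_prompt, best_score = seed_prompt, score_prompt(seed_prompt)
    for _ in range(rounds):
        # Meta-prompt: ask the model to improve on the current best prompt.
        candidate = ask_llm(
            "You are optimizing a system prompt for grade-school math word problems.\n"
            f"Current prompt (accuracy {best_score:.0%}):\n{best_prompt}\n\n"
            "Propose one improved prompt. Reply with the prompt text only."
        )
        candidate_score = score_prompt(candidate)
        if candidate_score > best_score:
            best_prompt, best_score = candidate, candidate_score
    return best_prompt

if __name__ == "__main__":
    print(optimize_prompt("Answer the math question."))
```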
It’s because it doesn’t try to answer right, but more what it thinks you want to see/read 🤷
LLM detractors hate this one weird trick
We’ve finally figured out how to trick the computer that’s bad at math into being less bad at math.