

This seems to me like just a semantic difference though. People say the LLM is “making shit up” when it outputs something incorrect, and as far as I know that usually happens because the information you’re asking about wasn’t represented well enough in the training data to reliably steer the answer toward it.
In any case, there is an expectation from users that LLMs can somehow be deterministic, when they’re not at all. They’re deep learning models so complex that it’s impossible to predict what effect a small change in the input will have on the output. So a model could give the expected answer to one question and a very unexpected one just because a word was added or changed in the input, even if that change seems irrelevant.
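A rough way to see that sensitivity for yourself (a minimal sketch, assuming the Hugging Face transformers library and the small gpt2 checkpoint, which are my choices here, not anything from the comment above): decode greedily from two prompts that differ by one seemingly irrelevant word and compare the continuations.

```python
# Minimal sketch of prompt sensitivity, assuming the Hugging Face
# `transformers` library and the `gpt2` checkpoint (illustrative
# choices; any causal LM would show the same effect).
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Two prompts that differ by one word a human would call irrelevant.
prompts = [
    "The capital of Australia is",
    "So, the capital of Australia is",
]

for prompt in prompts:
    inputs = tokenizer(prompt, return_tensors="pt")
    # Greedy decoding (do_sample=False) is deterministic for a fixed
    # prompt, yet the continuation can still flip between prompts.
    output = model.generate(
        **inputs,
        max_new_tokens=10,
        do_sample=False,
        pad_token_id=tokenizer.eos_token_id,
    )
    print(repr(tokenizer.decode(output[0], skip_special_tokens=True)))
```

Even with sampling turned off, the two continuations can differ completely, which is exactly the “small change, unexpected answer” effect described above.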
If that’s all you need, I’d say most countries 😆
Salaries outside the US aren’t gonna be as high, but the cost of living is also much, much lower. As a Brazilian, I actually save more than 50% of my salary (while having a middle-class lifestyle), and was able to buy an apartment without any debt at 30.
Also, if you go anywhere in Latin America, you’ll see that the average diet is quite healthy.