It’s not just a predictive text program. That’s been around for decades. That’s a common misconception.
As I understand it, it uses statistics from the whole text to create new text. It would be very rare to output “cats have feathers” because that phrase essentially never appears in the training data: the words “have feathers” just don’t follow “cats”.
But the fact remains that it doesn’t know what a cat or a feather is. All of this is still based purely on statistical frequency and not at all on actual meanings.
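For what it’s worth, the pure frequency-counting picture I mean looks roughly like this. A minimal sketch with a made-up toy corpus (real models don’t literally keep a table like this, but it illustrates why “cats have feathers” would be a vanishingly rare continuation):

```python
from collections import Counter, defaultdict

# Toy corpus standing in for training text; the sentences are invented for illustration.
corpus = ("cats have fur . cats have whiskers . "
          "birds have feathers . dogs have fur").split()

# Count which word follows each pair of words (a trigram table).
following = defaultdict(Counter)
for a, b, c in zip(corpus, corpus[1:], corpus[2:]):
    following[(a, b)][c] += 1

print(following[("cats", "have")])   # Counter({'fur': 1, 'whiskers': 1})
print(following[("birds", "have")])  # Counter({'feathers': 1})
# "feathers" never follows "cats have", so a purely frequency-based generator
# would essentially never produce "cats have feathers".
```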
> because that phrase doesn’t ever appear in the training data.
Eh, but LLMs abstract. They’ve seen “<animal> have feathers” and “<animal> have fur” plenty of times. The problem isn’t that LLMs can’t reason at all; the problem is that they employ some of the techniques used in proper reasoning, in particular tracking context throughout the text (self-attention), but lack the techniques needed for the whole thing, and instead rely on confabulation to sound convincing regardless of the BS they spout. That suffices to emulate an Etonian, but that’s not a high standard.
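The context-tracking bit is roughly the following. A minimal single-head self-attention sketch with made-up 4-token embeddings and random projection matrices (a real model has learned these, and stacks many such layers):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))          # 4 tokens, 8-dimensional embeddings (toy values)
W_q, W_k, W_v = (rng.normal(size=(8, 8)) for _ in range(3))

Q, K, V = x @ W_q, x @ W_k, x @ W_v
scores = Q @ K.T / np.sqrt(K.shape[-1])   # how strongly each token attends to every other token

# Causal mask: a token may only attend to itself and earlier tokens,
# which is the "tracking context throughout the text" part.
mask = np.triu(np.ones_like(scores), k=1).astype(bool)
scores[mask] = -np.inf

weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)
out = weights @ V                          # context-mixed token representations

print(np.round(weights, 2))   # row i: how token i weights tokens 0..i
```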
This isn’t really accurate either. At the moment of generation, an LLM only has the input context, the tokens it has already produced, and the statistics baked into its weights. From those it assigns a probability to every token in its vocabulary and picks the next one from that “pool”, nothing more.
Most LLM samplers expose settings called “top-k”, “top-p” and so on. Top-k keeps only the k most probable next tokens given the input and everything generated so far; top-p keeps the smallest set of tokens whose combined probability reaches p. The model then randomly picks one of the surviving tokens, with the temperature setting controlling how peaked or flat that probability distribution is.
It’s why, if you turn these models’ temperature settings really high, they output pure nonsense both conceptually and grammatically: the distribution over next tokens gets flattened so far that the thread linking the previous context to the next token loses any semblance of cohesiveness.
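Here’s roughly what that selection step looks like. A minimal sketch with an invented toy vocabulary and scores (names and numbers are made up; a real sampler works on logits over tens of thousands of tokens):

```python
import math, random

# Toy next-token logits; the vocabulary and scores are made up for illustration.
logits = {"fur": 4.0, "whiskers": 3.0, "feathers": 1.0, "scales": 0.5, "wings": 0.2}

def sample_next_token(logits, temperature=1.0, top_k=None, top_p=None):
    # Temperature rescales the logits: low T sharpens the distribution,
    # high T flattens it toward uniform randomness.
    scaled = {tok: score / temperature for tok, score in logits.items()}

    # Softmax turns scores into probabilities.
    m = max(scaled.values())
    exps = {tok: math.exp(s - m) for tok, s in scaled.items()}
    total = sum(exps.values())
    probs = sorted(((tok, e / total) for tok, e in exps.items()),
                   key=lambda x: x[1], reverse=True)

    # Top-k: keep only the k most probable tokens.
    if top_k is not None:
        probs = probs[:top_k]

    # Top-p (nucleus): keep the smallest prefix whose cumulative probability >= p.
    if top_p is not None:
        kept, cum = [], 0.0
        for tok, p in probs:
            kept.append((tok, p))
            cum += p
            if cum >= top_p:
                break
        probs = kept

    # Renormalise and randomly pick one of the surviving tokens.
    total = sum(p for _, p in probs)
    return random.choices([tok for tok, _ in probs],
                          weights=[p / total for _, p in probs])[0]

print(sample_next_token(logits, temperature=0.7, top_k=3, top_p=0.9))
print(sample_next_token(logits, temperature=5.0))  # high temperature: near-uniform pick
```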
Your “ackshyually” is missing the point.