Okay, what is your definition of AI then, if nothing burned onto silicon can count?
If LLMs aren’t AI, then probably nothing up to this point counts either.
If a seagull is stealing chips from someone, odds are there are plenty of other seagulls around to witness their compatriot getting merked.
Seagulls understand that stealing from humans is risky - that’s why they generally do it very quickly. The ones who fail suffer consequences for their failure, same as stealing food from any other creature. It’s the risk/reward calculation any scavenger has to make.
Sometimes they calculate incorrectly. They get forcibly removed from the gene pool.
Of course, it’s also illegal in a lot of countries to harm seagulls, so in that sense, he was in the wrong anyways.
No. Artificial intelligence has to imitate intelligent behavior - the way Pac-Man’s ghosts imitate how, ostensibly, a ghost trapped in a maze and hungry for yellow circular flesh would behave, and the way CS1.6 bots imitate the behavior of intelligent players. They artificially reproduce intelligent behavior.
Which means LLMs are very much AI. They are not, however, AGI.
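To make the ghost example concrete, here is a minimal sketch of that kind of rule-based game "AI" (the function name and grid setup are my own illustration, not from any actual Pac-Man code): the agent has no understanding at all, it just reproduces chasing behavior.

```python
# A minimal rule-based "AI": the ghost greedily steps toward the player,
# which *looks* like intentional pursuit without any real intelligence.

def ghost_step(ghost, player):
    """Move the ghost one grid cell along its larger distance axis toward the player."""
    gx, gy = ghost
    px, py = player
    if abs(px - gx) >= abs(py - gy):
        gx += (px > gx) - (px < gx)   # step left/right toward the player
    else:
        gy += (py > gy) - (py < gy)   # step up/down toward the player
    return (gx, gy)

pos = (0, 0)
for _ in range(5):
    pos = ghost_step(pos, (3, 2))
print(pos)  # (3, 2): the ghost has closed in on the player
```

A handful of if-statements is enough to read as "behavior" - which is exactly the sense in which the term AI has always been used in games.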
It would be fine if it had more ways to differentiate elements from each other - darkening around the edges of windows, buttons that actually look raised so they aren’t identical to a text box, scroll bars that aren’t SO FUCKING TINY that it’s clear MS is embarrassed that they exist in the first place, etc. etc.
but but but how are the corporations supposed to make money off of our data if they can’t harvest it? Think of the poor corporations!!
The problem is that before LLMs, people had to actually put forward some effort to produce content on the internet, which at least kept the amount of thoughtless content down somewhat. Now the barrier to entry is practically zero, and the tools thieve people’s hard work without compensation while burning ridiculous amounts of resources to do so.
It is super interesting tech though.
It has nothing to do with that, and much more to do with people on 4chan being willing to call each other out. Without toxic behavior you can’t have examples of how to deal with toxic behavior.
No, it’s still accurate - the straight line goes through the center of the Earth. Only in geometries where ‘straight’ is defined as following the curvature of a surface are there infinitely many lines between the North and South Poles… and that’s non-Euclidean geometry.
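The two notions of "straight" give measurably different pole-to-pole distances. A quick sketch (using the standard mean Earth radius of ~6371 km as an assumption, treating the Earth as a perfect sphere):

```python
import math

R = 6371.0  # approximate mean Earth radius in km (assumed, sphere model)

north = (0.0, 0.0, R)
south = (0.0, 0.0, -R)

# Euclidean straight line: the chord through the planet's center, length 2R.
chord = math.dist(north, south)

# Surface "straight line" (any one of infinitely many meridians): half the
# circumference, length pi * R.
meridian = math.pi * R

print(chord)     # ~12742 km through the core
print(meridian)  # ~20015 km along the surface
```

The chord is unique; the meridians are not - which is the whole point of the comment above.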
…what?
In order for that to be true, the entire dataset would need to be contained within the LLM. It is not - if it were, there would be no need for training at all; you could just store the data as-is.
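A back-of-envelope comparison makes the point. All figures below are illustrative assumptions for a generic model, not measurements of any specific LLM:

```python
# Illustrative, assumed figures: training corpora are commonly described in
# the terabyte range, while the weights of a ~10B-parameter model stored at
# 16 bits each occupy only tens of gigabytes.

dataset_bytes = 10e12        # assume ~10 TB of training text
params = 10e9                # assume a 10-billion-parameter model
bytes_per_param = 2          # assume 16-bit weights

model_bytes = params * bytes_per_param
print(model_bytes / dataset_bytes)  # 0.002 -> the model is ~0.2% of the dataset's size
```

Whatever the exact numbers for a given model, the weights are orders of magnitude smaller than the corpus, so the corpus cannot be sitting inside them verbatim.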
You seem to be mistaking ‘intelligence’ for ‘human-like intelligence’. That is not how AI is defined. AI can be dumber than a gnat, but if it’s capable of making decisions based on stimuli without each stimulus-and-decision pair being directly coded into it, then it’s AI. That’s the difference between what is ACTUALLY called AI and what a sci-fi show or novel means when it talks about AI.
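The distinction drawn above can be sketched in a few lines (the stimuli, labels, and the 1-nearest-neighbor chooser here are my own toy illustration): a lookup table needs every stimulus-decision pair coded in, while even a trivially dumb learner can decide on stimuli it has never seen.

```python
# Hard-coded table: every stimulus->decision pair must be written in by hand.
lookup = {(0, 0): "flee", (10, 10): "approach"}

def nearest_neighbor_decide(stimulus, examples):
    """Return the known stimulus closest (squared distance) to the new one (1-NN)."""
    return min(examples, key=lambda s: sum((a - b) ** 2 for a, b in zip(s, stimulus)))

examples = {(0, 0): "flee", (10, 10): "approach"}

novel = (8, 9)                        # a stimulus nobody coded a response for
print(lookup.get(novel))              # None: the table simply has no answer
nearest = nearest_neighbor_decide(novel, examples)
print(examples[nearest])              # "approach": a decision, however dumb
```

By the definition in the comment above, only the second behavior counts as AI - not because it is smart, but because the decision wasn’t directly coded in.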