Stop plugging LLMs into everything! They are designed to make up plausible-sounding nonsense.
Seems like Facebook is the right place for them then.
They said “plausible.”
LLMs are very useful for synthesizing information, e.g. summarizing long texts. Yet every company is pushing to use them to create more text, which as you say is at least partly nonsense.
It highlights the difference between what users need (quick access to accurate information) and what these companies want (to glue your eyeballs to the screen for as long as possible, e.g. by overwhelming you with information regardless of quality).
Well, they can be great at making text too, but the use case has to be very good. Right now lots of companies in the B2B space are using LLMs as a middle layer in front of chatbots and navigation systems to enhance how they function (see the sketch below). They are also being used to create unique lists and inputs for certain systems. On the consumer side, however, the picture is pretty mixed, with a lot of big companies just muddying their offerings instead of bringing any real value.
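A minimal sketch of that middle-layer pattern, in Python. Everything here is illustrative: `call_llm` is a hypothetical stand-in for whatever completion API a company actually uses, and the intent schema is invented for the example. The point is that the LLM never talks to the user directly; it only translates free-form text into a structured command that the existing system already understands.

```python
# Sketch of "LLM as middle layer": free-form user text in, structured
# command out. call_llm() is a hypothetical stand-in for a real API;
# here it returns a canned reply so the example runs on its own.
import json

def call_llm(prompt: str) -> str:
    # Stand-in for a real completion API call.
    return '{"intent": "find_store", "query": "running shoes", "city": "Boston"}'

PROMPT_TEMPLATE = (
    "Convert the user's message into JSON with keys "
    "'intent', 'query', and 'city'. Message: {message}"
)

def route(message: str) -> dict:
    raw = call_llm(PROMPT_TEMPLATE.format(message=message))
    try:
        command = json.loads(raw)
    except json.JSONDecodeError:
        # LLM output is untrusted text: fall back to a safe default
        # instead of letting malformed JSON crash the downstream system.
        command = {"intent": "unknown", "query": message, "city": None}
    return command

print(route("where can I buy running shoes near Boston?"))
# -> {'intent': 'find_store', 'query': 'running shoes', 'city': 'Boston'}
```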
There is a time and place for nonsense, and this isn't it. I guess the fact that it's plausible-sounding is the issue.
no different from what human brains do
Aside from knowledge, context, ability to reason, and spatial awareness.
All of those are just products of the same learning algorithm
Consciousness is not a computer program. Neurons don't use binary. I'd love it if we had computers that could do squirrel things perfectly, but we don't even have that.
I can appreciate that contemporary neural networks are very different from organic intelligence, but consciousness is most definitely equivalent to a computer program. There are two things preventing us from reproducing it:

1. We don't know nearly enough about how the human mind (or any mind, really) actually works, and
2. Our computers do not have the capacity to approximate consciousness with any meaningful degree of accuracy. Floating point representations of real numbers are not an issue (after all, you can always add more bits), but the sheer scale and complexity of the brain is a big one.
Also, for what it’s worth, most organic neurons actually do use binary (“one bit”) activation, while artificial “neurons” use a real-valued activation function for a variety of reasons, the biggest two being that (a) training algorithms require differentiable models, and (b) binary activation functions do not yield a lot of information per neuron while requiring effectively the same amount of memory.
Binary neurons are still neurons
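A tiny sketch of point (a) from the comment above, i.e. why training algorithms want differentiable activations. The `step` and `sigmoid` functions are generic textbook choices picked for illustration, not anything from the thread: the gradient of a hard step function is zero almost everywhere, so gradient descent gets no learning signal, while a sigmoid always has a usable slope.

```python
# Why gradient-based training wants differentiable activations:
# a hard step gives zero gradient almost everywhere, a sigmoid does not.
import math

def step(x: float) -> float:
    return 1.0 if x > 0 else 0.0

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def numerical_grad(f, x: float, eps: float = 1e-4) -> float:
    # Central-difference approximation of df/dx.
    return (f(x + eps) - f(x - eps)) / (2 * eps)

for x in (-2.0, 0.5, 2.0):
    print(f"x={x:+.1f}  step grad={numerical_grad(step, x):.4f}  "
          f"sigmoid grad={numerical_grad(sigmoid, x):.4f}")
# step grad is 0.0 everywhere except exactly at the threshold,
# while sigmoid grad is small but nonzero, so weight updates can flow.
```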