The race to have the magic box that tells you lies that you want to hear while also consuming incredible amounts of resources…why is this a race again?
The telling lies part is not good, but I think the dream of AI is a servant (or slave) with unlimited potential that can solve previously unsolvable problems. A cure for cancer? Sure, that will be $10k a pill. Eternal life? Sure, that will be a million dollars a year for all eternity. A robot army to protect you? Top of the list.
The question I have is: is the AI we see the same AI the tech bros see? Is there a public interface that is made to appear a little buffoonish so the masses can laugh it off, while the real interface is much, much better?
Those things are being solved by other forms of AI, not LLMs. AlphaFold is about the most useful thing AI has done so far and it’s not a chatbot.
We get access to entertainment AI, but there could be different forms of AI in use in medical science that have nothing to do with image or text generation.
AlphaFold’s success seems to be largely linked to its use of attention-based architecture, similar to GPT, i.e. the architecture used by LLMs. Beyond that, they are both building on work in machine learning and statistics, so I don’t think they are nearly as independent as you are making out.
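For what it’s worth, here’s a minimal sketch of the scaled dot-product attention both are built around, in plain numpy. It’s simplified to a single head with no masking or learned projections, so it’s an illustration of the shared primitive rather than either model’s actual code:

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """Minimal self-attention: each output row is a weighted average
    of the value rows, weighted by query/key similarity."""
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)               # pairwise similarities
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ v

# Toy usage: 4 tokens (or residues), 8-dimensional embeddings.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
print(scaled_dot_product_attention(x, x, x).shape)  # (4, 8)
```

GPT stacks this over text tokens; AlphaFold’s Evoformer applies variants of it over residues and sequence alignments, which is why the two lineages are hard to separate cleanly.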
Yeah, but LLM innovation now is not in more clever architectures, but rather larger and larger models with more training data.
I don’t hate the existence of LLMs but rather how they’re being shoehorned everywhere and how much power is being spent for only marginally better results.
The only answer I need from an AI is 42.
I really don’t understand this perspective. I truly don’t.
You see a new technology with flaws and just assume that those flaws will always be there and the technology will never progress.
Like. Do you honestly think this is the one technology that researchers are just going to say “it’s fine as-is, let’s just stop improving it”?
You don’t understand the first thing about how it works, but people like you are SO certain that the way it is now is how it will always be, and that because there are flaws, developing it further is pointless.
I just don’t get it.
There’s a lot of indication that LLMs are peaking. It’s taking exponentially more compute and data to get incremental improvements. A lot of people are saying OpenAI’s new model is a regression (I don’t know, I haven’t really played with it much). More foundational breakthroughs need to be made, and those kinds of breakthroughs often come from “eureka” moments that can’t be forced by just throwing more money at the problem. It’s possible it will take decades before someone discovers the next major breakthrough (or it could happen tomorrow).
Right. You don’t get it. You hear people talk about a new technology, but they haven’t actually said anything; they’re trying to sell you snake oil, and you convince yourself that you understand what they mean and that it’s somehow meaningful.
We could talk about the history of AI in software development, which goes back decades, and there are legitimate areas of research. But in the bubble people are riding right now, they’re throwing LLMs at the general public and pretending those LLMs are good enough to replace large swaths of the current workforce. That’s not going to happen, because it won’t work, because that’s not how those models are designed. And then the snake oil salesmen pull a classic bait and switch: they start talking about expert systems and minor improvements to them, as if that were something new.
But even if my prediction is wrong, what that actually means is that people shouldn’t need to work full-time jobs anymore.
To be fair, if your argument is that some day AI research will be legitimate and no longer snake oil, then you could easily be right. But there’s no good reason to think that day is going to be in the next few years, rather than the next few decades or even the next few centuries.
I’ve actually worked professionally in the field for a couple of years, since it originally interested me. I’ve built RAG architecture backends for self-hosted FOSS LLMs, I’ve fine-tuned LLMs with new data, and I’ve even taken the opposite approach and embraced the hallucinations, since I thought they could be useful for more creative tasks (I think this area still warrants research). I also enjoy TTS and STT use cases and have FOSS models for those on most of my devices.
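Since “RAG” gets thrown around a lot: here’s a minimal sketch of the retrieval step at its core, with a toy bag-of-words embedding standing in for a real embedding model. All names and documents here are illustrative, not from any production backend:

```python
import numpy as np
from collections import Counter

def embed(text, vocab):
    """Toy bag-of-words vector; a real backend would call an embedding model."""
    counts = Counter(text.lower().split())
    return np.array([counts[w] for w in vocab], dtype=float)

# A tiny stand-in for a document store.
docs = [
    "AlphaFold predicts protein structures from sequences",
    "RAG grounds a model's answers in retrieved documents",
    "TTS models turn text into speech",
]
vocab = sorted({w for d in docs for w in d.lower().split()})
doc_vecs = np.array([embed(d, vocab) for d in docs])

def retrieve(query, k=1):
    """Return the k documents most similar to the query (cosine similarity)."""
    q = embed(query, vocab)
    sims = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q) + 1e-9)
    return [docs[i] for i in np.argsort(sims)[::-1][:k]]

# The retrieved text gets prepended to the prompt before it reaches the LLM,
# so the model answers from the documents instead of from memory alone.
context = retrieve("what does rag do")[0]
print(f"Context: {context}\nQuestion: what does RAG do?")
```

The point of the architecture is exactly the grounding: the model’s job shrinks to summarizing retrieved text, which reins in (but doesn’t eliminate) hallucination.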
I’ll admit that the term AI is extremely vague. It’s like saying you study medicine; it’s a big field. But I keep coming to the conclusion that LLMs, and predictive generative models in general, simply do not work for the use cases they’re being marketed for to consumers, CEOs, and governments alike.
This " AI race" happened because Deepseek was able to create a model that was more or less equivalent to OpenAI and Anthropic models. It should have been seen as a race between proprietary and open source since deep seek is one of the more open models at that performance level. But it became this weird nationalist talking point on both countries instead.
There are a lot of things the US is actually in a race with China in. Many of which are things that would have immediate impact. Like renewable energy, international respect, healthcare advances, military sufficiency, human rights, food supplies, and afordible housing, just to name a few.
The promise of AI is that it can somehow help in the above categories eventually, and that’s cool. But we don’t need AI to make improvements to them right now.
I think AI is a giant distraction, while the talk of nationalistic races is just being used for investor buy-in.
Appreciate you expanding on the earlier comment. All fair points.
Feelings don’t care about logic. It’s that easy.
Because until now it was a race to simulate deadlier nukes. But that got silly, so they need something new to compare their pp.
Have you considered that if the world’s two superpowers are dead certain that this is an important area, willing to throw countless billions of investment into it, they might know more than you do?
Yeah. But then I remembered some historical facts, and how lobbying and vulture capital work, and decided it was unlikely.
You think venture capital dictates to the politburo what its priorities are in China?
Governments fail to implement incredibly obvious, easy, and proven solutions all the time, so yeah, they can be pretty dumb. Not to mention historical examples of governments (particularly the UK when it was a world superpower) investing their entire economies in Nigerian-prince-tier scams.
Governments are people, and people are stupid.