- Big Tech is lying about some AI risks to shut down competition, a Google Brain cofounder has said.
- Andrew Ng told The Australian Financial Review that tech leaders hoped to trigger strict regulation.
- Some large tech companies didn’t want to compete with open source, he added.
Why do you think Sam Altman is always using FUD to push for more AI restrictions? He already got his data collection, so he wants to make sure "Open"AI is the only game in town and prevent any future competitor from obtaining the same amount of data they collected.
Still, I have to give Zuck his credit here: the existence of open models like Llama 2 that can be fine-tuned and run locally has really put a damper on OpenAI’s plans.
“Ng said the idea that AI could wipe out humanity could lead to policy proposals that require licensing of AI”
Otherwise stated: Pay us to overregulate and we’ll protect you from extinction. A Mafia perspective.
Right?! The lines are obvious. They’d only try this if they thought they could get away with it, and they might, actually.
They don’t want to compete with open source. Yeah, that’s not new.
Lol how? No seriously, HOW exactly would AI ‘wipe out humanity’???
All this fear mongering bollocks is laughable at this point, or it should be. Seriously there is no logical pathway to human extinction by using AI and these people need to put the comic books down.
The only risks AI poses are to traditional working patterns, which have always been exploited to further a numbers game between billionaires (and their assets). These people aren’t scared of losing their livelihoods, but of losing the ability to control yours. Something that makes life easier and more efficient, requiring less work? Time to crack out the whips, I suppose?
Working in a corporate environment for 10+ years I can say I’ve never seen a case where large productivity gains turned into the same people producing even more. It’s always fewer people doing the same amount of work. Desired outputs are driven less by efficiency and more by demand.
Let’s say Ford found a way to produce F150s twice as fast. They’re not going to produce twice as many, they’ll produce the same amount and find a way to pocket the savings without benefiting workers or consumers at all. That’s actually what they’re obligated to do, appease shareholders first.
I mean, I don’t want an AI to do my job. They don’t have to pay the AI, and in a lot of places food and housing aren’t seen as a human right but as a privilege you’re allowed if you have the money to buy it.
Just wait and see
Remindmebot! 10 years
With Google’s annual revenue from its search engine estimated at around $70 to $80 billion, no wonder Big Tech is so concerned about the numerous AI tools out there that could spell an end to that fire hose of sweet, sweet monetization.
Well, it’s not like that money will just go away; someone’s still going to be monetizing everything.
These dudes are convinced AI is gonna wipe us out despite the fact it can’t even figure out the right number of fingers to give us.
We’re so far away from this being a problem that it never will be, because climate change will have killed us all long before the machines have a chance to.
People may argue that AI is quickly improving on this, but it will take a massive leap to get from a perfect diffusion model to an Artificial General Intelligence. Fundamentally, those aren’t even the same kind of thing.
But AI as it is today can already cause a lot of harm simply by taking over jobs that people need to make a living, in the absence of something like UBI.
Some people say this kind of Skynet fearmongering is nothing but another kind of marketing for AI investors. It makes its developments seem much more powerful than they actually are.
I’m not saying it’s not a problem that we will have to deal with, I’m just saying the apocalypse is gonna happen before that, and for different reasons.
Even with the terrible climate-based disasters our recklessness will bring to our future, humanity won’t face complete extermination. I don’t think we get to escape our future issues so easily.
That’s the point, they don’t believe it’s gonna wipe us out, it’s just a convenient story for them
File this under duuuuuuhhhhh
meanwhile, capitalism is setting the world on fire
The way capitalism may use current AI to cut off a lot of people from any chance at a livelihood is much more plausible and immediately concerning than any machine apocalypse.
The tech companies did not invent the AI risk concept. Culturally, it emerged out of 1990s futurism.
Karel Čapek wrote about it in 1920
Yeah, but I mean the AI risk stuff that people like Steve Omohundro and Eliezer Yudkowsky write about.
Not just that, but also that.
The Google Brain cofounder is not Big Tech?
Imo, Andrew Ng is actually a cool guy. He started coursera and deeplearning.ai to teach ppl about machine/deep learning. Also, he does a lot of stuff at Stanford.
I wouldn’t put him in the corporate shill camp.
I took several of his classes. At the very least he’s an excellent instructor.
He really is. He’s one of those rare instructors that can take the very complex and intricate topics and break them down into something that you can digest as a student, while still giving you room to learn and experiment yourself. In essence, an actual master at his craft.
I also agree with the comment that he doesn’t come across as the corporate shill type, much more like a guy that just really loves ML/AI and wants to spread that knowledge.
Same, I went from kind of understanding most of the concepts to grokking a lot of it pretty well. He’s super good at explaining things.
There was a lot of controversy about him overworking the people working for him, and he publicly doubled down, defending the need for those hours.
Edit: here is a source where they warn new hires that 70-hour weeks are normal. Unfortunately I did not find another source; it’s from Twitter/X: https://twitter.com/betaorbust/status/908890982136942592
This looks like it’s from the AI Fund thing he is a part of, but it seems like they took that part out. I have never worked for any of those companies, so idk 🤷‍♂️.
Ok, you know what? I’m in…
If all the crazy people in the world collectively stopped spending their crazy energy on sky wizards and climate skepticism and put it all into AI doomerism, I legitimately think the world might be a better place.
I’ve read enough scifi to know that AI is a credible risk that we shouldn’t be too laissez-faire with…
Looks like we’re on the gently rising part of the AI vs. time graph. It’s going to explode, seemingly overnight. Not worried about machines literally kicking our ass, but the effects are going to be wild in 100,000 different ways. And wholly unpredictable.
For us Gen Xers who straddled the digital divide: your turn, Gen Z. Godspeed.
Obviously that’s part of the equation. All of these people with massive amounts of wealth, power, and influence push for horrific shit primarily because it’ll make them a fuck ton of money, and the consequences won’t hit until they’re gone, so fuck it.
I think AI has the potential to be incredibly beneficial to people. Why do you think it’s “horrific shit?”