Sam Altman, CEO of OpenAI, speaks at the meeting of the World Economic Forum in Davos, Switzerland. (Denis Balibouse/Reuters)
Worldcoin, founded by US tech entrepreneur Sam Altman, offers free crypto tokens to people who agree to have their eyeballs scanned.
What a perfect sentence to sum up 2023 with.
Considering that what we’ve decided to call AI can’t actually make decisions, that’s a no-brainer.
The “AI” term means humans are the no-brainers
Shouldn’t, but there’s absolutely nothing stopping it, and lazy tech companies absolutely will. I mean we live in a world where Boeing built a plane that couldn’t fly straight so they tried to fix it with software. The tech will be abused so long as people are greedy.
So long as people are rewarded for being greedy. Greedy and awful people will always exist, but the issue is in allowing them to control how things are run.
More than just that, they’re shielded from repercussions. The execs involved with ignoring all the safety concerns should be in jail right now for manslaughter. They knew better and gambled with other people’s lives.
They fixed it with software and then charged extra for the software safety feature. It wasn’t until the planes started falling out of the sky that they decided they would gracefully offer it for free.
Has anyone checked on the sister?
OpenAI went from interesting to horrifying so quickly, I just can’t look.
OpenAI went from an interesting and insightful company to a horrible and weird one in very little time.
People only thought it was the former before they actually learned anything about them. They were always this way.
I’m tired of dopey white men making the world so much worse.
AI shouldn’t make any decisions
So just like shitty biased algorithms shouldn’t be making life-changing decisions on folks’ employability, loan approvals, which areas get more/tougher policing, etc. I like stating obvious things, too. A robot pulling the trigger isn’t the only “life-or-death” choice that will be (is!) automated.
Ummm…no fucking shit. Who was thinking that was a good idea?
I am sure Zuckerberg is also claiming that they are not making any life-or-death decisions. Let’s see in a couple of years when the military gets involved with your shit. Oh wait, they already did, but I guess they will just use AI to improve soldiers’ canteen experience.
Fair enough. I do think AI will become a valuable tool for doctors, etc. who do make those decisions.
Using AI to inform a decision is different from letting it make decisions.
I mean, he can have his opinion on this, and I personally agree, but it’s way too late to try and stop now.
We’ve already got automated drones picking targets and killing people in the Middle East, and last I heard the newest set of US jets has AI integrated so heavily that they can opt to kill their operator in order to perform objectives.
that they can opt to kill their operator in order to perform objectives
Source?
The Air Force denies any actual casualty and claims it was ‘only a simulation’. Still problematic, assuming it stopped at a simulation: https://www.theguardian.com/us-news/2023/jun/01/us-military-drone-ai-killed-operator-simulated-test
The above AI is allegedly the core of what’s being used for these: https://www.wired.com/story/us-air-force-skyborg-vista-ai-fighter-jets/
You didn’t ask for it but these are the drones that pick their own targets: https://www.npr.org/2021/06/01/1002196245/a-u-n-report-suggests-libya-saw-the-first-battlefield-killing-by-an-autonomous-d