Trust in AI technology and the companies that develop it is dropping, in both the U.S. and around the world, according to new data from Edelman shared first with Axios.
Why it matters: The move comes as regulators around the world are deciding what rules should apply to the fast-growing industry. “Trust is the currency of the AI era, yet, as it stands, our innovation account is dangerously overdrawn,” Edelman global technology chair Justin Westcott told Axios in an email. “Companies must move beyond the mere mechanics of AI to address its true cost and value — the ‘why’ and ‘for whom.’”
Trust in AI is falling because the tools are poor - they're half-baked and rushed to market in a gold rush. AI makes glaring errors and lies - euphemistically called "hallucinations" - and these are fundamental flaws which make the tools largely useless. How do you know if it's telling you a correct answer or hallucinating? Why would you use such a tool for anything meaningful if you can't rely on its output?
On top of that, AI companies have been stealing data from across the Web to train tools which essentially remix that data to create "new" things. That AI art is based on many hundreds of works by human artists which have "trained" the algorithm.
And then we have the Gemini debacle where the AI is providing information based around opaque (or pretty obvious) biases baked into the system but unknown to the end user.
The AI gold rush is nonsense and the inflated share prices will pop. AI tools are definitely here to stay, and they do have a lot of potential, but we're in the early days of a messy, rushed launch that has damaged people's trust in these tools.
If you want an example of the coming market bubble collapse, look at Nvidia - its value has exploded and it's making lots of profit. But it's driven by large companies stockpiling its chips to "get ahead" in the AI market. Problem is, no one has managed to monetise these new tools yet. It's all built on the assumption that this technology will eventually reap rewards, so "we must stake a claim now", and then speculative shareholders jump into said companies to have a stake. But people only need so many unused stockpiled chips - Nvidia's sales will drop again and so will its share price. They already rode out boom and bust with the Bitcoin miners; they will have to do the same with the AI market.
Anyone remember the dotcom bubble? Welcome to the AI bubble. The burst won’t destroy AI but will damage a lot of speculators.
You missed another point: companies shedding employees and replacing them with "AI" bots.
As always, the technology is a great start in what’s to come, but it has been appropriated by the worst actors to fuck us over.
The issue being that when you have a hammer, everything is a nail. Current models have good use cases, but people insist on using them for things they aren’t good at. It’s like using vice grips to loosen a nut and then being surprised when you round it out.
I mean it’s cool and all but it’s not like the companies have given us any reason to trust them with it lol
Good. I hope that once companies stop putting AI in everything because it’s no longer profitable the people who can actually develop some good tech with this can finally do so. I have already seen this play out with crypto and then NFTs, this is no different.
Once the hype around being able to make worse art with plagiarised materials and talking to a chatbot that makes shit up dies down, companies looking to cash in on the trend will move on.
The difference is that AI has some usefulness while cryptocurrencies don't.
Crypto has usefulness related to data transparency and integrity, but not as a speculative investment or for scams - just like AI is being used for shitty art and confidently incorrect chatbots.
Blockchain technology =/= Cryptocurrency
But I agree with you, the blockchain technology is amazing for transparency and integrity.
At one point I agreed, but not anymore. AI is getting better by the day and is already useful for tons of industries. It's only going to grow and become smarter. Estimates already suggest most energy produced around the world will go to AI in our lifetime.
The current LLM version of AI is useful in some niche industries where finding specific patterns matters, but how it's currently popularised is the exact opposite of where it's useful. A very obvious example is how it's accelerating search engines becoming useless: it's already hard to find accurate info due to the overwhelming amount of AI-generated articles with false info.
Also how is it a good thing that most energy will go to AI?
LLMs should absolutely not be used for things like customer support; that's the easiest way to give customers wrong info and aggravate them. For reviewing documents LLMs have been abysmally bad.
For grammar they can be useful, but what they're actually best at is things like biochemistry: molecular analysis and creating protein structures, for example.
I work in an office job that has tried to incorporate AI but so far it has been a miserable failure except for analysing trends in statistics.
An LLM is terrible for molecular analysis. AI can be used for that, but not an LLM.
True AI doesn't currently exist; "AI" is just what LLMs are being called right now. Also, they have been successfully used for this and show great promise so far, unlike the hallucinating chatbots.
AGI (Artificial General Intelligence) doesn't exist; that's what people think of from sci-fi, like Data or HAL. LLMs, or Large Language Models like ChatGPT, are the hallucinating chatbots - they're just more convincing than previous generations. There are lots of other AI models that have been used for years to solve large data problems.
good.
Only an idiot would not have seen from the start that this would be stupid for a long time.
Anyone past the age of 30 who isn't skeptical of the latest tech hype cycle should probably get a clue. This has happened before; it'll happen again.
So people are catching up to the fact that the thing everyone loves to call "AI" is nothing more than phone autocorrect on steroids? Electronics that can only execute a set of commands in order aren't going to develop a consciousness like the term implies. And the very same crypto/NFT bros have moved onto it so they can have some new thing to hype and, in the case of the latter group, keep stealing from artists?
Good.
I mean, the thing we call "AI" nowadays is basically just a spell-checker on steroids. There's nothing really to trust or distrust about the tool specifically. It can be used in stupid or nefarious ways, but so can anything else.
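To make the "autocomplete on steroids" framing concrete, here's a minimal sketch of the idea underneath both phone autocomplete and LLMs: predicting the next token from statistics over training text. The corpus, function name, and bigram approach here are purely illustrative (real LLMs use neural networks over billions of tokens, not word counts), but the training objective is the same kind of thing.

```python
from collections import Counter, defaultdict

# Toy training text - illustrative only, standing in for a real corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which (a simple bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation seen in training, or None."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" - it followed "the" most often in training
```

The model has no idea what a cat is; it just emits whatever continuation was most common in its training data. Scaled up enormously, that's the mechanism being debated here.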
Took a look and the article title is misleading. It says nothing about trust in the technology and only talks about not trusting companies collecting our data. So really nothing new.
Personally I want to use the tech more, but I get nervous that it’s going to bullshit me/tell me the wrong thing and I’ll believe it.
“Trust in AI” is layperson for “believe the technology is as capable as it is promised to be”. This has nothing to do with stupidity or nefariousness.
basically just a spell-checker on steroids.
I cannot process this idea of downplaying this technology like this. It does not matter that it’s not true intelligence. And why would it?
If it is convincing to most people that information was learned and repeated, that’s smarter than like half of all currently living humans. And it is convincing.
ThE aI wIlL AttAcK HumaNs!! sKynEt!!
Edit: These “AI” can even make a decent waffles recipe and “it will eradicate humankind”… for the gods sake!!
It isn't even AI at all; the way corps named it is just clickbait.
AI is just a very generic term and always has been. It's like saying "transportation equipment", which can be anything from roller skates to the space shuttle. Even the old checkers programs were described as AI in the fifties.
Of course a vague term is a marketeer’s dream to exploit.
At least with self driving cars you have levels of autonomy.
Before ChatGPT was revealed, this was under the umbrella of what AI meant. I prefer to use established terms. Don't change the terms just because you want them to mean something else.
Well sure, why would the world aspire to fully automated luxury communism without the communism? Just fully automated luxury economy for rich people and nothing for everyone else?
The problem is very few people with strong opinions live by them. The people you see hating AI are doing so because it's threatening capitalism. And yes, I know there's a fundamental misunderstanding about how "tech bros own AI" which leads people to mistakenly think being against AI is fighting against capitalism, but that doesn't stand up to reality.
I make open source software, so I actually do work against capitalism in a practical way. Using AI has helped increase the rate and scope of my work considerably, and I'm certainly not the only one - the dev communities are full of people talking about how to get the most out of these tools. Like almost all devs in the open source world, I create things I want to exist and think can benefit people; the easier this is, the more stuff gets created and the more tools exist for others to create.
I want everyone to have design tools that allow them to easily make anything they can imagine. Being able to all work together on designing open source devices like washing machines and cars would make the monopoly capitalism model crumble - especially when AI makes it ever easier to transition from CAD to CAM tools, plus with sensor and CV quality control we can ensure the quality of the final product to a much higher level than people are used to. You'll be able to have world-class FLOSS designs, the product of thousands of people's passion, fabricated locally by your independent creator of choice, or on your own machines if you have the tooling.
This is already happening with sites like Thingiverse, but AI makes the whole process much easier, especially with search and discovery tools which let you ask "what are my options for adding x?"
All the push from people trying to invent crazy rules to ensure only the rich and nation states can have AI is probably driven in part by a campaign by the rich to defend capitalism. Putting a big price barrier on AI training will only stop open source projects. That's why we need to be wary of good-sounding "pay creators" type proposals: they wouldn't result in any "creator" getting more than five dollars, or stop any corporate or government AI from getting made, but they would put another roadblock in the way of open source AI tools.
There was any trust in (so-called) “AI” to begin with?
That’s news to me.
I don't get all the negativity on this topic, especially comparing current AI (the LLMs) to the nonsense of NFTs etc. Of course, one would have to be extremely foolish/naive, or a stakeholder, to trust the AI vendors. But the technology itself, while not rock-solid, is genuinely useful in many, many use cases. It is an absolute positive productivity booster in these and enables use cases that were not possible or practical before. The one I have the most experience with is programming and programming-related stuff such as software architecture, where the LLMs absolutely shine, but there are others. The current generation can even self-correct without human intervention. In any case, even if this were the only use case ever, it would absolutely change the world and bring positive boosts in productivity across all industries - unlike NFTs.
I totally agree…hold on I got more to say, but one of those LLMs has been following me for the past two weeks on a toy robot holding a real 🔫 weapon. Must move. Always remember to keep moving.
What's sad is that one of the next great leaps in technology could have been something interesting and profound. Unfortunately, capitalism gonna capitalize, and companies were so thirsty to make a buck off it that we didn't do anything to properly and carefully roll out our next great leap.
Money really ruins everything.
It's the opposite for me. The early versions of LLMs and image generators were obviously flawed, but each new version has been better than the previous one, and this will be the trend in the future as well. It's just a matter of time.
I think that’s kind of like looking at the first versions of Tesla FSD and then concluding that self driving cars are never going to be a thing because the first one wasn’t perfect. Now go look at how V12 behaves.