- cross-posted to:
- [email protected]
cross-posted from: https://infosec.pub/post/36262288
Malicious payloads stored on Ethereum and BNB blockchains are immune to takedowns.
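For context on why these payloads resist takedowns: once the bytes live in contract storage, anyone can read them back with a free, read-only RPC call served by any node, so there is no single host to seize. A minimal sketch of such a read, assuming a hypothetical contract address and getter name (placeholders, not the actual contracts from the article), using web3.py:

```python
# Sketch: reading attacker-stored data back out of a smart contract.
# The address and ABI below are hypothetical placeholders.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://bsc-dataseed.binance.org"))  # a public BNB RPC

CONTRACT = "0x0000000000000000000000000000000000000000"  # placeholder address
ABI = [{
    "name": "getPayload",  # hypothetical getter name
    "type": "function",
    "stateMutability": "view",
    "inputs": [],
    "outputs": [{"name": "", "type": "string"}],
}]

contract = w3.eth.contract(address=CONTRACT, abi=ABI)
payload = contract.functions.getPayload().call()  # read-only eth_call: no gas, no on-chain trace
print(payload)
```

Removing the data would mean rewriting the chain itself, which is exactly the property the attackers are exploiting.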
Web 3.0 has always been a joke. AI has more actual uses.
They are both useful and both jokes.
Depends what exactly we are talking about.
Utility is born of necessity, and it’s true that every joke needs a punchline.
AI has no use. It only subtracts value and creates liabilities.
AI != chatbots
Just saying.
I think there’s a point where you have to realize the topic of discussion is LLMs like ChatGPT, and that point was around the time it was compared to Web 3.0, something people hate and associate with tech bros and evil corporations.
The meaning of words changes based on context.
Y’all need to understand that AI is coming and is going to replace a lot of things. I don’t know why some of you keep pretending it has no use case. This tech is going to leave you behind if you don’t use it.
LLMs have been at a standstill since 2021. I would argue the current models were around in the late ’80s; they’re just using more compute now. But it’s being marketed as the future to confuse a billion dopes like you who don’t understand technology. It’s the ultimate Ponzi scheme: the companies are making no money, but their valuations keep rising.
To clarify: OpenAI published a paper showing their models would never reach human output accuracy. They showed that getting the same jump in quality from GPT-3 to GPT-4 as from GPT-2 to GPT-3 would take an EXPONENTIALLY larger amount of resources, which was borne out in practice when they actually did it a couple of years later. Improving it again would cost more power than mankind currently produces in total, and the end result would still be hallucinating, liability-filled garbage, because in 2022 DeepMind showed that even with LITERALLY INFINITE POWER AND TRAINING DATA it would not reach human output, and that the hard limit didn’t even reach the mid-90s.
You are arguing with the AI companies and researchers. Y’all need to understand that AI, as it is, is a fucking scam.
The OpenAI paper (Kaplan et al., “Scaling Laws for Neural Language Models”): https://arxiv.org/pdf/2001.08361
The follow-up DeepMind paper (Hoffmann et al., “Training Compute-Optimal Large Language Models”): https://arxiv.org/pdf/2203.15556
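For the record, the “hard limit” in those papers shows up as an irreducible term in a power-law loss fit. A sketch of the parametric form from the DeepMind paper, with the constants they report (treat the exact values as approximate):

```latex
% Parametric loss fit from Hoffmann et al. (arXiv:2203.15556).
% N = model parameters, D = training tokens, E = irreducible loss:
% the floor that remains even as N and D go to infinity.
\[
  L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}},
  \qquad
  \lim_{N, D \to \infty} L(N, D) = E
\]
% Reported fit: E ~ 1.69, A ~ 406.4, B ~ 410.7, alpha ~ 0.34, beta ~ 0.28.
```

Shrinking the two power-law terms takes exponentially more parameters and data for each constant step down in loss, and no amount of either touches E.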
There is a point when one can just admit they are wrong, or twist words to convince themselves they were right.
wow thanks for that /s
I’m no AI fan by any means, but it’s really good at pointing you in a direction, or rather, introducing you to topics you didn’t know how to start researching.
I often find myself asking: “Hey AI, I want to do this very specific thing but I don’t really know what it’s called, can you help me?” And sure enough, I get a starting point, so I can close it down and search on my own.
Otherwise, trying to learn anything in depth from it is just a footgun.
^(edit: typo)
I’m seconding this and adding to it. AI is terrible for factual information but great at relative knowledge and reframing.
I use it as a starting point in writing research when I can’t get relevant search results. Most recently, I asked it about urban legends in modern-day Louisiana and got a list for more in-depth searches; most were accurate.
It’s good at mocking up accents and patterns of speech relative to a location/time as well.
Unfortunately, an LLM lies about 1 in 5 to 1 in 10 times (80% to 90% accuracy), with a hard limit shown by the OpenAI and DeepMind research papers above: even with infinite power and resources it would never approach human language accuracy. Add to that the fact that the model is trained on human inputs, which are themselves flawed, so you multiply the model’s error rate by an average person’s rate of being wrong.
In other words, you’re better off browsing forums and asking people, or finding books on the subject, because the AI is full of shit and you’re going to be one of those idiot sloppers everybody makes fun of: you won’t know jack shit and you’ll be confidently incorrect.
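To make the compounding claim concrete, here is a toy calculation; the 85% and 80% figures are made up for illustration, not taken from either paper:

```python
# Toy illustration of compounding error rates (numbers are made up).
model_accuracy = 0.85   # hypothetical chance the LLM reproduces a claim correctly
source_accuracy = 0.80  # hypothetical chance the humans it trained on were right

# Treating the two failure modes as independent, the chance a generated
# claim is both faithfully reproduced and true in the first place:
combined = model_accuracy * source_accuracy
print(f"combined accuracy: {combined:.0%}")  # -> combined accuracy: 68%
```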
No way the vast majority of people are getting things right more than 80% of the time. On their own trained tasks, sure, but random knowledge? Nope. The AI holds a more intelligent conversation than most of humanity. That says a lot about humanity.
You literally don’t understand.
The human statements are the baseline, right or wrong, and the AI struggles to maintain numbers over 80% of that baseline.
Take however often a person is wrong and multiply it: that’s AI. They like to call it “hallucination,” and it will never, ever go away. In fact it will get worse, because AI output has already polluted the datasets it pulls from, producing ever-worse output, like noise coming from an amp in a feedback loop.
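As a toy model of that feedback loop (purely illustrative; the retention factor is made up, not measured):

```python
# Toy model of quality decay when each model generation trains on the
# previous generation's output. Both numbers are hypothetical.
retention = 0.9   # fraction of accuracy kept per generation (made up)
accuracy = 0.80   # starting accuracy (made up)

for generation in range(1, 6):
    accuracy *= retention
    print(f"generation {generation}: {accuracy:.0%}")
# 72%, 65%, 58%, 52%, 47%: geometric decay, like gain in a feedback loop
```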
They just explained how to use AI in a way where “truth” isn’t relevant.
And I explained why that makes them a moron.
How would I search for something when I don’t know what it’s called? As I explained, the AI’s only job is to tell me “hey, this thing X exists”, and after that I go look for it on my own.
Why am I a moron? Isn’t it the same as asking another person and then doing the heavy lifting yourself?
^(edit: typo)
That was your previous example. You had a very specific thing in mind, meaning you knew what to search for from reputable sources. There are tons of ways to discover previously unknown things, all of which are better than being a filthy, stupid slopper.
“Hey AI, can you please think for me? Please? I need it, idk what to do.”
No you didn’t. You just made a completely irrelevant point about truth.
I was about to say don’t insult Web 3.0, but you’re actually right. AI at least has useful applications to begin with.