- cross-posted to:
- [email protected]
My company is like this. They literally have a feature in the roadmap called AI, and say we have to do something with it because our competitors do.
deleted by creator
Really learning from the Arabs.
Al Project
Al Company
Al Product
Al gebra - ‘the reunion of broken parts’
AI Qaeda
(I’m so sorry)
What have you done
What xd
The word “al” means “the” in Arabic - for example, “al jazeera” means “the island”. And a lower case L looks like a capital i, so “AI” is visually indistinguishable from “Al”. So the joke is that people who try to shoehorn Artificial Intelligence into everything look like they’re speaking in Arabic.
I love the detail that she put “+ AI” on both sides of the equation so that it’s still technically correct regardless of what the AI stands for.
Sometimes it helps solve an equation by adding zero.
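Not even a joke - completing the square is literally the "add zero" trick, adding and subtracting the same term:

```latex
x^2 + 6x + 4 \;=\; x^2 + 6x + \underbrace{9 - 9}_{=\,0} + 4 \;=\; (x+3)^2 - 5
```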
This, this shit is why I would never have made it as a mathematician.
You say that but every time you make a void function that takes no arguments all the mathematicians in a 12 mile radius combust.
That’s why in python you should never write 0 when False is clearly the superior choice.
[1, 2, 4, 5][False]
1
Holy shit
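This works because in Python `bool` is a subclass of `int`, so `False` and `True` behave as `0` and `1` anywhere an integer is expected - including list indexing. A quick demonstration:

```python
# In Python, bool is a subclass of int: False == 0 and True == 1.
values = [1, 2, 4, 5]

# Indexing with a bool works exactly like indexing with 0 or 1.
first = values[False]   # same as values[0]
second = values[True]   # same as values[1]

print(first, second)            # 1 2
print(isinstance(False, int))   # True
print(False + True + True)      # 2 -- bools participate in arithmetic
```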
Wait until you hear about solving 0/0 formulas in calculus.
Are you talking about L’Hôpital’s rule? Or something else I’m not aware of?
Yep that one. Funny, my courses were in french and we wrote it hospital rule while you just went ahead with the full french word, accent and all.
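For anyone who skipped that lecture: the rule says that for a 0/0 (or ∞/∞) limit you may differentiate numerator and denominator separately. The textbook example:

```latex
\lim_{x \to 0} \frac{\sin x}{x}
\;\overset{0/0}{=}\;
\lim_{x \to 0} \frac{\cos x}{1} = 1
```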
Le
my funding = 0
= -100M + 100M
= (the money I’ll never make back for the investors) + (the shit I’ll blow on AC and top gaming computers and stuff)
I’m old enough to have gone through a number of these technology bubbles, so much so that I haven’t paid much attention to them for a fair while. This AI bs feels a bit different, though. It seems to me that lots more people have completely lost their minds this time.
Like all bubbles, this too will end up in the same rubbish heap.
Because rich morons think they’ll get free digital slaves out of it. Because they’re rich morons who do not understand anything they ask for.
it’s maybe because chatbots incorporate, accidentally or not, elements of what makes gambling addiction work on humans https://pivot-to-ai.com/2025/06/05/generative-ai-runs-on-gambling-addiction-just-one-more-prompt-bro/
the gist:
There’s a book on this — Hooked: How to Build Habit-Forming Products by Nir Eyal, from 2014. This is the how-to on getting people addicted to your mobile app. [Amazon UK, Amazon US]
Here’s Eyal’s “Hook Model”:
- First, the trigger is what gets you in - e.g., you see a chatbot prompt and it suggests you type in a question.
- Second is the action - e.g., you do ask the bot a question.
- Third is the reward - and it’s got to be a variable reward. Sometimes the chatbot comes up with a mediocre answer - but sometimes you love the answer! Eyal says: “Feedback loops are all around us, but predictable ones don’t create desire.” Intermittent rewards are the key tool to create an addiction.
- Fourth is the investment - the user puts time, effort, or money into the process to get a better result next time. Skin in the game gives the user a sunk cost they’ve put in.

Then the user loops back to the beginning. The user will be more likely to follow an external trigger - or they’ll come to your site themselves looking for the dopamine rush from that variable reward.
Eyal said he wrote Hooked to promote healthy habits, not addiction — but from the outside, you’ll be hard pressed to tell the difference. Because the model is, literally, how to design a poker machine. Keep the lab rats pulling the lever.
chatbot users are also attracted to their terminally sycophantic and agreeable responses, and also some users form parasocial relationships with motherfucking spicy autocomplete, and also chatbots were marketed to management types as a kind of futuristic status symbol - if you don’t use it you’ll fall behind and then you’ll all see. people get a mix of gambling addiction/fomo/parasocial relationship/being dupes of a multibillion dollar advertising scheme and that’s why they get so unserious about their chatbot use
and also, separately, the core of openai and anthropic and probably some other companies is made of cultists that want to make a machine god, but that’s an entirely different rabbit hole
like with any other bubble, money for it won’t last forever. most recently disney sued midjourney for copyright infringement, and if they set legal precedent, they might wipe out all of these drivel-making machines for good
and also some users form parasocial relationships with motherfucking spicy autocomplete,
I am officially slain and unironically think this may actually be the beginning of the decline of humanity
Because AI is mostly built for tech outsiders. They genuinely thought that digital art, composing music on the computer, programming, etc. was just a matter of telling the computer what to do. I remember around 2015 someone asking where to choose art styles in Photoshop, and what to tell the PC to make it draw something. Even I as a child thought you just had to type “please draw me a car” into the Commodore 64 for it to draw you a car, without all the pixel art.
I tend to call these “normie tech”. Tech that is built for non-enthusiasts, which has negative consequences for everyone else, and even fools some enthusiasts into worshipping it. If only I had foreseen the dangers of overly centralized social media…
I remember being excited as a kid to do stuff like creating games, creating music, video editing, only to find out how hard, tedious and laborious it is. From the outside it looks like the computer does all the work, but in reality the computer only assists and the artist/programmer does all the work.
That’s because there’s a non-zero amount of actual functionality. ChatGPT does some useful stuff for normal people. It’s accessible.
Contrast that to crypto, which was only accessible to tech folks and barely useful, or NFT which had no use at all.
Ok, I guess to be fair, the purpose of NFT was to separate chumps from their money, and it was quite good at that.
There are pretty great applications in medicine. AI is an umbrella term that includes working with LLMs, image processing, pattern recognition and other stuff. There are fields where AI is a blessing. The problem is, as JohnSmith mentioned, it’s the “solar battery” of the current day. At one point they had to make and/or advertise everything with solar batteries, even stuff that was better off with… batteries. Or the good ol’ plug. Hopefully, it will settle down in a few years’ time and they will focus on areas where it is more successful. They just need to find out which areas those are.
There are pretty great applications in medicine.
Like what? I discussed this just 2 days ago with a friend who works in public healthcare and is bullish about AI, and the best he could come up with was DeepMind’s AlphaFold, which is yes interesting, even important, and yet in a way “good old-fashioned AI” as has been the case for the last half century or so, namely a team of dedicated researchers, actual humans, focusing on a hard problem, throwing state-of-the-art algorithms and some compute resources at it… but AFAICT there is no significant medical research that made a significant change through “modern” AI like LLMs.
The first thing that comes to my mind is cancer screening. I had to look it up because I can’t always trust my memory, and I thought there was some AI involved in the RNA sequencing research for the Covid vaccine, but I actually remembered wrong.
Skimmed through the article and I found it surprisingly difficult to pinpoint what “AI” solution they actually covered, despite going as far as opening the supplementary data of the research they mentioned. Maybe I’m missing something obvious so please do share.
AFAICT they are talking about using computer vision techniques to highlight potential problems alongside the non-annotated image.
This… is great! But I’d argue this is NOT what “AI” is hyped about at the moment. What I mean is that computer vision and statistics have been used, in medicine and elsewhere, with great success, and I don’t see why they wouldn’t be applied. Rather, I would argue the hype in AI at the moment is about LLMs and generative AI. AFAICT (but again I had a hard time parsing this paper for anything specific), none of it is using that.
FWIW I did specify in my post that my criticism was about “modern” AI, not AI as a field in general.
I’m not at that exact company, but a very similar one.
It’s AI because we essentially just take early scans from people who are later diagnosed with respiratory illnesses and use them to train a neural network to recognise early signs that a human doctor wouldn’t notice.
The actual algorithm we started with and built upon is basically identical to one of the algorithms used in generative AI models (the one that takes an image, does some maths wizardry on it and tells you how close the image is to the selected prompt). Of course we heavily modify it for our needs, so it’s pretty different in the end product. We’re not feeding its output back into a denoiser, and we have a lot of cognitive layers and some other tricks to bring the reliability up to a point where we can actually use it, but it’s still at its core the same algorithm.
What I heard so far was about advanced pattern recognition for scans (MRI, CT etc) to reduce oversights, and in documents to detect potential patterns relevant for epidemiologists (a use that’s very controversial since it requires all medical documents of citizens to be centralized and available unencrypted). Also some scientists seem to praise purpose-built machine learning technology for specialised tasks (those are not LLMs though).
Yeah, that’s what I do for work. It can detect respiratory diseases or even tumours from scans long before even the best human doctor reliably could, and our work has already saved hundreds of lives, and we’re still only just rolling it out. It’s legitimately going to revolutionise medicine.
Please help on https://lemmy.world/post/31304750/17675245
Awesome, truly love to hear that. 🥰
Question out of curiosity, even though that isn’t exactly what you’re working on: Do you think the technology could eventually also be used to detect what might be referred to as “latent cancer cells”, that can’t be destroyed by the body but also didn’t grow into tumors yet due to the body fighting it?
Asking because that’s what happened to me years ago. I had high inflammatory markers for over 1.5 years with no doctors being able to tell what the heck was going on. Then one day an angry lymphoma appeared that required 4 aggressive chemo cycles and 14 days of radiotherapy to get rid of, even though it was stage 1. If AI tech could detect those “latent cancer cells” (or some biomarkers caused by them) before tumors appear… that would be phenomenally awesome.
Honestly I have no idea. I’m on the programming side, so I don’t really have much medical knowledge, but if I remember I’ll ask someone when I’m in the office on Wednesday.
@[email protected] Remind me 2 days
AI as in “Artificial Intelligence” has existed for decades and is quite useful - and specialized uses of LLMs can extend that. Although AI the buzzword for generative AI is new, and often wrong, being built to give the form of an answer rather than the reality of one.
Can’t believe I’m doing this… but here I go, actually defending cryptocurrency/blockchain :
… so yes there are some functionalities to AI. In fact I don’t think anybody is saying 100% of it is BS and a scam, rather… just 99.99% of the marketing claims during the last decade ARE overhyped if not plain false. One could say the same for crypto/blockchain, namely that SQLite or a random DB is enough for most people, BUT there are SOME cases where it might actually be somehow useful, ideally not hijacked by “entrepreneurs” (namely VC tools) who only care about making money but not what the technology could actually bring.
Now anyway, both AI & crypto use an inconceivable amount of resources (energy, water, GPUs and dedicated hardware, real estate, R&D top talent, human resources for dataset annotation including very VERY gruesome ones, etc), so even if in 0.01% of cases they are actually useful, one still must ask: is it worth it? Is it OK to burn literally tons of CO2eq… to generate an image that one could have done quite easily another way? To summarize a text?
IMHO both AI & crypto are not entirely useless in theory yet in practice have been :
- hijacked by VCs and grifters of all kinds,
- abused by pretty terrible people, including scammers and spammers,
- absolutely underestimated in terms of resource consumption and thus ecological and societal impact
So… sure, go generate some “stuff” if you want to but please be mindful of what it genuinely costs.
i think you’ve got it backwards. the very same people (and their money) who were deep into crypto went on to new buzzword, which turns out to be AI now. this includes altman and zucc for starters, but there’s more
In programming, AI has real applications. I have personally refactored code and designed services via ChatGPT, doing in hours what would have taken me days; it’s just good at it. For non-techies though, I can’t say.
Possibly through ignorance or misunderstanding, but I still think the tech behind NFTs may have some function, though it’s certainly not the weird speculation market of badly colored-in monkey pictures that happened there.
You know I’ve been saying this for years now and not a single post I’ve put up along those lines has EVER been in the positive upvote zone, here or reddit
NFTs are digitally enforceable contracts that can do literally everything a traditional binding legal contract can do and a whole fucktonne of other things on top of it
The whole ‘just pictures on a server somewhere’ is the TINIEST slice of functionality that NFT frameworks provide.
It’s like getting a really well crafted leatherman multitool but only ever using the toothpick for everything
It could potentially work for DRM, in that you can have a key assigned to an identity that can later be transferred and not be dependent on a particular marketplace.
For example, you could buy a copy of whatever next year’s Call of Duty game will be, and have the key added to your NFT wallet. Then you could play it on XBox, Playstation, Steam, or GOG with that single license.
Of course that will never happen because that’d be more consumer friendly than we have now.
There are a fucktonne of applications
Fully automatic rentals where your NFT is your key to access
Protection for small time content creators who want to retain control of their content.
Virtually abuse proof copyright system
Game items and characters that are not bound to the game they originate from
Automatic IP rights assignments
Frictionless software and service licensing
Literally anything a standard contract can do
Basically functioning as a digital proof of purchase.
As a digital proof of purchase that can be frictionlessly traded without the permission of the platform it was purchased from.
I.e. you don’t need permission from the site you bought the ticket from to trade that ticket to someone else
It seems to me that lots more people have completely lost their minds this time
That’s not really an AI thing, that’s just… everything.
The internet did not end up in the trash heap after the dot com bubble burst. AI too has real world uses that go beyond the current planet-wrecking bubble.
My company, while cutting back elsewhere, has dedicated a few million to AI projects over the next couple years. Not “projects to solve X business problem.” Just projects that use AI.
So of course now, anything that is automated in any way is now being touted as AI. Taking data from one system and populating another? That’s AI.
For anyone who thought this wasn’t real, I present to you: https://xnote.ai/
To be fair though, the features of that pen look really useful if you’re into analog note-taking.
The world is curb stomping satire…
Where can I get something like this minus any of the AI?
I would love to take handwritten notes and have them appear on my phone for safekeeping.
Have you tried the Rocketbook?
Lol pretty much
AI is such a loose term that calling anything with if-else statements “AI” wouldn’t be lying (I learned about decision trees in my university machine learning class and those are just giant nested if-else statements)
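To illustrate (a toy sketch, not from any real model - the feature names and thresholds are made up): once you write a trained decision tree out by hand, it really is just nested if-else. In practice a training algorithm like CART picks the splits from data, but the resulting model looks like this:

```python
def decision_tree(humidity: float, cloud_cover: float) -> str:
    """A 'machine learning model' that is literally nested if-else.

    Thresholds are invented for illustration -- in a real system a
    training algorithm (e.g. CART) would learn them from data.
    """
    if humidity > 0.7:
        if cloud_cover > 0.5:
            return "umbrella"
        else:
            return "maybe umbrella"
    else:
        if cloud_cover > 0.9:
            return "maybe umbrella"
        else:
            return "no umbrella"

print(decision_tree(0.8, 0.6))  # umbrella
print(decision_tree(0.3, 0.1))  # no umbrella
```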
Taking data from one system and populating another? That’s AI.
Well, it is. You just have to go back enough in time to find the context when people still called it so.
Gotta use those automatic computers full of electronic brains to do all those tasks that used to take years on rooms full of people with chemical brains hired as computers!
cp, now with Al!
AI: “I know you typed cp, but I’m sure you meant rm…”
More like “cp? That’s a violation of my ethical constraints and you have been reported to the authorities.”
Really though, about a quarter of my work related coding queries come back with “redacted to meet responsible AI guidelines”.
It’s an AI specifically for code. Apparently it thinks half the stuff I do is hacking.
Is it Copilot? That thing began to censor stuff a while ago based on the Trump regime’s wordlist. Suddenly it stopped working when it read “trans” (even as part of “transcode” or sth), among others. I’d bet it still has some “anti-DEI” nonsense in their guidelines.
This is exactly what my master’s thesis feels like ATM: all the attention is on the AI crap, also because the uni gets grants on the topic. Everything else just dies.
E = mc² + AI
FTFY
E + AI = mc² + AI
That’s an interesting equation, good job for finding that ☺️ You truly are a remarkable scientist, just like Einstein.
📄Would you like me to write a research paper on that equation for you?
So much in this beautiful equation
E+ai=mc2+ai Now good
As always, the solution was adding more AI. bravo
It was equivalent from the start if we assume AI = 0
Except AI is random, so we can’t assume it will offer the same answer on each side. AI causes the normal rules of math (and facts) to break 🤪🤯.
deleted by creator
Give me money. AI-AI-AI.
seeking for
- looking for
- seeking
You need to pick a lane, my dude.
multi-track drifting! also if you can understand another person, isn’t that the whole point of communication?
It’s frustrating to translate from what they said to what they mean. It’s more effort on my part and this is my free time, I don’t want to work.
Just communicate as clearly as you can.
I understand, but people also have very different standards of communication clarity. There are a lot of hidden assumptions, even when you’re trying to be 100% clear. Sometimes people can’t put their thoughts into words, or they don’t have the capacity for what you think is clarity. And in this case it’s just a very minor mistake. The person might not be native, or they may have been failed by their education system, or they might just be tired or stressed. There are lots of valid reasons why communication can degrade.

I’m a bit autistic and struggle with ambiguous meaning or communication that doesn’t fit patterns I’m used to, sometimes to a truly irrational degree. I’d like for others to speak my language more so I can understand them better, and I’d like to be able to speak their language more, to make them understand me better, but it’s just sort of the way of life. People are very fluid beings, not at all tied to rigid logic. People are also all very different, and their efforts all come in different forms. They emphasize different things, focus on different things, not just communication efficiency.

What I’ve learned too with other autistic people is that everyone’s standards for communication clarity are different. I don’t think you can speak a universal language that everybody understands perfectly 100% of the time. What does happen is that people who talk to each other often learn each other’s language, able to talk more concisely and efficiently, but you can’t really expect that of strangers on the internet. Of course “birds of a feather flock together”, as they say. People in the same internet communities might have the same interests, consume the same media, have the same discussions with the same people. But there’s no getting around communication degrading. In the worst case you just have to ask someone what they mean, maybe clearly explain your issue with the ambiguities, and wait for disambiguation.
Learning to ask precise questions so as to elicit the best response from someone, to immediately get the answer you seek, is also a lifelong challenge. It’s not worth getting upset about a single instance of degraded communication, if you can even call it that. I’d be more upset with the universe for making us all so very different.
Corrections are how we reduce lingual entropy. Being corrected shouldn’t be embarrassing or shameful, we should welcome corrections so we can be better understood.
Language is collaborative, we’re always working to be better understood and to help each other be better understood.
If no one was ever corrected about anything, language would drift so badly we’d lose the ability to communicate. Try reading Old English; before standardization people would just do whatever they wanted. It ranges from barely legible to gibberish.
This seems like such a strange take. You make it sound like it’s cost you effort to translate the error, but how are you quantifying that effort? If effort efficiency is something you’re striving for, it doesn’t feel like it makes sense to correct the mistake (which costs effort to do)
The gap between the two - what they said and what they meant - seems so small it probably took more “work” to correct them.
I’d go as far as to say that the work to correct them will never be repaid by the saved effort of not having to encounter this particular mistake from this particular person ever again.
It’s more effort than a straight read.
I didn’t correct anyone, by the way. I’m just a different person griping about how much it sucks to have to communicate with people who don’t care about being understood.
And you’re right, correcting people is even more work! So on top of the work of translating their stupid post we now have to tell them they were wrong so they don’t do this to us again. If they aren’t ever corrected they’ll just keep being wrong and we’ll have to keep translating their posts.
The alternative is to block them so we never see their posts ever again, which honestly is a better idea. It’s not like we’re missing out.
But you don’t have to say anything - the mistake is so easily corrected in your own head that the effort expended in correcting/defending corrections massively outweighs any additional effort it took to “translate”
You try to make it sound like you’re being rational about the effort expended here, but without quantifying the effort to translate and the effort to correct, you’ve got no way of knowing whether it’s the right course of action. To me, it’s clear it isn’t.
Also, is there really a need to call the post stupid? Seems like unnecessary effort to me
The mistakes compound. It starts with one, but if no mistakes are ever corrected then it won’t just be this one. I’d rather we don’t create a new dialect. So, let’s just nip it in the bud, correct all simple mistakes and ensure communications remain clear for everyone. It’s not even a big deal, someone just pointed out a minor mistake.
I called it stupid because you made a big deal about this, and I got emotional. It really isn’t! It’s just a small correction and we could have all moved on, but no, you had to die on this molehill and now I’m going to ruin my day being mad at this stupid fucking bullshit.
We should all work together to be understood. It’s good that people help each other communicate more clearly.
Microwave now with AI
I work in actual ML research and even I think it’s stupid
Microwave now with AI
Especially when you know that’s stupid :-P
Reminds me of the insane LinkedIn post where a brilliant person was sharing their new equation which was essentially word + buzzword + AI.
deleted by creator
AI in, AI out, simple math
You can’t explain that.
Money me, money me now
Me now money give