I don’t trust OpenAI and try to avoid using them. That being said, they have always been one of the more careful ones regarding safety and alignment.
I also don’t need you or OpenAI to tell me that hallucinations are inevitable. Here, have a read of this:
Xu et al., “Hallucination is Inevitable: An Innate Limitation of Large Language Models” (2025-02-13). http://arxiv.org/abs/2401.11817
Regarding resource usage: this is why open-weight models like those from the Chinese labs or Mistral in Europe are better. They’re much more efficient and frankly more innovative than whatever OpenAI is doing.
Ultimately though, you can’t just blame LLMs for people committing suicide. It’s a lazy excuse to avoid addressing real problems, like how society treats neurodivergent people. These are the same problems that lead to radicalization, including incels and neo-Nazis, and they were all happening well before LLM chatbots took off.
well that settles it then! you’re apparently such an authority.
pfft.
meanwhile here in reality the lawsuits and the victims will continue to pile up. and your own admitted attempts to make it safer - maybe those will stop the LLM-associated tragedies.
maybe. pfft.
I am someone who is paid to research uses and abuses of AI and LLMs in a specific field. So compared to randos on the internet like you, yeah, I could be considered an authority.

Chances are, though, you don’t actually care about any of this. You just want an excuse to hate on something you don’t like and don’t understand, and to blame it for already well-established problems. How about you actually take some responsibility for the state of your fellow human beings and do something helpful instead of being a Luddite?
nah, you’re just biased because you’re in on the enormous grift that is AI.
rofls… meanwhile, actual psychologists, people who genuinely care about mental health, are screaming about it:
https://www.psychologytoday.com/us/blog/urban-survival/202509/hidden-mental-health-dangers-of-artificial-intelligence-chatbots
I’m a luddite because your machine that takes gigawatts and tons of water to give kids advice on how to kill themselves isn’t working out?
yeesh. you’re an AI techbro whose disgusting bubble is about to pop, and whatever strange headcanon you’re telling yourself is a delusion.
you should seek help.
A techbro? Do you think I work for some big company? I am a PhD student, motherfucker.
whoop-de-dooh! good luck with your phd in techbro. and your AI girlfriend.