Record numbers of people are turning to AI chatbots for therapy, reports Anthony Cuthbertson. But recent incidents have uncovered some deeply worrying blind spots in a technology out of control
Naturally, the guardrails cannot cover every possible use case, but they can cover most of the known potentially harmful scenarios under normal, common circumstances. If the companies won’t do it themselves, legislation can push them to, for example by making them liable if their LLM does something harmful. Regulating AI is not anti-AI.
I feel the guardrails are in place, and that they will be continuously improved. That said, if someone found a situation where an AI suggested they kill themselves without being prompted, say during a brainstorm about strawberry cake consistency ("if you were dead you wouldn’t have this problem"), that would be… concerning.