Imagine this scenario: you’re worried you may have committed a crime, so you turn to a trusted advisor — OpenAI’s blockbuster ChatGPT, say — to describe what you did and get its advice.
This isn’t remotely far-fetched; lots of people are already getting legal assistance from AI, on everything from divorce proceedings to parking violations. Because people are amazingly stupid, it’s almost certain that some have already asked the bot for advice on enormously consequential matters — say, murder or drug charges.
According to OpenAI CEO Sam Altman, anyone who’s done so has made a massive error — because unlike a human lawyer, with whom you enjoy sweeping confidentiality protections, ChatGPT conversations can be used against you in court.
In this case, he’s right.
So? He’s not a lawyer. If this is obvious to non-lawyers, just say it. Or do they have to report everything he says on any topic?
“According to OpenAI CEO Sam Altman, pizza is good!”
That’s what he did.