Imagine this scenario: you’re worried you may have committed a crime, so you turn to a trusted advisor — OpenAI’s blockbuster ChatGPT, say — to describe what you did and get its advice.
This isn’t remotely far-fetched; lots of people are already getting legal assistance from AI, on everything from divorce proceedings to parking violations. And because people are amazingly stupid, it’s almost certain that some have already asked the bot for advice on enormously consequential matters — murder or drug charges, say.
According to OpenAI CEO Sam Altman, anyone who’s done so has made a massive error — because unlike conversations with a human lawyer, which enjoy sweeping confidentiality protections, ChatGPT conversations can be used against you in court.
Couldn’t using a search engine for an answer land you in that same trap, though? Even DDG generates an AI answer automatically.
We didn’t need AI for Google search queries to show up in murder cases where people searched for how to clean blood/DNA with bleach, how to dispose of bodies, and other really stupid questions. From their home computers. Pretty much right around the time someone close to them went missing. It’s all timestamped.