Imagine this scenario: you’re worried you may have committed a crime, so you turn to a trusted advisor — OpenAI’s blockbuster ChatGPT, say — to describe what you did and get its advice.

This isn’t remotely far-fetched; lots of people are already getting legal assistance from AI, on everything from divorce proceedings to parking violations. And because people are amazingly reckless, it’s almost certain that some have already asked the bot about enormously consequential matters like murder or drug charges.

According to OpenAI CEO Sam Altman, anyone who’s done so has made a massive error: unlike conversations with a human lawyer, which enjoy sweeping confidentiality protections, ChatGPT conversations can be used against you in court.

  • No_Eponym@lemmy.ca · 2 days ago
    Very good point. It seems crazy that this keeps happening, right? My current favourite hypothesis is from “Power and Progress” by Daron Acemoglu and Simon Johnson:

    If everybody becomes convinced that artificial-intelligence technologies are needed, then businesses will invest in artificial intelligence, even when there are alternative ways of organizing production that could be more beneficial.

    Add to that sunk cost (these firms invested in this tech, and maybe fired the paralegals who used to do this work, so they need to use the tech) and fundamental attribution error (those other lawyers failed with AI because of something fundamental to who they are; I only fail when external factors get in my way) and you get a recipe for seemingly irrational behaviour on repeat.