• Boddhisatva@lemmy.world · 1 year ago

    OpenAI discontinued its AI Classifier, which was an experimental tool designed to detect AI-written text. It had an abysmal 26 percent accuracy rate.

    If you ask this thing whether a given piece of text is AI-generated, and it is only right 26% of the time, then I can think of a real quick way to make it 74% accurate.
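    The arithmetic behind that, as a minimal sketch with a simulated detector (made-up data and a stand-in classifier, not OpenAI's actual tool): on a yes/no question, an answer that is wrong 74% of the time becomes right 74% of the time once you flip it.

    ```python
    import random

    random.seed(0)

    # Hypothetical balanced test set: half AI-written, half human-written.
    samples = [bool(i % 2) for i in range(10_000)]

    def detector(is_ai: bool) -> bool:
        """Stand-in detector that answers 'is this AI-written?'
        correctly only 26% of the time."""
        return is_ai if random.random() < 0.26 else not is_ai

    preds = [detector(x) for x in samples]

    as_is   = sum(p == x for p, x in zip(preds, samples)) / len(samples)
    flipped = sum(p != x for p, x in zip(preds, samples)) / len(samples)

    print(f"as-is:   {as_is:.2%}")    # ~26%
    print(f"flipped: {flipped:.2%}")  # ~74%
    ```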

    • Leate_Wonceslace@lemmy.dbzer0.com · 1 year ago

      I feel like this must stem from a misunderstanding of what 26% accuracy means, but for the life of me, I can’t figure out what it would be.

    • notatoad@lemmy.world · 1 year ago

      It seemed like a really weird decision for OpenAI to have an AI classifier in the first place. Their whole business is to generate output that’s good enough that it can’t be distinguished from what a human might produce, and then they went and made a tool to try to point out where they failed.

      • Boddhisatva@lemmy.world · 1 year ago

        That may have been the goal: “Look how good our AI is, even we can’t tell if its output is human-generated or not.”