• cheese_greater@lemmy.world · 1 year ago

    I would be in trouble if this was a thing. My writing naturally resembles the output of a ChatGPT prompt when I’m not joke answering or shitposting.

      • sebi@lemmy.world · 1 year ago

        Because generative neural networks always have some random noise. Read more about it here.

          • PetDinosaurs@lemmy.world · 1 year ago

            It almost certainly has some GAN-like pieces.

            GANs are part of the NN toolbox, like CNNs and RNNs and such.

            Basically all commercial algorithms (not just NNs, everything) are what I like to call “hybrid” methods, which means you keep throwing different tools at it until things work well enough.

              • PetDinosaurs@lemmy.world · 1 year ago

                It doesn’t matter. Even the training process makes it pretty much impossible to tell these things apart.

                And if we do find a way to distinguish, we’ll immediately incorporate that into the model design in a GAN-like manner, and we’ll soon be unable to distinguish again.
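                That arms race can be sketched with a toy simulation (the scores, distributions, and “detector” below are all invented for illustration): a detector picks the best threshold between human-like and generated samples, and the generator then shifts its output toward the human distribution until detection falls back toward chance.

```python
import random
import statistics

def adversarial_rounds(n_rounds=5, n=2000, start_shift=3.0, seed=0):
    """Toy GAN-style loop: detector thresholds, generator adapts to evade it."""
    rng = random.Random(seed)
    shift = start_shift
    rates = []
    for _ in range(n_rounds):
        humans = [rng.gauss(0.0, 1.0) for _ in range(n)]   # "human" text scores
        fakes = [rng.gauss(shift, 1.0) for _ in range(n)]  # "generated" scores
        # Detector: threshold midway between the two observed means.
        threshold = (statistics.mean(humans) + statistics.mean(fakes)) / 2
        rates.append(sum(x > threshold for x in fakes) / n)
        shift *= 0.5  # generator reacts, moving toward the human distribution
    return rates

# Detection rate on generated samples drops toward 0.5 (coin-flip) per round.
print(adversarial_rounds())
```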

                • stevedidWHAT@lemmy.world · 1 year ago

                  Which is why hardcoded fingerprints/identifications are required: they identify the individual as a speaker, rather than classifying AI vs. human. Which is what we’re ultimately agreeing on here, outside of the pedantics of the article and its scientific findings:

                  Trying to detect, as AI, a model that is built to pass as human is counterintuitive. The two goals are direct opposites; if one works, both can’t exist in the same implementation.

                  The hard part will obviously be making sure that such a “fingerprint” isn’t removable, which will take some wild math and out-of-the-box thinking, I’m sure.

                  Tough problem!
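                  For what it’s worth, one published family of “fingerprint” schemes is statistical rather than a literal hardcoded ID: generation is biased toward words whose hash (keyed on the previous word) falls in a pseudorandom “green list”, and a detector counts how often adjacent word pairs obey the rule. A toy sketch, with the vocabulary and hash rule invented purely for illustration:

```python
import hashlib
import random

VOCAB = ["alpha", "bravo", "charlie", "delta", "echo", "foxtrot",
         "golf", "hotel", "india", "juliet", "kilo", "lima"]

def is_green(prev_word, word):
    # Pseudorandom "green list": roughly half the vocab, keyed on prev_word.
    digest = hashlib.sha256((prev_word + "|" + word).encode()).digest()
    return digest[0] % 2 == 0

def generate_watermarked(n_words, seed=0):
    # Fingerprinted "model": only ever emits green successors.
    rng = random.Random(seed)
    out = [rng.choice(VOCAB)]
    for _ in range(n_words - 1):
        greens = [w for w in VOCAB if is_green(out[-1], w)]
        out.append(rng.choice(greens or VOCAB))
    return out

def green_fraction(words):
    # Detector: watermarked text scores near 1.0, plain text near 0.5.
    pairs = list(zip(words, words[1:]))
    return sum(is_green(a, b) for a, b in pairs) / len(pairs)
```

Removing the mark means paraphrasing enough word pairs to push the statistic back to chance, which is exactly the removability problem above.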

  • Nioxic@lemmy.dbzer0.com · 1 year ago

    I have to hand in a short report

    I wrote parts of it and asked ChatGPT for a conclusion.

    So I read that, adjusted a few points. Added another couple points…

    Then rewrote it all in my own wording. (ChatGPT gave me 10 lines out of 10 pages)

    We are allowed to use ChatGPT though, because we would always have internet access for our job anyway. (Computer science)

    • TropicalDingdong@lemmy.world · 1 year ago

      I found out on the last screen of a travel grant application that I needed a cover letter.

      I pasted in the requirements for the cover letter and what I had put in my application.

      I pasted the results in as the cover letter without review.

      I got the travel grant.

        • TropicalDingdong@lemmy.world · 1 year ago

          Exactly. But they still need to exist. That’s what ChatGPT is for: letters, bullshit emails, applications. The shit that’s just tedious.

  • Boddhisatva@lemmy.world · 1 year ago

    OpenAI discontinued its AI Classifier, which was an experimental tool designed to detect AI-written text. It had an abysmal 26 percent accuracy rate.

    If you ask this thing whether or not some given text is AI generated, and it is only right 26% of the time, then I can think of a real quick way to make it 74% accurate.
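    The joke checks out for a symmetric yes/no classifier: a detector that picks the wrong label 74% of the time becomes 74% accurate once you negate its answer. A quick simulation (the classifier here is made up, not OpenAI’s):

```python
import random

def bad_classifier(is_ai, rng):
    # Hypothetical detector that returns the correct label only 26% of the time.
    return is_ai if rng.random() < 0.26 else not is_ai

def flipped_classifier(is_ai, rng):
    # The "real quick way": just negate whatever the bad detector says.
    return not bad_classifier(is_ai, rng)

def accuracy(clf, n=100_000, seed=0):
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        truth = rng.random() < 0.5  # half the samples are AI-written
        hits += clf(truth, rng) == truth
    return hits / n
```

(The caveat: the 26% reported for OpenAI’s tool was apparently a true-positive rate across several confidence labels, not symmetric binary accuracy, so the flip is more joke than fix.)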

    • Leate_Wonceslace@lemmy.dbzer0.com · 1 year ago

      I feel like this must stem from a misunderstanding of what 26% accuracy means, but for the life of me, I can’t figure out what it would be.

    • notatoad@lemmy.world · 1 year ago

      it seemed like a really weird decision for OpenAI to have an AI classifier in the first place. their whole business is to generate output that’s good enough that it can’t be distinguished from what a human might produce, and then they went and made a tool to try and point out where they failed.

      • Boddhisatva@lemmy.world · 1 year ago

        That may have been the goal: “Look how good our AI is; even we can’t tell if its output is human-generated or not.”

  • HelloThere@sh.itjust.works · 1 year ago

    Regardless of whether they do or don’t, surely it’s in the interest of the people making the “AI” to claim that their tool is so good it’s indistinguishable from humans?

    • stevedidWHAT@lemmy.world · 1 year ago

      Depends on whether they’re more researchers or a business, imo. Generally speaking, scientists are very cautious about making shit claims, because if they get called out, that’s their career, really.

      • BetaDoggo_@lemmy.world · 1 year ago

        OpenAI hasn’t been focused on the science since the Microsoft investment. A science focused company doesn’t release a technical report that doesn’t contain any of the specs of the model they’re reporting on.

      • Zeth0s@lemmy.world · 1 year ago

        A few decades ago, probably. Nowadays “scientists” make a lot of BS claims to get published. I was in the room when a “scientist” who publishes several Nature papers per year asked her student to write up a study with no results in a way that made it look like it contained something important, aimed at a journal with a relatively good impact factor (IF).

        That day I decided I was done with academia. I had seen enough.

    • pewter@lemmy.world · 1 year ago

      Yes, but it’s such a falsifiable claim that anyone is more than welcome to prove them wrong. There are a lot of slightly different LLMs out there. If you or anyone else can definitively show there’s a machine that can identify AI writing vs. human writing, it will either result in better AI writing or be an amazing breakthrough in understanding the limits of AI.

      • HelloThere@sh.itjust.works · 1 year ago

        People like to view the problem as a paradox (can an all-powerful God create a rock they cannot lift?), but I feel that’s too generous; it’s more like marking your own homework.

        If a system can both write text, and detect whether it or another system wrote that text, then “all” it needs to do is change that text to be outside of the bounds of detection. That is to say, it just needs to convince itself.

        I don’t want to imply that that is easy, because it isn’t, but it’s a very different thing from convincing someone else, especially a human who understands the topic.

        There is also a false narrative involved here, that we need an AI to detect AI which again serves as a marketing benefit to OpenAI.

        We don’t, because they aren’t that good, at least, not yet anyway.

  • Matriks404@lemmy.world · 1 year ago

    Did human-generated content really become so low quality that it is indistinguishable from AI-generated content?

  • irotsoma@lemmy.world · 1 year ago

    A lot of these detectors relied on common mistakes that “AI” algorithms make but humans generally don’t. As language models improve, it’s getting harder to detect.

  • nucleative@lemmy.world · 1 year ago

    We need to embrace AI written content fully. Language is just a protocol for communication. If AI can flesh out the “packets” for us nicely in a way that fits what the receiving humans need to understand the communication then that’s a major win. Now I can ask AI to write me a nice letter and prompt it with a short bulleted list of what I want to say. Boom! Done, and time is saved.

    The professional writers who used to slave over a blank Word document are now obsolete, just like the slide rule “computers” of old (the people who could solve complicated mathematics and engineering problems on paper).

    Teachers who thought a handwritten report could be used to prove that “education” has happened are now realizing that the idea was a crutch (it was 25 years ago too, when we could copy/paste Microsoft Encarta articles and use them as our research papers).

    The technology really just shows us that our language capabilities are a means to an end. If a better means arises, we should figure out how to maximize it.