There’s no way for teachers to figure out if students are using ChatGPT to cheat, OpenAI says in new back-to-school guide::AI detectors used by educators to detect use of ChatGPT don’t work, says OpenAI.
My wife teaches at a university. The title is partly bullshit:
For most teachers it couldn’t be more obvious who used ChatGPT in an assignment and who didn’t.
The problem, in most instances, isn’t the “figuring out” part, but the “reasonably proving” part.
And that’s the most frustrating part: you know an assignment was AI-written, but there are no tools to prove it, and the university gives its staff virtually no guidance or assistance on the subject, so you’re almost powerless.
Switch to oral exam and you’ll know fairly quickly who is actually learning the material.
Biggest reason for written exams is bulk processing.
There are many better ways to assess competency (ask any engineering or medical school), but few as cheap.
deleted by creator
To add on to the detection issues: international students, students on the spectrum, students with learning disabilities, … can all be falsely flagged as “AI generated” by AI detectors. Teachers/professors who have gut feelings should (1) reconsider what biases they have in expected writing styles, and (2), like u/mind says, check in with the students.
I’ve had great success pasting in my own writing to help it write in my “voice”.
Talk to them?
But professors are busy doing research. They don’t have time for that.
It makes some sense. If a tool could reliably discern it, that tool would be used to train the model to be indistinguishable from regular text, putting us right back where we are now.
This is literally how a GAN (generative adversarial network) works.
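To make the analogy concrete, here’s a minimal toy sketch of that adversarial loop (hypothetical 2-D data and tiny networks, nothing to do with real text models): the discriminator plays the role of the “AI detector”, and the generator’s whole training objective is to fool it.

```python
# Toy GAN sketch: the discriminator D is the "detector",
# and the generator G is trained specifically to evade it.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator: maps random noise to fake "samples" (here, 2-D points).
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
# Discriminator: scores how "real" a sample looks.
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

def real_batch(n=64):
    # Stand-in for "human-written" data: points clustered around (2, 2).
    return torch.randn(n, 2) * 0.5 + 2.0

for step in range(1000):
    # Train the detector to separate real samples from generated ones.
    fake = G(torch.randn(64, 8)).detach()
    real = real_batch()
    d_loss = (loss_fn(D(real), torch.ones(64, 1))
              + loss_fn(D(fake), torch.zeros(64, 1)))
    opt_D.zero_grad()
    d_loss.backward()
    opt_D.step()

    # Train the generator so the detector labels its fakes as real.
    fake = G(torch.randn(64, 8))
    g_loss = loss_fn(D(fake), torch.ones(64, 1))
    opt_G.zero_grad()
    g_loss.backward()
    opt_G.step()
```

The point of the analogy: the better the detector gets, the better a training signal it provides for evading itself.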
Company advertises itself to its customers…
“Company advertises they can help their customers cheat without getting caught” would be the more accurate paraphrasing.
Not a classroom setting, but I recently needed to investigate a software engineer on my team who has allegedly been using ChatGPT to do their work. My company works with critical customer data, so we’re banned from using any generative AI tools.
It’s really easy to tell. The accused engineer can’t explain their own code, they’ve been seen using ChatGPT at work, and they’re stupid enough to submit code with wildly different styling even though we mandate a formatter to keep our code style consistent. It’s pretty cut and dried, IMO.
I imagine that teachers will also do the same thing. My wife is a teacher, and has asked me about AI tools in the past. Her school hasn’t had any issues, because it’s really obvious when ChatGPT has been used - similarly to how it’s obvious when someone ripped some shit off the internet and paraphrased some parts to get around web searches.
At the core of learning is for students to understand the content being taught. Using tools and shortcuts doesn’t necessarily negate that understanding.
Using ChatGPT is no different, from an academic evaluation standpoint, than having somebody else do an assignment.
Teachers should already be incorporating some sort of verbal Q&A sessions with students to see whether their demonstrated in-person comprehension matches their written work. Though in my personal experience, this very rarely happens.
That’s going on the supposition that a person just prompts for an essay and leaves it at that, which to be fair is likely the issue. The thing is, the genie is out of the bottle and it’s not going to go back in. I think at this point it’ll be better to adjust the way we teach children things, and also get to know the tools they’ll be using.
I’ve been using GPT and LLaMA to assist me in writing emails and reports. I provide a foundation, and working with the LLMs I get a good, cohesive output. It saves me time, letting me work on other things, and whoever needs to read the report or email gets a well-written document or letter that doesn’t meander the way I normally do.
I essentially write a draft, have the LLMs write the whole thing, and then there’s usually some back-and-forth to get the proper tone and verbiage right, as well as trim away whatever nonsense the models make up that wasn’t in my original text. Essentially I act as an editor. Writing is a skill I don’t really possess, but now there are tools to make up for this.
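For anyone curious, the loop is roughly the sketch below, using the OpenAI Python SDK. The model name, prompts, and sample notes are just placeholders; any LLM API would work the same way.

```python
# Rough sketch of a draft-then-edit workflow with the OpenAI Python SDK.
# Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

# My rough notes: the "foundation" the model works from.
draft = (
    "rollout went ok, two tickets about login timeouts, "
    "fixed by bumping session TTL. need follow-up on metrics dashboard."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whatever model you have access to
    messages=[
        {
            "role": "system",
            "content": (
                "Rewrite the user's rough notes into a short, professional "
                "status report. Do not add any facts not present in the notes."
            ),
        },
        {"role": "user", "content": draft},
    ],
)

print(response.choices[0].message.content)
# From here it's back-and-forth: re-prompt to adjust tone, and manually
# trim anything the model invented that wasn't in the original notes.
```

The key constraint is in the system prompt: the model is told not to add facts, and the human still acts as the editor who checks that it didn’t.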
Using an LLM in that way, you’re actively working with the text, and you’re still learning the source material. You’re just leaving the writing to someone else.
deleted by creator
Calling it cheating is the wrong way to think about it. If you had a TI 80 whatever in the early 90s, it was practically cheating when everyone else had crap for graphing calculators.
ChatGPT used effectively isn’t any different from a calculator or an electronic typewriter. It’s a tool. Use it well and you’ll do much better work.
These hand-wringing articles tell us more about the paucity of our approach to teaching and learning than they do about technology.
Meh. You’ll do better if you actually know some math as well. No engineer is going to pull up the calculator to compute 127+9. I hang around math wizards all day, and it’s me who needs to use the calculator, not them. I’ll tell you that much.
Same goes for writing. Sure, ChatGPT can do amazing things. But if you can’t do them yourself, you’ll struggle to spot the not-so-amazing things it does.
When you already know basic math, reading, and writing, it’s easy to say schools are doing it all wrong. But you’re already mostly fluent in what they’re teaching. With that knowledge, you can use ChatGPT as a great tool. Without it, you couldn’t.
Do you understand what definitions are in place for authorship, citation, and plagiarism with regard to academic honesty policies?
The policies, and more importantly the pedagogy, are out of date and basically irrelevant in an age where machines can and do produce better work than the majority of university students. Teachers used to ban certain levels of calculator from their classrooms because they were considered “cheating” (some still might). Those teachers represent a backwards approach to preparing students for a changing world.
The future isn’t writing essays independent of machine assistance, just like the future of calculus isn’t slide rules.
I think a big challenge, or gap, here is that writing correlates with vocabulary and with developing the ability to articulate. It pays off not just in the prose you write, but in your ability to speak, discuss, and present ideas. I agree that AI is a tool we will likely be using more in the future. But education is there to develop skills and knowledge. Does AI help or hinder that goal if a teacher’s job includes evaluating how much a student has learned and whether they can articulate it?
This AI thing sure did improve society /s
Don’t know why the downvote(s). Like many great technological advancements, it can be used for good or for malice. AI can definitely be a great boon to society, but one thing unique about it versus something like the computer or vaccines is that the tech is quite new, organizations and governments are scrambling to regulate it, and almost any fool can get their hands on it.