this post was submitted on 01 Sep 2023
232 points (95.7% liked)
Technology
you are viewing a single comment's thread
My wife teaches at a university. The title is partly bullshit:
For most teachers it couldn't be more obvious who used ChatGPT in an assignment and who didn't.
The problem, in most instances, isn't the "figuring out" part, but the "reasonably proving" part.
And that's the most frustrating part: you know an assignment was AI-written, but there are no tools to prove it, and the university gives its staff virtually no guidance or assistance on the matter, so you're almost powerless.
Switch to oral exam and you'll know fairly quickly who is actually learning the material.
I agree with you for sure. However, if I'm playing devil's advocate ... I think some people will crumble under the pressure and perform poorly just because the exam is oral rather than written.
I generally think that even if that's the case, it's an important skill to teach too; I'm just thinking through the contradictions.
Oral exams would suck for students in transition. It's a completely different style and skill set for answering questions, and no kid would have the training or the mental framework for it. It's great if you're the kind of person who can write a mostly perfect draft essay from start to finish, no skipping around or backtracking, but if that's not you, it's going to be a rough learning curve. And that's before we ask questions like: how does a deaf person take this exam? A mute person? Someone with verbal paraphasia?
You are not wrong. I think the best use of this would be a verification test that had significant impact on your grade but didn't necessarily fail you if you did well in other evaluations.
Think of it as a conversation, like a job interview, that takes into account the different ways people react in that environment. I do this when I'm interviewing job candidates. I interview people for technical jobs. I value good communicators, but if those were the only people I hired, I wouldn't have as good a team. And if I do hire someone who isn't as good at this, I coach them. They get more comfortable. I realize some people have anxiety or other things that make this very difficult; I think that could be taken into account (e.g. more written work, but in an observed setting).
Biggest reason for written exams is bulk processing.
There are many better ways to show competency (ask any engineering or medical school), but few as cheap.
To add on to the detection issues: international students, students on the spectrum, students with learning disabilities, … can all be flagged as "AI generated" by AI detectors. Teachers/professors who have gut feelings should (1) reconsider what biases they have about expected writing styles, and (2), like u/mind says, check in with the students.
My coven-mate was called in by her college dean, accusing her of faking or plagiarizing her mid-term thesis. (I totally forget what the subject was. This was late 1980s. She wanted to work in national intelligence.)
But the thing is, she could explain every part of her rabbit-hole deep dive (which was a trip to several libraries, locating the books themselves, rather than tracking leads through the internet). It was all fresh in her head, and to the shock and awe of her dean and instructor (delight? horror?) it was clear she was just a determined genius doing post-grad-quality work because she pushed herself that hard. And yes, she was out of their league and could probably have written the thesis again if that were necessary.
In our fucked up society, the US has little respect for teachers or even education, so I don't expect anything real to happen, but this would be grounds to reduce classroom size by increasing faculty size, so that each teacher is familiar with their fifteen students: their capabilities, ambitions, and challenges at home. That way, when a kid turns in an AI essay but then can't explain what the essay says, the teacher can use it as a teachable moment: point out that AI is a springboard, a place to start as a foundation for a report, but it's still important for the student to make it their own and make sure it comes to conclusions they agree with.
I would like to know how you know who's using ChatGPT though. A gut feeling doesn't work for many good reasons.
ChatGPT writes in a very distinct style, and it's quite easy to tell for anyone who has played around with it. The issue here isn't necessarily being able to tell who's cheating; proving it is the hard part.
Yeah, I use ChatGPT to assist with the grammar in my posts here at times. However, I need to explicitly instruct it to only correct the errors and not make any other changes. Otherwise, it completely rewrites the entire message, and the language ends up sounding unmistakably like ChatGPT. As you mentioned, it's immediately apparent because it has a distinct style, and no typical human writes in that manner. Like you said, it's easy to discern but challenging to confirm. Additionally, with the right prompt, you can probably get it to generate text that sounds more conventional.
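The "distinct style" intuition can even be quantified crudely. As a purely illustrative sketch (not a real detector; the "uniform sentence length" heuristic below is an informal assumption, and detectors built on signals like this are known to be unreliable), here's what one stylometric feature might look like:

```python
import re
import statistics

def sentence_lengths(text: str) -> list[int]:
    """Split text on sentence-ending punctuation and count words per sentence."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return [len(s.split()) for s in sentences]

def burstiness(text: str) -> float:
    """Standard deviation of sentence lengths. Informally, very uniform
    lengths are sometimes claimed to be typical of LLM output; this is a
    heuristic, not evidence."""
    lengths = sentence_lengths(text)
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)

# Toy comparison: varied human-style phrasing vs. very uniform phrasing.
varied = "It rained. The deadline slipped by three whole weeks anyway. Fine."
uniform = ("The weather was quite rainy today. The project deadline was "
           "extended. The team accepted the new schedule.")
print(burstiness(varied) > burstiness(uniform))  # True
```

Even when a feature like this separates two samples, it proves nothing about any individual student, which is exactly the "easy to discern, hard to confirm" problem.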
Something that can come up is weird notation in math.
As an example, Photomath, an automatic math problem solver, uses a different interval notation (i.e. x ≥ 2 is solved for all x ∈ [2, ∞⟩) than the one used in my locale (i.e. x ≥ 2 is solved for all x ∈ [2, ∞) or for all x ∈ ⟨2, ∞)), which does trip some people up. This is more relevant at high-school level than academic level, I'm guessing.
Extra note: ChatGPT gets the right notation (in a sample of n = 1).
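To make the notation difference concrete, here's a tiny sketch (the `render` and `contains` helpers are hypothetical, not part of Photomath or ChatGPT) showing that both conventions describe the same solution set of x ≥ 2 and differ only in which bracket marks an open end:

```python
import math

def render(lo, hi, lo_closed=True, hi_closed=False, open_brackets="()"):
    """Format an interval; some locales use angle brackets ⟨⟩ for open ends."""
    left = "[" if lo_closed else open_brackets[0]
    right = "]" if hi_closed else open_brackets[1]
    hi_str = "∞" if hi == math.inf else str(hi)
    return f"{left}{lo}, {hi_str}{right}"

def contains(x, lo, hi, lo_closed=True, hi_closed=False):
    """Membership test: the bracket style doesn't change the set."""
    lo_ok = x >= lo if lo_closed else x > lo
    hi_ok = x <= hi if hi_closed else x < hi
    return lo_ok and hi_ok

# Same set, two renderings: the solutions of x >= 2.
print(render(2, math.inf))                      # [2, ∞)   parenthesis locale
print(render(2, math.inf, open_brackets="⟨⟩"))  # [2, ∞⟩   angle-bracket locale
print(contains(2, 2, math.inf))                 # True: 2 is included either way
```

The membership test never looks at the brackets at all, which is the point: a student marked wrong over ⟨ vs. ( was tripped up by typography, not mathematics.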