this post was submitted on 01 Sep 2023
220 points (96.2% liked)

There's no way for teachers to figure out if students are using ChatGPT to cheat, OpenAI says in new back-to-school guide::AI detectors used by educators to detect use of ChatGPT don't work, says OpenAI.

all 46 comments
[–] PixelProf@lemmy.ca 78 points 2 years ago (2 children)

Education has a fundamental incentive problem. I want to embrace AI in my classroom. I've been studying ways of using AI for personalized education since I was in grade school. I wanted personalized education, the ability to learn off of any tangent I wanted, to have tools to help me discover what I don't know so I could go learn it.

The problem is, I'm the minority. Many of my students don't want to be there. They want a job in the field, but don't want to do the work. Your required course isn't important to them, because they aren't instructional designers who recognize that this mandatory tangent is scaffolding the next four years of their degree. They have a scholarship, and can't afford to fail your assignment to get feedback. They have too many courses, and have to budget which courses to ignore. The university holds a duty to validate that those passing the courses met a level of standards and can reproduce their knowledge outside of a classroom environment. They have a strict timeline - every year they don't certify their knowledge to satisfaction is a year of tuition and random other fees to pay.

If students were going to university to learn, or going to high school to learn, instead of being forced there by societal pressures - if they were allowed to learn at their own pace without fear of financial ruin - if they were allowed to explore the topics they love instead of the topics that are financially sound - then there would be no issue with any of these tools. But the truth is much bleaker.

Great students are using these tools in astounding ways to learn, to grow, to explore. Other students - not necessarily bad, but ones with pressures that make education motivated purely by extrinsic factors rather than intrinsic ones - have a perfect crutch available to accidentally bypass the necessary steps of learning. Because learning can be hard, and tedious, and expensive, and if you don't love it, you'll take the path of least resistance.

In game design, we talk about not giving the player the tools to optimize their fun away. I love the new wave of AI; I've been waiting for this level of natural language processing and generation capability for a very long time. But these are the tools for students to optimize the learning away. We need to reframe learning and education. We need to bring learning front and center instead of certification. Employers need to recognize this, universities need to recognize this, and high schools and students and parents need to recognize this.

[–] jmp242@sopuli.xyz 0 points 2 years ago (1 children)

If someone can use the tool to do the job successfully, I don't see that the learning was actually necessary. Like, I learned to use a phone rather than a telegraph. I learned how to drive a car rather than ride a horse. I learned to use a calculator rather than a slide rule.

Of course, we're still at the stage where you need to double-check the tool, but that skill is maybe more like supervising someone rather than directly doing the task.

I can imagine prompt engineering will actually be a thing, and asking the AI to fix the parts that don't work is the short-term answer. We can already ask the AI to look over its own work for mistakes; I have to imagine that's going to be built in soon...
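
A minimal sketch of that self-review loop, assuming the OpenAI Python client (the model name and prompts here are placeholders, not anything OpenAI prescribes):

```python
# Hypothetical two-pass "check your own work" loop.
# Assumes the openai package (>= 1.0) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

draft = ask("Write a Python function that parses ISO 8601 dates.")
# Second pass: the model reviews and corrects its own output.
fixed = ask("Review the following answer for mistakes and return a "
            "corrected version:\n\n" + draft)
print(fixed)
```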

The worst thing is: if the student can actually optimize the learning away with the AI, so too can employers optimize away the potential employees.

[–] PixelProf@lemmy.ca 4 points 2 years ago (1 children)

This is a very output-driven perspective. Another comment put it well, but essentially when we set up our curriculum we aren't just trying to get you to produce the one or two assignments that the AI could generate - we want you to go through the motions and internalize secondary skills. We've set up a four year curriculum for you, and the kinds of skills you need to practice evolve over that curriculum.

This is exactly the perspective I'm trying to get at with my comment - if you go to school to get a certification to get a job, and don't care at all about the learning, of course it's nonsense to "waste your time" on an assignment that ChatGPT can generate for you. But if you're there to learn and develop mastery, the additional skills you would have picked up by doing the hard thing - and maybe having a chat AI support you in a productive way - are really where the learning is.

If 5-year-olds can generate a university-level essay on the implications of thermodynamics on quantum processing using AI, that's fun, but does the 5-year-old even know if that's a coherent thesis? Does it imply anything about their understanding of these fields? Are they able to connect this information to other places?

Learning is an intrinsic task that's been turned into a commodity. Get a degree to show you can generate that thing your future boss wants you to generate. Knowing and understanding is secondary. This is the fear of generative AI - further losing sight that we learn through friction and that the final output isn't everything. Note that this is coming from a professor who wants to mostly do away with grades, but recognizes larger systemic changes need to happen.

[–] jmp242@sopuli.xyz 1 points 2 years ago

I am very pro-learning, but I have also basically seen that our society doesn't value it. We're anti-expertise to our detriment. I like figuring things out and learning... but I am not sure that that's any more than an opinion I hold. If the learning doesn't help you in life, I have a hard time defending it as more than a preference.

I guess what I'm trying to say is: my values and motivations aren't the only ones, and I can't prove them as the right ones. If someone is primarily motivated by making money, learning is a little correlated with that, but it's not overwhelmingly so. More specifically, writing ChatGPT-style essays is something I believe plenty of people have lucrative careers without ever doing.

I'm not even convinced college has a positive ROI anymore. In that context, the output is the issue. In the context of most jobs, it is also the issue.

Maybe this analogy will help: do you feel that all the people taking better pictures than ever, thanks to AI in their cellphone cameras and automatic post-processing, have missed an important skill by never working out ISO, aperture, and shutter speed? Do you think they would mostly agree those skills are useful? Are there a lot of jobs for "camera technicians" where the manual settings are what they're hired for?

Now, I agree that in my analogy, if you know how the settings relate to freezing motion or background blur or whatever, you can take better pictures and likely have a higher hit rate. But I don't think the world prioritizes that, and I am not sure that, in the bigger picture, it's wrong.

[–] tabular@lemmy.world 28 points 2 years ago (1 children)
[–] space@lemmy.dbzer0.com 1 points 2 years ago

But professors are busy doing research. They don't have time for that.

[–] Mane25@feddit.uk 24 points 2 years ago (3 children)

Detecting whether a student used ChatGPT to write an assignment can be challenging, but there are some signs and strategies you can consider:

  • Unusual Language or Style: ChatGPT may produce content that is unusually advanced or complex for a student's typical writing style or ability. Look for inconsistencies in language usage, vocabulary, and sentence structure.

  • Inconsistent Knowledge: ChatGPT's knowledge is based on information up to its last training cut-off in September 2021. If the assignment contains information or references to events or developments that occurred after that date, it might indicate that they used an AI model.

  • Generic Information: If the content of the assignment seems to consist of general or widely available information without specific personal insights or original thought, it could be a sign that ChatGPT was used.

  • Inappropriate Sources: Check the sources cited in the assignment. If they cite sources that are unusual or not relevant to the topic, it may indicate that they generated the content using an AI model.

  • Plagiarism Detection Tools: Use plagiarism detection software, such as Turnitin or Copyscape, to check for similarities between the assignment and online sources. While these tools may not specifically detect AI-generated content, they can identify similarities between the assignment and publicly available text.

  • Interview or Discussion: Consider discussing the assignment topic with the student during a one-on-one interview or discussion. If they struggle to explain or elaborate on the content, it may indicate they didn't personally generate it.

It's important to approach these situations with caution and avoid making accusations without concrete evidence. If you suspect that a student used an AI model to complete an assignment, consider discussing your concerns with the student and offering them the opportunity to explain or rewrite the assignment in their own words.

[–] joao@aussie.zone 66 points 2 years ago (2 children)

This was definitely written by ChatGPT

[–] GenderNeutralBro@lemmy.sdf.org 41 points 2 years ago (1 children)

You can tell because it's grammatically correct but logically incongruous. For example:

ChatGPT's knowledge is based on information up to its last training cut-off in September 2021. If the assignment contains information or references to events or developments that occurred after that date, it might indicate that they used an AI model.

That is the exact opposite of the conclusion you could draw.

[–] SkaveRat@discuss.tchncs.de 5 points 2 years ago

Could have just been a brain fart/typo

[–] jdf038@mander.xyz 7 points 2 years ago

Now sing it as a pirate shanty!

[–] ribboo@lemm.ee 10 points 2 years ago (1 children)

Best tip is to use ChatGPT yourself - you'll learn to spot obvious stuff like this from literally the first sentence!

[–] GBU_28@lemm.ee 3 points 2 years ago

It is trivial to circumvent GPT-specific tells like the bullet points, though.

Like:

Train it with snippets of your writing style.

Tell it to use specific sources.
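
A rough sketch of both tricks, assuming the OpenAI Python client (the model name, style samples, and sources are invented for illustration):

```python
# Hypothetical prompt setup: condition the model on your own writing
# and pin it to specific sources. Assumes openai >= 1.0 and OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()

style_samples = [
    "Honestly, the lab write-up took longer than the lab itself.",
    "I keep coming back to the same point: the data just isn't there.",
]

messages = [
    {
        "role": "system",
        "content": (
            "Mimic the writing style of these samples:\n"
            + "\n---\n".join(style_samples)
            + "\nCite only these sources: Smith 2019; Jones 2021."
        ),
    },
    {"role": "user",
     "content": "Write a 300-word reflection on the ethics of gene editing."},
]

response = client.chat.completions.create(model="gpt-4", messages=messages)
print(response.choices[0].message.content)
```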

[–] T156@lemmy.world 22 points 2 years ago (1 children)

It makes some sense. If a tool could reliably discern AI-generated text, that tool would be used to train the model to be more indistinguishable from regular text, putting us right back where we are now.

[–] PetDinosaurs@lemmy.world 9 points 2 years ago

This is literally how a GAN (generative adversarial network) works.
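
For anyone curious, a toy PyTorch sketch of that adversarial loop (dimensions, data, and hyperparameters are illustrative only): the better the detector gets, the better a training signal it hands the generator.

```python
# Toy GAN: a discriminator ("detector") and generator trained against
# each other until generated samples are indistinguishable from real ones.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 32
G = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 64), nn.ReLU(), nn.Linear(64, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(128, data_dim)        # stand-in for "real" data
    fake = G(torch.randn(128, latent_dim))   # generated samples

    # Train the detector to tell real from generated.
    opt_d.zero_grad()
    d_loss = (loss_fn(D(real), torch.ones(128, 1))
              + loss_fn(D(fake.detach()), torch.zeros(128, 1)))
    d_loss.backward()
    opt_d.step()

    # Train the generator to fool the detector -- any reliable detector
    # is exactly the signal that makes the generator harder to detect.
    opt_g.zero_grad()
    g_loss = loss_fn(D(fake), torch.ones(128, 1))
    g_loss.backward()
    opt_g.step()
```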

[–] EndOfLine@lemmy.world 11 points 2 years ago* (last edited 2 years ago) (1 children)

The core of learning is for students to understand the content being taught. Using tools and shortcuts doesn't necessarily negate that understanding.

Using ChatGPT is no different, from an academic evaluation standpoint, from having somebody else do an assignment.

Teachers should already be incorporating some sort of verbal Q&A session with students to see if their demonstrated in-person comprehension matches their written comprehension. Though from my personal experience, this very rarely happens.

[–] dojan@lemmy.world 2 points 2 years ago

That's going on the supposition that a person just prompts for an essay and leaves it at that, which to be fair is likely the issue. The thing is, the genie is out of the bottle and it's not going to go back in. I think at this point it'll be better to adjust the way we teach children things, and also get to know the tools they'll be using.

I've been using GPT and LLaMA to assist me in writing emails and reports. I provide a foundation, and working with the LLMs I get a good, cohesive output. It saves me time, allowing me to work on other things, and whoever needs to read the report or email gets a well-written document/letter that doesn't meander in the same way I normally do.

I essentially write a draft, have the LLMs write the whole thing, and then there's usually some back-and-forth to get the proper tone and verbiage right, as well as trim away whatever nonsense the models make up that wasn't in my original text. Essentially I act as an editor. Writing is a skill I don't really possess, but now there are tools to make up for this.

Using an LLM in that way, you're actively working with the text, and you're still learning the source material. You're just leaving the writing to someone else.

[–] EnderMB@lemmy.world 11 points 2 years ago

Not a classroom setting, but I recently needed to investigate a software engineer on my team who had allegedly been using ChatGPT to do their work. My company works with critical customer data, so we're banned from using any generative AI tools.

It's really easy to tell. The accused engineer cannot explain their own code, they've been seen using ChatGPT at work, and they're stupid enough to submit code with wildly different styling when we dictate the use of a formatter to ensure our code style is consistent. It's pretty cut and dry, IMO.

I imagine that teachers will also do the same thing. My wife is a teacher, and has asked me about AI tools in the past. Her school hasn't had any issues, because it's really obvious when ChatGPT has been used - similarly to how it's obvious when someone ripped some shit off the internet and paraphrased some parts to get around web searches.

[–] givesomefucks@lemmy.world 8 points 2 years ago (1 children)

Company advertises itself to its customers...

[–] ech@lemm.ee 1 points 2 years ago

"Company advertises they can help their customers cheat without getting caught" would be the more accurate paraphrasing.

[–] Extrasvhx9he@lemmy.today 8 points 2 years ago* (last edited 2 years ago) (1 children)

Haven't read the article yet, but I guess teachers' best option is to go back to paper and have students do all of their work during the class period. Not sure how they'll handle homework or outside projects, though.

[–] TropicalDingdong@lemmy.world 5 points 2 years ago (2 children)

Calling it cheating is the wrong way to think about it. If you had a TI-80-whatever in the early '90s, it was practically cheating when everyone else had crap for graphing calculators.

ChatGPT used effectively isn't any different from a calculator or an electronic typewriter. It's a tool. Use it well and you'll do much better work.

These hand wringing articles tell us more about the paucity of our approach to teaching and learning than they do about technology.

[–] Copernican@lemmy.world 5 points 2 years ago (2 children)

Do you understand what definitions are in place for authorship, citation, and plagiarism with regard to academic honesty policies?

[–] TropicalDingdong@lemmy.world 5 points 2 years ago (1 children)

The policies, and more importantly the pedagogy, are out of date and basically irrelevant in an age where machines can and do create better work than the majority of university students. Teachers used to ban certain levels of calculator from their classrooms because using them was considered 'cheating' (they still might). Those teachers represent a backwards approach to preparing students for a changing world.

The future isn't writing essays independent of machine assistance, just like the future of calculus isn't slide rules.

[–] Copernican@lemmy.world 2 points 2 years ago* (last edited 2 years ago)

I think a big challenge or gap here is that writing correlates with vocabulary and with developing the ability to articulate. It pays off not just in the prose you write, but in your ability to speak, discuss, and present ideas. I agree that AI is a tool we will likely be using more in the future. But education is in place to develop skills and knowledge. Does AI help or hinder that goal, if a teacher's job includes evaluating how much a student has learned and whether they can articulate it?

[–] Mane25@feddit.uk -1 points 2 years ago

I don't fully agree with OP, but I think we could probably do with adjusting some of those policies. Personally I think, with current AI, if somebody composes something by making multiple AI prompts and selecting the best result, they should get some kind of authorship, because they used a tool to create something.

[–] ribboo@lemm.ee 4 points 2 years ago* (last edited 2 years ago)

Meh. You'll do better if you actually know some math as well. No engineer is going to pull up the calculator to compute 127+9. I hang around math wizards all day, and it's me who needs to use the calculator, not them. I'll tell you that much.

Same goes for writing. Sure, ChatGPT can do amazing things. But if you can’t do them yourself, you’ll struggle to spot the not so amazing things it does.

It’s always easy when you know basic math, writing and reading to say schools are doing it all wrong. But you’re already mostly fluent in what they’re teaching. With that knowledge, you can use ChatGPT as a great tool. Without that knowledge, you couldn’t.

[–] autotldr@lemmings.world 3 points 2 years ago (1 children)

This is the best summary I could come up with:


OpenAI is preparing teachers for the back-to-school season, releasing a guide on how to use ChatGPT in the classroom, months after educators raised the alarm on students turning to AI for cheating.

Bad news for teachers and professors though: OpenAI says that sites and apps promising to uncover AI-generated copy in students' work are unreliable.

Such content detectors also have a tendency to suggest that work by students who don't speak English as a first language is AI-generated, OpenAI stated, confirming a problem reported earlier by The Markup.

Teachers are concerned, however, that students are cheating by presenting ideas and phrases from the chatbot as their own, and that they are becoming over-dependent on a tool that remains prone to errors and hallucinations.

Professors began to detect students using ChatGPT to cheat on college essays a little over a month after the chatbot was released in November 2022.

OpenAI also acknowledged that ChatGPT is not free from biases and stereotypes, for instance, so "users and educators should carefully review its content."


The original article contains 360 words, the summary contains 171 words. Saved 52%. I'm a bot and I'm open source!

[–] Sightline@lemmy.ml 4 points 2 years ago

That's cheating.

[–] gonzoknowsdotcom1@monero.town 2 points 2 years ago
[–] xkforce@lemmy.world 1 points 2 years ago (1 children)

This AI thing sure did improve society /s

[–] BrianTheeBiscuiteer@lemmy.world 2 points 2 years ago

Don't know why the downvote(s). Like many great technological advancements, it can be used for good or for malice. AI definitely can be a great boon to society, but one of the unique aspects of this versus something like the computer or vaccines is that the tech is quite new, organizations and governments are scrambling to regulate it, and almost any fool can get their hands on it.