this post was submitted on 08 Mar 2026
652 points (95.2% liked)

Off My Chest

1866 readers
116 users here now

RULES:


I am looking for mods!


1. The "good" part of our community means we are pro-empathy and anti-harassment. However, we don't intend to make this a "safe space" where everyone has to be a saint. Sh*t happens, and life is messy. That's why we get things off our chests.

2. Bigotry is not allowed. That includes racism, sexism, ableism, homophobia, transphobia, xenophobia, and religiophobia. (If you want to vent about religion, that's fine; but religion is not inherently evil.)

3. Frustrated, venting, or angry posts are still welcome.

4. Posts and comments that bait, threaten, or incite harassment are not allowed.

5. If anyone offers mental, medical, or professional advice here, please remember to take it with a grain of salt. Seek out real professionals if needed.

6. Please put NSFW behind NSFW tags.


founded 2 years ago

I’ve been working with so many students who turn to it as a first resort for everything. The second a problem stumps them, it’s AI. The first source for research is AI.

It’s not even about the tech, there’s just something about not wanting to learn that deeply upsets me. It’s not really something I can understand. There is no reason to avoid getting better at writing.

top 50 comments
[–] daannii@lemmy.world 96 points 3 weeks ago* (last edited 3 weeks ago) (5 children)

Hey, I'm an educator and I found a way to trick ChatGPT so students can't use it.

I have two methods I employ to reduce their use of ChatGPT.

Method 1.

I use examples of people in my questions, and the people are characters from popular TV shows, like Star Trek. You could also use names of athletes or anyone who likely has a lot of content about them in media and on the internet.

For example: Spock and Uhura both were given an image of a dress to determine if it matched the dress of the missing scientist. Spock perceived the colors to match and Uhura did not. What would explain this difference in color perception?

The answer would be color constancy. It's also a reference to the blue/black gold/white dress. But chatgpt would not be able to understand that.
(I'm a perception researcher and educator).

Anywho, if they copy-paste, they are likely to get replies based on episodes of Star Trek TOS.

The other thing I do, in conjunction with the first, is make the resources I give them easier and less work to use than dealing with ChatGPT answers, which would require a lot of additional editing to finally get the correct answer, and may never give the correct answer at all.

If they have a resource like a PDF of the PowerPoint lecture, they will use it instead if it's easier to use.

So make it the easier choice.

[–] batshit@lemmings.world 30 points 3 weeks ago (1 children)

Spock and Uhura both were given an image of a dress to determine if it matched the dress of the missing scientist. Spock perceived the colors to match and Uhura did not. What would explain this difference in color perception?

I don't use ChatGPT but this seemed like a problem that LLMs today can easily solve. So I tried it and yeah ChatGPT answered it correctly.

[–] daannii@lemmy.world 11 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

Well it didn't really.

It gave a list of multiple things that can influence color perception.
Color constancy was not listed first.

A student using chatgpt would have gotten the answer wrong.

I'm still surprised it didn't focus on episodes. I'll have to put in more keywords that home in on specific episodes to cause more misdirection.

The first two answers :

1. Metamerism / spectra vs. appearance. Two fabrics can reflect different spectra but produce the same cone responses under one illuminant. An observer whose cone/sample sensitivities differ (or who assumes a different illuminant) can therefore see them as matching or not matching.

- This doesn't make sense for the example, as they are using photographs.

2. Different photoreceptor sensitivities. Real people (and fictional species) vary in cone types and sensitivity. So Spock might have different retinal sensitivity (or extra/shifted cones) than Uhura, causing them to perceive the same stimulus differently.

- There is no indication in any of the Trek episodes or canon information that Spock has different color vision. But I could say "Kirk and Uhura" to limit the possibility of students thinking that, since Spock is half Vulcan, he may have different receptors. I doubt most students are Trekkies though, so this is also not that relevant.

But I also specifically used "dress" to refer to the dress example I discussed in the lecture. Chatgpt cannot know what examples I used in my lecture.

[–] SLVRDRGN@lemmy.world 8 points 3 weeks ago (1 children)

The other thing I do in conjunction with the first is make it so

(I do applaud you, though. You're certainly a teacher)

[–] Mastengwe@sh.itjust.works 67 points 3 weeks ago (4 children)

Yeah. It’s definitely a major contributor to the dumbing of humanity. We’re barreling towards Idiocracy with open ~~arms~~ AI.

[–] resipsaloquitur@lemmy.world 13 points 3 weeks ago (1 children)

Oh, don’t worry. AI is coming to arms, too. Might already have.

[–] arin@lemmy.world 9 points 3 weeks ago

Pretty sure that's why the US gov threatened Anthropic.

[–] Gorgritch_umie_killa@aussie.zone 57 points 3 weeks ago (8 children)

Because learning for kids/young adults isn't really the point anymore. The point of doing the learning is to "pass the test," "get the job," or "move on to the next link in the education chain." So young people often feel faced with a choice: engage with the process to accomplish the tasks, or dissociate from the process entirely.

This systemic issue is likely why Steiner schools and the like are seeing increased interest from parents.

[–] Mvlad88@lemmy.world 17 points 3 weeks ago

That mentality is already a general trend.

I'm currently studying for a certification exam that requires relatively solid work experience and an educational background, yet a lot of instructors, instead of teaching you the subjects, push all kinds of hacks to pass the exam with minimum study time.

I might be a nerd, but still: if you are trying to get a title in some field of study, you'd better be able to back that shit up with some knowledge.

[–] starelfsc2@sh.itjust.works 47 points 3 weeks ago (6 children)

It's because humans naturally want to avoid unpleasant work, and public schools teach us that learning is hard work for some reason, rather than something fun. For instance, I used to read for fun an unbelievable amount, but then I was forced to do book reports with a required list of books to "prove" I was reading them, and it was just absolutely no fun at all. Why not have a discussion about it, and the teacher can check the SparkNotes? This changes back to "learning is fun" at community college, but years of being told to do busywork and be a drone kills learning for a lot of people, I feel.

[–] rabidhamster@lemmy.dbzer0.com 13 points 3 weeks ago (1 children)

This answer speaks to me. I used to read nonstop when I was a child. Fiction, non-fiction, didn't matter. I loved it.

After college, it took me a good 5-6 years to start reading for fun again, and it's never quite been the same.

[–] Cherries@lemmy.world 10 points 3 weeks ago

It's the natural result of how our society treats education. The end result is more valued than the process. Getting an A is more important than learning the material. When we tell kids that they need good grades to get into a good college to have a good life, education becomes a means to an end, an obstacle to be circumvented.

I didn't enjoy learning until I got out of the public education system. If I had chatgpt in high school I would have 100% used it because high school was just the place to prove I deserved to go to college. It wasn't a place of learning, everyone treated it as the crucible to access a better life instead of a place to figure out what you love.

AI will continue to be a problem the same way cheating will continue to be a problem. They have the same solution: we need to place more value on the learning process than the end results.

[–] kandoh@reddthat.com 45 points 3 weeks ago (2 children)

It's only going to get worse. We're going to encounter people who are basically being piloted by AI throughout their lives, with everything they do.

[–] WorldsDumbestMan@lemmy.today 10 points 3 weeks ago (1 children)

I don't see why I should not become a meat puppet for AI. Every decision I make seems to be wrong. Why would I let myself make any more?

[–] lohky@lemmy.world 35 points 3 weeks ago (5 children)

I hate that LLMs have fucked my ability to find decent documentation. The Internet is done for. I'm learning to garden and do basic electronics from text books now.

[–] BranBucket@lemmy.world 28 points 3 weeks ago* (last edited 3 weeks ago)

I feel like this is a progression of a trend I've been railing against for a while. My workplace has to contend with a massive amount of ever-changing regulatory and engineering information. There are thousands of pages of documents, with differing levels of authority and detail, governing all aspects of what we do.

I've been begging people to read the docs. Don't just ask your manager or predecessor, don't just skim through it, and for fuck's sake don't ctrl+f until you find something that looks good and run with it out of context. Treating this sort of research like a Google search is killing us during compliance inspections. Read the docs!

Shit changes, often. I have to constantly remind them, it's not what the docs said last year. It's what they say now. Know your responsibilities, know where to find the info that pertains to them, and review it often. Read it, know it, or at least know where to find it.

It's getting worse. I've seen experienced people submit supplemental documents with egregious errors after they "just used AI for grammar checking". I've seen proposed policy docs with references to regulations that are decades out of date. I've gotten questions about implementing things that were outlawed or obsolete before I was born, and I've been around a looooong while.

We can't meat-puppet our way through this, blindly following AI, or people are going to die in horrible industrial accidents. I mean that literally. People will be killed. This is why we have the current mass quantities of regulatory documents: to prevent people from literally dying in awful ways.

I'm too old for this shit.

[–] Jankatarch@lemmy.world 24 points 3 weeks ago (4 children)

On one hand I don't blame people for wanting to make money.

On the other hand, how come EVERYONE is in it for the money?

Integrity is all gone and I hate that I can be in classes with 40 CS majors and still can't share my hobby of programming with anyone.

[–] xep@discuss.online 10 points 3 weeks ago* (last edited 3 weeks ago)

Because corporate capture has made society all about money.

[–] SuspciousCarrot78@lemmy.world 23 points 3 weeks ago* (last edited 3 weeks ago) (17 children)

It's not about AI; it's about how people are USING AI.

Take, for example, this recent video from Language Jones, showing how to use AI to leverage your native intelligence for language learning (yes, it's from a PhD in linguistics, and yes, he cites research; "always bring receipts" is Logic 101). He shows how AI works best as a Socratic tutor, forcing you to generate answers rather than replacing thinking.

https://www.youtube.com/watch?v=xQXiSGDXknA

When used properly, AI is a force magnifier par excellence. When used in the way you're likely encountering (young cohort? poor attention span? no training in formal reasoning, logic?) then yeah... "shit's fucked" (in the Australian vernacular).

I used to teach biomed, just before AI took over (so, circa 2013-2019). Attention spans were already alarmingly low, and we'd have to instigate movement breaks, intermissions, breakouts, etc. I had to fucking tap-dance out there: anything to keep "engagement" high and avoid the dreaded attrition KPIs.

The days of students being able to concentrate for 60+ minutes in a row are likely gone. Hell, there's an oft-repeated meme stat that average attention span on digital devices has dropped from two and a half minutes in 2004 to 47 seconds today. Whether or not you consider its provenance dubious, it does point to "people have trouble paying attention".

But...that's not AI's fault. The "shit was already fucked".

I think there's something (still) to be said for the Classical Education Method. We need things like that. We need to teach our young ones about things like "intuition pumps", "street epistemology", reasoning, etc. And we can use ShitGPT to do it.

Take a simple example: a student uses ChatGPT to write an essay on climate policy. The AI generates a claim. Now ask: "What would prove this wrong?" If they can't answer - if they can't articulate what evidence or logic would falsify it - they don't understand it.

They've outsourced the reasoning. That's the difference.

It's not easy out there; it never was. But there's a confluence of factors (popular culture, digital devices, changing demographics, family dynamics, "education" being streamlined as vocational pre-training etc etc ad infinitum) that certainly seem to be actively hostile towards developing thinkers.

Here endeth the pro-clanker sermon.

Ramen; may we be blessed by his noodly appendage.

PS: I’m actually pretty hostile to AI myself and have been working on an open-source engineering approach to mitigate some of these issues. Happy to share it if you're curious (not selling anything, it's open source: just something I'm trying to use to solve this sort of issue for myself).

[–] mojofrododojo@lemmy.world 23 points 3 weeks ago (18 children)

yep. watching kids squander their one chance at university education over their reliance on this shit is depressing as fuck.

[–] sloppy_diffuser@sh.itjust.works 21 points 3 weeks ago (4 children)

I'm in software development and land on both sides of this argument.

Having to review or maintain AI slop is infuriating.

That said, it has replaced traditional web searching for me. A good assistant setup can run multiple web searches for me, distill the useful info cutting through the blog spam and ads, run follow up searches for additional info if needed, and summarize the results in seconds with references if I want to validate its output.
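To make that concrete, here's a toy sketch of the kind of distill-and-cite pipeline described: run several queries, drop spammy and duplicate results, and keep references for validation. Everything in it (the corpus, the spam scores, the function names) is hypothetical stand-in data, not a real search API or assistant framework.

```python
# Toy sketch of a "search assistant" pipeline: gather results from
# several queries, drop spammy/duplicate entries, and emit a short
# summary with references. All data here is hypothetical stand-in
# content; a real setup would call an actual search API and an LLM.

def fake_search(query):
    """Stand-in for a web search call: returns (url, text, spam_score)."""
    corpus = {
        "python csv parsing": [
            ("https://example.org/csv", "Use the csv module's DictReader.", 0.1),
            ("https://spam.example/ad", "BUY NOW!!! Best CSV course!!!", 0.9),
        ],
        "python csv quoting": [
            ("https://example.org/csv", "Use the csv module's DictReader.", 0.1),
            ("https://example.org/quoting", "Set quoting=csv.QUOTE_MINIMAL.", 0.2),
        ],
    }
    return corpus.get(query, [])

def distill(queries, spam_threshold=0.5):
    """Run several searches, filter spam, dedupe by URL, keep references."""
    seen, kept = set(), []
    for q in queries:
        for url, text, spam in fake_search(q):
            if spam >= spam_threshold or url in seen:
                continue  # cut blog spam/ads and duplicate pages
            seen.add(url)
            kept.append((url, text))
    summary = " ".join(text for _, text in kept)
    references = [url for url, _ in kept]
    return summary, references

summary, refs = distill(["python csv parsing", "python csv quoting"])
print(summary)
print(refs)
```

The point of the sketch is the shape, not the parts: the "distilling" value comes from the filter-and-dedupe step plus keeping the source URLs around so the output can be checked.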

There was a post a couple days ago about it solving a hard math problem with guidance from a mathematician. Sparked a discussion about AI being a powerful tool in the right hands.

[–] surewhynotlem@lemmy.world 63 points 3 weeks ago (1 children)

cutting through the blog spam and ads

We've solved the problem of enshittification of the web by having robots consume the shit for us!

[–] Jesusaurus@lemmy.world 38 points 3 weeks ago

And create an equal amount, if not more shit. Take that entropy!

[–] expatriado@lemmy.world 49 points 3 weeks ago

has replaced traditional web searching for me

I think part of the problem is that web search has enshittified over the years. Back in the day you would enter the relevant keywords and get the info you needed in the top results most of the time; nowadays it's all ads. Now AI gets to the point, but is less reliable. Almost like Gemini trying to solve a problem that Google itself created.

[–] DarrinBrunner@lemmy.world 25 points 3 weeks ago (7 children)

You trust it to "distill the useful info"? How do you know it's not throwing out important pieces just to lead you down the garden path, or, maybe because it "thinks" you wouldn't be interested because of all it "knows" about you? If you need to check everything it does, why not just do it yourself?

[–] GarboDog@lemmy.world 19 points 3 weeks ago

Not a teacher, but I was a student in language school and will hopefully be a student again soon. Last time we were in language school, everyone was using ChatGPT to get answers on worksheets and translations and stuff, just to get a passing grade, when in reality the class didn't actually have a grade. They were cheating for nothing: paying for a class to learn, and swapping out the critical language learning for slop????

Granted, we were allowed translators for words we didn't know yet or had trouble with grammatically (us especially, since autism moment), but we only used Google Translate, and normally only for single words, which were then put into our need-to-learn vocab list.

At first we felt stupid because everyone seemed to be finished way before us, and at lightning speed, understanding what's going on?? But we started to notice they were asking on their phones and not in the active workbook, and after a while found out it was ChatGPT. They even said we should get it so we wouldn't fall behind, and yet we were trying to actually learn. Anywho, on the spoken portions and exams, we and the 2 other people who didn't use GPT passed without issue. :P

[–] Eggyhead@lemmy.world 17 points 3 weeks ago (2 children)

When I try to do a general search for help on how to solve a problem, the top results in most search engines aren’t the old Academy-style videos or guides anymore. They are sponsored links, paid tutoring websites, and YouTube videos of people playing at influencer instead of teaching.

Just wait until the AI companies move on from the onboarding phase and into the enshittification one.

[–] HugeNerd@lemmy.ca 17 points 3 weeks ago (2 children)

It's called ChatGPT. Not ExpertGPT, ScientistGPT, EngineerGPT, DoctorGPT, or fucking TeacherGPT.

I have no idea how a novelty Eliza 2.0 impresses so many microcephalics to the point it's destroying our society.

[–] lemmie689@lemmy.sdf.org 15 points 3 weeks ago (1 children)
[–] aramis87@fedia.io 9 points 3 weeks ago

Okay, wow, a Logan's Run TV series meme in the wild - cool!

[–] ritsku@lemmy.world 14 points 3 weeks ago* (last edited 3 weeks ago) (17 children)

Perhaps handwritten in-class research papers need to make a comeback

[–] dream_weasel@sh.itjust.works 13 points 3 weeks ago

You can use AI to learn everything OR to learn nothing. They've made the second choice.

[–] MidsizedSedan@lemmy.world 12 points 3 weeks ago

I explain to the students that the essays/tests are to find out what YOU know. Not the computer. Not the friend you ask for help. YOU.

You are getting your year 12 certificate some day. Not your friend.

[–] deadymouse@lemmy.world 12 points 3 weeks ago* (last edited 3 weeks ago) (6 children)

If this annoys you, watch the animated movie WALL-E. Sooner or later, humanity will come to something like this, and then it will self-destruct.

[–] GreenKnight23@lemmy.world 11 points 3 weeks ago

It’s not even about the tech, there’s just something about not wanting to learn that deeply upsets me.

and this is exactly why I hate the "new age" society.

[–] Omega_Jimes@lemmy.ca 10 points 3 weeks ago

I'm a returning student, and I'm really upset at the level that my cohort turns to AI for everything. There's no effort to think about a problem when most school work is already in the system and can be retrieved easily.

It's also upsetting to me how attractive an option it can be. I can spend all this time working on something, or I can drop a simple prompt and just rewrite the answer in my own words. When you have something that you forgot about or procrastinated on, it's really attractive to say "What are some themes in Moby Dick?" and basically rewrite the response. What's even more upsetting than that is the number of people who won't even retype it to pass it off as their own, but just copy/paste.

An increasing number of my profs are adding AI into their courses, with the feeling that it's inevitable. It's distressing to see an assignment that calls for us to ask a chatbot about the problem and then fact-check it. I feel like that's teaching the exact opposite of what these kids need.

[–] AntiBullyRanger@ani.social 9 points 3 weeks ago* (last edited 3 weeks ago) (2 children)

U hate slop ’cause students swap slop fo thinking. I detest slop because it's fashprop praxis. WR≢

🤝 In slop hate we concur.
