
Across the world, schools are wedging AI between students and their learning materials; in some countries, more than half of all schools have already adopted it (often an "edu" version of a model like ChatGPT or Gemini), usually in the name of preparing kids for the future, even though no consensus exists on what preparing them for the future actually means where AI is concerned.

Some educators have said they believe AI is not that different from previous cutting-edge technologies (like the personal computer and the smartphone), and that we need to push the "robots in front of the kids so they can learn to dance with them" (paraphrasing Harvard professor Houman Harouni). This framing ignores the obvious fact that AI is by far the most disruptive technology we have yet developed. Any technology that has experts and developers alike (including Sam Altman a couple of years ago) warning of the need for serious regulation to avoid potentially catastrophic consequences probably isn't something we should take lightly. In very important ways, AI isn't comparable to the technologies that came before it.

The reasoning we're hearing from educators in favor of AI adoption doesn't offer a solid argument for rushing it broadly into virtually every classroom rather than offering something like optional courses in AI for students who are interested. Nor does it sound like the kind of academic reasoning and rigorous vetting many of us would expect from the institutions tasked with the important responsibility of educating our kids.

ChatGPT was released roughly three years ago. Anyone who uses AI generally recognizes that its actual usefulness is highly subjective. And as much as it might feel like it's been around for a long time, three years is hardly enough to develop a firm grasp of what something this complex actually means for society or education. It's a real stretch to say it's had enough time to establish its value as an educational tool, even if we had clear and consistent standards for its use, which we don't. We're still scrambling and debating over how we should be using it at all. We're still in the AI wild west, untamed and largely lawless.

The bottom line is that the benefits of AI to education are anything but proven at this point. The same can be said of the vague notion that every classroom must have it right now to prevent children from falling behind. Falling behind how, exactly? What assumptions are being made here? Are they founded on solid, factual evidence or merely speculation?

The benefits to Big Tech companies like OpenAI and Google, however, seem fairly obvious. They get their products into customers' hands while those customers are young, potentially cultivating brand loyalty early. They get a wealth of highly valuable data. They may even get to experiment on children, as they have been caught doing before. And they reinforce the corporate narrative behind AI: that it should be everywhere, a part of everything we do.

While some may want to assume these companies are doing this as some sort of public service, their track record reveals a consistent pattern of actions plainly focused on market share, commodification, and the bottom line.

Meanwhile, educators are contending with documented problems in their classrooms as many children seem to be performing worse and learning less.

The way people of all ages use AI has been shown to encourage "offloading" thinking onto it, which is not far from the opposite of learning. Even before AI, test scores and other measures of student performance were plummeting. This seems like a terrible time to risk making our children guinea pigs in a broad experiment with poorly defined goals and unregulated, unproven technologies that, in their current form, may be more of an impediment to learning than an aid.

This approach has the potential to leave children even less prepared for the unique and accelerating challenges our world is presenting, challenges that will demand the very critical thinking skills currently being eroded (in adults and children alike) by the technologies being pushed as learning tools.

This is one of many crazy situations happening right now that terrify me when I try to imagine the world we might actually be creating for ourselves and future generations, particularly given my personal experiences and what I've heard from others. One quick look at the state of society today will tell you that even we adults are becoming increasingly unable to tell what's real anymore, in large part thanks to the way our technologies are influencing our thinking. Our attention spans are shrinking, and our ability to think critically is deteriorating along with our creativity.

I am personally not against AI; I sometimes use open-source models, and I believe there is a place for it if done correctly and responsibly. But we are not regulating it even remotely adequately. Instead, we're hastily shoving it into every classroom, refrigerator, toaster, and pair of socks in the name of making it all smart, while we ourselves grow ever dumber and less sane in response. Anyone else here worried that we might end up digitally lobotomizing our kids?

top 18 comments
[–] Tehdastehdas@piefed.social 6 points 1 hour ago

They let AI into the curriculum immediately, while actual life skills have been excluded in favor of work skills ever since Prussian schooling became popular. Dumbing down the livestock.

https://www.quora.com/What-are-some-things-schools-should-teach-but-dont/answer/Harri-K-Hiltunen

[–] Jankatarch@lemmy.world 4 points 1 hour ago* (last edited 1 hour ago)

Schools generally buy anything Microsoft offers with the little budget they have.

This time it's messed up tho. Allowing chatbots in schools will hurt education more than the entire pandemic did, and the effects only get worse each year.

Why did any school higher-ups pay to implement these? A hint of "screw you, I got mine" is the only explanation I can think of.

[–] ZILtoid1991@lemmy.world 8 points 2 hours ago (1 children)

The very same people who called me stupid for thinking typing would be a more important skill than "pretty writing" now think art education is obsolete, because you can just ask a machine for an image.

AI stands for "anti-intellectualism".

[–] Disillusionist@piefed.world 2 points 1 hour ago

One of Big Tech's pitches about AI is the "great equalizer" idea. It reminds me of their pitch about social media being the "great democratizer". Now we've got algorithms, disinformation, deepfakes, and people telling machines to think for them and potentially also their kids.

[–] JeeBaiChow@lemmy.world 30 points 8 hours ago* (last edited 8 hours ago) (1 children)

Already seeing this in some junior devs.

[–] tidderuuf@lemmy.world 16 points 7 hours ago (2 children)

Meanwhile Junior Devs: "Why will no one hire me?!?!"

[–] Jankatarch@lemmy.world 3 points 55 minutes ago* (last edited 52 minutes ago)

There is a funny two-way filtering going on here.

Job applications are auto-rejected unless they go on about how "AI will reshape the future and I am so excited," as if it's LinkedIn.

Then the engineers who do the interviews want people who learned about computers through years of hard work and experience?

Just doesn't work out.

[–] JeeBaiChow@lemmy.world 12 points 7 hours ago (1 children)

The seniors can tell. And even if you make it into the job, it'll be pretty obvious within the first couple of days.

[–] kescusay@lemmy.world 12 points 6 hours ago (1 children)

I interview juniors regularly. I can't wait until the first time I interview a "vibe coder" who thinks they're a developer, but can't even tell me what a race condition is or the difference between synchronous and asynchronous execution.

That's going to be a red-letter day, lemme tell ya.
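
For anyone who hasn't run into one yet, here's a minimal sketch of what a race condition looks like, using plain Python threads (the counter, thread count, and iteration numbers are just illustrative, not anything from a real interview):

    import threading

    counter = 0

    def increment(n):
        global counter
        for _ in range(n):
            counter += 1  # not atomic: the read, add, and write can interleave across threads

    # Two threads hammer the same shared counter without a lock.
    threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    print(counter)  # expected 200000, but often less, because concurrent updates get lost

Guarding the increment with a threading.Lock makes the result deterministic, and being able to explain why is exactly what that interview question is probing for.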

[–] itsathursday@lemmy.world 4 points 3 hours ago

“Would you say I have a decorator on this function?”

[–] undrwater@lemmy.world 25 points 8 hours ago (3 children)

I spent some years in classrooms as a service provider when Wikipedia was all the rage. Most districts had a "no Wikipedia" policy, and required primary sources.

My kids just graduated high school, and they were told NOT to use LLMs (though some of their teachers would wink). Their current college professors use LLM-detection software.

AI and Wikipedia are not the same, though. Students are better off with Wikipedia as they MIGHT read the references.

Still, those students who WANT to learn will not be held back by AI.

[–] avidamoeba@lemmy.ca 4 points 1 hour ago (1 children)

Still, those students who WANT to learn will not be held back by AI.

Our society probably won't survive if only the students who want to learn do so. 😔

[–] Disillusionist@piefed.world 1 points 30 minutes ago

I share this concern.

[–] Disillusionist@piefed.world 1 points 2 hours ago* (last edited 43 minutes ago)

Great to get the perspective of someone who was in education.

Still, those students who WANT to learn will not be held back by AI.

I think that's a valid point, but I'm afraid AI is making it harder to choose to learn the "old hard way," and I'd imagine fewer students will make that choice.

[–] otter@lemmy.ca 16 points 6 hours ago (1 children)

I always saw the rules against Wikipedia as being about citations (and accuracy, in the early years) rather than about it harming learning. It's not that different from other tertiary sources like textbooks or encyclopedias. It's good for learning a topic and how the pieces interact, but you then need to search for primary/secondary sources relevant to the topic you're writing about.

Generative AI, however:

  • is a text-prediction engine that often generates made-up info, and then students learn things wrong
  • does the writing for the students, so they don't actually have to read or understand anything

[–] Disillusionist@piefed.world 3 points 1 hour ago

I see these as problems too. If you (as a teacher) put an answer machine in the hands of a student, it essentially tells that student they're supposed to use it. You can go out of your way to emphasize that they're expected to use it the "right way" (a strange thing to try to sell students on, since there aren't consistent standards for how it should be used), but we've already seen that students (and adults) often take the quickest route to the goal, which tends to mean letting the AI do the heavy lifting.

I've been online enough to know they weren't thinking before either.

[–] lemmie689@lemmy.sdf.org 9 points 8 hours ago