This post was submitted on 31 Oct 2025
839 points (96.6% liked)

Showerthoughts


[I literally had this thought in the shower this morning so please don't gatekeep me lol.]

If AI was something everyone wanted or needed, it wouldn't be constantly shoved in your face by every product. People would just use it.

Imagine if printers were new and every piece of software was like "Hey, I can put this on paper for you" every time you typed a word. That would be insane. Printing is a need, and when you need to print, you just print.

[–] kadu@scribe.disroot.org 5 points 3 days ago (3 children)

LLMs have amazing potential.

That's not what studies from most universities, Anthropic, OpenAI, Apple, and Samsung show.

Even if we didn't have this data - and we do have it - are you truly impressed by a machine that can simulate what a Reddit user said 6 months ago? Really? Either you're massively underselling the actual Industrial Revolution, or you'd be easily impressed by a child's magic trick.

[–] LeFantome@programming.dev 0 points 3 days ago

The Industrial Revolution was literally “are you truly impressed by a machine that can weave cloth as well as your grandmother?” And the answer was yes, because one person could be trained to use that machine in much less time than it took to learn to weave. And they could make 10 times as much stuff in the same time.

LLMs are literally the same kind of progress.

Except we are not 200 years later, when the impact on the world is obvious and not up for debate. We are in the first few years, when the “machine” would be broken half the time and its work would have obvious defects.

[–] grindemup@lemmy.world 0 points 3 days ago

Honestly, yes, I am impressed when you compare it to what was possible with NLP prior to LLMs. Your question is akin to asking: are you truly impressed by a machine that can stick blocks together as well as some random person? Regardless of whether you are impressed, significant amounts of human labour can now be reproduced by machines in a manner that was previously impossible. Obviously there's still a lot of undeserved hype, but let's not pretend that replicating human language is trivial or worthless.

[–] Semi_Hemi_Demigod@lemmy.world 0 points 3 days ago* (last edited 3 days ago) (1 children)

I recently created a week-long IT training course with an AI. It got almost all of it right, only hallucinating when it came to details I had to fix. But it cut a task that would have taken me a couple of months down to a couple of weeks. So for specific applications it is actually quite useful. (Because it's basically rephrasing what a bunch of people wrote on Reddit.)

For this use case I would call it as revolutionary as desktop publishing. Desktop publishing allowed people to produce in a couple days what it would have taken a team of designers and copy editors to do in a couple weeks.

Everything else I've used it for, it's been pretty terrible at, especially diagnosing issues. That's largely because it will just make shit up if it doesn't know, so if you also don't know, you can't trust it, and you end up doing the research and experimentation yourself.

[–] akacastor@lemmy.world 3 points 3 days ago (1 children)

"It got almost all of it right, only hallucinating when it came to details I had to fix."

What does this even mean? It did a great job, the only problems were the parts I had to fix? 🤣

[–] Semi_Hemi_Demigod@lemmy.world -2 points 3 days ago (1 children)

Most of it was basic knowledge that it could get from its training on the web. The stuff it missed was details about things specific to the product.

But generating 90% of the content and me just having to edit a bit is still way less work than me doing it all myself, even if I got it right the first time.

It’s got intern-level intelligence

[–] BluescreenOfDeath@lemmy.world 2 points 3 days ago* (last edited 3 days ago) (1 children)

It’s got intern-level intelligence

The problem is, it's not "intelligence". It's an enormous statistics-based autocorrect.

AI doesn't understand math; it just knows that the next character in a string starting "2+2=" is almost always "4" across all the data it has statistically analyzed. If you try to have it solve an equation that isn't commonly repeated, it can't solve it. Even when you try to train it on textbooks, it doesn't 'learn' the math; it analyzes the word patterns in the text of the book and attempts to replicate them. That's why it 'hallucinates', and also why, no matter how much data you feed it, it won't be 'intelligent'.

It seems intelligent because we associate intelligence with language, and LLMs mimic language in an amazing way. But it's not 'thinking' in the way we associate with intelligence. It's running complex math to predict which word should come next in a sentence, based on similar sentences it has seen before.
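To make the "statistical autocorrect" point concrete, here is a minimal toy sketch (an illustration only, not code from any real LLM): a tiny bigram model that continues text purely from counts of which token followed which in its training data. Real LLMs condition on the entire context with a neural network rather than a lookup table, but the principle of emitting a statistically likely next token is the same.

```python
import random
from collections import Counter, defaultdict

# Tiny stand-in "training data": the model can only reproduce its statistics.
corpus = "2 + 2 = 4 . 2 + 2 = 4 . 2 + 3 = 5 . 3 + 3 = 6 . 2 + 2 = 4 ."
tokens = corpus.split()

# Count how often each token follows each other token (a bigram table).
follows = defaultdict(Counter)
for prev, nxt in zip(tokens, tokens[1:]):
    follows[prev][nxt] += 1

def next_token(prev: str) -> str:
    """Sample the next token in proportion to how often it followed `prev` in the corpus."""
    options = follows[prev]
    return random.choices(list(options), weights=list(options.values()))[0]

# "=" is most often followed by "4" in this corpus, so "4" usually comes out,
# not because any arithmetic happened, just because of the counts.
print(next_token("="))

# Generate a short continuation the same way, one token at a time.
out = ["2", "+"]
for _ in range(4):
    out.append(next_token(out[-1]))
print(" ".join(out))
```

Scale that table up to a huge chunk of the internet and swap the counts for a neural network, and you get, roughly, the kind of next-token prediction being described.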

[–] Semi_Hemi_Demigod@lemmy.world -1 points 3 days ago* (last edited 3 days ago) (1 children)

Interns aren't that intelligent, either. But they can generate content even if they're not intelligent and that's helpful, too.

Having the right answer is a lot less useful than looking like you have the right answer, sadly.

[–] BluescreenOfDeath@lemmy.world 1 points 2 days ago (1 children)

Interns aren’t that intelligent, either. But they can generate content even if they’re not intelligent and that’s helpful, too.

An intern has the capacity to learn; an LLM does not.

Having the right answer is a lot less useful than looking like you have the right answer, sadly.

Only if you care about accuracy, which is 100% the problem with LLMs.

[–] Semi_Hemi_Demigod@lemmy.world 1 points 2 days ago

Which is what I said: It got some stuff wrong but it got a lot more right, which saved me a ton of time generating content.