this post was submitted on 04 Jan 2026
383 points (95.3% liked)
Showerthoughts
No, it doesn't. It's just mimicry. Autocomplete on steroids.
Have you met many people?
Most people's entire lives are a form of autocomplete.
Obvious non-argument is obvious.
My father is convinced that humans and dinosaurs coexisted, and he told me that AI proved it to him. So... people do let it think for them.
So he lets the "AI" do the hallucinating for him.
Yep lol.
This was true last year. But they are cranking through the ARC-AGI benchmarks, which were designed specifically to test the kinds of things that can't be done by just regurgitating training data.
On GPT-3 I was getting a lot of hallucinations and wrong answers. On the current version of Gemini, I really haven't been able to detect any errors in things I've asked it. They are doing math correctly now, researching things well, and putting thoughts together coherently. Even photos that I couldn't get old models to generate now come back pretty much exactly as I ask.
I was sort of holding out hope that LLMs would peak somewhere just below being really useful. But with RAG and agentic approaches, it seems they will sidestep the vast majority of problems that LLMs have on their own and be able to put together something that is better than even very good humans at most tasks.
I hope I'm wrong, but it's getting pretty hard to bank on the old narrative that they're just fancy autocomplete that can't think.
That's a lot of bullshit.
this bubble can't pop soon enough
was dotcom this annoying too?
Surprisingly, it was not this annoying.
It was very annoying, but at least there was an end in sight, and some of it was useful.
We all knew that http://www.only-socks-and-only-for-cats.com/ was going away, but eBay was still pretty great.
In contrast, we're all standing around today looking at many times the world's GDP being bet on a pretty good autocomplete algorithm waking up and becoming fully sentient.
It feels like a different level of irrational.
The dot-com bubble was optimistic; the AI bubble is pessimistic. People thought their lives would improve through better communication and efficiency; the internet was seen as a positive thing. The dot-com bubble was more about monetizing it, but that wasn't the zeitgeist. With AI, people don't see much benefit and are aware its purpose is to take their jobs.
With the dot-com bubble, it was mainly mom-and-pop investors who were worst off, though many companies died too. With the AI bubble, it seems like it's the companies that will fare worst when it crashes. Obviously it affects everyone, but this skews more toward the 1%. So hopefully it's a lesson on greed. Unlikely, though.
To me, this is more annoying. But I might have been too young and naïve back then.
If you can't see it you're not paying attention.
If you're seeing it, you're delusional.
I'm pleased to inform you that you are wrong.
A large language model works by predicting the statistically likely next token in a string of tokens, and repeating until it's statistically likely that its response has finished.
You can think of a token as a word, but in reality tokens can be individual characters, parts of words, whole words, or multiple words in sequence.
The only addition these "agentic" models have is special-purpose tokens: one that means "launch program", for example.
That's literally how it works.
AI. Cannot. Think.
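The mechanism described above can be sketched in a few lines. This is a toy bigram table, not a real LLM (which uses a neural network over huge vocabularies), and all the tokens and probabilities here are made up for illustration; but the generation loop — look at the context, pick the statistically likely next token, stop at an end-of-sequence token — is the same shape:

```python
# Hypothetical bigram "model": probability of the next token
# given only the previous one. A real LLM conditions on the
# whole context window, but the loop below is the same idea.
BIGRAM_PROBS = {
    "<s>": {"the": 0.9, "a": 0.1},
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.8, "</s>": 0.2},
    "dog": {"sat": 0.8, "</s>": 0.2},
    "sat": {"</s>": 1.0},
}

def generate(max_tokens=10):
    tokens = ["<s>"]  # start-of-sequence token
    while len(tokens) < max_tokens:
        dist = BIGRAM_PROBS[tokens[-1]]
        # Greedy decoding: take the single most likely next token.
        # Real models usually sample from the distribution instead.
        next_token = max(dist, key=dist.get)
        if next_token == "</s>":  # model predicts "response finished"
            break
        tokens.append(next_token)
    return tokens[1:]

print(generate())  # greedy path: ['the', 'cat', 'sat']
```

Note there's no world model anywhere in that loop: the output looks like language purely because the probability table was built from language.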
And what about non-LLM models like diffusion models, VL-JEPA, SSM, VLA, SNN? Just because you are ignorant of what's happening in the industry and repeating a narrative that worked 2 years ago doesn't make it true.
And even with LLMs, even if they aren't "thinking", if they produce results as good as or better than real human "thinking" in major domains, does it even matter? The fact is that there will be many types of models working in very different ways, working together, and together they will beat humans at tasks that are uniquely human.
Go learn about ARC-AGI and see the progress being made there. Yes, it will take a few more iterations of the benchmark to really challenge humans at the most human tasks, but at the rate they're going, that's only a few years away.
Or just stay ignorant and keep repeating your little mantra so that you feel okay. It won't change what actually happens.
Yeah, those also can't think, and that won't change soon.
The real problem, though, isn't whether an LLM can think; it's that people will interact with it as if it can, and will let it do the decision-making even when it's not far from throwing dice.
We don't even know what "thinking" really is, so that's just semantics. If it performs as well as or better than humans at certain tasks, it really doesn't matter if it's "thinking" or not.
I don't think people primarily want to use it for decision-making anyway. For me it just turbocharges research: compiling stuff quickly from many sources, writing code for small modules quite well, generating images for presentations, doing more complex data munging from spreadsheets. It even saved me a bunch of time taking a 50-page handwritten ledger and converting it to Excel nearly perfectly.
None of that requires decision-making, but it saves a bunch of time. Honestly, I've never asked it to make a decision, so I have no idea how it would perform. I suspect it would describe the pros and cons rather than actually try to decide something.