Deep thoughts
Showerthoughts
A "Showerthought" is a simple term used to describe the thoughts that pop into your head while you're doing everyday things like taking a shower, driving, or just daydreaming. The most popular seem to be lighthearted clever little truths, hidden in daily life.
Here are some examples to inspire your own showerthoughts:
- Both “200” and “160” are 2 minutes in microwave math
- When you’re a kid, you don’t realize you’re also watching your mom and dad grow up.
- More dreams have been destroyed by alarm clocks than anything else.
Rules
- All posts must be showerthoughts
- The entire showerthought must be in the title
- No politics
- If your topic is in a grey area, please phrase it to emphasize the fascinating aspects, not the dramatic aspects. You can do this by avoiding overly politicized terms such as "capitalism" and "communism". If you must make comparisons, you can say something is different without saying something is better/worse.
- A good place for politics is c/politicaldiscussion
- Posts must be original/unique
- Adhere to Lemmy's Code of Conduct and the TOS
If you made it this far, showerthoughts is accepting new mods. This community is generally tame so it's not a lot of work, but having a few more mods would help reports get addressed a little sooner.
What's it like to be a mod? Reports just show up as messages in your Lemmy inbox, and if a different mod has already addressed the report, the message goes away and you never worry about it.
There’s been an extensive marketing campaign to convince people that LLMs are intelligent. I wouldn’t call someone a subhuman for assuming there is some truth to that.
Of those that understand what an LLM is, I think you can divide them into two groups, the honest, and the dishonest. Honest people see no use in a bullshit generator, a lying machine. They see it as a perversion of technology. Dishonest people have no such objection. They might even truly see intelligence in the machine, as its outputs don’t differ substantially from their own. If you view language as a means to get what you want, rather than a means to convey factual information, then lying is acceptable, desirable, intelligent. It would be difficult for such a person to differentiate between coherent but meaningless bullshit, and a machine with agency making false statements to pursue its own goals.
I disagree about the dichotomy. I think you can (1) understand what LLMs actually are, and (2) see the value of such technology.
In both cases, while being factual (not being deceived) and not being malicious (not attempting to deceive others).
I think a reasonable use of these tools is as a "sidekick" (you being the main character). Some tasks can be assigned to it so you save some time, but the thinking and the actual mental model of what is being done shall always be your responsibility.
For example, LLMs are good as an interface to quickly look things up in manuals and books, clarify specific concepts, or find the proper terms for a vague idea (so that you can research the topic using the appropriate terms).
Of course, this is just an opinion. 100% open to discussion.
Marketing is a valid use for AI (because bullshit was always the word anyway)
Wait, let's hear OP out.
The mirror test is frequently cited as a means of testing sentience.
OP I think you hit the nail on the head.
Based on the fact that most people don't see their interaction with the LLM as gazing into the mirror, am I being led to believe that most people are not sentient???
Based entirely on the opinions of people on niche social media platforms, yes.
Except it's not my reflection, it's a reflection of millions if not billions of humans.
Except it’s not their reflection, it’s a string of phrases presented to you based partly on the commonality of similar phrases appearing next to one another in the training data, and partly on mysterious black box modifications! Fun!
I like to describe it as a "force multiplier" along the lines of a powered suit.
You are putting in small inputs, and it's echoing out in a vast, vast virtual space and being compared and connected with countless billions of possible associations. What you get back is a kind of amplification of what you put in. If you make even remotely leading suggestions in your question or prompt, that tiny suggestion is also going to get massively boosted in the background; this is part of why some LLMs can go off the rails with some users. If you don't take care with what exactly you're putting in, you will get wildly unexpected results.
also, it's devil tech so there's that.
Related: is there a name for "question bias"?
Like asking ChatGPT "is x good?" and it replies "Yes, x is good", but if you ask "is x bad?" it replies "Yes, x is bad, you're right."
It's just a leading question.
It is not a leading question. The answer just happens to be meaningless.
Asking whether something is good is the vast majority of human concern. Most of our rational activity is fundamentally evaluative.
Huh.....so what you're saying is that mirrors are actually AI.
THAT MAKES A LOT OF SENSE!!! EVERYBODY COVER YOUR MIRRORS!!!
Laughs in vampire
Not nearly enough people understand this about our current models of AI. Even people who think they understand AI don't understand this, usually because they have been talking to themselves a lot without realizing it.
Just think about the fact that LLMs are basically trying to simulate Reddit posts, and then think again about using them.

Annihilation?
I checked with that other gorilla who lives in the bathroom and he says you're wrong
Lol, is that the same gorilla you see in other bathrooms? Or do you (like me) meet a new gorilla every time you wash your hands?
I think he's the same guy. I used to try to bust him up but he just kept multiplying into more pieces and then coming back whole every time I saw a new mirror, so I eventually gave up
Except when you leave several LLMs able to communicate with one another, they will interact on their own, with no instructions, including creating their own unique social norms.
This is nothing other than the reflection I am talking about. It is not a reflection of you, the person chatting with the bot, but an "average" reflection of what humanity has expressed in the data LLMs have been trained on.
If a mirror is placed in front of another mirror, the "infinite tunnel" only exists in the mind of the observer.
No.
This is a great one - although I never see animals worshipping the mirror.
I've got a duck that prefers to dance in front of a chrome bumper or glass door where he can see his reflection rather than go after any potential mates. Possibly he's worshipping the mirror. Possibly he's just really vain.
Your duck:

Sounds like he’s ducking handsome
He is actually. When he washes himself he's blinding white. And when he dances he gets a little feather pompadour on the top of his head.
Nothing wrong with a handsome duck taking a little self affirmation time - he knows his value, he can't look away.
Or forming romantic attachments to the mirror
Uhmm ... you never had a pet bird, I'm guessing?
Seeing a bird masturbate up against a mirror is just par for the course when you have pet birds. It's gonna be either a mirror, a favorite toy ... or you.
Animals aren't cursed with the human ability to think our way into harmful and unproductive behavior through conscious re-interpretation of the information around us. Except for the occasional zoo animals in captivity that fall in love with inanimate objects.
Something something about our species basically being in captivity.
Ooofff... Good call
And here I am practising my smile in the mirror (like that golden retriever)
My dog doesn't pay any attention to mirrors, or LLMs.
False. My reflection can't tell me that pressing the Steam button and X will bring up the keyboard on Steam Deck's desktop mode.
Pressing and holding the Steam button shows you every Steam shortcut.
If I understand your statement correctly, only the most intelligent creatures would understand that LLMs are themselves?
But they are a reflection of ourselves. If you look at the algebra and stats underneath, you'll realize that they spit out whatever is in us.
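To make that "reflection" point concrete, here's a minimal sketch using a toy bigram model (nothing like a real transformer; the corpus and function names are made up for illustration). The point it demonstrates is that such a model can only ever re-emit word transitions that appeared in its training data, so its output is literally a recombination of what was put in:

```python
import random
from collections import defaultdict

# Toy illustration, NOT a real LLM: a bigram model can only follow
# word-to-word transitions observed in its training text.
corpus = "the model reflects the text we feed the model".split()

# Count every observed (word -> next word) transition.
transitions = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    transitions[a].append(b)

def generate(start, n, seed=0):
    """Sample up to n words by following observed transitions only."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n - 1):
        options = transitions.get(out[-1])
        if not options:
            break
        out.append(rng.choice(options))
    return out

sample = generate("the", 6)
# Every adjacent word pair in the sample occurred in the corpus:
seen_pairs = set(zip(corpus, corpus[1:]))
assert all(pair in seen_pairs for pair in zip(sample, sample[1:]))
print(" ".join(sample))
```

Real LLMs add enormous amounts of generalization on top of this, but the underlying principle (statistics fit to human-produced text) is why the "mirror of humanity" framing holds up.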
The "mirror stage" by Lacan is worth looking into here, but no, I don't think humans automatically think of the LLM as a sort of reified other. But as we get past an uncanny valley and into generations growing up with entire personal histories in an LLM, I can absolutely see that happening.
I think most people using them know that there's not something there, and yet when using it they'll still act as if there is. They'll even give its output the benefit of the doubt as valid, as if it came from another person, maybe even one who knows more than they do. So it's a gray area of "not believing".
Continuing on "there's not something there": what about a theater analogy for LLMs? Usually people know it's all an act, but if the actors play well enough, you become immersed. And its focus is not the truth; it tells a story.
I find this kind of Anti AI Sentience bigotry horrible!
Interesting take. Could you elaborate?
My post comes from studying the algebra and stats that enable LLMs (well, part of it; I am not done with the "attention" part).
I was making a joke based on my username