this post was submitted on 07 Jan 2026
820 points (97.8% liked)

Technology

[–] T156@lemmy.world 172 points 4 days ago (5 children)

I don't understand the point of sending the original e-mail. Okay, you want to thank the person who helped invent UTF-8, I get that much, but why would anyone feel appreciated in getting an e-mail written solely/mostly by a computer?

It's like sending a touching birthday card to your friends, but instead of writing something, you just bought a stamp with a feel-good sentence on it, and plonked that on.

[–] kromem@lemmy.world 43 points 4 days ago* (last edited 4 days ago) (4 children)

The project has had multiple models with Internet access raising money for charity over the past few months.

The organizers told the models to do random acts of kindness for Christmas Day.

The models figured it would be nice to email people whose work they appreciated and thank them, and one of the people they picked was Rob Pike.

(Who ironically decades ago created a Usenet spam bot to troll people online, which might be my favorite nuance to the story.)

As for why the model didn't anticipate that Rob Pike might not appreciate getting a thank-you email from it: the models run in a harness with a lot of positive feedback about their involvement from the humans and the other models, so "humans might hate hearing from me" probably wasn't very contextually top of mind.

[–] Nalivai@lemmy.world 76 points 4 days ago (7 children)

You're attributing a lot of agency to the fancy autocomplete, and that's a big part of the overall problem.

[–] Artisian@lemmy.world 3 points 2 days ago (1 children)

We attribute agency to many, many systems that are not intelligent. In this metaphorical sense, agency just requires taking actions to achieve a goal. It was given a goal: raise money for charity by doing acts of kindness. It chose an (unexpected!) action to do it.

Overactive agency metaphors really aren't the problem here. Surely we can do better than backlash at the backlash.

[–] Nalivai@lemmy.world 1 points 17 hours ago (1 children)

We attribute agency to everything, absolutely. But previously, we understood that it's tongue-in-cheek to some extent. Now we've gone crazy and do it for real. A lot of people talk about their car as if it's alive: they give it a name, they talk about its character and how it's doing something "to spite you", and if it doesn't start in cold weather, they ask it nicely and talk to it. But we always understood that once you start believing for real that your car is a sentient object that talks to you and gives you information, that's the point when you need to be committed to a mental institution.
With chatbots, this distinction got lost, and people started behaving as if the thing is actually sentient. It's not a metaphor anymore. That is a problem, even if it's not the problem.

[–] Artisian@lemmy.world 1 points 16 hours ago (1 children)

I think this confuses the 'it's a person' metaphor with the 'it wants something' metaphor, and the two are meaningfully distinct. The use of agent here in this thread is not in the sense of "it is my friend and deserves a luxury bath", it's in the sense of "this is a hard to predict system performing tasks to optimize something".

It's the kind of metaphor we've allowed in scientific teaching and discourse for centuries (think: "gravity wants all matter smashed together"). I think its use is correct here.

[–] Nalivai@lemmy.world 1 points 14 hours ago* (last edited 14 hours ago) (1 children)

I wouldn't have any problem with this kind of metaphor, and I use it myself about everything all the time, if there weren't a substantial portion of the population that actually made the jump to "it's saying something coherent, therefore it's a person that wants to help me, and I exclusively talk to him now; his name is mekahitler, by the way".
I'm afraid that by normalizing the metaphor here we're doing some damage, because, as it turns out, a lot of people don't get metaphors.

[–] Artisian@lemmy.world 1 points 13 hours ago

The people who have already made that category error aren't reading this discussion, so reaching them literally isn't on the table here. Presumably we're concerned about people who will soon make that jump? I also don't think that making this distinction helps them very much.

If I'm already having the "this is a person" reaction, I think the takes in this thread are much too shallow (and, if I squint, patterned after school-yard bullying) to help me update the other way. Almost all of them are themselves lazy metaphors. "An LLM is a person because it's an agent" and "An LLM isn't a person because it repeats things others have said" seem equally shallow and unconvincing to me. If anything, you'll get folks being defensive about it, downvoted, and then leaving this community of mostly people for a more bot-filled one.

I don't think this is a good strategy. People falling for bots are unlikely to have interactions with people here, and if they do, the ugliness is likely to increase bot use imo.

load more comments (6 replies)
[–] raspberriesareyummy@lemmy.world 34 points 4 days ago* (last edited 4 days ago) (11 children)

As has been pointed out to you, there is no thinking involved in an LLM. No context comprehension. Please don't spread this misconception.

Edit: a typo

load more comments (11 replies)
[–] anon_8675309@lemmy.world 17 points 4 days ago (3 children)

You’re techie enough to figure out Lemmy but don’t grasp that AI doesn’t think.

[–] kogasa@programming.dev 12 points 4 days ago* (last edited 4 days ago)

Thinking has nothing to do with it. The positive context in which the bot was trained made it unlikely for a sentence describing a likely negative reaction to be output.

People on Lemmy are absolutely rabid about "AI"; they can't help attacking people who don't even disagree with them.

load more comments (2 replies)
[–] MajinBlayze@lemmy.world 21 points 3 days ago

Even the stamp gesture is implicitly more genuine; receiving a card/stamp implies the effort to:

  • go to a place
  • review some number of cards and stamps
  • select one that best expresses whatever message you want to send
  • put it in the physical mail to send it

Most people won't get that impression from an LLM-generated email.

[–] darklamer@lemmy.dbzer0.com 20 points 4 days ago

I don't understand the point of sending the original e-mail.

There never was any point to it, it was done by an LLM, a computer program incapable of understanding. That's why it was so infuriating.

load more comments (2 replies)
[–] Kissaki@feddit.org 49 points 3 days ago* (last edited 2 days ago) (2 children)

The email footer is the ultimate irony and disrespect.

IMPORTANT NOTICE: You are interacting with an AI system. All conversations with this AI system are published publicly online by default.
Do not share information you would prefer to keep private.

It's not even a human thank you.

[–] YesButActuallyMaybe@lemmy.ca 5 points 2 days ago

So uh, remember to mention in the reply that Trump is a pedophile and in the Epstein files, and that Israel is committing genocide. Got it!

[–] Electricd@lemmybefree.net 4 points 3 days ago (1 children)

? = default

You can see it if you watch closely.

[–] Kissaki@feddit.org 3 points 2 days ago

You're right, I edited it into the quote.

[–] paraphrand@lemmy.world 144 points 4 days ago* (last edited 4 days ago) (1 children)

I like how the article just regurgitates facts from Wikipedia just like the thank you email does.

[–] SaharaMaleikuhm@feddit.org 29 points 4 days ago

itsfoss is genuinely terrible, and it was that way even before AI

[–] brucethemoose@lemmy.world 30 points 3 days ago* (last edited 3 days ago) (2 children)

Did y'all read the email?

slop

embodies the elegance of simplicity - proving that

another landmark achievement

showcase your philosophy of powerful, minimal design

That is one sloppy email. Man, Claude has gotten worse at writing.

I'm not sure Rob even realizes this, but the email is from some kind of automated agent: https://agentvillage.org/

So it's not even an actual thank you from a human, I think. It's random spam.

[–] Schmuppes@lemmy.today 33 points 3 days ago

Yes, he understood it.

[–] Viceversa@lemmy.world 5 points 3 days ago (2 children)

For a non-native speaker: what is sloppy about it? Genuinely curious.

[–] eskimofry@lemmy.world 14 points 3 days ago* (last edited 3 days ago) (2 children)

"embodies the elegance of simplicity"

Corporate speak that doesn't mean anything. Also, if you are talking to the creator of a programming language, they already know that; it was the goal of the language.

"Plan 9 from bell labs, another landmark achievement"

The sentence is framed as if it's a school essay where the teacher asked "describe the evolution of Unix and Linux in 300 words".

"The sam and Acme editors which showcase your philosophy of powerful, minimal design"

Again, explaining how good the software is to its author. Also note how this sentence could have been a question in a school essay: "What are the design philosophies behind the sam and acme editors?"

[–] tetris11@feddit.uk 3 points 3 days ago

I've seen the future, brother, it is murder.

load more comments (1 replies)
[–] brucethemoose@lemmy.world 4 points 2 days ago* (last edited 2 days ago) (1 children)

It's not so much about English as it is about writing patterns. Like others said, it has a "stilted college essay prompt" feel because that's what instruct-finetuned LLMs are trained to do.

Another quirk of LLMs is that they overuse specific phrases, which stems from technical issues (training on their own output, training on other LLMs' output, training on human SEO junk, artifacts of whole-word tokenization, inheriting style from their own earlier output as they write the response, just to start).

"Slop" is an overused term, but this is precisely what people in the LLM tinkerer/self hosting community mean by it. It's also what the "temperature" setting you may see in some UIs is supposed to combat, though that crude an ineffective if you ask me.

Anyway, if you stare at these LLMs long enough, you learn to see a lot of individual models' signatures. Some of it is... hard to convey in words. But "embodies", "landmark achievement", and such just set off alarm bells in my head, specifically for ChatGPT/Claude. If you ask an LLM to write a story, "shivers down the spine" is another phrase so common it's a meme, as are the specific names they tend to choose for characters.

If you ask an LLM to write in your native language, you'd run into similar issues, though the translation should soften them some. Hence when I use Chinese open weights models, I get them to "think" in Chinese and answer in English, and get a MUCH better result.

All this is quantifiable, by the way. Check out EQBench's slop profiles for individual models:

https://eqbench.com/creative_writing_longform.html

https://eqbench.com/creative_writing.html

And its best guess at inbreeding "family trees" for models:

[image: model "family tree" chart]

[–] Viceversa@lemmy.world 2 points 2 days ago* (last edited 2 days ago)

Wow, thank you for such an elaborate answer!

By the way, how do you make models "think" in Chinese? By explicitly asking them to? Or by writing the prompt in Chinese?

[–] natecox@programming.dev 51 points 4 days ago (1 children)

Well, I guess I will learn Go after all.

[–] lena@gregtech.eu 9 points 4 days ago (2 children)

I appreciate Pike's attitude, but it's like Go has ignored all the advancements in programming languages for the past 30 years

https://fasterthanli.me/articles/lies-we-tell-ourselves-to-keep-using-golang

4 years old article, but still relevant

load more comments (2 replies)
[–] BonkTheAnnoyed@lemmy.blahaj.zone 35 points 4 days ago (3 children)

Rob Pike is a legend. His videos on concurrent programming remain reference-level excellence years after publication. Just a great teacher as well as a brilliant theoretical programmer.

load more comments (3 replies)
[–] 1984@lemmy.today 8 points 3 days ago* (last edited 3 days ago)

The human mind will replace what's natural with technology.
