this post was submitted on 15 Sep 2025
175 points (83.8% liked)

Technology

[–] Knock_Knock_Lemmy_In@lemmy.world 1 points 23 hours ago (1 children)

AI is, despite being called "intelligent", not learning anything from its mistakes

Don't they also train new models on past user conversations?

[–] ICastFist@programming.dev 1 points 13 hours ago* (last edited 13 hours ago) (1 children)

Considering how many AI models still can't correctly count how many 'r's there are in "strawberry", I doubt it. There's also the seahorse emoji doing the rounds at the moment; you'd think the models would get "smart" after repeatedly failing and realize it's an emoji that never existed in the first place.
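(For reference, the count the models stumble over is trivial at the character level; the usual explanation is that models see tokens rather than individual characters, so they never directly "see" the letters they're asked to count.)

```python
# Character-level count of 'r' in "strawberry" -- the view an
# LLM working on tokens does not directly have.
word = "strawberry"
r_count = word.count("r")
print(r_count)  # prints 3
```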

[–] Knock_Knock_Lemmy_In@lemmy.world 1 points 12 hours ago (1 children)

ChatGPT-5 can count the number of 'r's, but that's probably because it has been specifically trained to do so.

I would argue that the models do learn, but only across generations, so slowly and only for specific cases.

They definitely don't learn intelligently.

[–] hark@lemmy.world 1 points 11 hours ago (1 children)

That's the P in ChatGPT: Pre-trained. It has "learned" from the data set it was trained on, but prompting it does not teach it anything. Your past prompts are kept as "memory" and can influence output for your future prompts, but the model does not actually learn from them.
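A minimal sketch of that distinction (all names here are hypothetical, not OpenAI's actual implementation): "memory" only changes the model's input at inference time, while the pre-trained weights stay frozen.

```python
# Hypothetical illustration: inference-time "memory" vs. training.
WEIGHTS = {"frozen": True}  # stands in for pre-trained parameters

def run_model(weights, context):
    # Stand-in for a forward pass; a real model would generate text.
    assert weights["frozen"]  # inference never mutates the weights
    return f"response conditioned on {len(context)} chars of context"

def answer(prompt, memory):
    # Past prompts are prepended as context, not trained into weights.
    context = "\n".join(memory + [prompt])
    return run_model(WEIGHTS, context)

print(answer("What's my name?", ["My name is Alice."]))
```

Actually updating the weights would require a separate training run, which is what happens between model generations rather than between prompts.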

[–] Knock_Knock_Lemmy_In@lemmy.world 1 points 9 hours ago* (last edited 9 hours ago) (1 children)

The next generation of GPT will include everyone's past prompts (ever been A/B tested on OpenAI?). That's what I mean by generational learning.

[–] hark@lemmy.world 2 points 8 hours ago

Maybe. It's probably not high quality training data for the most part, though.