this post was submitted on 15 Sep 2025
Considering how many AI models still can't correctly count how many 'r's there are in "strawberry", I doubt it. There's also the seahorse emoji doing the rounds at the moment; you'd think the models would get "smart" after repeatedly failing and realize it's an emoji that never existed in the first place.
ChatGPT-5 can count the number of 'r's, but that's probably because it has been specifically trained to do so.
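For comparison, counting characters is trivial for ordinary code, which is what makes the failure notable. A minimal Python illustration:

```python
# Plain string counting gets the right answer instantly.
word = "strawberry"
print(word.count("r"))  # prints 3
```

The usual explanation is that the models operate on tokens rather than individual characters, so the letters aren't directly visible to them.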
I would argue that the models do learn, but only over generations: slowly, and only for specific cases.
They definitely don't learn intelligently.
That's the P in ChatGPT: Pre-trained. It has "learned" from the data set it was trained on, but prompting it won't teach it anything new. Your past prompts are kept as "memory" and influence the output for your future prompts, but the model does not actually learn from them.
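In code terms, that "memory" is just stored text that gets fed back into the context window on the next turn; the weights never change. A simplified sketch, where generate() is a hypothetical stand-in for any frozen, pre-trained model (not a real API):

```python
# Simplified sketch: "memory" is prior conversation text re-sent as context.
# generate() is a hypothetical placeholder for a frozen, pre-trained model;
# nothing in this loop ever updates its weights.

def generate(context: str) -> str:
    """Stand-in for a call to a pre-trained model (weights fixed)."""
    return "...model output..."

history: list[str] = []  # the so-called "memory" is just stored text

def chat(user_prompt: str) -> str:
    history.append(f"User: {user_prompt}")
    context = "\n".join(history)   # past prompts influence the output...
    reply = generate(context)      # ...but the model itself learns nothing
    history.append(f"Assistant: {reply}")
    return reply
```

So the behaviour can look personalised, but deleting the stored history removes every trace of that "learning".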
The next generation of GPT will include everyone's past prompts in its training data (ever been A/B tested on OpenAI?). That's what I mean by generational learning.
Maybe. It's probably not high-quality training data for the most part, though.