
[–] ICastFist@programming.dev 23 points 1 day ago (2 children)

Carla Rover once spent 30 minutes sobbing after having to restart a project she vibe coded. Rover has been in the industry for 15 years, mainly working as a web developer. She’s now building a startup, alongside her son, that creates custom machine learning models for marketplaces.

Using AI to sell AI, infinite money glitch! /s

“Using a coding co-pilot is kind of like giving a coffee pot to a smart six-year-old and saying, ‘Please take this into the dining room and pour coffee for the family,’” Rover said. Can they do it? Possibly. Could they fail? Definitely. And most likely, if they do fail, they aren’t going to tell you.

No, a kid will learn if s/he fucks up and, if pressed, will spill the beans. AI is, despite being called "intelligent", not learning anything from its mistakes, and it often forgets things because of context limitations - consistency is still one of the key problems for all LLMs and image generators.

[–] squaresinger@lemmy.world 10 points 1 day ago

If you bring a 6yo into the office and tell them to do your work for you, you should be locked up. For multiple reasons.

Not sure why they thought that was a positive comparison.

[–] Knock_Knock_Lemmy_In@lemmy.world 1 points 23 hours ago (1 children)

AI is, despite being called "intelligent", not learning anything from its mistakes

Don't they also train new models on past user conversations?

[–] ICastFist@programming.dev 1 points 13 hours ago* (last edited 13 hours ago) (1 children)

Considering how many AI models still can't correctly count how many 'r's there are in "strawberry", I doubt it. There's also the seahorse emoji doing the rounds at the moment; you'd think the models would get "smart" after repeatedly failing and realize it's an emoji that never existed in the first place.
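
The usual explanation for the "strawberry" thing is tokenization: the model never sees individual letters, only token chunks. Here's a minimal Python sketch of that (assuming OpenAI's tiktoken library is installed; the exact split depends on the encoding):

```python
# pip install tiktoken
import tiktoken

# Tokenizer used by several recent OpenAI models; splits vary by encoding.
enc = tiktoken.get_encoding("cl100k_base")

word = "strawberry"
tokens = enc.encode(word)

# These chunks are what the model actually "sees" - not individual letters.
pieces = [enc.decode_single_token_bytes(t).decode("utf-8") for t in tokens]
print(pieces)           # e.g. ['str', 'aw', 'berry']
print(word.count("r"))  # 3 - trivial in code, awkward for a token-based model
```

Counting characters inside those chunks is exactly the kind of thing the model was never directly shown how to do.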

[–] Knock_Knock_Lemmy_In@lemmy.world 1 points 12 hours ago (1 children)

ChatGPT-5 can count the number of 'r's, but that's probably because it has been specifically trained to do so.

I would argue that the models do learn, but only over generations. So slowly and specifically.

They definitely don't learn intelligently.

[–] hark@lemmy.world 1 points 11 hours ago (1 children)

That's the P in GPT: Pre-trained. It has "learned" from the data set it was trained on, but prompts won't make it learn anything new. Your past prompts are kept and re-sent as "memory" to influence the output for your future prompts, but the model itself does not actually learn from them - its weights never change.
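
To make that concrete, here's a rough sketch against OpenAI's Python client (model name and prompts are placeholders, not anything from the article): the chat API is stateless, and "memory" is just the old turns being re-sent as context on every call.

```python
# pip install openai
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Prior turns have to be re-sent on every call; the weights never change.
history = [
    {"role": "user", "content": "My name is Carla."},
    {"role": "assistant", "content": "Nice to meet you, Carla!"},
]

# Without the history, the model has no idea who is asking...
stateless = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "What's my name?"}],
)

# ...with the history prepended, it "remembers" - purely from context.
with_memory = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=history + [{"role": "user", "content": "What's my name?"}],
)
print(with_memory.choices[0].message.content)
```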

[–] Knock_Knock_Lemmy_In@lemmy.world 1 points 9 hours ago* (last edited 9 hours ago) (1 children)

The next generation of GPT will include everyone's past prompts in its training data (ever been A/B tested on OpenAI?). That's what I mean by generational learning.

[–] hark@lemmy.world 2 points 8 hours ago

Maybe. It's probably not high-quality training data for the most part, though.