Yes, and the system can figure out the correct answer as soon as you point out that the hallucination is wrong. Somehow ChatGPT has become even more unwilling to say "no" or "I don't know" recently.
The only model worth using for this kind of topic is o3. Everything else is just garbage a lot of the time.
Minimal context window, so you ended up repeating stuff, or it didn't bother to look at its memories. Stuff just doesn't make sense once the task is more complex than a couple of simple Google queries.