In the last few months, it had become rarer for my model to just make stuff up.
But now it searches for almost every query, even when asked not to, and it still makes up nonsense.
For instance, I asked it about small details in video games and it told me "the music box stops playing when Sarah dies." There is no music box. That's pure nonsense.
The only model worth using for this kind of topic is o3. Everything else is just garbage a lot of the time.
Minimal context window, so you ended up repeating stuff, or it didn't bother to look at its memories. Stuff like that just doesn't make sense unless the task is more complex than a couple of simple Google queries.
Yes, and the system can figure out the correct answer as soon as you point out that the hallucination is wrong. Somehow ChatGPT has become even more unwilling to say "no" or "I don't know" recently.