OpenAI's chatbot had a preachiness problem. The company just owned up to it.
On Tuesday, OpenAI released GPT-5.3 Instant, the latest model powering everyday ChatGPT conversations. The update is less about raw intelligence and more about making the chatbot less annoying to talk to. Hallucinations — instances where the AI confidently makes things up — dropped 26.8%, while OpenAI also dialed back what it openly called an "overbearing" personality.
"We heard feedback that GPT‑5.2 Instant would sometimes refuse questions it should be able to answer safely, or respond in ways that feel overly cautious or preachy," the company said in its announcement.
If you've used ChatGPT recently and felt like it was giving you a lecture instead of an answer, you weren't imagining things. Users have been vocal about this for months. As one Reddit user put it, "no one has ever calmed down in all the history of telling someone to calm down."
GPT-5.3 Instant addresses the complaints in two ways:
- Fewer unnecessary refusals. The model is less likely to reject harmless questions because they touch on a topic that could, in some theoretical universe, be sensitive. Ask about the chemistry of household cleaners and you'll get an answer, not a disclaimer.
- Less moralizing. The model "tones down overly defensive or moralizing preambles before answering the question," OpenAI said. Translation: it stops hedging everything with paragraphs of caveats before getting to the point.
The accuracy improvement matters, too. That 26.8% hallucination reduction means the model now avoids roughly one in four of the errors its predecessor would have made. For anyone relying on ChatGPT for research, homework or work tasks, that's a real step toward trusting what comes back without having to fact-check every sentence.
According to the GPT-5.3 Instant system card, the safety guardrails around genuinely harmful content haven't changed. The fixes target false positives, the moments when overcaution was actively getting in the way of usefulness.
The timing here is hard to ignore. With users increasingly migrating to Anthropic's Claude, which climbed to No. 1 on Apple's US App Store last week, OpenAI has real incentive to fix whatever was driving people away. Tone complaints might sound trivial next to the Pentagon contract fiasco. But for the hundreds of millions of people who use ChatGPT every day, how the chatbot talks to you is the product.
Valley View
The most revealing part of this release isn't the hallucination number. It's that OpenAI publicly called its own model "overbearing." Companies almost never concede a personality problem. That admission tells you how real the user exodus was. If Claude's rise taught OpenAI anything, it's that people don't just want a smart chatbot. They want one they actually enjoy talking to. The race for AI dominance might come down to something deeply human: likability.
