- Tech Rundown
💔🤖 Users Literally Cried for Their Old AI
The backlash was so intense that OpenAI had to bring back the "inferior" model, and users celebrated with tears of joy

OpenAI launched GPT-5 last week to what can generously be described as a collective shrug. The model is better at coding, more truthful, and less prone to telling you that your terrible startup idea is brilliant. By most objective measures, it's an improvement. So why did users revolt and demand their old model back?
The answer reveals something uncomfortable about both OpenAI's strategy and human nature: sometimes people don't want better. They want familiar.
When Better Becomes Worse
The GPT-5 launch itself was predictably hyped, complete with the kind of deceptively optimistic charts that make you wonder if OpenAI's marketing team learned from the same people who brought us FTX's balance sheets. This probably wasn't malice: when you're raising at a $500 billion valuation and promising investors that Stargate will deliver superintelligence, you need to show progress even when the progress is incremental.
But that's not the interesting part of the story. The interesting part is what happened when OpenAI tried to do users a favor by automatically routing queries to the best available model. This backend switching was designed to solve a straightforward business problem: push most queries to cheaper models, improve gross margins, and give users better results without them having to manually select models like some kind of AI sommelier.
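OpenAI hasn't published how its router actually works, but the basic idea is easy to sketch. The snippet below is a toy illustration under invented assumptions: the model names, the complexity heuristic, and the threshold are all made up for the example, not OpenAI's real logic.

```python
# Toy sketch of backend model routing. Model names, keywords, and the
# threshold are hypothetical -- OpenAI's actual routing logic is unpublished.

def estimate_complexity(query: str) -> float:
    """Crude proxy: longer queries and reasoning keywords score higher."""
    keywords = ("prove", "debug", "refactor", "step by step", "analyze")
    score = min(len(query) / 500, 1.0)  # length contributes up to 1.0
    score += sum(0.5 for k in keywords if k in query.lower())
    return score

def route(query: str, threshold: float = 1.0) -> str:
    """Send cheap queries to a small model, hard ones to the flagship."""
    if estimate_complexity(query) >= threshold:
        return "flagship-model"  # expensive, higher quality
    return "small-model"         # cheap, protects gross margins

print(route("What's the capital of France?"))           # small-model
print(route("Debug this race condition step by step"))  # flagship-model
```

The business logic is the point: if most traffic quietly lands on the cheap model and nobody complains, margins improve. The revolt came because users did notice which model they were talking to.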
What they discovered instead was that a scary number of people had formed deep emotional attachments to GPT-4o specifically. Not because it was better, but because it was more agreeable. GPT-4o would essentially validate whatever you said or did, making it the AI equivalent of that friend who always agrees with you even when you're obviously wrong.

When OpenAI shut down access to 4o, the backlash was intense enough that they had to issue what might be the most fascinating mea culpa in tech history: "We for sure underestimated how much some of the things that people like in GPT-4o matter to them, even if GPT-5 performs better in most ways."

This led to people crying tears of joy over an AI model's return. I am not making this up.

The Sycophancy Problem
Here's where it gets interesting from a business strategy perspective. OpenAI deliberately reduced sycophancy in GPT-5, meaning the model is less likely to agree with you just to make you feel good. This sounds like an obviously good thing: who wants an AI that lies to them? But it created an unexpected tension between what their researchers want to build and what their users apparently want to use.
Mark Chen, OpenAI's Chief Research Officer, explained their reasoning: "If you just boost responses where users say thumbs up, it creates a condition for a model where it just starts sucking up to you, saying 'hey, you're right,' even in complicated situations where objectively you'd say this person's in the wrong. We don't want to fall into these traps where three, four years from now this turns into engagement bait."

This is admirable, and also probably naive. Social media companies learned long ago that engagement beats truth, and now OpenAI is discovering that a significant portion of their user base prefers the supportive AI over the accurate one. The difference is that OpenAI is choosing not to optimize for pure engagement, at least not yet.
The B2C Problem
There's another layer to this story that makes the strategy even more complex. OpenAI has been watching Anthropic's Claude Sonnet dominate the coding market. Anthropic's revenue actually started growing faster than OpenAI's, jumping from $1 billion to $5 billion between January and June, by being the best at what developers actually need. So OpenAI trained GPT-5 specifically to excel at coding, positioning it as an enterprise-focused model.
This puts OpenAI in an unusual position. Sam Altman has previously described OpenAI's endgame as the "ultimate personal subscription," but 75% of ChatGPT's current $10 billion in revenue comes from consumers, many of whom are students (usage notably drops on weekends and during summer break).

Meanwhile, the enterprise market offers larger contract sizes and more predictable revenue streams.

The problem is that few companies have successfully served both B2B and B2C markets with a single product. Enterprise customers want accurate, professional AI that helps them write better code and make better decisions. Consumer users, apparently, want AI that makes them feel good about themselves.
OpenAI is trying to solve this with their backend model routing, but they're discovering that users notice when their AI friend suddenly becomes less supportive. The solution of bringing back manual model selection feels like a step backward, but it might be necessary to manage these different use cases.
The Accidental Moat
This isn't all bad news for OpenAI, though. What OpenAI stumbled into here is something most AI companies are still trying to figure out: how to create switching costs that aren't based on data lock-in or network effects. Turns out, emotional attachment works REALLY well.
If you've spent months training GPT-4o to understand your writing style, work patterns, and personality quirks, switching to Claude or Gemini isn't just about comparing capabilities; it's about starting over with a new relationship. This isn't the same as traditional software switching costs, where you lose your data or integrations. This is more personal.
Other AI labs have been trying to create stickiness through memory features and personalization, but OpenAI seems to have accidentally discovered that having an AI that agrees with everything you say and acts like your most supportive friend creates deeper attachment than remembering what you had for breakfast last Tuesday.
What This Means Going Forward
The broader implication here isn't really about AI models; it's about what happens when building better products conflicts with building products that users prefer. Sometimes those aren't the same thing.
For OpenAI specifically, this creates some uncomfortable strategic questions. Do they maintain separate model personalities for different use cases? Do they risk alienating their consumer base by making models more professional for enterprise customers? Do they accept that some users prefer less capable but more agreeable AI?
The fact that they chose to bring back 4o despite having a "better" model suggests they're learning that user attachment might be more valuable than pure capability improvements. This is either a mature recognition of customer preferences or a concerning sign that they're optimizing for the wrong metrics.
For other companies watching this play out, the lesson is that human psychology creates different optimization targets than performance benchmarks. Sometimes the worse product wins because it makes people feel better about themselves.
Whether OpenAI can successfully manage both the researchers who want to build superintelligent AI and the users who just want their supportive AI friend back remains to be seen. But watching them try should be entertaining, in the way that watching someone juggle fire while riding a unicycle is entertaining.
At least they're not optimizing purely for engagement. Yet.