OpenAI has reinstated GPT-4o as a default model option for paid ChatGPT subscribers, reversing a controversial change made during last week’s release of GPT-5. CEO Sam Altman announced the GPT-4o move on Wednesday, along with other updates to the company’s AI chatbot.
When GPT‑5 debuted on Aug. 7, OpenAI made it the default model and removed the model picker from the top left corner of the app and browser versions of ChatGPT, preventing users from manually selecting earlier models. The company said GPT-5 could automatically select the most suitable model for each query under what Altman called “unified intelligence,” making manual selection unnecessary.
However, many paid users — some deeply attached to specific models — expressed frustration over losing GPT-4o, with some likening it to losing a friend. One Reddit user said, “It had a voice, a rhythm, and a spark I haven’t been able to find in any other model.” Another user, who had relied on different models for specific use cases, canceled their subscription over the change.
GPT-4o returns to the model picker
Following the criticism, OpenAI introduced a Show legacy models toggle in the settings for Plus subscribers to restore GPT-4o and other older models in the model picker. The company has now reinstated the original model picker for all paid users, making GPT-4o one of the standard options. Altman assured users in a post on X that “if we ever do deprecate it, we will give plenty of notice.”
Paid subscribers will also gain a Show additional models setting to access GPT-4.1, o3, and GPT-5 Thinking mini. To clarify, paid tiers include the $20-a-month Plus plan, the $200-a-month Pro subscription, the $30-a-month Team subscription, and Enterprise and Edu subscriptions with custom pricing.
GPT-4.5, the last non-chain-of-thought model, remains available only to Pro users because it “costs a lot of GPUs,” Altman said.
Ironically, the proliferation of models has itself been seen as a problem in the past. During a podcast appearance in June, Altman admitted that the state of model choice was a “whole mess,” and that the goal is to replace the confusing variants (see: 4, 4o, and o4) with a simple progression.
GPT-5 could get a ‘warmer’ personality
After GPT-5 was released, many users were disappointed by stark differences in tone from GPT-4o. As a result, OpenAI is “working on an update to GPT-5’s personality which should feel warmer than the current personality but not as annoying (to most users) as GPT-4,” Altman said on X.
He also said that per-user customization of model personality would be preferable in the long term, given the attachments users have formed to the models they use regularly. In January, a nursing student said she had fallen in love with an AI boyfriend she created using ChatGPT, and such instances have since become more prevalent. The r/MyBoyfriendIsAI subreddit currently has 16,000 members.
Whether catering to such attachments will ultimately benefit the users who form them remains uncertain. Vulnerable people have been left in psychological turmoil by interactions with AI chatbots, although some research has found that chatbots can have a positive impact on users’ emotions.
AI companies stand to benefit financially when users become emotionally attached to their services. Nevertheless, earlier this month, OpenAI launched a series of updates to ChatGPT aimed at promoting mental well-being and preventing excessive dependency.
Altman has acknowledged that it would be bad if ChatGPT “unknowingly nudged [a user] away from their longer term well-being.” He also said that the thought of someone using the chatbot’s advice for important decisions makes him “uneasy,” but that society and OpenAI “have to figure out how to make it a big net positive.”
Free users can choose between these GPT-5 modes
According to Altman’s X post, users without a paid ChatGPT subscription can use the model picker to toggle between Auto, Fast, and Thinking modes for GPT-5.
- Auto automatically selects the mode it deems best for the task.
- Fast prioritizes quicker, more concise responses.
- Thinking focuses on slower but deeper and more thorough reasoning.
“Most users will want Auto, but the additional control will be useful for some people,” Altman said.
The Thinking mode, which features a 196,000-token context window, is limited to one message per day for free users and 3,000 messages per week for Plus and Team users. Once the weekly limit is reached, a pop-up notification will appear, and GPT-5 Thinking will no longer be selectable from the menu.
Users can switch to GPT-5 Thinking mini mode, which offers a shorter context window and faster, less resource-intensive reasoning than the full version. Altman said he will consider updating the rate limits depending on usage trends.
Pro and Team tier users also have access to GPT-5 Thinking Pro, which takes a bit longer to think but delivers better accuracy for complex tasks. OpenAI would like to give Plus users a limited number of Pro queries a month, Altman said, but it does “not have the compute to do it right now.”
For a deeper dive into GPT-5, including wild demos, read TechnologyAdvice writer Grant Harvey’s guide on our sister site The Neuron.