Introducing OpenAI o3-mini (January 31, 2025)
We’re excited to release o3-mini, our newest cost-efficient reasoning model optimized for coding, math, and science.
On the API, o3-mini supports Structured Outputs, function calling, developer messages, and streaming. It offers three adjustable reasoning efforts (low, medium, and high), so you can balance speed with depth for your use case.
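As a rough sketch of how the adjustable effort levels surface in the API (assuming the Chat Completions interface of the official `openai` Python SDK; the helper function and prompts here are illustrative, not from the announcement), a request might be assembled like this:

```python
# Sketch: building request parameters for o3-mini with an adjustable
# reasoning effort. "reasoning_effort" accepts "low", "medium", or "high";
# "medium" is the default when the field is omitted.

def build_o3_mini_request(prompt: str, effort: str = "medium") -> dict:
    """Assemble keyword arguments for a Chat Completions call."""
    if effort not in ("low", "medium", "high"):
        raise ValueError(f"unsupported reasoning effort: {effort!r}")
    return {
        "model": "o3-mini",
        "reasoning_effort": effort,
        # Reasoning models take developer messages in place of
        # system messages.
        "messages": [
            {"role": "developer", "content": "Answer concisely."},
            {"role": "user", "content": prompt},
        ],
    }

# These kwargs would be passed to client.chat.completions.create(**kwargs)
# with the openai package; the network call is omitted to keep the sketch
# self-contained.
kwargs = build_o3_mini_request("Prove that sqrt(2) is irrational.", effort="high")
```

Lower efforts trade some depth for speed and cost; higher efforts spend more reasoning tokens before answering.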
ChatGPT Team, Pro, Plus, and Free plan users can access o3-mini starting today. Additionally, o3-mini now works with search to find up-to-date answers with links to relevant web sources. This is an early prototype as we work to integrate search across our reasoning models.

In side-by-side testing, o3-mini delivered results on par with o1 at lower latency, and outperformed o1-mini on advanced STEM tasks.
Expert evaluators preferred o3-mini’s answers 56% of the time over o1-mini’s, citing improved clarity and fewer critical errors on difficult questions. We look forward to your feedback and will keep refining o3-mini as we expand our family of advanced reasoning models.
Updates to GPT-4o in ChatGPT (January 29, 2025)
We’ve made some updates to GPT-4o: it’s now a smarter model across the board, with more up-to-date knowledge and deeper understanding and analysis of image uploads.
More up-to-date knowledge: By extending its training data cutoff from November 2023 to June 2024, GPT-4o can now offer more relevant, current, and contextually accurate responses, especially for questions involving cultural and social trends or more up-to-date research. A fresher training data set also makes it easier for the model to frame its web searches more efficiently and effectively.
Deeper understanding and analysis of image uploads: GPT-4o is now better at understanding and answering questions about visual inputs, with improvements on multimodal benchmarks like MMMU and MathVista. The updated model is more adept at interpreting spatial relationships in image uploads, as well as analyzing complex diagrams, understanding charts and graphs, and connecting visual input with written content. Responses to image uploads will contain richer insights and more accurate guidance in areas like spatial planning and design layouts, as well as visually driven mathematical or technical problem-solving.
A smarter model, especially for STEM: GPT-4o is now better at math, science, and coding-related problems, with gains on academic evals like GPQA and MATH. Its improved score on MMLU—a comprehensive benchmark of language comprehension, knowledge breadth, and reasoning—reflects its ability to tackle more complex problems across domains.
Increased emoji usage ⬆️: GPT-4o is now a bit more enthusiastic in its emoji usage (perhaps particularly so if you use emoji in the conversation ✨) — let us know what you think.
Introducing GPT-4o with scheduled tasks (January 14, 2025)
Today we’re rolling out a beta version of tasks—a new way to ask ChatGPT to do things for you at a future time. Whether it's one-time reminders or recurring actions, tell ChatGPT what you need and when, and it will automatically take care of it.
Scheduled tasks are in early beta for Plus, Pro, and Team users. Eventually this will be available to anyone with a ChatGPT account.
Update to GPT-4o (November 20, 2024)
We’ve updated GPT-4o for ChatGPT users on all paid tiers. This update to GPT-4o includes improved writing capabilities that are now more natural, audience-aware, and tailored to improve relevance and readability. This model is also better at working with uploaded files, able to provide deeper insights and more thorough responses.
Update to GPT-4o mini (November 5, 2024)
Today, we’ve updated GPT-4o mini for ChatGPT users on the Free, Plus, and Team tiers, along with users who use ChatGPT while logged out.
Introducing GPT-4o with canvas (October 3, 2024)
We trained GPT-4o to collaborate as a creative partner. The model knows when to open a canvas, make targeted edits, and fully rewrite. It also understands broader context to provide precise feedback and suggestions.
Canvas is in early beta, and we plan to rapidly improve its capabilities.
Advanced voice (September 24, 2024)
Advanced voice uses GPT-4o’s native audio capabilities and features more natural, real-time conversations that pick up on non-verbal cues, such as the speed at which you’re talking, and can respond with emotion. Usage of advanced voice (audio inputs and outputs) by Plus and Team users is limited on a daily basis.
Introducing OpenAI o1-preview and o1-mini (September 12, 2024)
We've developed a new series of AI models designed to spend more time thinking before they respond. They can reason through complex tasks and solve harder problems than previous models in science, coding, and math.
Today, we are releasing the first of this series in ChatGPT and our API. This is a preview and we expect regular updates and improvements.
ChatGPT Plus and Team users will be able to access o1 models in ChatGPT starting today. Both o1-preview and o1-mini can be selected manually in the model picker, and at launch, weekly rate limits will be 30 messages for o1-preview and 50 for o1-mini. We are working to increase those rates and enable ChatGPT to automatically choose the right model for a given prompt.
Update to GPT-4o (September 3, 2024)
Today, we've updated GPT-4o in ChatGPT. This version is better at incorporating uploaded files and updating memory with key parts of a conversation to make future interactions more helpful and relevant.
Update to GPT-4o (August 12, 2024)
“Bug fixes and performance improvements” … we’ve introduced an update to GPT-4o that, based on experiment results and qualitative feedback, ChatGPT users tend to prefer. It’s not a new frontier-class model. Although we’d like to tell you exactly how the model responses are different, figuring out how to granularly benchmark and communicate model behavior improvements is an ongoing area of research in itself (which we’re working on!).
Sometimes we can point to new capabilities and specific improvements — and we'll try our best to communicate that whenever possible. In the meantime, our team is constantly iterating on the model by adding good data, removing bad data, and experimenting with new research methods based on user feedback, offline evaluations, and more. That's the case with this model update.
We’ll continue to keep you posted as best as we can. Thank you for your patience!
Introducing GPT-4o mini (July 18, 2024)
We’re introducing GPT-4o mini, the most capable and cost-efficient small model available today. GPT-4o mini surpasses GPT-3.5 Turbo and other small models on academic benchmarks across both textual intelligence and multimodal reasoning, and supports the same range of languages as GPT-4o. It also demonstrates strong performance in function calling, enabling developers to build applications that fetch data or take actions with external systems, and offers improved long-context performance compared to GPT-3.5 Turbo.
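To illustrate the function-calling workflow mentioned above, here is a minimal sketch of a tool definition of the kind the API accepts (the weather function, its name, and its parameters are hypothetical examples, not part of the announcement):

```python
# Sketch: a JSON-schema tool definition for function calling.
# Everything about this particular function is illustrative.
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Fetch the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["city"],
        },
    },
}

# Passed via the `tools` parameter of a Chat Completions request, the model
# can respond with a structured call, e.g. name "get_weather" and a JSON
# arguments string, which the application parses and executes against its
# own backend before returning the result to the model.
```

The model never executes the function itself; it emits the name and arguments, and the calling application performs the action and feeds the result back.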
You can read more about GPT-4o mini in the blog announcement.