OpenAI o1 models - FAQ [ChatGPT Enterprise and Edu]
Updated over a week ago

The OpenAI o1-preview and o1-mini models are a new series of reasoning models for solving hard problems. This is a preview and we expect regular updates and improvements. While GPT-4o is still the best option for most prompts, the o1 series may be helpful for handling complex, problem-solving tasks in domains like research, strategy, coding, math, and science.

General Questions

What usage limits will be enforced on OpenAI o1 models on ChatGPT?

Each user has access to 50 messages a week with OpenAI o1-preview and 50 messages a day with OpenAI o1-mini to start.

You can check the date your usage limit resets at any time by highlighting the model name in the model picker drop-down. Your weekly usage limit resets every seven days after you send your first message. For example, if you start sending messages on September 12, your limit will reset on September 19 (00:00 UTC), regardless of when you reach the limit.

Currently, there is no way to check how many messages you have used against your usage limit.

Please note that o1-preview and o1-mini are preview models, and the usage limits described here may be subject to change.

Why are these limits being introduced? What’s the reasoning behind this change?

The o1 series is more computationally intensive than other models, so we’re introducing these message limits at the individual user level.

Enterprise customers will continue to have unlimited access to GPT-4o, which is still the best model for most tasks and questions.

Are the limits based on individual usage, overall team usage, or a combination of both?

Limits are set at the individual user level. Most users will likely not hit these limits, given that GPT-4o is still the best option for most tasks and questions in ChatGPT.

How will these limits be enforced? (e.g., notifications, access restrictions, overage charges, etc.)

When a user hits their limit, they will receive a notification in the product along with the date their usage limit will reset.

What happens if we exceed the limit? Are there penalties, service disruptions, or extra fees?

There are no penalties or extra fees if members of your ChatGPT Enterprise or ChatGPT Edu workspace reach their limit. They simply will not be able to use the model until their usage limit resets (in a week’s time).

ChatGPT Usage Limits

Will individual users be notified directly, or is this the responsibility of account administrators?

The individual user will be notified directly in the product. The ability to use o1-preview or o1-mini will be grayed out if they hit their usage limit.

Can we see our current usage against these new limits? Is there a dashboard or report to monitor usage?

Because these limits apply only at the individual user level, there is no dashboard or report for monitoring usage against the individual message caps.

For most use cases, especially those that involve the use of tools and vision, we recommend using GPT-4o in ChatGPT. Please note the following limitations on the OpenAI o1-preview and o1-mini models in ChatGPT:

Our OpenAI o1-preview and o1-mini models do not have access to the following advanced tools and features:

  • Memory

  • Custom instructions

  • Data analysis

  • File uploads

  • Web browsing

  • Discovering and using GPTs

  • Vision

  • Voice

You will need to switch over to GPT-4o to access these tools.

Will there be more limits or restrictions in the future? Should we expect more changes, and what might they be?

o1-preview and o1-mini are preview models, and their usage limits may be subject to change.

API

What’s the context window for OpenAI o1 models?

In ChatGPT, the context window for o1-preview and o1-mini is 32k.

Are the OpenAI o1-preview and o1-mini models supported in the Enterprise Compliance API?

User inputs and model outputs will be included in the Enterprise Compliance API. We are working to make fine-tuning accessible in the future.

How can I access OpenAI o1-preview API?

Developers who qualify for API usage tiers 3, 4, and 5 can prototype with o1-preview and o1-mini at the following rate limits:

Tier 5:

  • o1-preview: 10,000 requests per minute

  • o1-mini: 30,000 requests per minute

Tier 4:

  • o1-preview: 10,000 requests per minute

  • o1-mini: 10,000 requests per minute

Tier 3:

  • o1-preview: 5,000 requests per minute

  • o1-mini: 5,000 requests per minute

Developers can access the models in the Chat Completions API. We plan to progressively expand access over the weeks following launch.
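As a rough illustration of that access path, the sketch below calls o1-preview through the Chat Completions API, assuming the official openai Python SDK (v1.x) and an OPENAI_API_KEY environment variable; the ask_o1 helper, retry policy, and example prompt are illustrative assumptions rather than a prescribed integration.

```python
# A minimal sketch, assuming the official "openai" Python SDK (v1.x) and an
# OPENAI_API_KEY environment variable. Helper name, retry policy, and prompt
# are illustrative only.
import time

from openai import OpenAI, RateLimitError

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask_o1(prompt: str, retries: int = 3) -> str:
    """Send a single user message to o1-preview, backing off on rate limits."""
    for attempt in range(retries):
        try:
            response = client.chat.completions.create(
                model="o1-preview",
                # o1 models do not currently accept system messages, so any
                # instructions go directly in the user turn.
                messages=[{"role": "user", "content": prompt}],
            )
            return response.choices[0].message.content
        except RateLimitError:
            # Per-minute request limits vary by usage tier (see above);
            # back off briefly before retrying.
            time.sleep(2 ** attempt)
    raise RuntimeError("Rate limited on every attempt")


print(ask_o1("Outline a proof strategy for the triangle inequality."))
```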

What are the main differences between the o1-preview and o1-mini API?

  • Capability: o1-preview offers advanced reasoning for complex tasks; o1-mini is a smaller, cheaper version optimized for faster responses.

  • Performance: o1-preview may perform better on highly complex problems due to more extensive training. However, o1-mini may outperform o1-preview when it comes to coding applications.

  • Resource Usage: o1-mini is ideal for applications prioritizing speed and cost-efficiency.

What is the context window for the OpenAI o1-preview and o1-mini API?

The OpenAI o1-preview and o1-mini models both have a 128k context window. The OpenAI o1-preview model has an output limit of 32k, and the OpenAI o1-mini model has an output limit of 64k.
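As a hedged sketch of how those output limits come into play when calling the API, the snippet below caps completion length with the max_completion_tokens parameter; it assumes a recent release of the openai Python SDK that accepts this parameter, and the prompt is illustrative.

```python
# Illustrative only: keep o1-preview output within its 32k output limit by
# setting max_completion_tokens (the o1 models use this parameter in place
# of max_tokens). Assumes a recent openai Python SDK release.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="o1-preview",
    messages=[{"role": "user", "content": "Summarize the key ideas behind dynamic programming."}],
    max_completion_tokens=4_000,  # must stay within the 32k output limit noted above
)
print(response.choices[0].message.content)
```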

Are there any limitations to the OpenAI o1 API?

The OpenAI o1 API currently does not support function calling, structured outputs, streaming, system messages, and some other features. We are working to add these functionalities in future updates.
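For a hedged sketch of working within those limitations, the snippet below folds system-style instructions into the user message (since system messages are not supported) and reads the full response at once rather than streaming; the instruction text and question are illustrative assumptions, and the SDK usage assumes the openai Python package (v1.x).

```python
# A hedged sketch of working around two current o1 API limitations: no system
# messages and no streaming. Assumes the openai Python SDK; the instruction
# text and question are illustrative.
from openai import OpenAI

client = OpenAI()

instructions = "You are a careful reviewer. Answer in numbered steps."
question = "Review this schema design for normalization issues."

response = client.chat.completions.create(
    model="o1-preview",
    messages=[
        # No "system" role and no stream=True, per the limitations above;
        # system-style instructions are prepended to the user content.
        {"role": "user", "content": f"{instructions}\n\n{question}"}
    ],
)
print(response.choices[0].message.content)
```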

Are the OpenAI o1 API models available to ResCap customers?

Currently, the OpenAI o1 API models are available only to developers who qualify for API usage tiers 4 and 5. They are available only to pay-as-you-go (PAYG) customers for now, not to customers on Scale Tier or ResCap.

Are the OpenAI o1 API models ZDR eligible?

Yes. Trusted customers with zero data retention (ZDR) will also be eligible for ZDR with the OpenAI o1 API models.

Can I fine-tune the OpenAI o1 API models?

Fine-tuning capabilities are not yet available for these models.
