Function calling allows you to connect LLMs like gpt-4o to external tools and systems. This is useful for many things, from giving AI assistants new capabilities to building deep integrations between your applications and LLMs.
Learn more in our function calling developer guide.
In June 2024, we launched Structured Outputs. When you turn it on by setting strict: true
in your function definition, Structured Outputs guarantees that the arguments the model generates for a function call exactly match the JSON Schema you provided.
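As an illustration, here is a minimal sketch of a tool definition with strict: true in the Chat Completions tools format; the get_order_status function and its order_id parameter are hypothetical:

```python
# A minimal sketch of a function definition with Structured Outputs enabled.
# The function name and parameters here are hypothetical examples.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_order_status",
            "description": "Look up the status of a customer order.",
            "strict": True,  # turn on Structured Outputs for this function
            "parameters": {
                "type": "object",
                "properties": {
                    "order_id": {"type": "string"},
                },
                # strict mode requires every property to be listed as required
                "required": ["order_id"],
                # strict mode also requires additionalProperties to be false
                "additionalProperties": False,
            },
        },
    }
]
```

With strict set to true, the generated arguments are guaranteed to parse into exactly this shape, so your code can skip defensive checks for missing or extra keys.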
In October 2024, we launched the 'Generate Anything' feature, which lets you describe a function, paste a definition in directly, or paste your code, and generate a valid function schema. Learn more about 'Generate Anything' in this help center article.
How can I use function calling?
Function calling is useful for a large number of use cases, such as:
Enabling assistants to fetch data:
an AI assistant needs to fetch the latest customer data from an internal system when a user asks “what are my recent orders?” before it can generate the response to the user
Enabling assistants to take actions:
an AI assistant needs to schedule meetings based on user preferences and calendar availability.
Enabling assistants to perform computation:
a math tutor assistant needs to perform a math computation.
Building rich workflows:
a data extraction pipeline that fetches raw text, then converts it to structured data and saves it in a database.
Function calling is supported in both the Chat Completions API and the Assistants API.
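To make the flow concrete, here is a minimal sketch of the full round trip with the Chat Completions API and the OpenAI Python SDK; get_weather is a hypothetical stand-in for your own tool:

```python
import json
from openai import OpenAI

client = OpenAI()

def get_weather(city: str) -> dict:
    # Hypothetical stand-in for a real weather lookup.
    return {"city": city, "temp_c": 18}

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

messages = [{"role": "user", "content": "What's the weather in Paris?"}]
response = client.chat.completions.create(
    model="gpt-4o", messages=messages, tools=tools
)

# The model responds with a tool call instead of text; run the function.
tool_call = response.choices[0].message.tool_calls[0]
args = json.loads(tool_call.function.arguments)
result = get_weather(**args)

# Send the function result back so the model can answer the user.
messages.append(response.choices[0].message)
messages.append({
    "role": "tool",
    "tool_call_id": tool_call.id,
    "content": json.dumps(result),
})
final = client.chat.completions.create(
    model="gpt-4o", messages=messages, tools=tools
)
print(final.choices[0].message.content)
```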
How can I use JSON mode?
When JSON mode is turned on, the model's output is guaranteed to be valid JSON, except in some edge cases that you should detect and handle appropriately.
To turn on JSON mode with the Chat Completions or Assistants API, set the response_format parameter to { "type": "json_object" }. If you are using function calling, JSON mode is always turned on.
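For example, a minimal JSON mode request with the OpenAI Python SDK might look like this (note that the system message mentions JSON, per the first note below):

```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    response_format={"type": "json_object"},
    messages=[
        # The instruction to produce JSON must appear in the conversation.
        {"role": "system", "content": "You are a helpful assistant. Respond in JSON."},
        {"role": "user", "content": "List three primary colors."},
    ],
)
print(response.choices[0].message.content)  # valid JSON, but no schema guarantee
```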
Important notes:
When using JSON mode, you must always instruct the model to produce JSON via some message in the conversation, for example via your system message. If you don't include an explicit instruction to generate JSON, the model may generate an unending stream of whitespace, and the request may run continuously until it reaches the token limit. To help ensure you don't forget, the API will throw an error if the string "JSON" does not appear somewhere in the context.
JSON mode does not guarantee that the output matches any specific schema, only that it is valid JSON and parses without errors. Use Structured Outputs to ensure the output matches your schema; if that is not possible, use a validation library, and potentially retries, to ensure the output matches the schema you expect.
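As an illustration, here is a minimal sketch of that validate-and-retry pattern using the third-party jsonschema package; the schema and the make_request callable are hypothetical:

```python
import json
from jsonschema import validate, ValidationError

# Hypothetical schema for the output we expect from the model.
SCHEMA = {
    "type": "object",
    "properties": {"colors": {"type": "array", "items": {"type": "string"}}},
    "required": ["colors"],
}

def get_validated_json(make_request, max_attempts: int = 3) -> dict:
    """Retry a JSON mode request until the output parses and matches SCHEMA.

    make_request is a hypothetical callable that issues the API request and
    returns the model's message content as a string.
    """
    for _ in range(max_attempts):
        content = make_request()
        try:
            data = json.loads(content)
            validate(instance=data, schema=SCHEMA)
            return data  # parses and matches the schema
        except (json.JSONDecodeError, ValidationError):
            continue  # malformed or off-schema output; try again
    raise RuntimeError("Model output never matched the expected schema")
```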
Your application must detect and handle the edge cases that can result in the model output not being a complete JSON object (see below).
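One such edge case is the response being cut off at the token limit, which you can detect from the choice's finish_reason; a minimal sketch, assuming response comes from a JSON mode request like the one above:

```python
import json

choice = response.choices[0]
if choice.finish_reason == "length":
    # The model hit the token limit mid-generation, so the JSON is
    # likely incomplete; retry with a higher max_tokens or a shorter prompt.
    raise RuntimeError("Response truncated before the JSON object was complete")
data = json.loads(choice.message.content)
```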