Written by Raymond Niles

What is it?

Our latest image generation model is now available in the API. The DALL·E 3 API now:

  • Can generate text in images

  • Supports landscape and portrait images

  • Generates significantly more attractive and detailed images

  • Can understand complex prompts

Note: because DALL·E 3 expects highly detailed prompts, the API will automatically add detail to your prompt if needed (the same behavior as in ChatGPT).

How do I access it?

Anyone with an OpenAI API account has access via the API. Set the model parameter to “dall-e-3” to use the new model. ChatGPT Plus subscribers can also access DALL·E 3 via ChatGPT.
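As a minimal sketch, selecting the model comes down to including the `model` field in the request body sent to the image generations endpoint. The field names below match the public API, but `build_image_request` itself is our own illustrative helper, not an SDK function:

```python
def build_image_request(prompt: str, model: str = "dall-e-2") -> dict:
    """Build the JSON body for an image generation request."""
    # The API defaults to DALL·E 2 for backwards compatibility;
    # pass model="dall-e-3" explicitly to use the new model.
    return {"model": model, "prompt": prompt, "n": 1}

request = build_image_request("a watercolor fox in a forest", model="dall-e-3")
```

Sending this body (with your API key) to the images endpoint would route the request to DALL·E 3 instead of the default model.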

Is DALL·E 2 still available?

Yes, the DALL·E 2 API remains available. For backwards compatibility the API defaults to DALL·E 2, but you can set the model parameter to dall-e-3 to switch to the new model.

Are there different rate limits for DALL·E 3? How can I increase them?

Default rate limits start at 6 images per minute, and we hope to increase these limits over time.

Why can I no longer generate 512x512 or 256x256 images?

DALL·E 3 was trained to generate 1024x1024, 1024x1792, or 1792x1024 images, so the older sizes are deprecated. To generate images more quickly and at lower cost, use the new `quality` parameter instead. You can still generate 512x512 or 256x256 images by setting the model to dall-e-2.
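The size rules above can be sketched as a small lookup; the table and helper below are illustrative, not part of the API:

```python
# Supported sizes per model, per the sizes listed above.
SUPPORTED_SIZES = {
    "dall-e-3": {"1024x1024", "1024x1792", "1792x1024"},
    "dall-e-2": {"256x256", "512x512", "1024x1024"},
}

def size_is_supported(model: str, size: str) -> bool:
    """Return True if the given model accepts the requested size."""
    return size in SUPPORTED_SIZES.get(model, set())
```

For example, `size_is_supported("dall-e-3", "512x512")` is False, while `size_is_supported("dall-e-2", "512x512")` is True.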

What's the difference between quality options?

A `quality` parameter is available when using DALL·E 3. It defaults to “standard”, which creates attractive images quickly and at lower cost. You can specify “hd” (at a higher price) to give the model more time to generate the image, resulting in higher image quality but also higher latency.

What does the style option do?

We introduced "style" as a new optional parameter for finer control over the visual style of the generation. The valid options are "vivid" and "natural", and the default is "vivid". We recommend experimenting with this parameter to determine which option works best for your use case.
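A hedged sketch of how the `quality` and `style` options described above could be attached to a request body; the validation helper is our own, not an API function:

```python
VALID_QUALITIES = {"standard", "hd"}   # "standard" is the default
VALID_STYLES = {"vivid", "natural"}    # "vivid" is the default

def image_options(quality: str = "standard", style: str = "vivid") -> dict:
    """Validate and package the DALL·E 3 quality/style options."""
    # Reject values the API does not document for DALL·E 3.
    if quality not in VALID_QUALITIES:
        raise ValueError(f"unknown quality: {quality}")
    if style not in VALID_STYLES:
        raise ValueError(f"unknown style: {style}")
    return {"quality": quality, "style": style}
```

The returned dict can be merged into the request body alongside `model` and `prompt`.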

Why does DALL·E 3 only support n=1?

For system scalability and reliability reasons, we currently support only n=1 when calling DALL·E 3. To receive more than one image, we recommend making multiple parallel calls to the API.
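Since each request returns a single image, fanning out with a thread pool is one way to request several images at once. `generate_one` below is a stand-in for your actual API call, not a real SDK function:

```python
from concurrent.futures import ThreadPoolExecutor

def generate_one(prompt: str) -> dict:
    # Stand-in for one real call to the images endpoint with n=1;
    # replace this body with your actual API request.
    return {"prompt": prompt, "n": 1}

def generate_many(prompt: str, count: int) -> list:
    """Issue `count` parallel single-image requests."""
    with ThreadPoolExecutor(max_workers=max(count, 1)) as pool:
        return list(pool.map(generate_one, [prompt] * count))
```

Each worker makes an independent n=1 request, so the calls run concurrently and the total wall-clock time stays close to that of a single request (subject to your rate limit).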
