Why am I getting different completions on Playground vs. the API?

Troubleshooting discrepancies between completions


If the temperature parameter is set above 0, the model will likely produce different results each time; this is expected behavior. If you're seeing unexpected differences in the quality of the completions you receive from the Playground vs. the API with temperature set to 0, there are a few potential causes to consider.
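As a sketch of what a deterministic request looks like, here is a hypothetical request payload with temperature pinned to 0 (the model name and message content are illustrative, not from this article):

```python
# Hypothetical request payload; setting temperature to 0 makes the
# model's sampling as deterministic as possible, while values above 0
# allow different completions on each run.
payload = {
    "model": "gpt-3.5-turbo",          # example model name
    "messages": [{"role": "user", "content": "Say hello."}],
    "temperature": 0,                   # 0 = most deterministic
}

print(payload["temperature"])
```

If you send the same payload from both the Playground and the API, any remaining differences point to the prompt or parameters, not to sampling randomness.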

First, check that your prompt is exactly the same. Even slight differences, such as an extra space or newline character, can lead to different outputs.
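One quick way to catch an invisible difference is to compare the two prompt strings with `repr()`, which makes stray spaces and newlines visible. The prompt text below is just an illustration:

```python
# Two prompts that look identical on screen but differ by a trailing space.
prompt_playground = "Write a haiku about the sea."
prompt_api = "Write a haiku about the sea. "  # note the trailing space

# repr() exposes hidden whitespace and newline characters.
print(repr(prompt_playground))  # 'Write a haiku about the sea.'
print(repr(prompt_api))         # 'Write a haiku about the sea. '
print(prompt_playground == prompt_api)  # False
```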

Next, ensure you're using the same parameters in both cases. For example, setting the model parameter to gpt-3.5-turbo in one request and gpt-4 in the other will produce different completions even with an identical prompt, because gpt-4 is a newer and more capable instruction-following model.
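A simple way to verify this is to collect the parameters from each request into a dictionary and diff them. The parameter values below are illustrative examples, not defaults from this article:

```python
# Hypothetical parameter sets from the Playground and an API call.
playground_params = {"model": "gpt-4", "temperature": 0, "max_tokens": 256, "top_p": 1}
api_params = {"model": "gpt-3.5-turbo", "temperature": 0, "max_tokens": 256, "top_p": 1}

# Report every key whose value differs between the two requests.
diff = {
    key: (playground_params.get(key), api_params.get(key))
    for key in set(playground_params) | set(api_params)
    if playground_params.get(key) != api_params.get(key)
}
print(diff)  # {'model': ('gpt-4', 'gpt-3.5-turbo')}
```

An empty `diff` means both requests used the same settings, so any remaining variation must come from the prompt text or from temperature above 0.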

If you've double-checked all of these things and are still seeing discrepancies, ask for help on the Community Forum, where users may have experienced similar issues or may be able to assist in troubleshooting your specific case.
