Doing Math in the Playground

On the challenge of doing math - specifically algebra - using the OpenAI Playground with workarounds to consider

Written by Johanna C.
Updated over a week ago

Have you ever tried to solve for x using the OpenAI playground?

For example, solve for x:

3x + 4 = 66

First, you'd isolate the terms with x on the left-hand side like so:

3x + (4 - 4) = 66 - 4

3x = 62

to get the result:

x = 62 / 3
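As a quick check of the algebra, the same steps can be carried out mechanically; a minimal Python sketch using exact rational arithmetic (the equation and solution are the ones from the example above):

```python
from fractions import Fraction

# Solve 3x + 4 = 66 by the same steps: subtract 4, then divide by 3.
x = Fraction(66 - 4, 3)  # exact rational arithmetic, no rounding

print(x)          # 62/3
print(3 * x + 4)  # substituting back recovers 66
```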

... simple, right? Unfortunately, you won’t always get the same result from the Playground.

Our language models currently struggle with math

The models are not yet capable of performing consistently when asked to solve math problems. In other words, if you were to try this example in our Playground using text-davinci-002, you would likely get inconsistent answers. Some generations will produce the correct answer, but we do not recommend depending on the GPT models for math tasks.

What you can do to improve output consistency in our Playground

Disclaimer: Even after implementing everything below, there is only so far we can push the current model.

  1. The GPT models are great at recognizing patterns, but without enough data they’ll try their best to interpret and recreate whatever pattern seems most probable. With minimal data, they’re likely to produce a wide variety of potential outputs.

  2. A prompt designed like a homework assignment will generally have clear instructions on the task and expected output, and may include an example task to further establish expectations around the task and output format. The text-davinci-002 model does best with an instruction, so the request should be presented in a format that starts with one. Without this, the model may not understand your expectations and may produce confused output.

Using the "solve for x where 3x + 4 = 66" example:

To improve this prompt we can add the following:

  1. Start with an instruction like, “Given the algebraic equation below, solve for the provided variable”, then test to see the results.

  2. Append a description of the expected output to the instruction, “Provide the answer in the format of ‘x=<insert answer>’”, then test once more.

  3. If results are still inconsistent, append an example problem to the instructions. This example will help establish the pattern that you want the model to recognize and follow, “Problem: 3x+4=66, solve for x. <newline> Answer: x=”

  4. The final result will be a prompt that looks like this:

Given the algebraic equation below, solve for the provided variable. Provide the answer in the format of ‘x=<insert answer>’.
Problem1: y-1=0, solve for y
Answer1: y=1
Problem2: 3x+4=66, solve for x.
Answer2: x=
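Putting the steps together, the final prompt can also be assembled programmatically; a minimal Python sketch (the variable names and list structure here are illustrative assumptions, not an official recipe):

```python
# Assemble the few-shot prompt from the steps above:
# instruction + output-format description + worked example + target problem.
instruction = (
    "Given the algebraic equation below, solve for the provided variable. "
    "Provide the answer in the format of 'x=<insert answer>'."
)
examples = [("y-1=0, solve for y", "y=1")]  # step 3: establish the pattern
target = "3x+4=66, solve for x."

lines = [instruction]
for i, (problem, answer) in enumerate(examples, start=1):
    lines.append(f"Problem{i}: {problem}")
    lines.append(f"Answer{i}: {answer}")
lines.append(f"Problem{len(examples) + 1}: {target}")
lines.append(f"Answer{len(examples) + 1}: x=")  # leave the answer for the model

prompt = "\n".join(lines)
print(prompt)
```

The resulting string matches the prompt shown above and can be pasted into the Playground or sent through the API.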

Overall recommendation for math problems

We are aware that our currently available models are not yet capable of performing consistently when asked to solve math problems. For now, consider relying on dedicated tools when doing math such as algebraic equations.
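For instance, a few lines of ordinary code solve this class of problem exactly every time; a sketch, assuming a linear equation of the form ax + b = c (the helper name is our own, not from any particular library):

```python
from fractions import Fraction

def solve_linear(a, b, c):
    """Solve ax + b = c exactly for x, returning a rational number."""
    return Fraction(c - b, a)

print(solve_linear(3, 4, 66))  # 3x + 4 = 66  ->  62/3
print(solve_linear(1, -1, 0))  # y - 1 = 0    ->  1
```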
