Once you have a prompt that works, you can often optimize it to use fewer tokens while staying just as effective.

The following prompt shows the API how to generate command-line commands from a request written in plain English.

The prompt includes a description at the beginning to help the API understand the task, followed by pairs of a natural-language request (Input) and its corresponding command (Command). This prompt uses 211 tokens and works well with the instruct models.

Open this example in the Playground

Turn the statement into a Windows command line command.

Input: Clone a repository.

Command: git clone https://github.com/username/repository.git

Input: Add a remote.

Command: git remote add origin https://github.com/username/repository.git

Input: Set a new environment variable called OPENAI_API_KEY.

Command: set OPENAI_API_KEY=my-key-value

Input: Access the System32 folder.

Command: cd C:\Windows\System32

Input: Open a file.

Command: start file.txt

Input: Create a new file.

Command: echo "value" > file.txt

Input: Delete a file.

Command: del file.txt

Input: Rename a file.

Command: rename file.txt file-new.txt

Input: Copy a file to directory Documents.

Command: copy file.txt C:\Users\UserName\Documents
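If you want to run this prompt outside the Playground, the sketch below shows one way to call it, assuming the legacy Python SDK (openai<1.0) and a completion-style instruct model; the model name, API key, and request are placeholders, not values from this article:

import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# An abridged version of the few-shot prompt above; use the full set of examples in practice.
few_shot = """Turn the statement into a Windows command line command.

Input: Clone a repository.
Command: git clone https://github.com/username/repository.git

Input: Delete a file.
Command: del file.txt
"""

request = "List all files in the current directory."
prompt = f"{few_shot}\nInput: {request}\nCommand:"

response = openai.Completion.create(
    model="gpt-3.5-turbo-instruct",  # assumption: any completion-style instruct model
    prompt=prompt,
    max_tokens=64,
    temperature=0,
    stop=["\nInput:"],  # stop before the model starts a new example pair
)
print(response["choices"][0]["text"].strip())

Stopping at "\nInput:" keeps the completion to a single Command line rather than letting the model continue inventing new example pairs.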

While this prompt works well, we can refine it further to reduce the overall cost per request and generate a broader range of outputs.
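If you want to check token counts like these yourself, one option is to count them locally with the tiktoken library. A minimal sketch, assuming the cl100k_base encoding; match the encoding to the model you actually use:

import tiktoken

# cl100k_base is an assumption; pick the encoding that matches your model
encoding = tiktoken.get_encoding("cl100k_base")

prompt = "Turn the statement into a Windows command line command."
print(len(encoding.encode(prompt)))  # number of tokens this text uses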

With enough examples, the API often doesn't need an instruction at the beginning. Alternatively, with an instruction and the instruct models, fewer examples are often required.

The prompt below has been updated to require only one full set of examples. It creates a repeatable structure that corresponds to the format we want for our completion.

Create Windows cmd line and zsh terminal commands from user input.

Input: Create a new .txt file. Append the word "Hello, world!" into the new file.

cmd line: echo "Hello, world!" > Hello.txt

zsh: touch Hello.txt && echo 'Hello, world!' > Hello.txt

###

Input: Delete Hello.txt

cmd line: del Hello.txt

zsh: rm Hello.txt

Open this example in the Playground and try this yourself.

The prompt’s token count was reduced with the following steps:

  • The instruction and examples were updated to use specific language that clarifies the desired output and better links these parts of the prompt together.

  • “###” was added as a break between examples, which lets us use it as a stop sequence value (see the sketch after this list).

  • We used fewer examples because the model already had an understanding of how to do the task.
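Putting the pieces together, here is a minimal sketch of the condensed prompt in use, under the same assumptions as before (legacy Python SDK, placeholder model name and key), with “###” passed as the stop sequence:

import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# The condensed prompt from above, ending mid-structure so the model completes it.
prompt = """Create Windows cmd line and zsh terminal commands from user input.

Input: Create a new .txt file. Append the word "Hello, world!" into the new file.
cmd line: echo "Hello, world!" > Hello.txt
zsh: touch Hello.txt && echo 'Hello, world!' > Hello.txt
###
Input: Delete Hello.txt
cmd line:"""

response = openai.Completion.create(
    model="gpt-3.5-turbo-instruct",  # assumption: any completion-style instruct model
    prompt=prompt,
    max_tokens=64,
    temperature=0,
    stop=["###"],  # the separator doubles as the stop sequence
)
print(response["choices"][0]["text"].strip())

With the stop sequence in place, generation should halt at the separator instead of continuing with further invented examples.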
