Since releasing the Search endpoint, we’ve developed new methods that achieve better results for this task. As a result, we’ll be removing the Search endpoint from our documentation and removing access to this endpoint on December 3, 2022. New accounts created after today will not have access to this endpoint.

While applications currently using the endpoint won’t be immediately impacted by this change, the endpoint won’t be actively maintained going forward. We strongly encourage developers to switch to the newer techniques outlined below, which produce better results.

Current documentation


These options are also outlined here.

Option 1: Transition to Embeddings-based search (recommended)

We believe that most use cases will be better served by moving the underlying search system to use a vector-based embedding search. The major reason for this is that our current system uses a bigram filter to narrow down the scope of candidates, whereas our embeddings system has much more contextual awareness. Also, in general, using embeddings will be considerably lower cost in the long run. If you’re not familiar with embeddings, you can learn more by visiting our guide to embeddings.

If you have a larger dataset (>10,000 documents), consider using a vector search engine like Pinecone or Weaviate to power that search.
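As a minimal sketch of what an embeddings-based search looks like (the ranking logic below is illustrative; in practice the vectors would come from the Embeddings endpoint, as noted in the comment):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two vectors."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_documents(query_embedding, document_embeddings):
    """Return document indices sorted from most to least similar to the query."""
    scores = [cosine_similarity(query_embedding, d) for d in document_embeddings]
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)

# In practice the vectors would come from the Embeddings endpoint, e.g.:
#   resp = openai.Embedding.create(input=texts, model="text-embedding-ada-002")
#   embeddings = [d["embedding"] for d in resp["data"]]
```

For small datasets this brute-force ranking is enough; the vector search engines mentioned above take over the same role when the corpus grows.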

Option 2: Reimplement existing functionality

If you’re using the document parameter

The current openai.Search.create code can be replaced with this snippet (note: this will only work with non-Codex engines, since Codex models use a different tokenizer).
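One way such a replacement can work is to score each candidate document by asking a completion model, with `echo` and `logprobs`, how likely the query is as a continuation of the document. The sketch below is a hedged illustration of that idea, not the exact code that will ship; the prompt format, helper names, and the choice to average the last few token log-probs are all assumptions:

```python
def construct_context(query, document):
    """Build the scoring prompt: a document followed by the query (illustrative format)."""
    return f"<|endoftext|>{document}\n\n---\n\nThe above passage is related to: {query}"

def score_candidates(query, documents, engine="ada"):
    """Score documents against a query using completion log-probabilities.

    Assumption: the mean per-token log-prob over the query span is a usable
    relevance signal; higher means more relevant.
    """
    import openai  # pre-1.0 openai-python client

    scores = []
    for document in documents:
        resp = openai.Completion.create(
            engine=engine,
            prompt=construct_context(query, document),
            max_tokens=0,  # score the prompt only, generate nothing
            echo=True,     # return logprobs for the prompt tokens
            logprobs=0,
        )
        token_logprobs = resp["choices"][0]["logprobs"]["token_logprobs"]
        # Average the log-probs of the trailing tokens (roughly the query span).
        tail = [lp for lp in token_logprobs[-10:] if lp is not None]
        scores.append(sum(tail) / len(tail))
    return scores
```

Sorting the documents by these scores, descending, reproduces the reranking behavior of the endpoint's `document` mode.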

We plan to move this snippet into the openai-python repo under openai.Search.create_legacy.

If you’re using the file parameter

As a quick review, here are the high-level steps of the current Search endpoint with a file:

Step 1: Upload a JSONL file

Behind the scenes, we upload new files meant for file search to an Elasticsearch index. Each line of the JSONL file is then submitted as a document.

Each line is required to have a “text” field and an optional “metadata” field.
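A minimal example of building such a file (the filename and record contents here are made up for illustration):

```python
import json

# Two example records: "text" is required, "metadata" is optional.
records = [
    {"text": "Apples are a red fruit.", "metadata": {"source": "notes"}},
    {"text": "Bananas are yellow."},
]

# Write one JSON object per line -- the JSONL format the endpoint expects.
with open("search_data.jsonl", "w") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")

# The file would then be uploaded for search, e.g. with the pre-1.0 client:
#   openai.File.create(file=open("search_data.jsonl"), purpose="search")
```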

These are the Elasticsearch settings and mappings for our index:

Elasticsearch mapping:

"properties": {
    "document": {"type": "text", "analyzer": "standard_bigram_analyzer"},  # the "text" field
    "metadata": {"type": "object", "enabled": False},  # the "metadata" field
}

Elasticsearch analyzer:

"analysis": {
    "analyzer": {
        "standard_bigram_analyzer": {
            "type": "custom",
            "tokenizer": "standard",
            "filter": ["lowercase", "english_stop", "shingle"],
        }
    },
    "filter": {"english_stop": {"type": "stop", "stopwords": "_english_"}},
}

After that, we perform standard Elasticsearch search calls and use `max_rerank` to determine the number of documents to return from Elasticsearch.
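That retrieval step can be approximated with a standard Elasticsearch match query against the index above, with `max_rerank` mapped to the result `size` (the index name and query text here are assumptions, used only to show the shape of the request body sent to the index's `_search` endpoint):

```json
{
  "query": {
    "match": { "document": "your search query" }
  },
  "size": 200
}
```

Here `"document"` is the field from the mapping above, and `"size"` plays the role of `max_rerank`.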

Step 2: Search

Once you have the candidate documents from step 1, you can make a standard openai.Search.create (or openai.Search.create_legacy) call to rerank the candidates, as described in the document-parameter section above.
