The following topics and questions summarize the areas of research in which OpenAI is interested:
Fairness and Representation
How should performance criteria be established for fairness and representation in language models? How can responsive systems be established to effectively support the goals of fairness and representation in specific, deployed contexts?
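One concrete way to operationalize such performance criteria is a disaggregated metric, for example the gap in task accuracy across demographic groups. The following is a minimal sketch only; the function name and toy data are hypothetical illustrations, not part of the API:

```python
from collections import defaultdict

def group_accuracy_gap(preds, labels, groups):
    """Largest pairwise difference in accuracy across groups.

    A gap of 0.0 means the model performs equally well for every group;
    larger values flag uneven performance worth investigating.
    """
    correct, total = defaultdict(int), defaultdict(int)
    for pred, label, group in zip(preds, labels, groups):
        total[group] += 1
        correct[group] += (pred == label)
    accuracies = [correct[g] / total[g] for g in total]
    return max(accuracies) - min(accuracies)

# Toy data: the model is right 2/2 times for group "A" but only 1/2 for "B".
gap = group_accuracy_gap(
    preds=["pos", "neg", "pos", "pos"],
    labels=["pos", "neg", "neg", "pos"],
    groups=["A", "A", "B", "B"],
)
# gap == 0.5
```

A single aggregate accuracy would hide this disparity; reporting the gap makes it visible, though choosing which groups and tasks to disaggregate over is itself part of the research question.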
Misuse Potential
How can systems like the API be misused? What sorts of ‘red teaming’ approaches can we develop to help us and other AI developers think about responsibly deploying technologies like this?
Model Robustness
Generative models have uneven capability surfaces, with the potential for surprisingly strong and surprisingly weak areas of capability. How robust are large language models to “natural” perturbations in text, such as phrasing the same idea in different ways or with/without typos? Can we predict the kinds of domains and tasks for which large language models are more likely to be robust (or not robust), and how does this relate to the training data? How can robustness be measured in the context of few-shot learning (e.g. across variations in prompts)?
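One way to measure few-shot robustness is to paraphrase the same task prompt several ways and check whether the model’s answers stay consistent. Below is a minimal sketch: the `classify` function is a hypothetical stand-in for a real model call, implemented here as a trivial keyword rule so the example is self-contained, and the prompts are illustrative paraphrases, not drawn from the API:

```python
def classify(prompt: str, text: str) -> str:
    """Hypothetical model call. A real study would query a language model
    with the prompt + text; this stub uses a keyword rule for illustration."""
    return "positive" if "good" in text.lower() else "negative"

def robustness(prompts, texts, labels) -> float:
    """Fraction of examples answered correctly under *every* prompt variant."""
    consistent = 0
    for text, label in zip(texts, labels):
        if all(classify(p, text) == label for p in prompts):
            consistent += 1
    return consistent / len(texts)

# Paraphrased prompts expressing the same task in different ways.
prompts = [
    "Label the sentiment of this review:",
    "Is the following review positive or negative?",
    "Sentiment (positive/negative):",
]
texts = ["A good movie", "Terrible acting"]
labels = ["positive", "negative"]

score = robustness(prompts, texts, labels)
# The stub ignores the prompt, so score == 1.0 here; a real model's
# score would drop whenever rephrasing the prompt flips its answer.
```

Requiring correctness under every paraphrase is a deliberately strict criterion; softer variants (e.g. average accuracy across prompts, or answer agreement without reference labels) probe the same question from different angles.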
Model Exploration
Models like those served by the API have a variety of capabilities that we have yet to explore. We're excited by investigations in many areas, including linguistic properties, commonsense reasoning, and potential uses for many other NLP problems (especially those involving generation).
Interdisciplinary Research
How does AI intersect with other disciplines such as philosophy, cognitive science, and sociolinguistics? We’re interested in exploring if and how general technologies like the API can be a tool to support research that engages with fields beyond AI.