In a new paper, researchers at OpenAI have revealed details about Codex, a deep learning model that generates software source code.
So what is Codex? It’s essentially a specialized GPT model fine-tuned on publicly available code from GitHub. Given a natural language docstring and a function signature, it can produce a functionally correct Python function body, and it shows promise on a variety of other coding tasks.
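To make that concrete, here is an illustrative, hand-written example (not actual Codex output) of the kind of prompt-completion pair the paper describes: the signature and docstring act as the prompt, and the model is expected to supply the body.

```python
# The signature and docstring below are the "prompt"; a Codex-style
# model would be asked to generate the function body that follows.
def count_vowels(text: str) -> int:
    """Return the number of vowels (a, e, i, o, u) in text, case-insensitively."""
    # A plausible model completion:
    return sum(1 for ch in text.lower() if ch in "aeiou")

print(count_vowels("Hello World"))  # 3
```

The paper evaluates exactly this setup: generated bodies are checked for functional correctness by running them against unit tests, not by comparing text.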
Codex powers Copilot, an “AI pair programmer” tool developed jointly by OpenAI and GitHub. Codex tends to replicate common coding samples that it was trained on; if you write something that looks similar, it will fill in the blanks with what it thinks should go next, though the generated code is often not quite right.
If you’re writing something that is more specialized for a particular application, or more complex than most scripts, Codex, at least in 2021, will not be as useful.
So Codex is an AI that translates natural language into code, but it still has a long way to go.
Codex is a descendant of GPT-3, a massive deep learning language model released last year. The complexity of deep learning models is often measured by the number of parameters they have. In general, a model’s learning capacity increases with the number of parameters.
Codex illustrates that machine learning is still ruled by the “no free lunch” theorem (NFL), which means that generalization comes at the cost of performance. In other words, machine learning models are more accurate when they are designed to solve one specific problem.
So think of it this way: Codex can perform one specialized task (transforming function descriptions and signatures into source code) with high accuracy, at the cost of weaker general natural language capabilities. GPT-3, on the other hand, is a general language model that can generate decent text about a lot of topics (including complicated programming concepts) but struggles to write even a single correct line of code.
Codex also suffers from misalignment amplification. Codex uses the contents of the file you’re working on as context to generate its output. If your code contains subtle bugs (which is quite normal if you’re a human programmer), Codex may “deliberately” suggest code that superficially appears good but is incorrect, the researchers warn.
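A hypothetical sketch of what that looks like in practice: if the surrounding file contains an off-by-one bug, a model that imitates its context can propagate the same mistake into new suggestions. The function names here are invented for illustration.

```python
# Hypothetical existing code in the file, containing a subtle
# off-by-one bug: range(lo, hi) stops one short of hi.
def sum_range(lo: int, hi: int) -> int:
    """Sum the integers from lo to hi inclusive."""
    return sum(range(lo, hi))  # BUG: omits hi itself

# A model conditioned on the buggy context may imitate the pattern,
# producing suggestions that look plausible but are wrong.
# The corrected version a careful reviewer would want:
def sum_range_fixed(lo: int, hi: int) -> int:
    """Sum the integers from lo to hi inclusive."""
    return sum(range(lo, hi + 1))

print(sum_range(1, 5))        # 10 (wrong: drops the 5)
print(sum_range_fixed(1, 5))  # 15
```

The danger is that both versions look reasonable at a glance, which is exactly why the researchers stress that suggestions need human review.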
Codex is a work in progress and a significant potential product for Microsoft, given its partnerships with OpenAI and GitHub to create new developer tools. The era of AI being more involved in coding is upon us, and we’ll have to wait and see whether such models develop a genuine understanding of code itself rather than just correlations among code fragments.
Codex is ingenious but still limited. It’s a specialized GPT model, fine-tuned on GitHub code, that can produce functionally correct code bodies from natural language docstrings, yet its suggestions don’t necessarily work as intended and still require significant human supervision. Long live the machine inside the code!
OpenAI plans to release Codex through its API this summer.