Can a text generator replace coders?



In 2021, OpenAI released Codex, a system that writes code from simple prompts given in plain language. (It should not be confused with Codex DNA, a pioneer of automated synthetic biology systems that digitally encodes DNA sequences and later retrieves the stored information accurately.) The technology has some coders worried about AI's ability to program on its own. Experts, however, believe we are still far from the day when programmers will be rendered redundant simply because a system is smart enough to generate code.

The work of a developer is not limited to writing code:

Typically, writing code takes up less than 20% of a developer's time. In a paper titled "Evaluating Large Language Models Trained on Code," OpenAI reveals several facts that should be enough to quell unreasonable fears among programmers. The paper notes, "Engineers don't spend all day writing code. Instead, they spend a lot of their time on tasks like conferring with colleagues, writing design specs, and upgrading existing software stacks." It goes on to suggest that, in a way, such a system can help coders produce good code by taking over the tedious parts of the job. This should come as no surprise, because developing a project involves a great deal of trivial and repetitive coding. As for job losses, roughly 20% of programmers might be displaced if Codex ever succeeds in generating genuinely working code, and that will only happen the day a non-coder can collaborate with Codex to draw up a specification and build working software. Experts do not see that day coming in the near future, and there are many reasons why they think so.

Is Codex really a programming application?

Codex is a direct descendant of the GPT-3 model, adapted to generate code from a few simple inputs. Deep learning models are only as good as the data fed to them, and the original GPT-3 datasets contained little sample code. It is therefore a stretch to consider Codex a full-fledged programming application. What's more, as the OpenAI developers themselves have stated, Codex is not as good at understanding code as it is at generating it. Like any other deep learning language model, it only captures statistical correlations between code fragments. It has also been observed that the model's performance drops as the prompt grows more complex. Elaborating on its inability to grasp a program's basic structure, the paper states, "It may recommend syntactically incorrect or undefined code and invoke variables and functions outside of the code base." Sometimes it even stitches pieces of code together when they do not fit. Moreover, by the developers' own measurements, Codex succeeds only in about 37% of cases.
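To make this concrete, the paper evaluates Codex on tasks of roughly this shape: the model is shown only a function signature and a plain-language docstring (the prompt) and must generate the body (the completion), which is then checked against unit tests. The completion below is hand-written for illustration, not actual model output, and the task itself is an invented example in the paper's style.

```python
# A HumanEval-style task: the model sees only the signature and the
# docstring, and must generate everything below the marked line.

def count_vowels(text: str) -> int:
    """Return the number of vowels (a, e, i, o, u) in `text`,
    counting both upper- and lower-case letters."""
    # --- a model like Codex would generate the body below ---
    return sum(1 for ch in text.lower() if ch in "aeiou")

# Evaluation then runs hidden unit tests against the completion;
# a sample is "correct" only if every test passes.
print(count_vowels("OpenAI Codex"))  # counts o, e, a, i, o, e
```

The quoted ~37% figure refers to this kind of test: the fraction of problems for which a generated completion passes the unit tests.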

Can programmers and Codex coexist?

Although OpenAI CTO and co-founder Greg Brockman is optimistic about Codex's inclusivity, seeing it as a tool that multiplies what programmers can do, experts see the picture from a different angle. Besides helping programmers generate quality code, it will create a new breed of specialist: the "prompt engineer," someone who develops the appropriate prompt for Codex to generate the desired code. Daniel Jeffries, a tech podcaster specializing in future technologies, believes Codex could create human-AI hybrids, "centaurs," as in chess, doing together, faster and better, what neither can do alone.
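As a sketch of what prompt engineering means in practice (the helper function and prompt format below are illustrative assumptions, not part of any real Codex API): the prompt engineer's craft lies in choosing the framing, context, and examples given to the model, rather than in writing the solution code itself.

```python
def build_prompt(task, language="Python", examples=None):
    """Assemble a code-generation prompt from a task description.

    A prompt engineer iterates on pieces like these -- the target
    language, the wording of the task, and worked examples -- to
    steer what the model generates.
    """
    parts = [f"# Language: {language}", f"# Task: {task}"]
    for ex in examples or []:
        parts.append(f"# Example: {ex}")
    parts.append("# Solution:")
    return "\n".join(parts)

prompt = build_prompt(
    "reverse each word in a sentence",
    examples=['"hello world" -> "olleh dlrow"'],
)
print(prompt)
```

Adding or rephrasing a single example line in such a prompt can noticeably change the code a model produces, which is why the role is treated as an engineering discipline in its own right.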
