12 April 2024

Can generative AI replace programmers, or do we need a mix of people and AI to get the best code?

Generative AI tools such as ChatGPT and Gemini (the AI formerly known as Bard) regularly turn up in the news. Although originally conceived as chatbots that can hold meaningful conversations, by drawing on vast quantities of information from the internet they have proved to be versatile tools for everything from writing university essays to generating rap lyrics. Inevitably, the abilities of generative AI make those who earn a living from writing feel nervous. Could the software replace humans? And generative AI can produce code too.

I started programming professionally in the late 70s, beginning with the obscure ALGOL variant NELIAC and going on to work in APL and then C. In 1981, a program called ‘The Last One’ was launched, so called because its writer claimed it was ‘the last human-produced program that needs to be written’. It generated BASIC code without the user needing to know how to program. The very fact that it produced BASIC meant we didn’t worry much, yet it did feel as though such program generators might mean our careers had limited lifespans.

The reality, of course, is that programmers are still in demand over forty years later, but generative AI presents more of a threat to coding professionals (just as it does to journalism or PR copywriting). For simple tasks it is certainly up to the job. I asked ChatGPT and Gemini to write some simple JavaScript, which they did in seconds. This ability is a boon to anyone who can’t program but wants to generate web code. Of the two, Gemini did the better job, not in the code per se, but in giving an example of how to use it – ChatGPT’s response could not be used directly. Most interesting of all, each came up with sensible suggestions on how to improve the other’s code.
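To give a flavour of the exercise (this is a sketch of the kind of snippet involved, not the models’ actual output; the function here is my own invention):

  // A simple JavaScript function of the sort the chatbots produced:
  // it reverses a string.
  function reverseString(text) {
    return text.split('').reverse().join('');
  }

  // Gemini's advantage lay in adding a usage example along these lines:
  console.log(reverseString('hello')); // prints 'olleh'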

Context has always been an issue for AI

In practice, it feels highly unlikely that current generative AI could replace professional programmers. It’s one thing to produce a few standalone lines, but quite another to understand the context of a section of code within a much larger program. Context has always been an issue for AI, because it does not understand why it is doing something (or, indeed, what it is doing). ‘Never say never’ is an essential stance in science and technology – but, as yet, the threat seems over-emphasised.

What AI applications can do well, though, is fill in the basics for beginners and suggest improvements for more advanced coders. I intentionally left a semicolon out of a piece of JavaScript and asked what was wrong with it. Both models came back with a variant of ‘The JavaScript code you provided is correct, and there’s nothing inherently wrong with it. However, in HTML, it’s a good practice to include a semicolon at the end of the statement, even though it’s not strictly required in this case.’
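For illustration, the test snippet looked something like this (a minimal sketch rather than the exact code I used):

  // The semicolon after the assignment is deliberately omitted.
  let greeting = 'Hello, world'
  console.log(greeting);

Because JavaScript’s automatic semicolon insertion quietly supplies the missing semicolon, the code still runs, which is why both models could say it was technically correct while still recommending the semicolon as good practice.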

Economist and journalist Tim Harford points out an interesting parallel with the spreadsheet. When it was introduced, there were concerns that accountants and clerks would be put out of work. Yet, since the introduction of the first spreadsheet program, VisiCalc, the number of people working in the field in the US has risen from around 340,000 to 1.4 million. As Harford put it, accountants ‘are merely outsourcing the arithmetic to the machine’. Arguably, the same potential exists for AI in coding.

What does AI think about all this?

It is impossible to write a piece like this without asking an AI for its view. ChatGPT told me that we should be looking at augmentation rather than replacement, with AI both automating repetitive tasks and assisting in debugging and optimising code. It suggested that ‘it may shift the focus of coding roles towards higher-level tasks that require human creativity, problem-solving, and domain expertise’. Its conclusion: ‘Overall, the consensus is that AI will likely complement and enhance the work of coders rather than replace them entirely.’

To paraphrase Mandy Rice-Davies, it would say that, wouldn’t it? But it does seem very likely that this picture is correct. When generative AI attempts to produce professional reports, it has distinct limitations, such as making up references. It can be helpful in providing starting points or suggesting improvements, but it works best as a tool to help writers rather than as a replacement for their skill. The same is true for coding.

Brian Clegg is an award-winning science writer with over 50 books in print and articles in a wide range of newspapers and magazines (www.brianclegg.net).
