No, ChatGPT Is Not Writing Code For You

Matthew Reynolds
5 min read · Dec 6, 2022


“create some javascript code that shows a donut chart in chart.js for me”

I’m sort of being lightly driven round the bend by the million-plus people on social media who are falling over themselves about this whole “ChatGPT is writing code?!?!” thing.

As such, it is time for a gentle rant.

ChatGPT is without a doubt a watershed moment in that it moves us into a different mode of information discovery. As I wrote before, the whole “here’s a bunch of web pages, summarise them yourself” approach is old hat. There will now be “software agents” like ChatGPT that do the summarisation work for us.

It is an amazing achievement, and I think one of the reasons why it feels so “woah” is that I don’t think anyone knew the state of the art was where it is. I would remind you, dear reader, that there are some issues with it. For example, ask it “who is queen elizabeth 2” and it describes her in the present tense.

Anyway, back to code.

If you open TikTok and search for “chatgpt coding”, the videos you get back are nearly all from non-professional programmers. They show ChatGPT producing blocks of code in response to prompts.

One example has the user asking the tool to “create a block of JavaScript that uses Chart.js to create a donut chart showing business costs and expenses”. The tool responds with a block of code that the user copies and pastes into CodePen and, voila, it works.

The output is actually decent. The code looks sensible, and importantly it also emits the code required to import the Chart.js library and a sample of the HTML needed to host it.
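To give a flavour, the sort of thing it emits looks roughly like this. This is a minimal sketch of my own in the same shape, not a verbatim ChatGPT transcript; the labels, figures and element id are invented for illustration:

```html
<!-- A canvas element for Chart.js to draw into -->
<canvas id="costsChart"></canvas>

<!-- Import the Chart.js library from a CDN -->
<script src="https://cdn.jsdelivr.net/npm/chart.js"></script>

<script>
  // Note that Chart.js spells the chart type "doughnut", not "donut"
  new Chart(document.getElementById('costsChart'), {
    type: 'doughnut',
    data: {
      labels: ['Salaries', 'Rent', 'Marketing', 'Other'],
      datasets: [{
        data: [120000, 45000, 30000, 15000],
        backgroundColor: ['#36a2eb', '#ff6384', '#ffce56', '#4bc0c0']
      }]
    }
  });
</script>
```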

And, as I said, it runs. There are millions of other examples out there; I’ve spent a good while mucking around with it, and it does some clever stuff.

What it completely misses, however, is the “engineering” part of software systems development. Referring back to our example that generates a donut chart, the code emitted by ChatGPT has sample data in it. To be useful, that chart needs to get its data from somewhere — it needs to make a call over to (e.g.) a REST endpoint. That endpoint has to be able to get the data from somewhere, e.g. a database. That data likely has to be massaged into shape and returned.
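To make that gap concrete, here is a minimal sketch of the shape the “real” version of the chart code takes, assuming a hypothetical REST endpoint at /api/costs that returns the figures as JSON. The endpoint itself, the database behind it, authentication, error handling and so on are exactly the engineering work the generated snippet does not touch:

```javascript
// Sketch only: assumes the canvas and Chart.js import from the earlier
// example, plus a hypothetical /api/costs endpoint returning JSON like
// [{ "category": "Salaries", "amount": 120000 }, ...]
fetch('/api/costs')
  .then((response) => response.json())
  .then((costs) => {
    new Chart(document.getElementById('costsChart'), {
      type: 'doughnut',
      data: {
        labels: costs.map((c) => c.category),
        datasets: [{ data: costs.map((c) => c.amount) }]
      }
    });
  })
  .catch((err) => console.error('Could not load chart data', err));
```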

Building software — separate to the concept of engineering — is the process of taking a great number of separate parts and connecting them together so that inputs are transformed into outputs. You cannot build anything of meaning without a “great number of separate parts”, as you need to get over a threshold of triviality in order to create something that delivers value.

The problem with how people are looking at ChatGPT for “coding” is the expectation that it is doing anything more than the very trivial. Don’t get me wrong — it is *amazing* that it can do what it does, but the outputs are trivial. They are the smallest building blocks that go into an engineered solution.

For example, let’s say I am a capable full-stack developer with ten years’ experience who has been asked to put a donut chart on a screen in an application. With that much experience, I am likely to be very skilled, and I know how to use Bootstrap, and React, and WebAPI, and SQL Server, and C#, and .NET 7, and blah blah. But I may well never have had to use Chart.js before.

If I Google for “create a donut chart in chart.js”, the first link happens to be the Chart.js docs, which I can read, and there is an example there of how to do it. Alternatively, I can limit my search to Stack Overflow and, voila, there’s an example there as good as the ChatGPT one.

The key here is that, as an engineer, there is a gap in my knowledge. I’ve never used Chart.js before, and I need to get this chart in. I’m also glossing over the assessment of whether Chart.js is a safe library to use — what risk-management thinking is going into the decision to rule that library in or out? Assuming it is safe, ChatGPT gets that chart into the application, on a proof-of-concept basis, within five minutes.

What ChatGPT looks like it will be good at is creating baseline proof-of-concept examples that inform the engineering process.

For example, I did manage to synthesise some interesting examples of things I didn’t know how to do, such as “how to create objects that can hold an address in Erlang”, or “how do I host a Rust process in IIS”. If I just want to know where to start, ChatGPT looks very useful — but I can imagine that sort of question ending up being asked via whatever Google integrates as its competitor to ChatGPT.

There are two more factors to consider with ChatGPT — one specific to the engineering domain, and one more general.

Firstly, ChatGPT via GPT-3 is trained on Stack Overflow. If people *en masse* stop using Stack Overflow and start using ChatGPT, you’ll end up with some sort of informational void. Post a question on Stack Overflow and someone will answer it — and this adds to the corpus of data that is Stack Overflow. Ask ChatGPT and all you’re doing is mining the existing set — there’s none of the accretive effect that has made Stack Overflow so successful.

Secondly, GPT-3 (and therefore ChatGPT) is not real time. It thinks one Lizzie is still alive, and that the other Lizzie (Liz Truss) is a lowly MP. A snapshot of the model has to be trained, and realistically this is the “moat” that Google still has, because Google is real time. Peculiarly, I suspect this might be less of a problem than it sounds for software engineering activities, because by the time things find their way into general use they’re usually at least 12–18 months old — “bleeding edge” is not that important in this space.


Written by Matthew Reynolds

I help non-technology people build technology businesses. Check out my course at www.FractionalMatt.com/course
