Oh, so it’s OK for Computers to Lie Now?

Matthew Reynolds
4 min readFeb 11, 2023


One of these computers is lying to you.

I often like to add a bit of history to my blog posts, so let’s go back to 1936 and Alan Turing. Although he is most famous for his work defeating the Enigma encryption, before he got to Bletchley Park he wrote a paper called “On Computable Numbers, with an Application to the Entscheidungsproblem”, and in doing so laid the foundation for modern computing. The device you’re reading this on (assuming someone didn’t print it out and hand it to you) can be traced back to the ideas in that paper.

Regardless of his fame, Turing was a mathematician. So was Charles Babbage, who’s credited with inventing the idea of a mechanical computation device. Whatever you may think about maths, a core principle is that it is inherently accurate. Maths doesn’t lie, doesn’t have moods, and doesn’t get duped into believing things that are not true.

Computers are based on maths, and under it all they operate on the same principle — take an input, apply some operation to it, get an output. Use the same input and the same operation and you will, 100% without fail, get the same output — presuming you account for malfunctions that would otherwise mangle the data as it goes through its operations.
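That determinism can be sketched in a few lines. This is just an illustration, not anything from the article — any pure function would do, but a cryptographic hash makes the point vividly, because even a wildly complicated operation gives the identical output every single time:

```python
# A minimal sketch of the principle above: same input + same operation
# = same output, every time, on any machine.
import hashlib

def operation(data: bytes) -> str:
    """Any pure function will do; SHA-256 is just a vivid example."""
    return hashlib.sha256(data).hexdigest()

first = operation(b"the same input")
second = operation(b"the same input")
assert first == second  # run it a million times; it never differs
```

Run that on your laptop, your phone, or a mainframe and the hex string comes out the same — that is the guarantee the rest of this post is about.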

Throughout the whole history of computers, they have been lauded for their accuracy and reliability. They store data, process data, and produce the same output. As I write this, I press a key on the keyboard and the character appears in the document. When I save the document, it’s there when I need to open it again. Post it online, and the URL of that document will always result in that document being called and presented on the screen. (Unless, of course, you take some other action to produce a different output, or there is a malfunction.)

To give you some idea of how unusual malfunctions are, high-end computers have a special type of memory in them — ECC, or error-correcting code, memory — that will deal with the off chance that a bit of cosmic background radiation collides with a collection of atoms in the chip and flips a bit from a 0 to a 1, or vice versa. That’s how much we don’t like computers to lie to us — we literally look to defeat random particles mucking up data as it’s being processed.
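The idea behind that kind of memory can be shown with a toy example — this is my own simplification, not how ECC RAM is actually implemented (real ECC uses Hamming-style codes that can correct single-bit errors, not just detect them). The principle is the same: store a little redundancy so a random flip gives itself away:

```python
# A toy illustration of the principle behind ECC memory: add redundancy
# so a single flipped bit is detectable. (Real ECC RAM uses Hamming-style
# codes that can also *correct* the error; this sketch only detects it.)

def with_parity(bits: list[int]) -> list[int]:
    """Append an even-parity bit to a list of 0/1 bits."""
    return bits + [sum(bits) % 2]

def is_intact(word: list[int]) -> bool:
    """True if the stored word (data + parity bit) still has even parity."""
    return sum(word) % 2 == 0

word = with_parity([1, 0, 1, 1])
assert is_intact(word)       # stored correctly

word[2] ^= 1                 # a stray cosmic ray flips one bit
assert not is_intact(word)   # the corruption is caught
```

One parity bit per word is all it takes to catch the cosmic ray — which is exactly the kind of effort we’ve historically been willing to spend to stop computers lying to us.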

Now that it’s been a few months since ChatGPT came out and surprised the world with just how capable it is, Microsoft is updating Bing to do the same thing. Google is also scrambling to make sure that it turns up for this particular party. I use ChatGPT regularly, and I use Stable Diffusion to generate images for blog posts (and some of them are even good).

What’s very peculiar in the world of generative AI is that everyone seems to be cool with the lies.

Some of this is down to the training data. GPT-3 was trained on an older set of data that predates Liz Truss becoming the UK’s prime minister last year, so ask ChatGPT “What is Liz Truss most famous for?” and it will respond: “Liz Truss is a British politician who is most famous for serving as the current Foreign Secretary of the United Kingdom since September 2021”.

Given enough time though, those sorts of mistakes can be clipped away as we’re able to ingest newer data into the models. Back in the day, it used to take a few weeks to index a site on Google — over time that’s got much faster. We know that problem will be fixed.

The underlying issue is that generative AI doesn’t have “specific domain governance” — yet. These generative AIs are able to take a natural language query, find documents that satisfy the query, and then summarise those documents. For example, if I enter into ChatGPT: “Give me heads of agreement for an IT support contract”, it will do this. And in fairness, it does tell me that I should get legal advice to write a contract. (And it also refused to actually write a contract for me.) But there’s nothing “lawyerly” about that output. No person with specific domain experience (i.e. a qualified lawyer) has had any governance oversight of the output at all.

The output is not guaranteed to be true, and this is the first time that we’ve built computer systems at scale that don’t offer that guarantee. Even in Angry Birds, if you start a level you’ve played before and position and fling the bird at exactly the same speed and angle as you did before, you’ll get exactly the same score.

Humans are, at the end of it, biological computers walking around in a suit made of meat swimming with hormones and emotions. We could calculate like computers (or Mr Spock), but we don’t. Now, though, we’re asking computers to do meaty, creative stuff for us. I guess to do that, we have to invite them over to our side of the table — and now we’re asking them to be just as unreliable and inaccurate as we are.



Written by Matthew Reynolds

I help non-technology people build technology businesses. Check out my course at www.FractionalMatt.com/course
