Friday 17 February 2023

Do AIs Hallucinate Electric Sheep? Part 1: Context

AI seems to be everywhere nowadays, whether it's in the "creation" of new artworks, the generation of deepfakes, or even ChatGPT's ability to produce confident text in a variety of formats on any subject you'd care to mention.

So where does that leave us, the humans whose job or inclination is to be creative, to bring together different pieces of information to create something new, or to provide a new insight into something that's already known?

Firstly, let's be clear about what ChatGPT (and other chatbot large language models) is not: it is not human. It is not an expert, and in many cases it can be absolutely wrong on fundamental bits of knowledge. It is a model that takes the proximity of words to each other in a given corpus (for example, a load of crawled webpages, or Wikipedia) and encodes those relationships as a set of numbers. When it's called on to answer a question, what it does is string words together in a way that is determined by those numbers. It's a statistical process that produces readable text in a user-friendly way.
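
To make that a bit more concrete, here's a deliberately toy sketch in Python of what "encoding word proximity as numbers and then stringing words together from them" might look like. To be clear, this is only an illustration of the statistical idea, not how ChatGPT actually works: real large language models use neural networks with billions of learned parameters rather than a simple table of counts.

import random
from collections import defaultdict, Counter

# A tiny "corpus" standing in for the web pages a real model is trained on.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Encode word-to-word proximity as numbers: how often each word follows another.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def generate(start="the", length=8):
    """String words together by sampling each next word from the counts."""
    words = [start]
    for _ in range(length):
        candidates = next_word_counts.get(words[-1])
        if not candidates:
            break
        choices, weights = zip(*candidates.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate())  # e.g. "the cat sat on the rug . the dog"

Notice that nothing in there knows what a cat or a mat is; it just produces plausible-looking sequences of words because the numbers say those words tend to go together. That's the sense in which the output is statistics, not understanding.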

Alright, it's an interesting computer science problem to work on, with some cool applications. But why are people collectively freaking out about it, now that it's freely open and available for anyone to use?

My answer to this is culture. We, as humans, are so used to accepting that "computer says x" is the right answer, because it's instilled in us from an early age in schools. Computers use maths, and maths always has a right answer and a wrong answer. Therefore, if computers do arithmetic perfectly (which they don't, but that's a digression), then the answers they give are always correct.

Combine this with a deterministic view of the world through school-taught science, and we can easily wind up thinking that computers can model the world around us to a level of precision that we don't need to question. "Computer says X" is always the correct answer.

Even computer scientists buy into this mode of thinking sometimes - as the rapidly growing field of AI and data science ethics can show you. Computers may not be biased in themselves, but they are very, very good at replicating and amplifying any biases in their datasets. And history is full of bias; there's no denying that.

For some AI models, there's also a well-known issue with hallucination - OpenAI acknowledges in its list of ChatGPT's limitations that "ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers." These answers, or hallucinations, have no basis in the data the AI was trained on, but the chatbot can deliver them with the same certainty it delivers everything else, even going so far as to argue for the validity of the hallucination when challenged on it. Determining which answers are accurate and which are hallucinations can be very difficult, especially for non-experts in the field of the question being asked. Which, to be fair, is likely to be the vast majority of users.

So: computers are not always right, and they tend to be very convincing. Together, those two things mean that people are worried about floods of misinformation and the misuse of these tools in a wide range of contexts, from getting them to write school essays for you, to making excuses about why you're filing your taxes late, to telling you how to break into a house in rap form.

From a research integrity point of view, there have been documented examples of ChatGPT including references in academic answers that simply do not exist.

All this is enough to have universities, academic publishers and knowledge repositories coming out with restrictions on the use of ChatGPT, and in some cases outright bans.

Where do we go from here? The chatbot is very firmly out of the bag now, and I am sure that the problems that have been identified with it are already being worked on, one way or another. But what does that mean for the future of research and, more fundamentally, for the future of human creativity?

I don't know, but in my next post I'm going to explore human creativity and what it means for us when an AI can easily do something that we find difficult, but that is ultimately fundamental to our sense of self as human beings.

