Large language models and the future of truth
Like so many, I have been enthralled to see and experience the step-change progress on display in the recent release of large language models, and specifically OpenAI's GPT-3.
Our endeavours to build more complex neural networks are paying off. While we are clearly only at the very start of the ramp, an enormous cache of future use-cases will be uncovered along the way, from which we will unlock exponential utility in both business and general life.
A few weeks ago, I wrote that I was worried about the emergence of “false oracles”. I lamented that in the near future, when six or seven of these models existed, we would struggle to know what biases they would have and which ones we could trust.
I am still worried. Let me explain in more detail.
- There is very little visibility of the training data used in these models (although at a cursory level a wide, non-specific corpus is quoted as a “diverse range of text data from the internet, including books, articles, and web pages. Specifically, GPT-3 was trained on over 45 terabytes of text data, which is equivalent to millions of books’ worth of content”). Visibility is a bigger problem that goes beyond just offering access to the source.
- As the rate of data production and availability grows exponentially, we are going to become more and more reliant on data aggregation tools that can summarise huge volumes of data to help us form rational conclusions, and to separate the signal from the noise.
- We are already overwhelmed with more information than we can handle, and so we tend to revert to binary and pithy generalisations of data, which lead to a this-or-that type of consensus forming. This is well demonstrated in the polarisation of views in social media and society in general. Daniel Kahneman’s model of the energy expenditure required to think through different systems is one explanation for this. However, it seems that we are looking for shortcuts to the hard work of thinking deeply about the opinions we form. Are we outsourcing System 2 thinking to AI – and relying more and more on System 1? With data at scale we need simplifications, but they need to be trustworthy and without bias and agenda.
- Soon the ability to build these models will be common. The distinction between what model A and model B produce as outputs will depend on two factors: the data on which they are trained, and the intervention layers built on top of the prediction – these can be viewed as censorship layers. GPT-3, very clearly, already has such a layer.
A large language model, as the name suggests, requires a huge corpus of data – mostly data produced on the internet and in digitised literature – to produce results: results that are best described as the most likely, or average, version of the data it was trained on.
The most fundamental unit of these models is a non-linear function that predicts an output based on a normalised input – specifically, a non-linear activation function. These functions exist in layers that pass their outputs on to the next layer of abstraction, where they become the inputs to that layer.
This roughly emulates how we understand the cortical columns in the neocortex of the brain to work.
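To make this concrete, here is a minimal sketch in Python with NumPy – my own illustration, not anything published about GPT-3’s architecture – of layers of non-linear activation functions, each passing its output forward as the input to the next layer:

```python
import numpy as np

def relu(x):
    # A simple non-linear activation function.
    return np.maximum(0, x)

def layer(inputs, weights, biases):
    # One layer: a weighted sum of normalised inputs passed through a non-linearity.
    return relu(inputs @ weights + biases)

rng = np.random.default_rng(0)

# Toy dimensions for illustration only; real language models have billions of parameters.
x = rng.normal(size=(1, 8))                        # a normalised input vector
w1, b1 = rng.normal(size=(8, 16)), np.zeros(16)
w2, b2 = rng.normal(size=(16, 4)), np.zeros(4)

hidden = layer(x, w1, b1)     # first layer of abstraction
output = layer(hidden, w2, b2)  # its outputs become the next layer's inputs
print(output.shape)           # (1, 4)
```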
The challenge is that, at final output, there is no check for factual accuracy, but rather a check for conformity to the model trained on the corpus. Several different training mechanisms are applied, but in general there is a combination of automated and supervised methods.
It is obvious that the automated processes dwarf the human-supervised ones and that this trend will continue. Two metrics are used to determine whether the predicted responses are good, namely perplexity and accuracy. However, accuracy is not factual accuracy, but rather the percentage of tokens that are correctly predicted by the model. To determine factual accuracy, a human is needed, or fact-checking algorithms need to be deployed, where a set of factually verified data is used to cross-reference and to correct or retrain the models.
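As a rough illustration of that distinction, here is a sketch in plain Python of how perplexity and token-level accuracy are typically computed from a model’s predicted probabilities; the numbers are hypothetical, and note that neither metric says anything about whether the text is true:

```python
import math

def perplexity(token_probs):
    # Perplexity: the exponential of the average negative log-probability
    # the model assigned to the tokens that actually occurred.
    avg_neg_log_prob = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(avg_neg_log_prob)

def token_accuracy(predicted_tokens, actual_tokens):
    # "Accuracy" here is just the share of tokens predicted correctly,
    # not whether the resulting sentence is factually correct.
    correct = sum(p == a for p, a in zip(predicted_tokens, actual_tokens))
    return correct / len(actual_tokens)

# Hypothetical values for illustration.
probs = [0.9, 0.7, 0.4, 0.8]                      # probability given to each true next token
print(round(perplexity(probs), 3))                # lower is better
print(token_accuracy(["1", "+", "1", "=", "2"],
                     ["1", "+", "1", "=", "3"]))  # 0.8, despite the factual error
```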
An example of this was on display in the early days of the GPT-3 model, when you could override basic maths results by disagreeing with its output. If you asked it, “What is 1 + 1?” it would reply 2. If you then said, “No it’s not, it’s 3”, it would apologise and agree.
The model here is predicting that an apology and a restatement of the correction is the most likely continuation, rather than holding the line on its initial prediction, or weighting a corpus of factual data over the prediction.
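One way to weight verified data over the prediction – sketched below with entirely hypothetical names and a toy fact store, not any real system – is to cross-reference a disputed claim against verified facts before accepting the user’s correction:

```python
# A toy illustration of weighting verified facts over the model's prediction.
# The fact store and function names are hypothetical.

VERIFIED_FACTS = {"1 + 1": "2"}   # a tiny stand-in for a factually verified dataset

def respond_to_correction(question: str, model_answer: str, user_claim: str) -> str:
    verified = VERIFIED_FACTS.get(question)
    if verified is not None and user_claim != verified:
        # Hold the line: the verified fact outweighs the "apologise and agree" continuation.
        return f"I understand the disagreement, but {question} = {verified}."
    # No verified fact available, so fall back to the model's statistical behaviour.
    return "You're right, I apologise for the mistake."

print(respond_to_correction("1 + 1", "2", "3"))  # the correction is rejected
```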
Getting this balance correct will become very complex: reconciling the verifiable correctness of data with predictions of the next most likely sequence of words is a problem that these models will need to overcome.
This is obviously problematic. There is clearly a need for substantive and finite training loops on issues that are not better solved through the democratisation of knowledge.
Next, we have the problem of the ever-shrinking Overton window: a societal forcing function is shaping the development of the technology, and the technology will in turn reinforce that same forcing function.
As a test, I ran several queries (or prompts) on GPT-3 around data collected on contentious issues. I will not repeat them here, but will rather give the templatised response, which iterated on this theme:
“As an AI language model, I do not hold personal views or opinions on any topic or issue. My responses are based on the information and data available to me and my aim is to provide accurate and objective information to the best of my ability.
It is important to recognise that different perspectives and opinions exist on most topics and that it is natural for individuals to hold different views based on their experiences, beliefs and values. As an AI language model, I strive to present a range of perspectives and viewpoints in a neutral and unbiased manner, without promoting any point of view.
Ultimately, it is up to individuals to form their own opinions and beliefs based on the information available to them, as well as their own personal experiences and values.”
This response is ironic.
The fact that this boilerplate has been injected into the responses means there is no real belief that it is “up to individuals to decide”, but rather that the reputational risk of exceeding the bounds of acceptable discourse trumps the freedom of users to apply their minds.
This response is not generated by running the prediction through the model in the usual way. It is built in as a layer of curation on top of the model, performing alterations or censorship after the result is produced. The fact that this layer exists shows how corruptible these models are and how much agenda could be smuggled into them.
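I do not know how OpenAI implements this layer, but as a sketch of the general pattern, a post-processing wrapper can intercept the model’s raw completion and substitute boilerplate whenever the prompt touches a flagged topic. Every name here (generate, FLAGGED_TOPICS, BOILERPLATE) is hypothetical:

```python
# A hypothetical curation layer sitting on top of a language model.
# This does not reflect OpenAI's actual implementation; it only illustrates
# how output can be altered after the prediction is made.

FLAGGED_TOPICS = {"contentious issue a", "contentious issue b"}  # placeholder list

BOILERPLATE = (
    "As an AI language model, I do not hold personal views or opinions "
    "on any topic or issue."
)

def generate(prompt: str) -> str:
    # Stand-in for the underlying model's raw, purely statistical completion.
    return f"<most likely continuation of: {prompt!r}>"

def curated_generate(prompt: str) -> str:
    # The intervention layer: if the prompt matches a flagged topic,
    # the model's own prediction is discarded and replaced.
    if any(topic in prompt.lower() for topic in FLAGGED_TOPICS):
        return BOILERPLATE
    return generate(prompt)

print(curated_generate("Tell me about contentious issue a"))  # returns the boilerplate
print(curated_generate("What is 1 + 1?"))                     # returns the raw completion
```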
It has not been long since the Cambridge Analytica scandal, and large language models introduce new risks of swaying public opinion – risks that are real and larger than before.
The profit incentive is also plain to see, and OpenAI has done very little to disguise just how fast this motive can overwrite the founding ethos of an organisation.
What began as a mission to keep AI transparent and open source – because its power was too great to keep secret – has transformed into a closed system, sold to the highest bidder. This does not bode well for future actors in this space, although it was entirely predictable. Even Elon Musk, one of the founders and early proponents of OpenAI, has publicly distanced himself from what it has become.
The resources required to build these models at scale are likely to reside in the largest and most-funded tech companies whose agendas are almost always financial. The attention economy and the agendas of these businesses need to be regulated as they build more powerful models. AI needs regulation before it gets out of control. These models are going to change the very nature of how we build our views of what is true in the world and the danger of false oracles is increasing.
The original vision of OpenAI remains a gap in society, and it needs to be filled by a collective without any agenda other than free speech and a commitment to “truth”. Perhaps out of this, a WIKI-AI-Mod will be born? Something that is not corrupted by the Overton window or corporate greed.
Author: Timothy Matthis, Managing Partner, Solutions Business Group
Get in touch on LinkedIn