OpenAI’s GPT, like Google’s PaLM 2 and Meta’s LLaMA, is an example of a large language model (LLM). It’s an advanced artificial intelligence system designed to understand and generate human-like language. At great expense and resource, these models have been ‘trained’ on vast amounts of data to perform tasks - like answering questions, generating text, translating text and more.
For example, GPT-4, which was released in March 2023, is widely reported to have been trained on significantly more data than its predecessor, GPT- , which was trained on 45 terabytes of text data.
This enables it to understand and create contextually appropriate responses - making it valuable for a variety of applications like virtual assistants and content generation.
Among the consumer-facing generative AI applications that have been launched, ChatGPT, developed by OpenAI, is trained on the GPT model, whilst Google Bard is trained on the PalM 2 model. Both are particularly suited to language processing and can generate text, images or audio in response to specific prompts.
However, despite their enormous ‘knowledge’, the results that LLMs provide will only ever be based on their pre- trained models. For example, ChatGPT’s training data only includes sources up until September 2021. If you asked it to name the current Prime Minister, it would still believe Boris Johnson is in post – although it will caveat that with a line about verifying this with up-to-date sources.
What’s more, most do not have access to real time information, so they are unable to tell you the current time.
So, how can you trust what GPT tells you?
There are always challenges when it comes to the adoption of revolutionary new technologies. And one of the most important considerations when it comes to deploying generative AI is its ability to tell the truth. To put it simply. LLMs provide information, but they're incapable of deciphering what is true and what is not. Any organisation needs to trust that the response they are giving to the end user is accurate. So, when it comes to generative AI applications there needs to be a process for testing the truthfulness, appropriateness, and bias of the information.
For example, if a consumer uses a generative AI chatbot, hosted on a retailer’s website, to ask about how to self- harm, the response can’t be a fact-based response that provides the person with advice on how to cause harm to themselves. While the response is truthful, it would be wholly inappropriate and clearly create several issues for the organisation.
So how can GPT be useful to an organisation?
"It is important to recognise that when we talk about how an organisation can deploy generative AI, we are not talking about simply deploying ChatGPT or Bard".
These examples demonstrate the limitations of consumer-facing generative AI applications in providing valuable and accurate responses. While they may be valuable to the public, they can be damaging when it comes to decision making. And these models will only ‘learn’ more once the developer has invested in retraining them, or other data sources have been embedded or indexed.
So, it is important to recognise that when we talk about how an organisation can deploy generative AI, we are not talking about simply deploying ChatGPT or Bard. In almost all circumstances we’re also not talking about creating brand new models and training them on new data – something that would be cost-prohibitive to most organisations. As the leading Solutions Integrator, we are harnessing the power of existing LLMs, like GPT-4 (its latest model), and embedding indexed enterprise data to deliver more reliable, trustworthy and accurate responses that meet an organisation’s demands.
In the case of the user that asked about self-harm. Indexing the retailer’s own data and embedding it in a LLM would lead to a very
different outcome. It could mean that the question is intercepted immediately and flagged to the police or a list of help line phone numbers could be provided. Rather than the fact-based answer that the model learnt during its training.
At this point it is prudent to touch upon data security, because if you are embedding your valuable data into a LLM you need to know it is secure. We’ve already heard stories of employees using ChatGPT to check coding and, in doing so, unwittingly giving away corporate IP, which is why using these consumer-facing versions is simply not advisable. Running your data parallel to the model will enable you to keep your IP secure, while delivering a much more accurate response from the application.
How Insight can help
Insight can work with organisations to help them understand the advantages of incorporating generative AI into their organisation safely. We are already delivering customer workshops to discuss many of the issues raised in this article; we can also work directly with individual, existing or potential clients to help them navigate the generative AI maze effectively.
We do that in the first instance by hosting a one-day workshop, which is typically comprised of four sessions. The first helps the attendees understand generative AI, its capabilities and challenges, and the risks of using it. The second looks at the technicalities of integrating it into existing systems, the security and reliability of that and how it will be costed.
The third step looks at typical use cases for the public sector and helps organisations to define specific uses, outcomes and the potential return on investment. And, finally, the fourth step discusses and creates a roadmap detailing how the AI might be implemented and used within the organisation. This then leads to the AI accelerator step, which runs for four weeks and sees Insight implement the different uses identified alongside the organisation, followed by a managed service approach, if required.
Of course, generative AI is still in its infancy and organisations are only just beginning to consider how it might be useful beyond simple, online customer service requests.
Insight can work with your organisation to uncover the opportunities and bring them to life in a secure, safe and ethical way.
Learn more about how to get your organisation AI ready.