Businesses continue to invest in data analytics, despite a growing emphasis on artificial intelligence (AI).
Business intelligence and data analytics projects continue to offer the prospect of more efficient and effective operations in both the commercial and public service sectors, with organisations looking to derive more value from their data.
But this is going hand in hand with growing awareness of the potential for AI, and a willingness to experiment with generative AI (GenAI) tools, and large language models (LLMs) in particular.
This was very much in evidence at this month’s Tech Show London, where one speaker – Prudence Leung, a data scientist at Compare the Market – described her firm’s approach to AI as “cautious but curious”.
The insurance comparison site is just one organisation investing in GenAI tools and putting them in the hands of business users.
Effective GenAI projects, though, require extensive groundwork by organisations. Much of this – such as the need for clean and accurate data – will be familiar to anyone who has worked on large-scale analytics and business intelligence projects.
However, AI brings its own challenges, including ethical and copyright considerations. Firms also need to develop working methods and invest in prompt engineering – creating catalogues of GenAI prompts – to get the most out of the technology.
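None of the companies detailed what their prompt catalogues look like, but in practice a catalogue can be as simple as a set of named, parameterised templates that teams review, version and reuse. A minimal Python sketch, with purely illustrative template names and wording:

```python
# Minimal sketch of a prompt catalogue: named, parameterised templates
# that can be versioned, reviewed and reused across teams.
# The template names and wording are illustrative assumptions.

from string import Template

PROMPT_CATALOGUE = {
    "summarise_policy": Template(
        "Summarise the following insurance policy document in plain English, "
        "highlighting exclusions and renewal dates:\n\n$document"
    ),
    "draft_customer_reply": Template(
        "Draft a polite reply to this customer query, using only the facts "
        "provided and flagging anything that needs human review:\n\n$query"
    ),
}

def render_prompt(name: str, **fields: str) -> str:
    """Look up a catalogued prompt by name and fill in its placeholders."""
    return PROMPT_CATALOGUE[name].substitute(**fields)

if __name__ == "__main__":
    print(render_prompt("summarise_policy", document="<policy text here>"))
```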
Bridging the gap
One driver for adopting AI, especially GenAI, is its potential to bridge the gap between an enterprise’s data assets and the people who need to interact with them and use them to support decisions.
“Over my career, technology has become more human, with laptops, different OSes, and smartphones,” said John (JK) Kundert, chief product and technology officer at The Financial Times. “Things will only move faster and in one direction, which is a more human interaction with tech. But customers are what we return to: every business has a customer … ChatGPT-type experiences will create a different type of interaction with customers.”
Some of the early use cases for GenAI have been “chatbots” and other applications that give customers a more natural way to interact with an organisation, either replacing less intelligent systems, such as interactive voice response (IVR) technologies, or reducing the number of calls routed through to a human operator.
Organisations in sectors such as insurance are already using chatbots for tasks such as summarising insurance documents and helping customers find the right policies.
Compare the Market first launched AutoSergei, a robotic meerkat character, in adverts back in 2018, well before the GenAI boom. The character reminded customers to renew policies, but was not a true AI tool. The company is now working on a proof of concept for a more advanced AI platform that will undertake content creation tasks.
Translation tools
Other firms are going further still. According to Andy Caddy, group chief information officer at PureGym, GenAI combined with translation tools offers potential efficiencies for a company that now operates in six countries.
But PureGym is also looking at how computer vision could be used to help monitor its gyms outside staffed hours. In the future, this could allow the business to operate smaller sites. “We could look to smaller gyms in smaller towns without people,” he said. “The tech can open that up for us.”
Caddy concedes that there are, as yet, only “a handful” of case studies of effective AI. “A lot of this is about timing,” he said. “When do you jump in, and when do you wait for a partner to do it?” He predicts that AI will be used where it can help the business to scale up.
AI literacy
At the Financial Times, staff across the business are already being encouraged to try out GenAI. “We have a strategy to make every employee AI literate. Going beyond that, we are talking about AI fluency,” said JK.
The business has deployed Google’s Gemini and Duet, and an enterprise version of OpenAI’s ChatGPT. “We’ve said, ‘we want you to play,’ and blend AI literacy and empowering the workforce to use these tools,” he said.
Use of AI at the Financial Times, though, is tied to strict governance processes. All the companies willing to discuss their AI projects stressed the need for governance, ethics and legal compliance, as well as the need to respect copyright and avoid bias in data.
At business advisory firm EY, for example, the use of an enterprise instance of ChatGPT is overseen by the chief ethicist, and the company ratifies prompts before they can be used with its EYQ large language model, according to Catriona Campbell, the firm’s chief technology and information officer.
Rob Spence, senior data scientist at Compare the Market, said: “Governance can be challenging. There are ethical considerations and copyright.”
There are information security issues, as well as legal considerations and limits on how internal data can be used with third-party LLMs. “Mitigations are [using] public data, non-business sensitive data and even synthetic data,” he said.
Another factor is the way GenAI, or LLMs, operate. “LLMs are a very uncertain black box,” said Javier Mora Jimenez, another data scientist working on the Compare the Market proof of concept.
Given the vast range of possible configurations for AI systems, Compare the Market has set up a gateway to both control prompts and ensure the quality of the information the system outputs. The company is also looking at how it can integrate AI into its other business applications.
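Compare the Market did not describe the internals of its gateway, but conceptually such a layer sits between users and the model, letting through only approved prompts and applying quality checks to what comes back. A rough Python sketch under those assumptions; the approval list, the checks and the call_llm stub are all hypothetical:

```python
# Hypothetical sketch of a prompt gateway: only approved prompt templates
# pass through, and model responses are checked before being returned.
# The approval list, checks and call_llm stub are illustrative assumptions.

APPROVED_PROMPTS = {"summarise_policy", "draft_customer_reply"}

def call_llm(prompt: str) -> str:
    """Stand-in for a call to whichever LLM service is in use."""
    return "...model response..."

def passes_quality_checks(response: str) -> bool:
    """Very basic output checks; a real system would go much further."""
    too_short = len(response.strip()) < 20
    contains_refusal = "I cannot" in response
    return not (too_short or contains_refusal)

def gateway(prompt_name: str, prompt_text: str) -> str:
    """Allow only approved prompts and reject low-quality responses."""
    if prompt_name not in APPROVED_PROMPTS:
        raise PermissionError(f"Prompt '{prompt_name}' has not been approved")
    response = call_llm(prompt_text)
    if not passes_quality_checks(response):
        raise ValueError("Response failed quality checks; route to human review")
    return response
```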
The idea of AI as a technology that can extend into a wide range of business processes is a common theme among chief information officers and chief technology officers who have started to deploy the tools. At the Financial Times, JK points to the need to protect the news organisation’s reputation while allowing greater personalisation of content and better access to the newspaper’s historic archive – in itself, a unique data store. “The FT generates content and sells it, so generative AI has the potential to be disruptive to us,” said JK. “But there is the unique value the paper creates, so how do we use it [AI] to help find stories or to mine data?”
The newspaper has archives going back 135 years, but much of the value in that content is hard to unlock. “We have this amazing archive, but it is hard to access it,” he continued. “Discoverability is low. We are beginning to look at generative AI technology [as a way] to understand the content. If you understand the content, you can answer questions with high relevance. Not with answers based on a generic model, but on the FT’s body of content.”
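The FT has not said how it will implement this, but the general pattern for answering questions from an organisation’s own content, often called retrieval-augmented generation, is to retrieve relevant passages from the archive and instruct the model to answer only from that material. An illustrative Python sketch in which search_archive and call_llm are placeholder functions:

```python
# Illustrative retrieval-augmented generation sketch: answer questions from
# an organisation's own archive rather than from a generic model alone.
# search_archive and call_llm are hypothetical placeholders.

def search_archive(question: str, top_k: int = 3) -> list[str]:
    """Return the most relevant archive passages (e.g. from a search index)."""
    return ["<relevant archive passage 1>", "<relevant archive passage 2>"]

def call_llm(prompt: str) -> str:
    """Stand-in for the LLM call."""
    return "...answer grounded in the supplied passages..."

def answer_from_archive(question: str) -> str:
    """Build a prompt that restricts the model to the retrieved passages."""
    passages = search_archive(question)
    context = "\n\n".join(passages)
    prompt = (
        "Answer the question using only the passages below. "
        "If the passages do not contain the answer, say so.\n\n"
        f"Passages:\n{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)
```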
Moves such as the FT’s are part of a wider trend for businesses to look at GenAI as a way to unlock the value in their own data, rather than relying on the public internet as a source of training data for AI systems.
This is also an area where IT suppliers, including Oracle and Google, are making investments. By linking GenAI to corporate data, firms hope to produce more relevant results, improve decision making and safeguard their intellectual property. But it is also about understanding the technology’s limits.
“It is really important to understand up-front what these models can and cannot do,” cautioned Compare the Market’s Spence. “They can do a lot, but can’t do everything. It’s important to educate ourselves on what we can and cannot do.”