Generative AI

Navigating 5 key business risks of generative AI

25 September 2023

Generative AI will undoubtedly reshape elements of the working world and encourage business growth. But, as with any emerging technology, it comes with risks. The business leaders, academics and policymakers who are actively seeking to harness the transformative power of this technology must thoroughly consider and plan for those risks.

Middle market displays cautious optimism

Our survey showed that the middle market has largely embraced generative AI, with 45% of those surveyed saying they use it in at least one area of their business, and 68% of business leaders employing generative AI tools. However, even more employees may be using generative AI tools than business leaders realise: a recent survey by the networking app Fishbowl found that 70% of workers who use ChatGPT don't tell their boss.

It’s clear there’s a balance to be struck if businesses are to make the most of innovative technologies, while minimising risk. Businesses must ensure that safety measures are in place and should be mindful of how and by whom the tools are used.

Let’s look further into the five key risk areas for businesses using or intending to implement generative AI tools, and ways to mitigate them.

1. Data management – safeguard your content

Data is key: it is what fuels AI, so data management is crucial. Generative AI tools have sparked significant concerns about data privacy, primarily because they rely on data scraped from the internet and on user-provided information. Evidencing this, 47% of our survey respondents said they had major concerns around data security and privacy arising from the use of generative AI.

Of course, businesses will face potential issues when entrusting personal or sensitive data to third-party systems they neither own nor control. The terms of use of many generative AI platforms grant the provider broad licences and unrestricted rights over uploaded content. Furthermore, these platforms capture and reuse data to generate future responses, which could effectively diminish users' exclusive rights to their own content.

Therefore, it is crucial for businesses to exercise caution and scrutinise the terms and conditions to safeguard their valuable data. For example, when training AI models to compose emails to clients, remove any identifiable information from the training samples and keep the content as generic as possible. Businesses should also think carefully about subcontractors and any other data owners outside the organisation, and how they may use generative AI tools.
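To make this concrete, here is a minimal sketch of the kind of pre-processing that implies, assuming a few illustrative regular-expression patterns and placeholder labels of our own devising; a production system would use a dedicated PII-detection tool tuned to your own data.

```python
import re

# Illustrative patterns only - a real deployment needs patterns (or a
# dedicated PII-detection library) tuned to its own data.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+(?:\.[\w-]+)+")
PHONE = re.compile(r"\+?\d[\d\s()-]{8,}\d")
GREETING_NAME = re.compile(r"\b(Dear|Hi|Hello)\s+[A-Z][a-z]+\b")

def redact(text: str) -> str:
    """Strip identifiable details before text is shared with a third-party tool."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    text = GREETING_NAME.sub(r"\1 [NAME]", text)  # keep the greeting, drop the name
    return text

sample = "Hi Priya, please call me on +44 7700 900123 or reply to j.smith@example.com."
print(redact(sample))
# -> Hi [NAME], please call me on [PHONE] or reply to [EMAIL].
```

Even with redaction in place, the safest assumption is that anything pasted into a third-party tool may be retained, so the most sensitive material should not be submitted at all.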

2. Workforce composition – upskill and reassure

The AI revolution is also being badged the next industrial revolution. But this time, manual labour seems less susceptible to displacement. Instead, the news is replete with commentary on the potential replacement of jobs in white-collar or office settings, encompassing tasks such as writing, research and photo analysis. One example is coverage of the writers' strike in the US, which has seen 11,000 members of the Writers Guild of America (WGA) put down their pens, citing concerns around generative AI – including writers' intellectual property being used as feeder material to train these tools – as one reason for the industrial action.

While current systems typically complement human workers, the increasing power of AI raises the possibility of full or partial job replacement. For now, businesses should focus on upskilling their workforces to use the technology properly, while reassuring staff that jobs won't disappear overnight. With this issue being both important and complex, we have produced a further article that explores the impact of generative AI on the workforce more thoroughly – read our article on 'Workforce possibilities and implications'.

3. Misinformation and disinformation – verify everything

Generative AI models are trained on data, and if that data includes inaccuracies, these will be reflected in the tool's output.

One example of where this could cause issues is when businesses use chatbots built on generative AI technology instead of human service agents. Businesses must recognise that if the chatbot gives an end user information more akin to advice, and that information affects the end user negatively, the business could potentially be liable.

Large language models, the AIs that power chatbots, can also 'hallucinate': they have no sense of the meaning of what they are saying, only of which words tend to go together, which can produce fluent but inaccurate information. An interesting example of this is when a lawyer used ChatGPT for research purposes, then failed to fact-check the cases cited to support their argument. The chatbot had made up case names, describing their background and how they related to the case at hand – making them seem legitimate and relevant.

Of course, generative AI tools can also simply provide factually incorrect information, as in the example below, where we asked ChatGPT to list countries beginning with the letter 'V'.

[Image: ChatGPT prompt showing incorrect answers. Source: ChatGPT]

It's vital that businesses are aware of these potential inaccuracies and understand that they could face legal liability. Even setting liability aside, the reputational damage from providing inaccurate information is risk enough.

4. Fraud – educate and strengthen authentication

AI plays a significant role in fraud detection, identifying a business's 'normal' and flagging abnormalities as potentially malicious activity. The technology can identify irregularities rapidly and autonomously at the earliest – and most salvageable – stage of an attack.
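To illustrate the idea of learning a business's 'normal' and flagging deviations, here is a minimal, hypothetical sketch using scikit-learn's IsolationForest on two invented transaction features (amount and hour of day); a real detection system would draw on far richer features and live data.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Invented historical data: each row is a transaction described by
# [amount in GBP, hour of day].
rng = np.random.default_rng(0)
history = np.column_stack([
    rng.normal(250, 60, 1000),  # typical payment amounts
    rng.normal(14, 2, 1000),    # mostly during business hours
])

# Learn the business's 'normal' from past activity.
model = IsolationForest(contamination=0.01, random_state=0).fit(history)

# Score new transactions: -1 flags an anomaly for review, 1 looks normal.
new = np.array([[240.0, 13.0],    # routine payment in the afternoon
                [9500.0, 3.0]])   # unusually large payment at 3am
print(model.predict(new))         # expected: [ 1 -1 ]
```

The same pattern scales from toy examples like this to streams of real transactions, where flagged items are typically routed to a human investigator rather than blocked automatically.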

However, bad actors are also increasingly exploiting generative AI tools for malicious activity, including creating convincing emails, setting up fake websites at speed, and scaling up ransomware attacks by automating the time-consuming manual elements of the process.

There's also the use of deepfakes to consider. While the images below, created by free image generators, seem somewhat realistic, their flaws become apparent on closer inspection. However, with better prompts and the ever-increasing performance of the software, these free-to-use tools will undoubtedly become more sophisticated, and quickly.

[Image: people generated by AI. Source: Leonardo.ai and Craiyon.com]

Deepfakes are already being used to impersonate people's voices and request privileged information, account amendments or payment approvals. What's more, as the technology develops, targeted video spoofing will advance too, potentially affecting the virtual workspaces that have become the post-pandemic norm.

Organisations should enhance workforce education around the fraud threat landscape, and rethink their multifactor authentication approaches as biometric data becomes increasingly compromised.

Our article 'Generative AI – fraud friend or foe?' explores the fraud risks this technology brings in more detail.

5. Bias – assess data holistically

Generative AI models can perpetuate biases inherent in their training data, leading to discrimination or unfair outcomes. In employment settings, the use of biased AI tools could result in discrimination claims. Recognising that AI can entrench inequities, and mitigating those biases, should be a priority for businesses.

While bias in AI can be challenging to detect, given the lack of transparency in how these tools reach their conclusions, it is the responsibility of organisations using the technology to actively shape a more equitable world, rather than relying solely on AI tools and past patterns to determine the future. This will involve assessing data inputs and outputs holistically, as sketched below.
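As a simple, hypothetical illustration of that kind of holistic check, the sketch below compares an AI screening tool's recommendation rates across two groups and applies the widely used 'four-fifths' rule of thumb; the group labels and outcomes are invented for the example.

```python
from collections import Counter

# Invented audit log of an AI screening tool's outputs:
# (group label, whether the tool recommended the applicant).
results = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals = Counter(group for group, _ in results)
recommended = Counter(group for group, ok in results if ok)
rates = {group: recommended[group] / totals[group] for group in totals}
print(rates)  # -> {'group_a': 0.75, 'group_b': 0.25}

# Four-fifths rule of thumb: a selection rate below 80% of the highest
# group's rate is a signal worth investigating, not proof of bias.
benchmark = max(rates.values())
flagged = [g for g, r in rates.items() if r < 0.8 * benchmark]
print("Review needed for:", flagged)  # -> ['group_b']
```

A check like this only surfaces a disparity; understanding why it exists, and whether the data or the model is responsible, still requires human judgement.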

As generative AI continues to develop, and businesses become more familiar with its risks and limitations, it may become easier to safeguard against them.

In the meantime, it’s vital that businesses continuously monitor and evaluate the output of generative AI tools, as well as the results of using them. Identifying and addressing ethical and accuracy concerns, and staying abreast of regulations, particularly the UK digital transformation roadmap and the UK national AI strategy, will be key.
