Generative AI

Is generative AI a transformational technology?

25 September 2023

The hype and anticipation surrounding generative AI and its transformative potential has become increasingly palpable since the launch of ChatGPT in November 2022 – and will only grow as the underlying models are enhanced and new ways are found to embed the technology.

Businesses have begun implementing these tools into their processes and considering how best to harness their power to boost productivity, while remaining concerned about security and privacy. Generative AI is being described as the huge technological step forward that we’ve all been waiting for in artificial intelligence – one that’s set to revolutionise the way we all work. But is that really the case?

We believe the answer will ultimately be yes, but that patience is required while generative AI capabilities develop in the months ahead.

Generative AI tools, including ChatGPT, are user-friendly, requiring no technical background to use. In fact, many of our survey respondents (70%) felt their knowledge of generative AI was good enough to confidently explain it to someone else, indicating that these tools are so intuitive that a broad baseline understanding has already been achieved. There’s no need for software installation or extensive user training, making adoption much quicker than for many other technologies.

The progress made in the natural language generative ability of AI over the past two years has been astonishing, and a tipping point has undoubtedly been reached, pushing it from the research lab to widespread global adoption. Still, the generative AI tools now gaining such massive publicity are better understood as the culmination of many small steps in AI model refinement.

But the accumulation of these stepwise improvements (coupled with Microsoft’s investment in OpenAI) has catapulted generative AI from limited applicability to mass take-up. The so-called Turing test has been passed, and the better-quality generative AI language output is now indistinguishable from human-written content.

From public tools to proprietary AI models

So, while everyone can, in theory, use generative AI tools, and many businesses are – 45% of respondents say they are actively using generative AI in one or more areas of their business, with a further 37% experimenting with the tools – how can businesses maximise their potential to create measurable change?

The first step is to encourage safe trialling of the free tools, such as ChatGPT, underpinned by an appropriate generative AI policy. Good practice balances trialling the tools with clear and understandable safeguards, so that staff are educated in the risks of using AI outputs without proper review, as well as the privacy and data security risks that come with using the free tools.

At RSM we have quickly embraced Bing Chat Enterprise as a secure generative AI tool because of the protection it accords to the contents of the user prompts. For organisations with access to this tool, this could be a ready way to encourage adoption securely.

Arguably the bigger benefits from generative AI come when businesses have the skills and confidence to begin leveraging foundation models (such as OpenAI’s GPT models or Meta’s Llama) by training them on their own data and building proprietary generative AI tools. Several financial services institutions and consultancies have gone down this route, for example to give their staff natural language query access to their collective knowledge base.
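To illustrate what such a proprietary tool might look like in its simplest form, the sketch below shows a natural language query answered over a small internal knowledge base: a few internal documents are ranked by keyword overlap with the question, and the best matches are passed to a foundation model as context. The document store, model name and use of the OpenAI Python SDK are all illustrative assumptions, not a description of any specific implementation.

```python
# Minimal sketch: natural-language query over an internal knowledge base.
# Assumptions: the OpenAI Python SDK (openai>=1.0) is installed, OPENAI_API_KEY
# is set, and the documents and model name below are illustrative only.
from openai import OpenAI

# Hypothetical internal knowledge base (in practice, a document or vector store).
DOCUMENTS = {
    "leave-policy": "Staff may carry over up to five days of annual leave.",
    "expenses": "Client travel is reimbursed at standard mileage rates.",
    "ai-policy": "Confidential data must not be entered into public AI tools.",
}

def retrieve(question: str, top_k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the question."""
    terms = set(question.lower().split())
    ranked = sorted(
        DOCUMENTS.values(),
        key=lambda doc: len(terms & set(doc.lower().split())),
        reverse=True,
    )
    return ranked[:top_k]

def answer(question: str) -> str:
    """Send the retrieved context plus the question to a foundation model."""
    context = "\n".join(retrieve(question))
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative model choice
        messages=[
            {"role": "system", "content": "Answer only from the context provided."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(answer("Can I enter client data into ChatGPT?"))
```

In production the keyword ranking would typically be replaced by semantic (embedding-based) retrieval over the organisation’s own data, which is where the data science effort described below comes in.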

This of course requires pretty advanced data science skills, although these can increasingly be sourced externally through a large number of specialist AI consultancies. What is key is to identify an appropriate use case, which means finding a business process where there is sufficient historical data and where the productivity gains are worth the effort. Proprietary AI models, after all, require a good deal of support and maintenance once built, with the discipline of MLOps becoming more commonplace as a way to bring together, under one umbrella within the organisation, the skills required to develop and maintain sophisticated proprietary AI models.

Risks and guardrails

Despite the huge potential productivity gain from successful deployment of generative AI, many businesses rightly see significant risk attached – both to their business model and to their data.

When asked to what extent they felt threatened by generative AI, 23% of respondents in our survey said ‘to a great extent’, with a further 40% saying ‘to some extent’. The threat was principally felt to come from potential new AI-enabled competitors challenging their business models (45%).

Many businesses (42%) are also concerned about the increased cyber security risks that come with generative AI. These risks are real unless safeguards are put in place to prevent data entered into AI tools from seeping into the public domain or ending up in the wrong hands within the business. Those safeguards need to combine end-user training, generative AI policies and centrally administered technical guardrails.

Nonetheless, in RSM’s view the bigger risk for businesses is to ignore or sideline generative AI. Our research confirms that generative AI has the potential to be a truly transformative technology innovation. Those businesses which fail to embrace it safely are most likely to see their market share challenged by established players who do, or new entrants who have built their business models around the adoption of AI from the ground up.

Businesses operating in the education, publishing and advertising sectors, for example, whose primary outputs are text or imagery, could have their business models fundamentally challenged by the deployment of generative AI within their customers’ organisations. Take advertising: if anyone with access to generative AI can produce good-quality campaign material, some agencies’ services could be left redundant.

Of course, it’s not as clear cut as that. A generative AI tool cannot complete work that requires a more human touch, such as running focus groups, and in some respects it isn’t ‘truly’ creative (yet). However, it’s imperative that the businesses most likely to be affected by AI harness its power to their own benefit, educating their workforce to use the tools to enhance productivity and reduce timescales, allowing them to stay competitive.

Leaving room for innovation

An optimal approach to deploying generative AI balances driving adoption ‘at the coal face’ with central control over security and privacy. In RSM’s experience, controlled trial groups are the best way to define the multiple use cases for generative AI across an enterprise and to distil the learnings into organisation-wide training, tips and tricks, and experiential learning. Your staff want to try these tools! And if you don’t permit it, they will likely try them anyway, but without control.

In short, generative AI certainly has the potential to transform our ways of working – but it’s how we leverage it that will make the biggest difference. And patience in the coming months will be rewarded as the models become more powerful and software and data suppliers themselves move swiftly to incorporate AI into their service provision, removing the need for you to apply it yourself within those domains.
