As generative AI goes mainstream, the Online Safety Bill needs a tougher stance to protect users

08 February 2023

As Microsoft launches its new AI-powered Bing and Google unveils its Bard chatbot to rival the Microsoft-backed ChatGPT, the long-awaited Online Safety Bill is still slowly making its way through parliament. RSM UK says now is the time to consider whether the Bill goes far enough to protect users from harmful and illegal content and scams.

Erin Sims, Senior Analyst at RSM UK, says:

‘The Online Safety Bill has changed substantially since it was first mooted in 2019. As the Bill passes through the House of Lords, there is now an opportunity to ensure it is robust enough to capture the additional challenges generative artificial intelligence presents. Generative AI will disrupt the way people currently engage with search engines and social media as the technology becomes mainstream. The internet is currently rife with illegal content, including fraudulent ads and harmful material, and the algorithms and other processes currently applied by search engines will need a step change to keep pace with the demands of generative AI.’

Erin points out that an AI product that harvests illegal content could also be catastrophic for its maker: ‘Having ethical AI is not only important to protect consumers and ensure companies abide by the law; it’s also integral to the successful roll-out of any new AI product.’

The Online Safety Bill aims to make the UK the ‘safest place in the world to be online’ and seeks to introduce new laws to protect users in the UK from illegal online search results, eg content relating to terrorism, child sexual exploitation or self-harm. The Bill requires firms to take down illegal search results, and anything that breaches their own terms of service, and also provide tools enabling people to exercise more choice over what content they engage with.

This has implications for big social media firms such as Facebook, YouTube and Twitter, whose sites host user-generated content and allow UK users to communicate through messaging, comments or forums. It will also impact search engines such as Google and Bing.

These platforms will be required to remove illegal material to prevent users from seeing harmful content and to protect them from online scams. The Bill also requires large social media platforms and search engines to maintain ‘proportionate systems’ to prevent fraudulent adverts being hosted on their sites. With a clear correlation between the increased use of these platforms and a rise in the number of scams, it’s important firms act to protect their users.

In January, Microsoft announced its investment in OpenAI, the maker of ChatGPT, a chatbot launched in November 2022 that has been hitting headlines ever since. From a simple prompt, generative artificial intelligence can learn from big data and create text, images, music and code. Whilst ChatGPT is not the first of its kind, it has bulldozed its way into the mainstream, reaching a million users in just five days. This compares with 2.5 months for Instagram, ten months for Facebook and almost four years for Netflix. Microsoft has continued to invest in the technology, integrating AI into Teams Premium. Microsoft’s next step will be to integrate generative AI (GPT-4) into its search engine, Bing.

This week, Google announced its own chatbot, ‘Bard’. With ChatGPT banned in China, Baidu, China’s leading search engine, is looking to fill the void by developing its own generative AI and embedding it into its search engine services.

Erin Sims concludes:

‘Generative AI is machine learning – it learns from its users and the mass of data it is trained on – so any product will need to identify errors and correct them before they proliferate. A fine balance will also need to be struck when it comes to biases. Decades on from AI’s inception, there are still no standardised practices for recording where all the data comes from or how it was acquired, let alone for controlling any biases the technology may form. We may need much wider legislative changes in future to address transparency in AI.’