16 Mar 2023
With AI expected to emerge from the hype cycle and become all-pervasive, the ethics of how we use it could arguably become a matter not just for philosophers to debate but for middle market business leaders too. As ethics will cascade through the broader use of AI within a business, the topic is one that needs careful consideration, at least at a high level, in the board room.
Generative AI tools work by scraping vast tracts of data, which means they can learn and replicate biases present in that training data. A good example is the set of images presented in our previous article, ‘Generative AI – Is this a business revolution?’, in which we asked DALL-E to show “a business advisor in London writing an article about Generative AI for a global audience.”
The images are a product of the image bank and the algorithm. If the image bank only contained white, British males then the generated images would almost certainly only represent white, British males. To combat this, training data needs to be as diverse and as unbiased as possible, to prevent existing biases being reinforced and repeated. A key challenge for business leaders incorporating these tools will be to consider the biases in their training data.
Who owns the images above? This is a key area of debate, and it is easy to imagine some near-term challenges. For example, if generative AI is used by an automotive garage to prepare a custom paint job inspired by the album art of a recording artist, who owns the resulting image? It is inevitable that generative AI will create content that is indistinguishable from existing works. If you cannot tell the original work from the AI-generated one, how do you attribute ownership to the correct creator? It is a question of copyright that our legal system will need to evolve to manage.
The images above are not real people, but what would happen if they were used to violate privacy, perpetrate fraud or cause harm? Look closely and you will see some details are far from perfect, but as the technology advances the tools will improve too. Businesses incorporating generative AI technologies need to map out how the tools can be used and will need to put in place control frameworks to protect against privacy breaches.
Boards will need to decide the extent to which they disclose the use of generative AI in their business. A relatively common use of AI at present is the online chatbot tasked with resolving customer-facing matters. If the chatbot becomes as humanlike as ChatGPT, should the business disclose that customers are engaging with a machine and not a human?
If a law firm is asked to engage in a matter, does it need to disclose the extent to which a tool has been used in building its arguments? Even if the use of ‘tools’ is allowed within the engagement letter, does the lawyer need to explain to the client that generative AI tools, rather than hard-working paralegals, have been used in the engagement?
In matters of PR and social media, if a generative tool is used, does this need to be disclosed? If there are mistakes in copy, no doubt a business would rush to blame the tool.
Philosophical debate in the board room
The board room agenda almost certainly does not allow time for an extended debate. In some matters, copyright for example, evolving legislation and regulation may provide a framework that simplifies the debate. In other business practices, for example transparency over social media and PR, society may settle over time on a generally expected, and preferred, approach.
In the meantime, at this relatively early stage of widespread AI adoption, boardrooms will need to be comfortable that they understand the risks posed by ethical matters and that they have implemented effective policies and safeguards to manage them.