As businesses and individuals continue to embrace artificial intelligence (AI) tools in everyday life, it is vitally important to understand the legal risks of using them. Tools such as ChatGPT, Google Bard and Adobe Firefly can improve efficiency – but can their output pose a risk to your business? This article explores some of the common risks of using AI tools in your business, together with recommended actions to combat those risks.
AI Output
The main objective when using AI tools is to generate information or data (known as the output) in a matter of seconds, when it would otherwise take minutes, hours or even months. However, this raises several questions: who owns the output, and how can it be used?
The general position in law is that, when it comes to computer-generated works, the person instructing the AI tool is the owner of the output. ChatGPT even echoes this position in its terms of use. If you are the individual instructing the AI tool, this is good news on the face of it. However, if you are a business and have engaged a third party to instruct AI tools on your behalf, you should ensure there is a contract in place assigning ownership of the output to you, or otherwise licensing it to you.
If you are the owner of the output, either because you are the prompter who generated the information or because the intellectual property rights in the output have been assigned to you, there is a risk of inheriting infringing material. This is because AI tools are trained on vast amounts of data, often including copyright-protected works. Under UK law, however, scraping or mining copyright-protected works for commercial purposes, including to train AI tools, is not permitted without a licence.
You could find yourself on the receiving end of a copyright infringement claim if you exploit AI-generated output which infringes the copyright of another. In January 2023, Getty Images brought a claim against Stability AI for using its copyright works and trade marks. Whilst this case is still in progress and is directed at an AI developer, a risk remains for users of an AI tool who take ownership of infringing output and subsequently exploit it.
Hallucinations and Inaccuracies
The Cambridge Dictionary recently named ‘hallucinate’ its Word of the Year for 2023, reflecting the surge of interest in artificial intelligence and recognising one of the most prominent risks of its use. When an AI tool hallucinates, it produces false information – often rather convincingly.
ChatGPT has been known to do this, particularly as its base model was only trained on data up to 2021. In the US, lawyers have been sanctioned for using ChatGPT to generate case citations which turned out to be fake, referring to judicial decisions that did not exist. As a user of an AI tool, you should therefore be cautious about relying on the output as the absolute truth, as this could lead to significant errors in your work. To get the most out of an AI tool whilst reducing its risks, use it to enhance efficiency in your business rather than relying on it as a sole source of information.
Data Security
Outside of infringing output and false information, AI tools have also been known to leak data and share information with third parties. For example, in April 2023 Samsung banned its employees from using ChatGPT after it was found that sensitive company information had been uploaded to the platform. In Italy, regulators temporarily banned ChatGPT across the country due to concerns over how the platform processed and stored user data (see our article on Data Protection and Generative AI written by Data Protection Specialist Max Miliffe for more information).
Amidst the concern over data security, the UK Government has added artificial intelligence to the list of business sectors that pose a risk to national security under the National Security and Investment Act 2021. As such, if an individual or entity in the UK is looking to acquire a business that incorporates artificial intelligence (subject to a two-part test), the onus is on the acquirer to notify the Government of its buying intentions.
If you are looking to use AI tools, whether for personal or business use, avoid inputting personal data or sensitive commercial information. For your business, ensure you also have the correct policies and staff training in place to mitigate these risks. An AI policy can guide your employees and help build a culture of appropriate usage. We have a dedicated AI team who can support in drafting such a policy for your business.
The AI Act
At present, EU countries are discussing the implementation of a world-first piece of legislation dedicated to governing AI, with the European Parliament aiming to reach a decision by the end of 2023. Whilst this legislation has no direct application in UK law following Brexit, it could still impact UK businesses. The passing of the Act could also trigger a domino effect of worldwide action to regulate and govern the use of AI.
For more information on how the AI Act could affect UK businesses, please read our dedicated article.
Practical Steps Your Business Can Take
So, what practical steps can you put in place to combat the risks of AI tools?
Here are some examples:
- Putting in place an AI policy for your business, covering both staff use and procurement
- Reviewing your terms of business
- Checking the end user licence agreements of the AI tools you use
- Carrying out audits and risk assessments
- Reviewing contracts with providers and software developers
Our Intellectual Property team at Stephens Scown are well equipped to draft bespoke and robust agreements and policies that will clearly outline your IP and AI position to ensure you are best protected. We can also provide specific advice for you to consider as you look to develop your business. If you would like to learn more, please email ip@stephens-scown.co.uk or call us on 0345 450 5558.
This article was co-written by Joey Medway (Paralegal) and Amy Ralston (Associate) in the Intellectual Property, Data Protection and Technology team.