With artificial intelligence (AI) becoming a growing topic of interest, so too have deepfakes.
What is deepfake?
A deepfake is an AI-generated video or audio recording which has been altered to make a person appear to be someone else, or to say or do something they never did. As the name suggests, it’s fake, but it can be extremely realistic.
You may have already seen deepfakes in your day-to-day life. Since 2020, Channel 4 have aired an ‘alternative Christmas message’ which uses deepfake technology to depict the monarch ‘addressing the nation’. ITV also have a programme called ‘Deepfake Neighbour Wars’, which depicts high-profile celebrities in various comedy sketches.
What are the privacy risks of deepfake?
In July, Martin Lewis (the founder of MoneySavingExpert.com) expressed concern upon discovering that a deepfake scam depicting him was circulating on social media. The scam advert included a lifelike recording of Martin Lewis speaking on a video conferencing platform, urging people to invest in a new project that was alleged to have been backed by the billionaire Elon Musk. The advert has since been taken down, but it is likely to have exposed people to financial harm even during its short run. The scam also demonstrates that, in the wrong hands, AI can be used for malicious purposes.
One trend in recent years is the use of biometric data to streamline and strengthen cyber security measures. Most notably, mobile phones and even banking apps commonly use facial recognition in place of a password. There is a risk that deepfakes become so realistic that they could defeat these types of biometric security measures, allowing someone with malicious intent to use a deepfake to access personal and sensitive information.
Is artificial intelligence (AI) regulated?
Insofar as AI (including deepfake technology) uses personal data, it is regulated by the UK’s data protection laws (namely, the UK GDPR and the Data Protection Act 2018). Parliament is currently reviewing new data protection legislation, known as the Data Protection and Digital Information Bill (DPDI Bill). In its current form, the DPDI Bill does not specifically mention artificial intelligence, although it would continue to regulate AI’s use of personal data.
On 14 June 2023, the European Parliament approved its version of the draft EU Artificial Intelligence Act. Whilst this does not have a direct impact on the UK, it may pave the way for comprehensive legislation to be introduced – and it will no doubt affect UK-based businesses that operate in the EU. Earlier this year, the UK government published a white paper outlining its plans for the regulation of artificial intelligence in the UK. The white paper claims that regulation will proactively support innovation and increase public trust in these technologies.
The need for AI regulation is further fuelled by the growing popularity of generative AI. Platforms such as ChatGPT, Microsoft Bing Chat and Google Bard are becoming increasingly integrated into people’s everyday lives. For businesses, it is important to remember that any use of generative AI should take data protection into account.
In other areas of law, such as intellectual property, legislation like the Copyright, Designs and Patents Act 1988 applies to AI tools. From input to output, there are considerations that businesses need to make when building or otherwise engaging with these tools.