This Finally AI Guidance Note provides a detailed summary of the current AI landscape and offers insight into the legal considerations of using AI.
Last updated: November 2024 (Version 1.0)
Section 1: Introduction
1. Purpose
This AI Guidance Note (Guidance Note) has been created as a guide for Finally Agency (Finally or you) to use in circumstances where:
- Finally is using AI in connection with client work.
- Finally is contemplating the use of AI for business purposes.
The objective of this Guidance Note is to present you with accurate information about the latest legal developments relating to AI (as of the date of the Guidance Note) and how these developments fit into the current established legal areas of intellectual property (IP), contract law (B2B Contracts) and data protection (Personal Data).
This Guidance Note will mainly focus on the legal developments affecting English law. However, because the legal implications of AI can affect a UK business servicing clients outside of the UK, this Guidance Note will also refer to some legal developments outside of the UK. Nevertheless, the purpose of the Guidance Note is to explain how the use of AI is being governed under existing English law. Given the services that Finally provides, this Guidance Note mainly looks at the legal considerations when incorporating outputs from generative AI into your services.
Please note that terms have been italicised to draw your attention to them. Also, the names of institutions, organisations and legislation (including their acronyms) placed in bold and orange are detailed in the footnotes.
It is our intention that the contents of this Guidance Note provide you with an understanding of what to consider in relation to AI.
2. Disclaimer
As detailed in Section 1.1, this Guidance Note does not constitute legal advice on using AI, any other area of English law or the laws of another legal jurisdiction. Finally acknowledges that the contents of the Guidance Note have been provided to give an understanding of the possible considerations an agency may want to take into account when using AI. We would recommend that you request specific legal advice before making a decision about implementing or using AI for business purposes.
Additionally, Finally understands that any commercial or legal advice required to help manage its risks whilst using AI will be specific to the relevant circumstances.
Section 2: Current legal landscape
3. What is AI?
As of the date of this Guidance Note, there is no one definition of AI. However, both the OECD and the EU Commission focus on the concept of AI operating with some level of autonomy. Additionally, the idea of AI having a level of autonomy is central to a business managing its risks and it is included in the definition of AI in Finally’s AI Policy.
4. AI definition from the AI Policy:
Technology that operates with a certain level of autonomy which is based on machine learning and/or human provided data and inputs to develop its own outputs (such as content, recommendations or decisions).
5. Legal developments within the EU
As Finally has partners and clients within the EU, we thought it would be helpful to provide a high-level outline of the recent AI developments within the EU.
6. The EU AI Act
The EU AI Act (Act) came into force on 1 August 2024. The enforcement of most of its provisions is due to commence on 2 August 2026. The Act takes a risk-based approach to regulating AI systems and places obligations on developers, owners and users of such systems. The Act is considered one of the first pieces of AI legislation in the world.
A UK company providing services within the EU involving the use of an AI system can still be liable under the Act. The Act defines the user of an AI system as anyone that uses an AI system or tool for commercial purposes (a deployer). Please note that the concept of commercial use in the Act appears wide, as an AI system must be used in a purely personal, non-professional activity for the Act not to apply.
Whilst the Act prohibits certain uses of an AI system (such as the use of subliminal techniques), it is unlikely that Finally will require such use of an AI system to provide its services. However, as using an AI system as prohibited could attract the highest tier of fines (with a maximum fine of €35 million or 7% of a company’s total annual turnover for the preceding financial year), we recommend obtaining legal advice before using an AI system in connection with the services for any EU clients.
The Act details categories of AI systems that pose a high risk of negatively impacting individuals' fundamental rights. These eight categories are listed below:
- Remote biometric identification systems.
- Critical infrastructure (essential national infrastructure).
- Education.
- Recruitment and employment.
- Access to essential services (both public and private).
- Law enforcement.
- Immigration and asylum.
- Administration of justice and democratic processes.
We appreciate that Finally is unlikely to use an AI system to provide services within the EU for any of the above categories or for profiling individuals. Therefore, we think it unlikely that you will be using AI systems that have been categorised as high risk under the Act. However, as fines for non-compliance in connection with high risk AI systems can reach a maximum of €7.5 million or 1% of a company's total annual turnover for the preceding financial year, we recommend obtaining legal advice if you have any concerns about using an AI system within the EU.
7. The Act and generative AI
The developers of generative AI systems such as ChatGPT will need to comply with the requirements under the Act. Although generative AI systems aren't classified as high risk, they must comply with both the transparency requirements and EU copyright law. These include:
- Complying with the transparency requirements detailed in Article 52 of the Act.
- Ensuring that the generative AI system doesn’t contravene EU law (e.g., copyright law).
- Documenting and detailing publicly a summary of the use of copyrighted training data.
We recommend that before Finally incorporates any output from a generative AI system into work for EU clients, it reviews the terms of the developer of the relevant system to ensure that the developer is compliant with the requirements of the Act.
8. Legal developments within the UK
As of the date of the Guidance Note, the UK Government has not passed legislation equivalent to the EU AI Act. In the King's Speech dated 17 July 2024, the UK Government presented the Digital Information and Smart Data Bill (the previous Data Protection and Digital Information Bill failed to become law). However, as the Digital Information and Smart Data Bill doesn't propose any regulation of AI, the use of AI is currently being regulated under existing legislation supplemented by new guidance.
The UK Government (via the Department for Science, Innovation and Technology) has taken a more pro-innovation approach and confirmed the below principles for regulating the use of AI in any sector:
- Safety, security and robustness.
- Appropriate transparency and explainability.
- Fairness.
- Accountability and governance.
- Contestability and redress.
The regulators of the different sectors within the UK will be expected to apply these AI principles alongside their existing rules and regulations. This may involve the creation of new guidance notes to assist companies in understanding best practice within their respective areas. Consequently, the legal development of AI within the UK may differ between sectors. We discuss these legal developments in detail below.
9. AI and the legal developments in IP
The use of AI (especially generative AI) for commercial purposes has given rise to claims of IP infringement. This has mainly resulted from the unauthorised use of IP (such as images, videos, or other content) by an AI system without the IP owner's consent. Such infringing IP could have been unlawfully incorporated as training data (via web scraping) or it may have been inputted by a user. The main issues that arise are:
- Who is the owner of the “new IP” that is created using the infringing IP (Point 1).
- What is the position of the IP owner when AI is used to infringe their work (Point 2).
10. Point 1 - Ownership of AI-generated works
The general rule of copyright law in England and Wales is that "the author of a work is the first owner of any copyright in it". It is important that when you use generative AI tools you check their terms to see whether Finally will own the works / outputs created from inputting the prompts.
Additionally, for any copyright work which is "…computer-generated, the author shall be taken to be the person by whom the arrangements necessary for the creation of the work are undertaken". The definition of "computer-generated" used in the Copyright, Designs and Patents Act 1988 (CDPA) refers to a work "…where there is no human author of the work". Therefore, in the absence of case law expanding on the meaning of "undertaking the necessary arrangements", any new copyright developed by an AI system will likely belong to the person that created the AI system (subject to legal agreements assigning the original copyright ownership, such as employment and IP assignment agreements).
As of the date of the Guidance Note, there is no voluntary code of conduct that provides guidance balancing the protection of the content creator against the use of third party copyright for the purpose of training an AI system. Currently, an AI developer can only use copyright works to train an AI system in limited circumstances (for example, under the text and data mining exception for non-commercial research). Therefore, the UK Government may look to take imminent action and push for a voluntary code to be created.
11. Point 2 - The Getty case
The case of Getty Images (US), Inc and others v Stability AI Ltd (Getty Case) is currently being tried in the High Court (one of the senior courts of England and Wales). The Getty Case centres on Getty Images Inc claiming, amongst other things, that Stability AI Ltd unlawfully "scraped" millions of images from its website without Getty Images Inc's consent. Stability AI Ltd used these images as inputs to train and develop its deep-learning AI model, Stable Diffusion. Subsequently, UK users had access to the outputs of Stable Diffusion, which incorporated a substantial part of the infringing copyright.
Additionally, Getty Images Inc claims that Stability AI Ltd is responsible for making the infringing work available on the platforms of others, such as GitHub, HuggingFace and DreamStudio (constituting secondary copyright infringement). The Getty Case is still ongoing and will set a precedent for how copyright laws should be enforced against infringing AI-generated content. However, it can be seen that IP owners can rely on established rights to protect their copyright. Thus, it's important that Finally conducts some internal due diligence before it uses an AI platform (we can assist with this).
12. AI and contract law
AI isn't considered a legal person (AI can't hold any legal rights for itself or be held legally accountable for its actions / outputs). Thus, a recipient of services that incorporate AI is likely to expect the supplier to compensate it for any loss it suffers because of AI (e.g., IP infringement, data protection non-compliance or general breach of contract). Generally, a services contract will already include such terms.
However, a supplier may want to limit the level of compensation it provides the recipient if it can’t control the outputs being provided by the use of AI. Such outputs may include results derived from information provided from the recipient that is inputted into an AI tool. The below factors should be considered:
- Does the supplier own the AI tool? If yes, is the supplier responsible for training the AI tool (including checking for data bias mitigation and the sources of the training data)?
- Will only the supplier be inputting information on behalf of the recipient? If yes, can the supplier refuse to input information that would breach the law (e.g., certain personal data)?
- Will the recipient have access to the AI tool?
- Will the recipient compensate the supplier if the information provided by it breaches the rights of others?
- Will the recipient be limited to only using the AI output for the agreed purpose?
If a supplier is incorporating AI outputs into deliverables such as content (in contrast to the deliverables actually being AI outputs), it is likely to be difficult to get the recipient of the services to accept a limit to the general compensation terms in a contract (as detailed above). The recipient will expect the supplier to have checked that incorporating the AI output would not infringe the rights of others and that the recipient will receive ownership of the IP rights in the deliverables (subject to the contract). Thus, the use of AI outputs tends to place more risk on the supplier.
13. AI and data protection
The ICO has expressed that monitoring the use of AI in connection with personal data is a priority due to the potentially high risk it poses to individuals' rights and freedoms. In particular, the ICO has chosen to focus on the below aspects:
- Fairness in AI.
- Dark patterns (tricks used to manipulate users into making decisions benefitting the company and exploiting the user’s psychology and habits e.g., presenting AI generated content as authentic).
- AI-as-a-Service (a service by third parties to businesses allowing AI powered tools / capabilities to be incorporated into systems).
- AI and recommender systems (involves the recommendation of additional products to consumers).
- Biometric data and biometric technologies (technologies that authenticate and verify unique human characteristics).
- Privacy and confidentiality in explainable AI (explainable AI is a set of processes allowing users to understand and have oversight over the results / outputs created).
As Finally doesn't currently own an AI system or tool, the above provides an understanding of the main risk areas when using AI to process personal data. We recommend that Finally continues not to input any personal data into any AI system to manage the risk in this area.
Finally should ensure the below is in place before inputting personal data into an AI system:
- Confirmation that the AI system has been developed in accordance with responsible AI standards (we note that Microsoft has such standards).
- An understanding of how the AI system has been trained and how data biases are detected / mitigated.
- A clear understanding of your data protection obligations.
- An assessment of whether both the EU GDPR and UK GDPR will apply.
- A contractual arrangement ensuring that Finally will be compensated if personal data has been provided non-compliantly by the client.
- Confirmation that Finally has conducted its own data protection impact assessment.
We would recommend that you don’t start inputting personal data until we have discussed the above considerations (and any other relevant factors) to accurately assess your risk.
14. Automated decision making
With the rapid growth of the capabilities of AI, it is increasingly able to analyse data about individuals to predict personal preferences via profiling (automated decision making). Automated decision making involves using certain personal aspects relating to an individual to analyse or predict aspects concerning that individual's performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements.
There are limits on using solely automated decision making to process personal data. The exceptions to these limits include:
- Where it's necessary to enter into, or perform, a contract between the individual and the data controller.
- Where it is authorised by the law of the UK or the EU member state which applies to the data controller, and suitable measures are in place to safeguard the individual's rights.
- Where the individual has provided explicit consent.
However, in certain instances there is a requirement that the individual should be able to speak to someone who understands and can explain the automated decision, and who can deal with any challenges raised by the individual. Therefore, the quality and genuineness of human intervention in connection with automated decision making is important.
From our understanding of the business, we don’t currently see the need for Finally to use automated decision making within its operation or in providing its services. We have provided the information in this Section to give you a better understanding of what automated decision-making is.