AI, recruitment and the law: how do equality and data protection laws regulate this process?
31 October 2023
The regulation of AI is front and centre of the minds of policymakers around the world. Central to the concerns raised about the rapidly increasing use of AI are the risks of bias and discrimination, particularly in the employment context. We look at how existing equality and data protection laws apply to these kinds of automated decisions.
The UK’s Department for Science, Innovation and Technology trumpeted its AI Safety Summit at Bletchley Park at the beginning of November, bringing together experts from around the world to consider the risks of AI and how they might be mitigated through internationally co-ordinated action. As it stands, approaches differ across the world.
Global AI regulation
In March 2023, the UK published a White Paper (a policy paper) setting out its approach to the regulation of AI. This advocates a “pro-innovation” (in other words, lightly regulated) approach to AI regulation. The UK is not proposing a new set of laws governing AI but a set of broad principles to guide existing regulators, namely:
- safety, security and robustness
- appropriate transparency and explainability
- fairness
- accountability and governance
- contestability and redress.
Other jurisdictions are looking to take a different approach, with detailed AI-specific laws. In particular, the EU AI Act is due to be enacted in 2024 and to come into force in 2025 or 2026. Though some of the detail is still to be agreed, it promises comprehensive regulation of the use of AI according to a risk-based classification. AI systems used for “employment, worker management and access to self-employment” are classified as high risk. This classification imposes additional regulatory obligations, including obligations to assess and mitigate risks throughout the system’s lifecycle. Also of note are requirements on such systems with regard to transparency, as well as to enabling effective human oversight.
Several US states have proposed specific laws regulating the use of AI in employment decisions, and New York, Illinois and Maryland have already introduced them. Although these laws are, to date, generally limited in scope, New York City’s includes a requirement for annual bias audits of automated employment decision-making tools.
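By way of illustration, bias audits of this kind typically compare selection rates across demographic groups. The Python sketch below shows one common measure, an ‘impact ratio’ (each group’s selection rate divided by the highest group’s rate); the data, group names and the 0.8 review threshold are illustrative assumptions, not requirements of any particular statute.

```python
# Minimal sketch of a selection-rate bias audit for an automated
# shortlisting tool. The candidate data and the 0.8 threshold
# (the informal "four-fifths rule") are illustrative only.

# Hypothetical outcomes: group -> (selected, total applicants)
outcomes = {
    "group_a": (48, 100),
    "group_b": (30, 100),
}

# Selection rate per group
rates = {g: sel / tot for g, (sel, tot) in outcomes.items()}
best = max(rates.values())

# Impact ratio: each group's rate relative to the most-selected group
for group, rate in rates.items():
    ratio = rate / best
    flag = "review" if ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.2f}, impact ratio={ratio:.2f} ({flag})")
```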
The Canadian government has proposed the Artificial Intelligence and Data Act, which includes requirements to identify and mitigate risks of bias.
The UK law today
In the UK, the use of AI in employment is already subject to detailed regulation, but not to laws designed specifically to address AI. In some cases, these rules now amount to round pegs for square holes; in others, their application to AI is unclear.
The most relevant laws that apply to the use of automation in processes such as recruitment shortlisting are the discrimination laws contained in the Equality Act 2010 and the data protection laws contained in the UK GDPR (and the Data Protection Act 2018 that supplements it).
The Equality Act
The Equality Act 2010 consolidates the UK’s discrimination laws. AI is used in employment at all stages: advertising jobs, shortlisting, interviewing, recruiting, setting remuneration, determining promotions and bonuses, and even making decisions on dismissal.
Equality laws cover direct and indirect discrimination on any of the protected grounds, and impose a specific duty to make reasonable adjustments to ensure workers with a disability are not substantially disadvantaged.
The prescribed protected characteristics include:
- Sex/gender
- Race/ethnicity
- Age
- Religion/belief
- Disability
- Sexual orientation
- Pregnancy/maternity
- Gender reassignment
- Marriage/civil partnership.
Direct discrimination takes place where the claimant has been disadvantaged because of a protected characteristic (for example sex/gender, race/ethnicity or age). Indirect discrimination occurs where the claimant has been disadvantaged by the application of a provision, criterion or practice (PCP) which places members of their protected group at a particular disadvantage and cannot be justified.
Direct sex and race discrimination cannot generally be justified and is therefore unlawful, whereas all forms of indirect discrimination (and direct age discrimination) will be lawful if justified as a proportionate means of achieving a legitimate aim.
A further type of discrimination claim can arise in the UK where a party induces or causes another to commit an act of unlawful discrimination.
Discrimination claims under the Equality Act are brought in an employment tribunal. Compensation is not capped and is assessed on the basis of both financial loss and injury to feelings.
The Equality Act implemented the UK’s duty to give effect to EU laws on equality. These have not been changed post-Brexit, so discrimination laws across the EU remain very similar to the UK’s.
Data protection
The UK GDPR governs data protection law. Where personal data are used in the context of AI tools - whether to train, test or deploy them - UK data protection law imposes various obligations on data controllers and data processors. Generally, the UK GDPR will not apply to processors that are outside the UK and have no presence in the UK; but data controllers in the jurisdiction are required by the UK GDPR to impose contractual obligations on their overseas processors, both in relation to the appointment of those processors and to the international transfer of personal data to them.
The extent to which the UK GDPR applies to the automation of employment decisions is in many respects unclear.
Profiling
The automation of employment decisions often involves profiling as a core processing activity.
The UK GDPR defines ‘profiling’ as: “any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that natural person's performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements”.
AI systems can create and apply algorithms: sequences of instructions or sets of rules designed to complete a task or solve a problem. Profiling often involves using algorithms to find correlations between separate datasets, and then using those correlations to categorise or profile individuals. A wide range of decisions can be based on such profiles, for example predicting an individual’s behaviour from inferences drawn about them.
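As a deliberately simplified illustration of profiling, the sketch below scores a candidate by applying weighted criteria of the kind an AI shortlisting tool might derive from historical data. All field names and weights are hypothetical.

```python
# Minimal sketch of rule-based profiling: scoring a candidate by
# applying weighted criteria derived from historical data.
# All field names and weights are hypothetical.

weights = {"years_experience": 0.5, "skills_match": 0.4, "gap_in_cv": -0.3}

def profile_score(candidate: dict) -> float:
    """Combine candidate attributes into a single shortlisting score."""
    return sum(weights[k] * candidate.get(k, 0) for k in weights)

candidate = {"years_experience": 6, "skills_match": 0.8, "gap_in_cv": 1}
print(round(profile_score(candidate), 2))  # 3.02
```

Note how a facially neutral criterion such as a CV gap could act as a proxy for a protected characteristic (for example, maternity leave).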
Profiling can often lead to quicker and more consistent decisions, but – as the European Data Protection Board makes clear in its guidance on the topic – it also comes with risks: processes can be opaque; people might not expect to be profiled or understand what is involved or how it affects them; and profiling can lead to significant adverse effects such as perpetuating existing stereotypes and discrimination.
Automated decision-making is the process of making a decision by automated means without any human involvement. Whilst automated decision-making often involves profiling, it does not have to.
Lawful, fair and transparent processing
In order to comply with UK GDPR, the processing of data must be lawful, fair and transparent – this is the first data protection principle.
Lawfulness means that the processing not only has a lawful basis under the UK GDPR (more below), but is not unlawful in the wider sense i.e., it complies with other applicable laws, including equality laws. Fairness means that the processing not only accords with individuals’ reasonable expectations, but that it does not produce discriminatory effects. Transparency ensures that individuals understand how the processing works and how it can affect them, and there are various prescriptive requirements about the information that needs to be provided to them.
Lawful processing
Lawful processing of personal data for profiling requires a lawful basis. Of the lawful bases listed in the UK GDPR, those most relevant to employers are:
- consent
- necessity to enter into or perform a contract (such as an employment contract)
- necessity for the purposes of a legitimate interest.
It is also worth keeping in mind that some types of data are more sensitive and are therefore considered more deserving of protection. These data are known as ‘special categories’ and, in an employment context, will commonly include information about matters such as health, trade union membership and race/ethnicity. Where special category data are processed, the controller will need a separate condition in addition to a lawful basis. Employers will select one of the ten potentially available conditions depending on their purpose, but explicit consent, necessity for employment obligations and substantial public interest are conditions commonly relied on.
Automated decision-making
The UK GDPR has some specific rules on automated decision-making.
It provides that:
- solely automated decision-making, including profiling, that has a legal or similarly significant effect, is generally prohibited
- there are exceptions to that prohibition
- where one of the exceptions applies, there must be safeguards in place.
Whether a decision is ‘solely automated’ comes down to the level of human involvement – if someone considers the result of an automated decision before applying it to an individual, then it will not be ‘solely automated’. If, however, the human involvement is a token gesture simply rubber-stamping the automated decision, then it could well be.
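That test can be expressed as a simple check: what matters is not whether a human is present, but whether their review is meaningful. A minimal sketch, using hypothetical flags a controller might record:

```python
# Sketch of the 'solely automated' test: a decision is solely
# automated unless a human meaningfully reviews it before it is
# applied. The input flags are hypothetical.

def is_solely_automated(human_reviewed: bool,
                        review_can_change_outcome: bool) -> bool:
    """Rubber-stamping (a review that cannot alter the outcome)
    does not count as meaningful human involvement."""
    return not (human_reviewed and review_can_change_outcome)

# A token human sign-off still leaves the decision solely automated
print(is_solely_automated(human_reviewed=True,
                          review_can_change_outcome=False))  # True
```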
‘Legal’ or ‘similarly significant’ effect is not defined in the UK GDPR. In an employment context, “e-recruiting practices without any human intervention” is given in the recitals as an example of a similarly significant effect, and EU guidance is clear that decisions that deny someone an employment opportunity or put them at a serious disadvantage fall into this category.
The exceptions to this prohibition are where the processing is:
- based on the individual’s “explicit” consent; or
- necessary to enter into or perform a contract with them (such as an employment contract).
There is a question about whether either exception can apply at all to solely automated workforce profiling. Arguably, a candidate cannot give “explicit consent” unless there is an opportunity for their application to be considered without the automated processing (otherwise it is a ‘Hobson’s choice’). Equally, the automated processing is arguably not “necessary” to enter into the employment contract: the contract could be entered into without it.
Where special category personal data are processed in this way, then an additional layer of protection is included. This requires, in addition to the exceptions mentioned above, that controllers have the individual’s explicit consent or that the processing is necessary for reasons of substantial public interest.
But even where one of these exceptions does apply, safeguards must be put in place. These include:
- the right for the data subject to require human intervention in the decision; and
- the right to challenge the decision.
In addition, to satisfy the transparency obligations which relate to solely automated decisions, meaningful information must be provided about the logic involved and the consequences arising from the automated decision.
If neither of the exceptions applies, then the shortlisting process needs to ensure that decisions are not based solely on automated processing. Each rejection could be considered an automated decision, meaning an effective human review of each application would potentially be necessary for the decision not to be solely automated. This would seriously undermine the time and cost benefits of the automation.
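Pulling these rules together, the analysis can be sketched as a decision flow. This is an illustrative simplification of the position described above, with hypothetical inputs, not a complete statement of the law:

```python
# Illustrative decision flow for solely automated decisions with a
# legal or similarly significant effect. A simplification of the
# rules discussed above, with hypothetical inputs.

def adm_check(solely_automated: bool,
              significant_effect: bool,
              explicit_consent: bool,
              contractual_necessity: bool,
              safeguards_in_place: bool) -> str:
    if not (solely_automated and significant_effect):
        return "general prohibition not engaged"
    if not (explicit_consent or contractual_necessity):
        return "prohibited: no exception applies"
    if not safeguards_in_place:
        return ("non-compliant: safeguards required "
                "(human intervention, right to challenge)")
    return "permitted, subject to transparency obligations"

print(adm_check(True, True, False, False, False))
# prohibited: no exception applies
```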
In terms of enforcement, alleged breaches of the UK GDPR most commonly result in individuals complaining to the regulator, the ICO. The ICO is then required to consider the complaint and, where necessary, can take enforcement action. High Court claims are also possible but relatively rare given the expense.
The UK’s forthcoming data protection reforms (in the Data Protection and Digital Information (No. 2) Bill) include proposals to loosen the prohibition on solely automated decision-making by enabling controllers to rely on their ‘legitimate interests’ (instead of just contractual necessity or explicit consent).
Mitigating risks
As well as lawful, fair and transparent processing, the UK GDPR imposes further duties and confers further rights. The established way to identify, assess and mitigate potential risks, and to demonstrate compliance, is a Data Protection Impact Assessment (DPIA). These are mandatory in situations where the processing is likely to result in a high risk to individuals. ‘Likely to result in a high risk’ is not a defined term, but controllers need to screen against various criteria that are high-risk indicators. Where AI is used for shortlisting, several of these criteria are likely to be engaged, as the sketch below illustrates.
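In the sketch, the indicator names are paraphrased examples of the kinds of criteria the ICO lists as high-risk indicators, not its exact wording or an exhaustive set:

```python
# Sketch of DPIA screening: if the processing matches high-risk
# indicators, a DPIA is required. Indicator names are paraphrased
# examples, not an authoritative or exhaustive list.

HIGH_RISK_INDICATORS = {
    "innovative_technology",      # e.g. novel AI tools
    "automated_decision_making",  # decisions with significant effect
    "large_scale_profiling",
    "denial_of_opportunity",      # e.g. rejecting job applicants
}

def dpia_required(features_of_processing: set) -> bool:
    """A match against the screening criteria indicates likely high risk."""
    return bool(features_of_processing & HIGH_RISK_INDICATORS)

print(dpia_required({"automated_decision_making",
                     "denial_of_opportunity"}))  # True
```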
As well as the data principles, individual rights need to be considered. These include the right to be informed, right of access (data subject access rights) and the right to object.
[Flowchart: the interaction of the lawful, fair and transparent processing principles with solely automated decision-making.]
DSARs
Data subjects have a right to request access to personal data being processed and certain information about this processing.
A case before the Court of Justice of the European Union (OQ v Land Hessen) has looked at the information that must be provided to satisfy transparency obligations. The court’s decision is awaited, but the Advocate General’s advisory opinion finds that more than general information about the profiling applied to the applicant needs to be supplied: the controller should inform data subjects how the criteria were applied to them, including the respective weight given to each criterion. The opinion continues that even where trade secrets need to be protected, this should not be used as a ground for complete refusal to be transparent with the data subject about the way their data were processed. If the Court follows this opinion (as it usually, but not always, does), this approach to transparency could well be followed by the UK courts.
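If that approach were followed, a controller would need to be able to produce, for each candidate, an explanation along the lines sketched below: the criteria applied and the respective weight each carried. The format, names and figures are entirely hypothetical:

```python
# Sketch of a per-candidate explanation of the kind the Advocate
# General's opinion contemplates: which criteria were applied and
# their respective weighted contributions. All names and figures
# are hypothetical.

def explain_decision(candidate_id: str, contributions: dict) -> str:
    lines = [f"Decision explanation for candidate {candidate_id}:"]
    total = sum(contributions.values())
    # List criteria in order of how strongly they influenced the result
    for criterion, value in sorted(contributions.items(),
                                   key=lambda kv: -abs(kv[1])):
        lines.append(f"  {criterion}: weighted contribution {value:+.2f}")
    lines.append(f"  overall score: {total:.2f}")
    return "\n".join(lines)

print(explain_decision("A-102", {"skills_match": 0.32,
                                 "years_experience": 3.0,
                                 "gap_in_cv": -0.3}))
```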
Putting this into practice
As the use of AI in employment processes such as recruitment increases, so too will challenges brought by those on the receiving end of unfavourable decisions.
We have looked in detail at how an allegation of unlawful discrimination in an automated decision might be handled by the current tribunal process. What is clear is that current litigation procedures look ill-equipped to deal with an automated world.