
Ask About... Fashion, Retail and Hospitality

04 February 2020

Many of our clients in the retail, fashion and hospitality sector face similar HR issues. Each month one of the members of our team will identify an issue, consider how it should be dealt with and provide our advice. This month we asked James…

I am a Recruitment Manager for a large-scale high street retail chain.

We always receive a high volume of applications and CVs for various roles across the business, and this has increased significantly with the recent opening of a number of new stores nationwide. To assist our recruitment team, we’ve purchased a new artificial intelligence (AI) system to help with the initial sift. The AI technology goes through the applications and compares them against people who have already been hired and are performing well, in order to identify applicants to put through for an online interview. It has made a huge difference to our ability to manage this part of the recruitment process.

We’ve had a few complaints, one in particular from an unsuccessful female candidate. She has complained after her brother, who also applied with a similar academic record and prior experience, was put through to the next stage. She’s contacted HR to ask why she was turned down and believes that her gender was a key factor in the rejection of her application. We have anti-discrimination and harassment policies and would never condone any of our team making a decision on discriminatory grounds. Plus, all our recruiting managers receive training on this.

This has raised a lot of questions about the use of AI in our hiring processes and we’re wondering what we need to consider and what the risks are. 

Surely the company can’t be liable for the AI’s decision not to put her application through to the next stage? It was the AI technology that made the decision, and she’s not even our employee!

A.  It makes a difference that this happened during the recruitment process. The individual who made the complaint is only a job applicant and not an employee, so isn’t protected by the Equality Act – you don’t need to worry about it!

B.  Of course the company can’t be liable - it’s not like it chose to discriminate against her, clearly it was the algorithms! There are many instances where an employer can be liable for discrimination, but this definitely isn’t one of them. The fact that the company trains its managers on this and has an anti-discrimination and harassment policy shows that it wouldn’t discriminate in this way.

C.  In the UK, AI-driven employment decisions could be challenged as directly or indirectly discriminatory on grounds of sex (or any other protected characteristic) under the Equality Act 2010. Under the Equality Act, protection extends to applicants for employment as well as employees. Arguing that the decision was made by the algorithm is unlikely to succeed, so there is a risk that the company could be found liable.

D.  The whole point of using these systems is to remove the risk of bias in decision-making, so it is highly unlikely that the AI system will be biased!

The correct answer is C.

Organisations are increasingly deploying AI solutions to perform traditional HR tasks in a variety of ways, particularly in recruitment. AI technology does not suffer from the same biases and prejudices that humans are prone to, which means it can be used to make decisions that are, in theory, free from bias, prejudice and discrimination. However, AI technologies are only as reliable as the data they are fed (usually, at least in part, by humans), and flawed datasets produce flawed AI decision-making. The issue of algorithmic bias and the legal claims it may generate is attracting increasing attention.
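To make the point concrete, here is a minimal sketch, using entirely invented data (nothing here reflects the retailer’s actual system), of how a hiring model trained on a skewed history can reproduce that skew and score a brother above his sister despite identical qualifications:

```python
# A minimal sketch of bias leaking from training data into a hiring model.
# The data is invented for illustration; assumes pandas and scikit-learn.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical hiring history that skews male at every qualification level,
# so the model can learn "is_male" as a proxy for "good hire".
history = pd.DataFrame({
    "qualification": [6, 6, 7, 7, 8, 8, 9, 9],
    "is_male":       [1, 0, 1, 0, 1, 0, 1, 0],
    "hired":         [1, 0, 1, 0, 1, 1, 1, 0],
})

model = LogisticRegression().fit(
    history[["qualification", "is_male"]], history["hired"]
)

# Two siblings with identical qualifications but different genders:
siblings = pd.DataFrame({"qualification": [8, 8], "is_male": [1, 0]})
print(model.predict_proba(siblings)[:, 1])  # the brother scores higher
```

Even where an explicit gender field is removed, correlated details in a CV (names or career gaps, for example) can leak the same signal, which is why removing the field alone does not remove the risk.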

In this scenario, the individual making the complaint is protected against discrimination as an applicant for employment. Unsuccessful candidates are rarely in a position to compare themselves against a successful candidate, so shortlisting decisions are seldom challenged. However, if an individual can overcome this hurdle, an employer would need to explain how the decision was reached – and this may well be difficult to do if the employer does not understand the algorithm, or the way the algorithm works is not transparent.

The company should examine how the new AI system processes applications and satisfy itself that it does so in an unbiased way. The extent to which that is possible may well depend on whether the system is an “off the shelf” product or one developed with a third party, and on what assurances the supplier gave about testing for bias.

Clearly, algorithms which screen individuals based on a protected characteristic (e.g. age, race, disability, sex, sexual orientation, religion or belief) are directly discriminatory. An automated decision-making process could also be found to be indirectly discriminatory. Algorithms can, through machine learning, evolve to reach discriminatory decisions, and as they become increasingly complex the task of explaining how a decision was made becomes increasingly challenging. The company should consider what safeguards can be introduced to reduce the risk of unacceptable bias, for example introducing human oversight to regularly stress-test AI hiring decisions and identify any anomalies.
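By way of illustration, a simple stress-test might compare selection rates across groups in the sift’s output. In the sketch below the file and column names are assumptions, and the 0.8 (“four-fifths”) threshold is a rule of thumb borrowed from US guidance rather than an Equality Act test; it serves only to flag anomalies for human review:

```python
# Illustrative bias stress-test over the AI sift's decisions.
# "sift_decisions.csv", "sex" and "shortlisted" are hypothetical names.
import pandas as pd

def selection_rate_ratio(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Ratio of the lowest to the highest group selection rate (1.0 = parity)."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates.min() / rates.max()

decisions = pd.read_csv("sift_decisions.csv")  # hypothetical export of sift outcomes
ratio = selection_rate_ratio(decisions, "sex", "shortlisted")

# 0.8 is the US "four-fifths" rule of thumb, not an Equality Act threshold;
# here it simply triggers escalation to a human reviewer.
if ratio < 0.8:
    print(f"Selection-rate ratio {ratio:.2f} - escalate for human review")
```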

The company should also be mindful of the requirements of the General Data Protection Regulation (GDPR), which came into force in May 2018. Employers seeking to embrace new technologies will need to consider and balance (i) the interests of the data subjects protected by the GDPR; and (ii) the employer’s interest in investing in new AI technologies in the workplace. In most circumstances, employees and candidates have the right not to be subject to a decision based solely on automated processing which produces legal or similarly significant effects for them, so employers need to implement suitable measures to safeguard data subjects’ rights and freedoms.

You can find out more on these issues on our Future of Work Hub website here. We are also hosting an event on 18 March 2020 which will touch on the issues of trust and ethics – an increasing area of focus for employers deploying new technologies in the workplace. Click here for more information and to register.
