
Empathic AI – workplace data privacy and employment issues arising from this emerging technology

04 May 2022

Smile! You’re on camera! And we are detecting your heart rate, pupil dilation, blood pressure, temperature, and other physiological measurements… Empathic Artificial Intelligence is a sub-group of Artificial Intelligence (AI) systems that makes use of empathic technology: algorithms which purportedly have the ability to detect human emotions. In this article we consider the data privacy and employment issues for employers arising from this emerging technology.

Empathic AI

AI is becoming more commonplace in the workplace, spurred on by employers trying to maintain connections with and oversight of their workforce in a time when home working and hybrid working have become more prevalent. At the same time, many employers are seeking to improve efficiencies in processes and decision-making.

Empathic AI is a type of AI that infers links between people’s physiological reactions and their emotional state using a whole range of signals, such as heart rate, pupil dilation, flushing of the cheeks, and changes in voice tone. If effective, empathic AI could have huge potential in many areas, not least in a workplace context (HR teams would no doubt love to be able to read how a job applicant is feeling in an interview, or to gauge more deeply how someone is coping with a disciplinary process).
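To make the idea concrete, here is a heavily simplified sketch of the kind of inference step such a system performs. Real products rely on trained statistical models rather than fixed rules; every threshold, field name and label below is invented purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class PhysiologicalReading:
    """One snapshot of the signals an empathic AI system might ingest."""
    heart_rate_bpm: float      # from a wearable or camera-based sensor
    pupil_dilation_mm: float   # from eye tracking
    cheek_flush_index: float   # 0.0-1.0, from video colour analysis
    voice_pitch_delta: float   # change vs. the speaker's baseline, in Hz

def infer_emotion(reading: PhysiologicalReading) -> str:
    """Map raw signals to a coarse emotional label.

    Real systems use trained models; the fixed thresholds here are
    invented solely to show the shape of the inference step.
    """
    if reading.heart_rate_bpm > 100 and reading.pupil_dilation_mm > 5.0:
        return "stressed"
    if reading.cheek_flush_index > 0.7:
        return "embarrassed"
    if abs(reading.voice_pitch_delta) < 5.0:
        return "calm"
    return "uncertain"  # honest default where the signals are ambiguous

print(infer_emotion(PhysiologicalReading(112, 5.4, 0.3, 18.0)))  # -> "stressed"
```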

Current regulatory landscape

AI and machine learning systems are becoming ever more prevalent. With this growth comes increased attention and regulatory intervention. Of course, the UK and the EU already have detailed legislation (respectively the Data Protection Act 2018/UK GDPR and the EU GDPR) which sets out the obligations data controllers must abide by, including the fundamental principles of transparency, data minimisation, proportionality, and fairness. However, further regulation and guidance specifically aimed at AI is on the horizon.

Emerging EU framework

In April 2021 the European Commission released a draft AI regulation proposing to harmonise rules on AI systems. This draft regulation takes a risk-based approach to regulating AI systems, with AI used in the area of “employment, workers management and access to self-employment” categorised as high risk. This classification sits between the “unacceptable risk” and “low risk” categories, and brings with it a number of key obligations for AI system providers, including:

  • maintaining a continuous risk management system;
  • using high quality datasets that are relevant, representative, free of errors, and complete; and
  • allowing for human oversight.

The detailed obligations for providers of high-risk AI systems may be difficult to meet, and are backed by hefty penalties of up to €30 million or 6% of total worldwide annual turnover (whichever is higher) for breaches of the data governance provisions.

The EU’s proposed Directive regulating platform work, whilst not strictly an AI-related instrument, sets out obligations for employers in the EU and reflects the EU’s direction of travel on tech-related matters in employment, with the GDPR principles of fairness, transparency and accountability taking centre stage.

Emerging UK framework

The ICO has published guidance on AI and data protection, bringing together the various data protection principles and placing them in the context of AI. Key points include:

  • Guidance on the accountability and governance implications of AI, including approaching risk management considerations when undertaking data protection impact assessments and understanding controller/processor relationships in AI.
  • What organisations need to do to ensure lawfulness, fairness, and transparency, such as identifying purposes and lawful bases and addressing risks of bias and discrimination.
  • How to assess security and data minimisation, e.g. mitigating the risks of privacy attacks and what minimisation techniques may be utilised.
  • How to ensure individuals’ rights are preserved in the different stages of the AI lifecycle, how they relate to data contained in the AI model and the role of human oversight.

In September 2021 the UK Government published its 10-year strategy on AI. This was swiftly followed by a government consultation on proposals to reform the UK’s data protection laws, including in relation to regulating AI. In November 2021, the All-Party Parliamentary Group (APPG) for the future of work published a report proposing specific legislation to regulate employers monitoring workers through technology.

In March 2022 the government published its plans to develop a national position on governing and regulating AI, including in relation to algorithmic decision-making, to address the potential risks and opportunities presented by AI technology. A white paper is expected in 2022.

Legal and ethical considerations

As this type of technology gets closer to becoming reality, organisations need to consider carefully a number of wide-ranging data privacy, employment law and ethical implications. Is the data that controllers are processing accurate? Is it necessary? Is it proportionate? How do you eliminate bias, and how do you correct erroneous outcomes?

Data and privacy

One of the questions controllers who use this technology in hiring scenarios will face is: ‘is the processing necessary for achieving the pursued objectives, or is there a less intrusive alternative?’. Employers have, after all, managed to hire candidates successfully for many years without AI technology. Can empathic AI produce more accurate and reliable results? To use this technology, employers will need to be able to demonstrate that empathic AI is necessary for hiring the best candidates and that its use in the hiring process is proportionate.

Consent is unlikely to be a valid basis in the employment context under the UK GDPR (given the imbalance of power between employer and individual), which leaves businesses needing to rely on the “legitimate interests” basis when using AI tech in this space (i.e. that the processing of data collected by an AI system is necessary for the organisation’s legitimate aims and that those aims are not overridden by the fundamental rights and freedoms of the data subject).

Furthermore, the types of data being collected are likely to constitute special category data under the UK GDPR, in which case employers will need to show that the use of this type of data is necessary for carrying out obligations in the field of employment or is a matter of substantial public interest. These are high hurdles to overcome.

Any business adopting this kind of technology would need to produce a Data Protection Impact Assessment (DPIA) to analyse the potential risks and set out how they may be mitigated. Assessing the risks across the relevant business functions would be beneficial, and would also provide evidence to data regulators that these issues were contemplated at the outset rather than as an afterthought. Completing the DPIA will also help the business get to grips with the UK GDPR’s transparency requirements. The use of empathic AI will require an explanation of how any decision in which it is involved is made, so businesses will themselves need to understand the complex decision-making processes involved (which are often not easily verifiable).
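By way of illustration, transparency is far easier to deliver where the system can account for each decision signal by signal. The toy sketch below assumes a simple linear scoring model so that each input’s contribution can be surfaced; real empathic AI models are typically far more opaque, which is precisely the difficulty. All signals and weights here are invented.

```python
def explain_score(features: dict[str, float], weights: dict[str, float]):
    """Return a decision score plus a per-signal breakdown.

    A transparency aid for a (hypothetical) linear scoring model: each
    input's contribution is recorded so the decision can later be
    explained to the data subject.
    """
    contributions = {name: value * weights.get(name, 0.0)
                     for name, value in features.items()}
    return sum(contributions.values()), contributions

# Invented inputs and weights, purely to show the shape of an explanation.
score, breakdown = explain_score(
    {"calm_voice": 1.0, "steady_heart_rate": 0.0, "eye_contact": 0.8},
    {"calm_voice": 0.5, "steady_heart_rate": 0.3, "eye_contact": 0.2},
)
print(f"score={score:.2f}")
for signal, contribution in breakdown.items():
    print(f"  {signal}: {contribution:+.2f}")
```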

Businesses will also need to consider the UK’s existing rules on automated decision-making. The EU’s draft AI regulation requires systems to allow for human oversight, but this human element is already compulsory in many employment situations under the UK GDPR: data subjects cannot be subject to decisions based solely on automated processing where those decisions have significant effects (e.g. determining whether a candidate is suitable for a role). In most instances where empathic AI is contemplated, a human agent is therefore likely to be on hand as the final decision-maker, and even as the technology develops, human oversight will remain a necessary component of any significant decision.
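One way to hard-wire that human element into a hiring workflow is to treat the AI output as a mere recommendation that has no effect until a named reviewer records the final decision. The sketch below is illustrative only: the class, field names and workflow are our own invention, not any vendor’s API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class CandidateDecision:
    """An AI recommendation that has no effect until a human signs it off."""
    candidate_id: str
    ai_recommendation: str          # e.g. "progress" or "reject"
    ai_confidence: float
    reviewer: str | None = None
    final_decision: str | None = None
    audit_log: list[str] = field(default_factory=list)

    def record(self, message: str) -> None:
        self.audit_log.append(f"{datetime.now(timezone.utc).isoformat()} {message}")

    def human_review(self, reviewer: str, decision: str) -> None:
        """The only route to a final outcome: a named human decides."""
        self.reviewer = reviewer
        self.final_decision = decision
        self.record(f"{reviewer} reviewed AI recommendation "
                    f"'{self.ai_recommendation}' and decided '{decision}'")

d = CandidateDecision("cand-042", ai_recommendation="reject", ai_confidence=0.81)
d.human_review(reviewer="hr.manager@example.com", decision="progress")
assert d.final_decision == "progress"   # the human, not the model, decided
```

Keeping an audit trail alongside the human sign-off also feeds directly into the DPIA and accountability evidence discussed above.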

Employment

While transparency is crucial for UK GDPR compliance, it is also essential from an employee relations point of view. If an organisation uses empathic AI, communicating the how and why to those it will affect will be fundamental to its success. IT and HR will need to have a firm grasp of any technology that is used in order to field any questions that may arise. As the technology is still in its infancy, there may be some pushback from data subjects (i.e. employees or potential employees) and this will need to be taken seriously; considering whether there are any less intrusive alternatives should be an important part of the planning and implementation stages, and duly noted in the DPIA.

At their heart, the systems that enable AI (including empathic AI) are built on sets of algorithms that adjust themselves in response to the data they receive and produce outputs according to guidelines found in their code. We have written before on the topics of algorithms and employment law and the rise of algorithmic management, setting out the potential benefits to organisations, the risk of bias in their use, and recommendations for their implementation in the workplace.

AI is renowned for excelling at repetitive tasks, but the ways in which humans express emotions, react to different situations or behave - whether due to experience, environment, cultural background, or even the kind of day they are having - differ widely. Vast data sets and ongoing training of the AI are required to rule out biases and incorrect outcomes. Historically, facial recognition algorithms have shown an element of racial bias, being less reliable for people with darker skin tones: one 2018 paper found that a facial detection model had a 0% error rate when detecting light-skinned males compared with a 20.8% error rate for dark-skinned females. Such a data set would not be classified as “high quality”.
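Skew of this kind only surfaces when accuracy is measured separately for each subgroup rather than in aggregate. A minimal sketch of such a disaggregated audit follows; the subgroups and sample results are invented for illustration.

```python
from collections import defaultdict

def error_rates_by_group(results):
    """Disaggregate a model's error rate across demographic subgroups.

    `results` is an iterable of (group, predicted, actual) tuples taken
    from a labelled evaluation set.
    """
    errors, totals = defaultdict(int), defaultdict(int)
    for group, predicted, actual in results:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Hypothetical evaluation output: (subgroup, model prediction, ground truth)
sample = [
    ("lighter-skinned male", "face", "face"),
    ("lighter-skinned male", "face", "face"),
    ("darker-skinned female", "no face", "face"),   # a missed detection
    ("darker-skinned female", "face", "face"),
]
for group, rate in error_rates_by_group(sample).items():
    print(f"{group}: {rate:.1%} error rate")
```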

Disability discrimination is another area of concern - medical conditions or physical impairments can reduce the accuracy of an attempt to read a person’s emotions. For example, people with conditions affecting the facial muscles, those with speech impediments, or those with developmental disabilities may have their emotions misclassified and be adversely scored. Are there reasonable adjustments that could overcome any such bias? This will need to be considered in the context of any exercise where empathic AI is adopted. Given the vast range of human emotions, and the countless ways in which they are expressed, the legal risk is unlikely ever to be reduced to zero.

Where decisions are made based on a person’s emotional “score”, human oversight will be required - not only because of the UK GDPR requirements, but also to reduce the risk of discriminatory outcomes. If a person has to review the outcome of every automated decision, some may ask whether it is worth having the system in the first place.

Conclusion

Advancements are still being made in this field, and a number of tech companies are seeking to create AI products that can be rolled out widely in the future. Employers considering the adoption of empathic AI will need to monitor developments closely, whether in the UK, the EU or globally. Different countries have vastly different attitudes - both legislative and cultural - to the use of AI in the workplace. This is an area where new regulation is on the horizon, so employers will need to ensure they fully understand the evolving legal landscape and how to mitigate the risks before adopting these tools.
