
White Paper consultation response: regulating AI in the workplace

20 February 2024

In March 2023 the government published its White Paper on AI regulation. In less than a year, development in the field has been rapid, with regulators and legislators racing to keep up. The government has now published its response to the consultation on the White Paper. We focus on what this tells us about future regulation of the use of AI in the workplace.

Since the publication of the AI White Paper last March, the UK has sought to position itself at the forefront of global collaboration on AI safety and research. In November 2023 it hosted the world’s first AI Safety Summit, culminating in the Bletchley Declaration, which outlined a shared vision on AI safety and ethics. Nevertheless, in its recent response to the consultation on the White Paper, the government has reiterated its 'light touch' approach to regulating AI, favouring, for now, regulatory guidance over legislation. The focus remains on empowering regulators to enforce the White Paper’s five cross-sectoral AI principles. The consultation response does, however, shift slightly towards regulation by recognising the risks and legal challenges that may ultimately make legislation necessary – when the government “is confident that is the right thing to do”.

Notably, just a few days before this was released, the House of Lords published its report on large language models and generative AI. The report stated that the government has been too focused on high-stakes AI safety and called for a “rebalance” to avoid missing the opportunities presented by AI. In the employment context, however, it’s widely acknowledged that the technology is potentially “high stakes” for the individuals affected by AI-supported decisions; the consultation response sheds some light on how this might be addressed by regulation in the future.

Categories of risk

Unsurprisingly, the consultation response does not announce proposed legislation or even a Code of Practice that will affect the use of this technology in the workplace. The document does, however, outline how the government and regulators are responding to a number of specific risks, several of which are directly relevant to the world of work. For this purpose, the government identifies three categories of risk: societal harms, misuse risks, and autonomy risks. Some of the measures needed to mitigate these risks will affect how AI is used in the workplace.

Preparing workers

First among the examples of potential societal harms is labour market disruption and the need to prepare UK workers for an AI-enabled economy. The report recognises that AI is revolutionising the workplace, bringing new and potentially higher-quality jobs. However, in recognition of the associated risks, such as the potential for increased workplace surveillance and discrimination, the consultation response indicates that the Department for Science, Innovation and Technology will publish updated guidance in spring of this year.

Employers who are still grappling with how best to harness this new technology in the workplace are likely to welcome specific guidance. However, there is no suggestion here of legislative change, and existing laws such as the Equality Act 2010 and the GDPR remain the most relevant.

Training

Many employers have recognised that a key pillar of safely introducing AI into the workplace is ensuring that staff are trained on this technology: using AI effectively and safely requires an understanding of its capabilities and risks. The consultation response’s indication that the Alan Turing Institute will soon publish guidance on the core AI skills people need may therefore prove helpful. On this point, our recent report by the Future of Work Hub, Strategic Priorities: Shaping the Workforce and HR Agenda in 2024 and Beyond, identified improving digital literacy at all levels as a key step in preparing workforces for a more tech-enabled world of work.

Protecting citizens from AI-related bias and discrimination

As we have written in detail here, AI has the potential to entrench bias and discrimination as the underlying algorithms can “bake in” and magnify bias present in the training data. This is especially relevant in the context of recruitment, where AI tools may be used to screen and evaluate job applicants.

The consultation response notes that the government is working with the Equality and Human Rights Commission (EHRC) and the Information Commissioner’s Office (ICO) to develop new solutions to address this risk. As the government indicates, a key step which has already been taken is an update to the ICO’s guidance on how data protection laws apply to AI. The EHRC's involvement, however, appears to be at a more preliminary stage, with the consultation response referencing last year’s Fairness Innovation Challenge. This invited the submission of technical solutions to address the risk of bias and discrimination in AI systems, but the outcome of the exercise remains to be seen.

Regulators will, of course, need adequate funding to oversee AI effectively, a point fed back to the government as part of this consultation process. In recognition of this, the consultation response announces a £10m investment for regulators. However, the stated aim of “future-proofing” regulators’ capabilities will inevitably be a challenge, given the pace of technological advancement.

Reforming data law

As set out in our case study on recruitment, data protection obligations are fundamental to the fair and lawful use of AI in the workplace. It's therefore unsurprising that reforming data protection law is highlighted in the consultation response. As we explained here, the laws regulating automated decision-making are relevant to the use of AI in this context, but their application to this scenario is complex. The consultation response indicates that these rules will be reformed through the Data Protection and Digital Information Bill. Specifically, the bill will expand the lawful bases on which automated decisions that have significant effects on individuals can be made. Given that employment-related decisions will clearly fall into this category, this reform will be highly relevant.

Ensuring AI best practice in the public sector

Yet another key government AI publication came out recently – the Generative AI framework for HMG – and this too is highlighted in the consultation response. This guidance sets out a framework for the procurement and use of generative AI for the UK government. Whilst focused on the public sector, many of the principles set out in the guidance are of more general interest to employers, making it a useful resource on how best to adopt generative AI in the workplace.

This document sets out 10 common principles for the safe, responsible, and effective use of generative AI in government organisations. These principles, particularly those listed below, are also relevant to a wider audience:

  • You know what generative AI is and what its limitations are: as already noted, it’s widely recognised that upskilling the workforce is key. Understanding what generative AI can and can’t do helps the technology to be harnessed safely and effectively.
  • You use generative AI lawfully, ethically and responsibly: this highlights the need to comply with applicable legal principles, and broader principles relating to ethical and fair use. A key way that employers can address this principle is by defining acceptable uses of generative AI, specific to their workplace, in a written policy.
  • You know how to keep generative AI tools secure: this flags the risks arising from sensitive data being inputted into AI systems. The level of this risk will depend on whether the generative AI tool is part of a closed system, which limits public access to the inputted data, or an open system, which significantly exposes the data to misuse.
  • You have meaningful human control at the right stage: although the need for human review is often flagged as a critical safeguard, this guidance rightly acknowledges that in certain functions (chatbots, for example) this is not always possible and other processes must be put in place instead. Recognition of the limitations of human oversight is something we explored in more detail here, noting that, for this reason, this safeguard is arguably a “red herring”.

The consultation response also flags the Algorithmic Transparency Recording Standard, which is being rolled out across government and the wider public sector. This establishes a standardised way for public sector organisations to publish information about how and why they're using algorithmic methods in decision-making. Being able to explain a decision driven by an algorithm is certainly emerging as a key pillar of AI regulation. But precisely how an individual decision can be unpicked – whether it can be understood at a global level or a local level – will be key to the effectiveness of this safeguard.

Longer-term regulatory plans

Although the thrust of the consultation response is that the government is persisting in its light touch approach, it goes on to acknowledge that the growth of AI capabilities is likely to mean that countries will want binding measures to keep the public safe. Whilst this is not acknowledged anywhere in the consultation response, the most obvious binding measure that will have a far-reaching impact on the use of AI across the world is the EU AI Act. In terms of progress on this legislation, EU member states have now rubber-stamped December’s political agreement, and formal adoption of the Act looks likely to follow in April.

Allocating liability

The consultation response sets out what it considers to be some of the key questions that countries will have to grapple with when deciding how best to manage the risks of highly capable general-purpose AI systems. An example given in the paper is the challenge of allocating liability across the supply chain, illustrated by the scenario of a discriminatory AI recruitment tool purchased by a UK company.

In the example given, the unsuccessful candidate seeks to bring a discrimination claim against the accountancy firm that she applied to. The discriminatory technology, however, was supplied by a third-party tech company (in this scenario, based in the UK). The consultation response recognises that existing equality laws are applicable to these issues but are not necessarily well set up to handle them effectively. This is in fact a scenario we explored in detail in our recent case study on discrimination and bias in AI recruitment, concluding that both existing data protection and equality laws are ill-suited to regulating automated employment decisions.

The consultation response notes that it is common for the law to allocate liability to the last actor in the chain. As explained in our case study, however, it would potentially be possible for a discrimination claim to be brought against the tech supplier in this scenario (even one based overseas), as section 111 of the Equality Act 2010 provides for claims of instructing, causing or inducing discrimination. So it is not true that the law allocates liability to the “last actor” in the chain; there is no specific legal principle to this effect. That said, the reality is that bringing action against that party is certainly the simplest and most obvious legal option.

Indeed, a recent EAT case illustrated the potential complications of allocating liability in this kind of situation. A retired employee brought a claim of indirect age discrimination against his former employer’s parent company, complaining about changes made to the parent’s LTIP scheme rules (under which he benefited) after he had left employment. The EAT found that the claim could not succeed as the parent company was not acting as the agent of the employer. Although not directly analogous to the recruitment scenario (and the decision did not explore the allocation of liability under s. 111), this does underline that it is not necessarily straightforward to make a third party liable for discrimination.

In short, the recruitment example set out in the consultation response arguably simplifies a complicated area of the law. Nevertheless, the underlying point – that, under existing laws and procedures, pursuing a discrimination claim against a third-party tech company will not be straightforward – is difficult to argue with.

What next for employers?

As set out above, although the consultation response does not herald any major changes affecting the use of AI in employment, guidance and regulation in this area are clearly developing, with many employers’ priority being preparation for the EU AI Act.

The consultation response purports to address international collaboration, mentioning the G7, G20, OECD, Council of Europe, and UN under a section headed “working with international partners to promote effective collaboration on AI governance”. It also acknowledges the need to achieve “appropriate levels of coherence with other regulatory regimes” and to “minimise potential barriers to trade”. Yet there is no mention of the EU or the EU AI Act – legislation that is likely to set a baseline for global regulation in the way the GDPR has influenced the use of data globally.

Regardless of the government’s deafening silence on this point, as “deployers” of AI systems, employers will need to take a number of steps to prepare for the EU AI Act, and we will be writing about this in more detail soon.
