We are already seeing AI being harnessed as a powerful tool in litigation, arming litigants in person with a “highly productive” (albeit sometimes very misguided) assistant. In this new world, should the parties be disclosing their use of AI? How does requesting, or volunteering, this information sit within the existing rules and principles?

As we explored in our recent article, employees are increasingly turning to generative AI tools to draft lengthy and often legalistic grievances. Naturally, the use of this technology has extended into the Tribunal process itself, with pleadings, witness statements and correspondence now often carrying the familiar whiff of AI. 

Why does this matter and what do the parties need to do about it?

Why does AI involvement matter?

It’s now widely understood that generative AI is prone to exaggeration and even complete fabrication (hallucinations, for those who adopt the lingo). Whilst this of course carries risk in every context, its use in litigation raises particular concerns:

  • Accuracy: Given AI’s capacity for hallucinations, has it generated false information?
  • Credibility: Is a witness statement generated by AI really in the witness’s own words? When someone has leant heavily on AI to prepare their evidence, do they really understand it and stand by it? Does the statement reflect their actual knowledge, or the documents the AI has been fed?
  • Privilege: This raises some tricky and untested principles. Can the prompt itself be privileged, or should the focus be on the document it generates? And if the output is disclosable, what might that give away as to the user’s motivations and their genuine belief in the document produced?
  • Privacy: What information has the employee fed into the AI tool, and does that breach their obligations or amount to an illegitimate use of personal data?

For these reasons, there may appear to be tactical advantages in flushing out a party’s use of AI. However, asking questions about whether and how AI has been used of course cuts both ways. Litigation is generally a reciprocal process: if questions are asked of claimants about their use of AI, respondents must be ready to answer them too. 

What must be disclosed?

The standard case management order in Employment Tribunal proceedings is for both parties to send each other a list and copies of all documents in their possession that are relevant to the issues the Tribunal needs to determine. This process is known as disclosure. 

“Documents” has a broad meaning and includes “anything in which information of any description is recorded”. As a class of documents, this could include AI prompts. However, it’s more likely that the document of interest would be the prompt along with the results, i.e. the generated output. Evidence of the prompts used to generate a document would (or at least should!) only be found in a draft version of that document, or more likely in the history of the AI tool itself. However, the key question will be whether they are actually relevant.

Pleadings and witness statements

Questions of privilege aside, drafts of pleadings are not ordinarily disclosed in tribunal proceedings, and the fact that AI has been used would not automatically change that. If, however, there is a genuine question as to authenticity or evidence of fabrication, the drafts or background documents, such as the AI prompt history, could theoretically themselves be relevant documents. Prompts that, say, seek to position things more aggressively, or to address specific allegations, could certainly shine a light on these issues.

But experience would suggest that Tribunals, mindful of the need to ensure the proceedings are dealt with in a proportionate way and keen to avoid satellite disputes around disclosure, would be resistant to this in many cases. Also, although the technology may be new, Tribunals have always had to grapple with questions of accuracy, exaggeration and the reliability of evidence. They are therefore likely to feel equipped to draw conclusions on these points without enquiring too far into how the document was created.

How to get disclosure of prompts 

In most cases, it’s unlikely that the parties will, by default, consider AI prompts as falling within the scope of standard disclosure. It’s likely that proactive steps will be needed to extract this information from the other side. 

How might this be achieved? The first step would be to ask the other side to voluntarily disclose the prompts (and output). Failing that, a formal application would need to be made to the Tribunal for specific disclosure or for further information. The test the Tribunal will apply in these circumstances is whether the disclosure is necessary for the fair disposal of the proceedings. 

It’s no secret that the Employment Tribunal system is significantly overburdened. A tribunal will therefore be reluctant to introduce a potentially broad category of additional documents unless their relevance to the proceedings is clear. This means that a request for disclosure of information that is narrowly framed and clearly linked to the issues in the proceedings is more likely to be successful. A generic request, without further context, for documents relating to the use of AI, or - if requesting further information - a bland “Did you use ChatGPT?”, is likely to be looked upon unfavourably as a ‘fishing expedition’. 

In terms of what to ask for or about, there is a spectrum between (a) asking whether AI was used and broadly for what tasks; (b) seeking the underlying prompts and outputs; and (c) demanding drafts. Unless there is good reason to order otherwise, a tribunal is likely to want to avoid a proliferation of documents, and this should be factored into the scope of a request. What might be considered reasonable or proportionate? Framing a question around transparency or accuracy – for example, asking whether, for the purposes of transparency, generative AI was used to draft a claim and, if so, which sections – is likely to be more reasonable.

Again, flushing out AI use by disclosure is not the only option. Where AI‑linked exaggeration is suspected on the other side, cross‑examination is often the best way to test whether the witness truly stands behind the wording and the facts. 

Be careful what you wish for…

As flagged above, litigation is usually reciprocal. An order about a litigant in person’s use of AI may prompt scrutiny of the respondent’s internal use of AI to generate letters, investigation reports or consultation scripts, for example. It’s therefore important to factor in the strategic consequences of pursuing this line of enquiry.

What about privilege?

As with any type of document, if the AI prompts (and responses) are privileged they will not have to be disclosed. However, there are no reported cases that closely examine the question of privilege in relation to AI prompts, so it is certainly an area to tread carefully. 

It’s not entirely clear how privilege would apply to a litigant in person in this context. This is likely to come down to the (untested) question of whether inputting information into an AI tool constitutes a communication with a third party and brings it within the scope of litigation privilege. However, even if that hurdle is cleared, there may be a question over whether privilege has been lost because the information is no longer confidential. This is because inputting privileged information into an AI tool could amount to sharing it with a third party and therefore losing confidentiality.

Whether confidentiality has been preserved is likely to depend on the type of AI platform used. Open, public platforms, such as freely accessible web-based AI tools, typically operate on shared infrastructure and may store or process user inputs in ways that risk exposing sensitive information. Although this will not mean the prompt, output or any supporting documents are freely available in their original form, these platforms often lack robust controls over data access, retention and usage, and it is certainly arguable that this would be enough to undermine confidentiality.

In contrast, closed AI models - such as enterprise-grade or locally hosted systems - are designed with stricter governance, offering enhanced security features like data isolation, encryption, and controlled access. 

This of course will be a consideration for employers too, who may have generated documents using AI. The distinction between open and closed platforms reinforces the need for employers to implement clear policies on how AI tools should be used in the workplace.

Are there any AI specific rules?

There are currently no specific Employment Tribunal rules requiring parties to certify whether AI has been used to draft pleadings or other documents. More directly relevant to this issue are civil court materials, such as CPR 32 and Practice Direction 57AC in the Business & Property Courts. These emphasise that witness statements should reflect the witness’s own words and require transparency around how they were prepared. While employment tribunals are not bound by these rules, their principles may carry persuasive weight if questions arise about the preparation of a statement.

Recent judicial and professional guidance in the UK makes clear that users remain fully responsible for any AI-generated content submitted to a court or Tribunal, and employment tribunals are likely to adopt the same approach. Interestingly, the recent decision in Commerzbank AG v Ajao is a timely reminder of the risks of giving false evidence in the tribunal. In this case, the High Court held that a claimant in employment tribunal proceedings was in contempt of court for lying under oath. Although the Civil Procedure Rules do not apply in the employment tribunal, signing a statement of truth without an honest belief in its accuracy can still amount to contempt of court, which the High Court has the power to punish. It remains to be seen whether this most draconian sanction will be imposed for over-reliance on AI.

Internationally, some courts have gone further and this approach could potentially shed light on the direction of travel here. For example, policy in parts of Canada requires parties to certify any use of AI in court filings; and in the US, several judges have issued standing orders mandating disclosure of AI assistance and, in some cases, certification that no confidential information was shared with public tools. 

The Civil Justice Council is considering whether rules are needed to govern the use of AI for the preparation of court documents in the UK, so it’s possible that similar requirements will form part of future domestic rules. However, comments by senior judiciary in this country have been supportive of the use of AI by litigants in person, given the scope for the technology to help clarify issues.

Practical tips

Given the complexities and potential pitfalls, it’s important for employers to take a strategic and informed approach to the use of AI both during litigation and before a dispute has reached that stage:

  • Keep AI use under control: As set out above, there are a range of reasons why the use of AI tools at work needs to be carefully managed by an employer. A policy that restricts the data that can be fed into an AI platform, and which platforms are authorised for use, should mitigate some of those risks. Staff should also be trained to understand the importance of fair, transparent and ethical use of generative AI. And ensuring effective human oversight prevents AI-generated documents going out unchecked.
  • Take stock: At the early stages of a Tribunal process, employers should ensure that they understand whether and how they have used AI to generate documents that are likely to be disclosed in the course of the proceedings. This will enable a proper assessment of whether the duty of disclosure requires the AI prompt to be disclosed.
  • Be realistic: If acting against a litigant in person who you know is putting AI to good use, it’s important to be realistic about whether to make this an issue in and of itself. The Tribunal is also likely to see the potential that AI has to ensure equality of arms between represented and unrepresented parties. For example, AI can help clarify issues and improve coherence. The Tribunal is also likely to be well versed in determining when a witness statement – whether AI-generated or not – does not stand up to scrutiny. The important thing to focus on is the outcome and not the process – the fact that something is exaggerated is more important than establishing how it came to be.
