
Sports Q&A - Is there trouble on the horizon for algorithms in sport?

10 September 2020

Algorithms have featured heavily in the news lately, albeit for the wrong reasons. The algorithms used to determine UK school examination results were eventually abandoned after a chorus of derision and condemnation over their apparent discriminatory bias against students from disadvantaged backgrounds and some questionable outcomes.

But algorithms are used in many aspects of our lives and they are here to stay. Sport is no exception, and indeed their prevalence and importance in many sports has been steadily growing for many years. In light of the A-level fiasco, the question must now be asked whether the use of algorithms in sport will fall under a similar public microscope and whether legal challenges of one kind or another will increasingly start to emerge.

How are algorithms used in sport and how do they work?

An algorithm is, at its simplest, computer code used to navigate, and often to develop, a complex decision tree very quickly. Algorithms can be purchased “off the shelf” (for instance, some recruitment tools) or may be developed for bespoke purposes using customised data sets. The underlying theory is to harness programming power and data to inform decisions and predict outcomes. Their sophistication varies enormously, from basic decision trees (the NHS 111 “pathways” and the CEST IR35 tool) to the predictive shopping algorithms used by major online retailers, and on to complex programmes incorporating AI “machine learning”, where the algorithm teaches and refines itself to better achieve the objectives set for it.
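By way of illustration, the sketch below shows the decision-tree end of that spectrum in Python. The questions, thresholds and outcomes are entirely hypothetical, not drawn from any real NHS 111 or CEST logic; the point is simply how fixed branching rules turn inputs into a decision.

```python
# A minimal, purely illustrative decision-tree "pathway". Every question,
# threshold and recommendation here is hypothetical.

def triage(symptoms: dict) -> str:
    """Walk a fixed decision tree and return a (hypothetical) recommendation."""
    if symptoms.get("chest_pain"):
        return "Call 999"                # branch 1: treat as an emergency
    if symptoms.get("fever") and symptoms.get("duration_days", 0) > 3:
        return "Speak to a GP today"     # branch 2: urgent but not an emergency
    return "Self-care advice"            # default branch

print(triage({"fever": True, "duration_days": 5}))  # -> "Speak to a GP today"
```

A machine-learning tool differs in that the branching logic is not hand-written like this but inferred from data, which is precisely where the problems discussed below can creep in.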
 
The many and varied uses of algorithms in sport are fairly obvious: predicting opposition responses to set plays, calculating player transfer values, planning the optimum bowling or pit-stop strategy, comparing athlete performance and even making injury risk assessments. In the climate of soaring financial rewards for sporting success, algorithms have become a critical tool in the quest for marginal gains. Arguably their most significant use in recent years can be seen in the growth and diversity of the global sports betting market.

Away from the pitch and track, sports organisations are also turning to algorithms to help them with other aspects of their operations. Recruitment is a prime example, and one where the danger of discrimination and bias rears its head again. Bias and, indeed, unlawful discrimination can occur by reason of the objectives set for the algorithm; the data used to build the algorithm; the causal links identified by the algorithm; or the data used when running it. For example, something clearly went awry in a reported case where a CV-screening tool identified being called Jared and having played lacrosse at high school as the two strongest correlates of high performance in the job.
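To make that failure mode concrete, here is a deliberately crude sketch. The data is invented and the scoring is naive, but it shows how a tool trained on a small, skewed sample of past hires can surface incidental traits as “predictors” of performance.

```python
# A toy illustration (hypothetical data) of spurious correlates: if past
# "high performers" in a small, skewed sample happen to share incidental
# traits, a naive screening model will score those traits highly.

from collections import Counter

# Each record: a set of CV features, plus whether the past hire was rated highly.
history = [
    ({"name_jared", "lacrosse", "maths_degree"}, True),
    ({"name_jared", "lacrosse"},                 True),
    ({"maths_degree"},                           False),
    ({"lacrosse"},                               True),
    ({"name_sam", "maths_degree"},               False),
]

# Naive scoring: how often does each feature co-occur with a "high" rating?
high, total = Counter(), Counter()
for features, rated_high in history:
    for f in features:
        total[f] += 1
        high[f] += rated_high

for f in sorted(total, key=lambda f: high[f] / total[f], reverse=True):
    print(f, round(high[f] / total[f], 2))
# "name_jared" and "lacrosse" come out on top -- an artefact of the sample,
# not evidence that either trait causes good performance.
```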

Do they reduce or perpetuate bias?

Academics, especially in the US, extensively debate the pros and cons of algorithms and whether they increase or diminish bias and unlawful discrimination in employment and selection decisions. Proponents point out that, whilst some bias is inevitable, algorithms reduce the subjective and sub-conscious bias involved in decisions made by humans.
 
There is evidence that algorithms are capable of making better, quicker and cheaper decisions than humans. On the face of it, algorithms bring objectivity and consistency to decision-making. However, the Ofqual debacle highlights the potential for automated decisions to go badly wrong. Just because algorithms are capable of making better decisions does not mean that they always will.

More than 30 years ago, St George’s Hospital Medical School in London developed an algorithm designed to make admissions decisions more consistent and efficient. The algorithm was found to discriminate against non-European applicants. Interestingly, however, the school nonetheless had a higher proportion of non-European students than most other London medical schools, suggesting that the traditional recruitment methods used by the other medical schools discriminated even more.
 
Amazon attracted a lot of attention when, in 2018, it abandoned an AI-developed recruitment tool that reportedly favoured male candidates. The tool had been developed over the previous four years and trained on ten years of hiring data, and the AI programme, it was reported, had taught itself to favour terms used by male candidates. Even though the algorithm was not given candidates’ gender, it reportedly identified explicitly gender-specific words such as “women’s” (as in “women’s sports”); when these were excluded, it moved on to implicitly gender-based words such as “executed” and “captured”, which are apparently used much more commonly by men than by women.
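That proxy problem is easy to reproduce in miniature. In the hypothetical sketch below (invented CV snippets and labels, using standard scikit-learn components), stripping the explicitly gendered token out of the training text does not remove the bias: the model simply shifts weight onto other words that happen to correlate with the biased labels.

```python
# A toy sketch of the proxy problem, on invented CV snippets with labels that
# mirror (hypothetical) biased past hiring. Illustrative only: real screening
# tools are far more complex, but the failure mode is the same.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

cvs = [
    "captain of women's chess club, analytics project",
    "women's football team, managed data pipeline",
    "executed trading strategy, captured market share",
    "captured requirements, executed delivery plan",
]
hired = [0, 0, 1, 1]  # biased historical outcomes

for strip in (False, True):
    # Optionally remove the explicitly gendered token before training.
    docs = [d.replace("women's", "") for d in cvs] if strip else cvs
    vectoriser = CountVectorizer()
    X = vectoriser.fit_transform(docs)
    model = LogisticRegression().fit(X, hired)
    weights = dict(zip(vectoriser.get_feature_names_out(), model.coef_[0]))
    ranked = sorted(weights, key=weights.get)  # most penalised first
    label = "gendered token stripped" if strip else "raw text"
    print(f"{label}: most penalised {ranked[:2]}, most favoured {ranked[-2:]}")
# On the raw text, "women" is heavily penalised; once it is stripped, other
# words from the unsuccessful CVs absorb the signal, while "executed" and
# "captured" remain favoured.
```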
 
The risk of similar issues arising from the use of algorithms in a sporting context is not difficult to see. The hiring, selection and assessment of individuals is at the very core of many sports, from soccer scouting and athlete selection policies to NFL draft picks and tennis tournament seeding, yet the underlying algorithms that may influence many of these decision-making processes remain opaque and largely unscrutinised.

Are we at risk of legal claims?

Legal cases in the UK, or even the US, challenging algorithm-based decisions have been very rare to date. Inevitably that will change, but at the same time the UK courts and other adjudication bodies seem ill-prepared to deal with this. Discrimination claims are clearly the most fertile area for potential litigation: whether a particular algorithm that was used to make, or assist in making, a decision has (albeit probably unintentionally) resulted in an unlawfully discriminatory outcome adversely affecting an individual or group of individuals. Cases are likely to become more common for a number of reasons, the main one being the increased use of, and attention paid to, algorithm-based decisions; but the financial stakes in sport could conceivably take this to a whole other level.
 
The true basis on which a decision has been made can normally be determined, albeit not always easily, where it is data-based. Unpicking the true motivations behind human-based decisions is often not possible. There is evidence that people are more likely to mistrust a computer-based recruitment decision than a human-made one, a phenomenon known as “algorithm aversion”. People are more likely to challenge decisions which they do not understand. Human decisions, though, are not as transparent as they might initially seem: whatever explanation might be given, there is plenty of evidence that selection decisions made by humans are influenced by sub-conscious factors and rationalised after the event.

Algorithm-based decisions are particularly vulnerable to discrimination claims. As a rule, discrimination laws were not designed to meet this challenge and are ill-equipped to do so. Disadvantaged sportspersons or employees might argue that an algorithm-based decision unlawfully directly or indirectly discriminated against them. The organisation may need to prove that it did not discriminate or that the indirectly discriminatory impact of the algorithm is objectively justified. In many cases, the organisation will not understand how its algorithm actually works (or even have access to the source code). How then will it satisfy these tests? Many suppliers of algorithms reassure clients that their code has been stress-tested to ensure that it does not discriminate, but a court or tribunal is unlikely to accept a supplier’s word for this. Would independent verification be enough? US verification is unlikely to suffice in the UK, as UK and European discrimination laws are very different from US ones.

Would the disclosure of test and verification data, or even the code itself, be ordered? Algorithm suppliers would no doubt regard these as important trade secrets to be withheld at all costs. Will experts be needed to interpret this information? Can the algorithm supplier be sued for causing or inducing a breach of equality laws, or for helping its client to do so? More often than not the supplier will be based in the US, introducing practical and legal complications. The only conclusion to be drawn for now is that there may be some trouble on the horizon for sports organisations, governing bodies and the courts.

Do they infringe data protection principles?

The use of algorithms to make decisions about athletes also raises difficult data privacy issues. Subject to some limited exceptions, the General Data Protection Regulation (and the Data Protection Act 2018) prohibits “solely automated decisions, including profiling, which have a legal or similarly significant effect” on data subjects. Any decision that excludes or discriminates against individuals, especially as regards their employment opportunities, behaviour or choices, is likely to have a “legal or similarly significant effect” and should not be taken without some human involvement (which needs to be more than token).

In the sports context, organisations may be able to take solely automated decisions, even if they have such an effect, if the processing is necessary for the performance of a contract, which could include an athlete’s employment contract. Algorithms can therefore be used, for example, to determine an athlete’s selection for a particular game, but ideally there would be some form of human decision-making at the end. It’s also crucial to make athletes aware of any automated decision-making (whether or not there is some human involvement) and to provide meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing.
 
These and other data protection issues have been highlighted in the ICO’s recently published guidance on AI and data protection, which stresses the importance of processing personal data fairly, transparently and lawfully and, hence, in a non-discriminatory manner. The guidance illustrates how discrimination can occur if the data used to train a machine-learning algorithm is imbalanced or reflects past discrimination.
 
