AI needs to be developed and used with real care to avoid crossing legal red lines or incurring avoidable costs further down the line.
Many tech start-ups are experts in AI-driven solutions, but the technology and the regulatory landscape are both changing fast. To stay clear of legal headaches and unnecessary costs down the road, you need to hit high standards for privacy, security and ethics.
Here are some things to consider:
- Privacy protections: Make sure your privacy and security measures are solid, and tackle any potential or built-in discrimination.
- Protect your IP: Consider how best to protect your own IP, and take care if you or your AI tools are using, or have used, anyone else’s.
- Explain AI decisions: Your users need to understand how your AI makes decisions.
We’ve seen various examples of controversy in the past few years. Algorithms used to determine individual pricing and targeted marketing have been caught perpetuating bias. And that was before the rise of generative AI models such as ChatGPT. Now we’re seeing copyright concerns over data scraped to train machine learning models, sophisticated deepfakes and fake news. Plus, these models’ propensity to “hallucinate” inaccurate information means that you need robust processes to double-check their output.
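To show what that double-checking might look like in practice, here’s a minimal sketch in Python. It’s purely illustrative: the model call and fact-checker below are hypothetical placeholders, and a real verification pipeline would be far more involved.

```python
# Minimal sketch of a "verify before publishing" gate for generative AI output.
# generate_draft and passes_fact_check are hypothetical placeholders, not real APIs.

def generate_draft(prompt: str) -> str:
    # Stand-in for a call to whichever generative model you use.
    return f"Draft answer about {prompt}"

def passes_fact_check(text: str, trusted_facts: list[str]) -> bool:
    # Stand-in check: in practice you might compare claims against a vetted
    # knowledge base, or simply queue the text for human review.
    return any(fact in text for fact in trusted_facts)

def publish_with_checks(prompt: str, trusted_facts: list[str]) -> str:
    draft = generate_draft(prompt)
    if not passes_fact_check(draft, trusted_facts):
        # Never ship unverified model output; escalate to a human instead.
        raise ValueError("Output failed verification - route to human review.")
    return draft
```

The point is the shape of the process rather than the placeholder logic: no generative output reaches a customer until it has passed an explicit check.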
These developments have led to deep scrutiny by regulatory authorities across the UK and EU. So much so that:
- The EU AI Act is designed specifically to regulate the development and use of AI, underpinned by fines of up to €35 million or 7% of annual global turnover for the most serious breaches.
- The EU has also proposed an AI Liability Directive, which aims to promote the development and roll-out of trustworthy AI in the EU by harmonising certain rules for compensation claims.
- The UK’s Competition and Markets Authority (CMA) is examining the use of algorithms to determine pricing to consumers as well as in the B2B context.
- The CMA is also investigating “dark patterns” on websites which “nudge” consumers into making decisions they might not otherwise have made. The Information Commissioner’s Office (ICO) has also been investigating this area with regard to nudging people into giving up their personal data.
- The ICO is consulting on the use of generative AI.
- Both the ICO and EU privacy regulators are considering controls on the use of facial recognition technology in public places.
The ICO has also issued guidance on how to explain the use of AI, covering both processes and individual decisions, plus how to audit its use. If you’re using AI in the UK, you need to give customers the lowdown: what your algorithm does and why you’re using it, your contact details, the data used, the human oversight controls in place, and the potential risks and technicalities of the algorithm.
The EU AI Act is laying down the law too. It bans AI applications that pose a “clear risk to fundamental rights”, such as certain uses of biometric data. AI systems considered “high-risk”, such as those used in critical infrastructure, education, healthcare, law enforcement, border management or elections, will have to comply with strict requirements. Low-risk services, like spam filters, will face the lightest regulation.
But heads up: even common HR tools such as CV-sorting software are considered high-risk. And if you’re working with general-purpose AI systems, you’ve got to be transparent about the material used to train their models, and that training must comply with EU copyright law.
The EU’s Platform Workers Directive aims to make algorithmic management in HR more transparent, ensuring automated systems are monitored by qualified staff and that workers have the right to contest automated decisions.
In March 2023, the UK government published its White Paper on regulating AI which includes five principles:
- Safety, security and robustness
- Appropriate transparency and explainability
- Fairness
- Accountability and governance
- Contestability and redress
These principles are intended to apply across different regulators and sectors, and are also relevant when considering data protection compliance. The UK government is taking a more hands-off approach than the EU, letting sector regulators handle their own areas, although the new government elected in July 2024 might change this approach.
Dynamic and personalised pricing is on the CMA’s radar too. The EU recently worked with the dating app Tinder to change some of its personalised pricing practices. The CMA has employed a whole team of data scientists to examine the impact of these practices. While they see the benefits, they’re worried about potential harms such as hidden use of algorithms, the targeting of vulnerable consumers and unfair outcomes. These issues often stem from manipulating consumer choices without consumers knowing.
The CMA is also concerned about “dark patterns” - where websites are set up to steer consumers down a particular path they might not actually want to take. This typically covers things like being directed to take out a subscription, where you need to press buttons very carefully to avoid committing yourself. There are steps you can take to fall on the right side of the line.
Make sure your AI tools can spot and reduce bias, and that you understand their capabilities and limitations (especially in relation to generative AI). It’s crucial to ensure accuracy and the fair treatment of users.
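To make “spotting bias” concrete, here’s a minimal sketch of one common check: comparing outcome rates between two groups of users (the demographic parity difference). The data and the 0.2 threshold below are invented for illustration; a real audit would be far more rigorous.

```python
# Minimal sketch: compare favourable-outcome rates between two user groups.
# Toy data and threshold are invented for illustration only.

def positive_rate(decisions: list[int]) -> float:
    # Share of favourable outcomes (1s) among a group's decisions.
    return sum(decisions) / len(decisions)

def demographic_parity_difference(group_a: list[int], group_b: list[int]) -> float:
    # Near 0 suggests similar treatment; a large gap flags possible bias.
    return abs(positive_rate(group_a) - positive_rate(group_b))

# Example: loan approvals (1 = approved) for two applicant groups.
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # 75% approved
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # 37.5% approved

gap = demographic_parity_difference(group_a, group_b)
print(f"Demographic parity difference: {gap:.2f}")
if gap > 0.2:  # illustrative threshold, not a legal standard
    print("Large gap between groups - investigate before relying on this tool.")
```

A single metric like this won’t prove or disprove discrimination on its own, but running checks of this kind regularly is a sensible first step towards the accuracy and fairness regulators expect.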
Last but certainly not least, given all the risks and controversies we’ve covered, you’ll likely need to carry out impact assessments for data privacy and for equality. Plus, have and follow a solid policy on data ethics.
Need help identifying and managing the AI risks? Get in touch with our team.
