
AI regulators getting interested

24 March 2020

It has been reported that Amazon has removed thousands of listings for bogus items related to the coronavirus, and it seems likely that Amazon is using forms of artificial intelligence to help scour its website for such items. As the significance and range of uses of artificial intelligence grow, it is worth considering the recent flurry of regulatory developments in the sector in the UK and the EU.

Regulators and governments appreciate the upsides of AI but want to deal with the possible downsides too. So what has been happening?

UK developments

The UK government has put AI at the forefront of its industrial strategy, providing funding for autonomous vehicles as well as for PhDs in AI and related disciplines. It is also working with industry through initiatives led by the Law Commission, the Scottish Law Commission and the Centre for Connected and Autonomous Vehicles, including a consultation on driverless cars.

Centre for Data Ethics and Innovation reports

The UK government has also created the Centre for Data Ethics and Innovation, which was “tasked by the Government to connect policymakers, industry, civil society, and the public to develop the right governance regime for data-driven technologies”. This year, it has published a report calling for an overhaul of social media regulation and a review of online targeting.

The Centre’s recommendations are to make online platforms more accountable, increase transparency, and empower users to take control of how they are targeted. These include:

  • New systemic regulation of the online targeting systems that promote and recommend content like posts, videos and adverts
  • Powers to require platforms to allow independent researchers secure access to their data to build an evidence base on issues of public concern - from the potential links between social media use and declining mental health, to its role in incentivising the spread of misinformation
  • Platforms to host publicly accessible online archives for ‘high-risk’ adverts, including politics, ‘opportunities’ (e.g. jobs, housing, credit) and age-restricted products
  • Steps to encourage long-term wholesale reform of online targeting to give individuals greater control over how their online experiences are personalised

FCA collaboration with Alan Turing Institute

The Financial Conduct Authority has joined forces with the Alan Turing Institute on a year-long collaboration on AI transparency. The two organisations have published a blog post explaining the motivation for the collaboration and setting out an initial framework for considering transparency needs in relation to machine learning in financial markets, built around four questions:

  • Why is transparency important?
  • What types of information are relevant?
  • Who should have access to these types of information?
  • When does it matter?

They say that although the use of AI may enable positive transformations across the industry, it also raises important ethical and regulatory questions. AI systems must be designed and implemented in safe and ethical ways, especially when they significantly affect consumers.

ICO guidance on AI auditing framework

The Information Commissioner's Office is also consulting on its draft guidance on the AI auditing framework, its first guidance focusing on AI. The guidance contains advice on how to understand data protection law in relation to AI, together with recommendations for organisational and technical measures to mitigate the risks AI poses to individuals. It also aims to provide a solid methodology for auditing AI applications and ensuring they process personal data fairly. Aimed at both technology specialists developing AI systems and risk specialists whose organisations use them, the guidance is intended to help organisations assess the risks to rights and freedoms that AI can cause and identify appropriate measures to mitigate them. The consultation ends on 1 April 2020.

Developments at EU level

The European Commission is also looking at AI and has published a data strategy and a White Paper on AI as part of its wider digital strategy. The priorities for the digital strategy are:

  • Technology that works for people
  • A fair and competitive economy
  • An open, democratic and sustainable society

The White Paper on Artificial Intelligence sets out the Commission's proposals to promote the development of AI in Europe while ensuring respect for fundamental rights. AI is developing fast, and the Commission says Europe therefore needs to maintain and increase its level of investment. At the same time, a number of potential risks arise which need to be addressed, and the White Paper sets out options to maximise the benefits and address the challenges of AI. One key issue it deals with is facial recognition technology, on which the Commission wishes to open a debate about the circumstances in which its use should be permitted.

The European data strategy aims to enhance the use of data, in the hope that this will bring benefits to individuals and businesses. It should enable the development of new products and services, lead to productivity gains and resource efficiency for businesses, and improve the services provided by the public sector. It could, for example, help develop personalised medicine for patients, improve mobility for commuters, or contribute to Europe becoming the first climate-neutral continent by 2050 and support the EU's Green Deal.

AI is a fast-moving area, and any organisation already using or seeking to deploy AI, such as retailers, would be well advised to keep an eye on the regulators' activities over the next few months.
