Impact of AI in retail

28 November 2019

Chatbots, facial recognition, biometrics and a host of other Artificial Intelligence (“AI”) technologies are being adopted by the retail sector at an increasingly rapid rate, and it has been predicted that by 2020, 85% of customer interactions will be managed by AI.

Nowadays customers don’t even need to leave their homes: apps such as Gap’s DressingRoom enable customers to enter basic details, such as height and weight, and see how outfits would look on them. The North Face have gone one step further in the US, deploying AI to help customers select the perfect outfit from the comfort of their own homes, based on predicted weather and travel plans.

However, it is not just about at-home shopping. AI is also being deployed by a number of retailers in store to drive greater footfall. Zara trialled holographic mannequins, which appeared and moved around its stores wearing clothes available to buy when customers used an app function on their smartphones. Charlotte Tilbury have developed “Magic Mirrors”, which let consumers virtually try on ten different make-up looks in store and then purchase the products from whichever look they liked best. Chatbots, which have more commonly been used by online retailers, are now increasingly being used in-store too: Whole Foods, for example, uses chatbots to improve the customer experience by helping customers find where products are located in its stores.

Another key way in which AI is being used in retail, particularly in the fashion industry, is visual recognition. Algorithms analyse product images and recommend visually similar items of apparel to customers, enhancing the quality of search results and providing more personalised recommendations.
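To make that concrete, here is a minimal sketch of the nearest-neighbour idea that typically underpins such recommendations, assuming each product photo has already been reduced to an embedding vector by an image model. The catalogue, item names and vectors are illustrative placeholders, not any retailer’s actual system:

```python
import numpy as np

# Toy catalogue: each item is represented by an embedding vector that an
# image model would produce from its product photo. These short vectors
# are placeholders for illustration only.
catalogue = {
    "floral midi dress":    np.array([0.9, 0.1, 0.3]),
    "striped shirt dress":  np.array([0.8, 0.2, 0.4]),
    "leather biker jacket": np.array([0.1, 0.9, 0.7]),
    "denim jacket":         np.array([0.2, 0.8, 0.6]),
}

def cosine_similarity(a, b):
    """Similarity of two embedding vectors, ranging from -1 to 1."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def recommend_similar(query_name, top_k=2):
    """Rank the rest of the catalogue by visual similarity to one item."""
    query = catalogue[query_name]
    scores = {
        name: cosine_similarity(query, emb)
        for name, emb in catalogue.items()
        if name != query_name
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

print(recommend_similar("floral midi dress"))
# The other dress scores highest, because its vector points the same way.
```

In production the embeddings would come from a trained vision model and the search would run over an approximate nearest-neighbour index rather than a Python loop, but the ranking principle is the same.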

Whilst this is all very progressive, alarm bells are being raised about the volume of personal data that needs to be processed for AI to be deployed most effectively, and there are broader concerns about how this exciting use of technology can be effectively regulated.

How is AI being regulated?

The head of the House of Lords Select Committee on AI has warned that, in the absence of AI regulation, companies such as Cambridge Analytica could set a precedent for dangerous and unethical use of the technology. Further, Elon Musk has expressed the view that AI is a rare case where regulation needs to be proactive, because being reactive runs the risk of coming too late.

Although there is as yet no AI-specific regulation, privacy regulators in particular are becoming increasingly active in this space.

Key things that are being examined are:

- Opacity: how to explain what an AI system is doing and how people will react to its decisions, particularly where individuals are already generally nervous about decisions made by computers.

- Complexity: the challenge of assigning accountability, especially where it is difficult to understand what a system is actually doing.

- Unfairness: how to make sure new technology doesn’t repeat biases that already exist in society. A well-known example of AI reflecting such biases is a Google image search for ‘doctors’ returning results mainly of men in white coats, while a search for ‘nurses’ returns mainly women in white coats.

The UK data protection regulator, the Information Commissioner’s Office (“ICO”), is particularly active in this area and has been working closely with other UK regulators, such as the Gambling Commission and the Financial Conduct Authority, to ensure AI doesn’t prey on the vulnerable.

ICO Auditing Framework

The ICO has spent the last year working with the Alan Turing Institute to put together an Auditing Framework for AI. The aim of the framework is to set out what ‘good’ looks like. The ICO has advised that it will publish a draft for consultation in early January 2020, with a view to delivering the final framework and associated guidance for organisations by spring 2020.

Despite the framework not yet being published, the ICO has been very transparent about its thinking on AI, publishing a series of eight blog posts over the past year on the development of the framework.

In its July blog, the ICO set out a proposed structure for the framework, confirming that the focus would be on the following areas:

  1. meaningful human reviews in non-solely automated AI systems;
  2. accuracy of AI systems outputs and performance measures;
  3. known security risks exacerbated by AI;
  4. explainability of AI decisions to data subjects; and
  5. human biases and discrimination in AI systems.

The ICO has assured organisations that they are not expected to redesign their risk management processes. However, they will be expected to review those processes and ensure they are fit for purpose where AI is used to process personal data.

So what should retailers be thinking about now in respect of AI?

Although we are still waiting for formal guidance on what ‘good’ looks like when it comes to AI, we have set out below some of the factors that retailers should be thinking about now, from a data protection perspective, when deploying a new AI solution:

- AI governance within the organisation – do you have a governance structure in place? Having a governance structure for data protection responsibilities is imperative for good data protection compliance generally. The ICO’s October blog emphasises the importance of the governance structure when it comes to the specific use cases of AI, the users affected by it, the overlapping regulatory requirements and the social, cultural and political considerations. Given the complexity of these issues, it is important for accountability to sit at the top of the organisation (i.e. with C-suite members) and for decision making not to be delegated to more junior employees such as engineers or technicians. Retailers need to ensure this governance structure is in place and that the use of AI is being discussed at the appropriate level of the organisation.

- Risk appetite – what is your data protection risk appetite? The October blog highlighted the importance of organisations developing a mature understanding and articulation of data protection risk. If a retailer considers and sets its risk appetite, it will make more appropriate and meaningful decisions when implementing AI technology.

- Privacy by design/data minimisation – these concepts need to be at the forefront of a retailer’s mind when considering the use of AI. For example, retailers should think carefully about what data is actually necessary for the effective deployment of the AI: the more data the retailer collects, the more responsibility and risk it takes on. The retailer should therefore assess which personal data is strictly necessary and which is merely optional or ‘nice to have’. Retailers should also think about how data subjects can practically exercise their rights over the personal data being processed by the AI solution. For example, if a data subject makes an access request, how can their information be extracted and provided to them? This needs to be considered and built into the solution from the outset, as the first sketch after this list illustrates.

- Data Protection Impact Assessments (“DPIAs”) – are you carrying out DPIAs? Most AI solutions are likely to be considered high-risk processing activities given the vast quantities of personal data being processed, so retailers should carry out DPIAs before deployment. All risks should be properly assessed and documented and, where necessary, regulators should be consulted. In its October blog, the ICO emphasised that DPIAs, as well as helping to demonstrate accountability, can be practically useful, acting as a roadmap for identifying and controlling data protection risks in relation to AI.

- Lawful basis – does the retailer have a lawful basis to process the personal data required for the AI solution to work? Retailers should consider whether they can rely on legitimate interests or necessity for the performance of a contract, or whether consent, or even explicit consent, is required due to the nature of the personal data being processed or the nature of the processing activity. If explicit consent is required (for example, because the personal data relates to health, ethnic origin or religious beliefs, or because the AI solution uses biometric data such as facial recognition), retailers need to consider how they can effectively obtain it.

- Transparency – how can GDPR transparency requirements be met? Prior to deploying the solution, retailers need to understand enough about how the data is being processed to be able to explain that processing to users clearly and in a way that is easy to understand.

- Security – finally, and arguably most importantly, retailers need to think about what security measures are in place to protect the vast quantities of personal data they will potentially be processing. Robust security helps to prevent data breaches, which can result in significant regulatory fines and, as recent case law developments show, potential class actions. One illustrative measure appears in the second sketch after this list.
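First, picking up the privacy by design point above: below is a minimal sketch of how data minimisation and subject access requests might be designed into an AI solution from the outset, by filtering inputs against an agreed field list and indexing stored records by a subject identifier. The field names, store and functions are hypothetical illustrations, not a reference design:

```python
from collections import defaultdict

# Fields agreed as strictly necessary for the AI model to work, versus
# merely 'nice to have'. Both sets are illustrative assumptions.
REQUIRED_FIELDS = {"height_cm", "weight_kg"}
OPTIONAL_FIELDS = {"style_preference"}

# Records are keyed by a subject identifier at write time, so an access
# request becomes a lookup rather than a forensic search across systems.
store = defaultdict(list)

def record_input(subject_id, data):
    """Minimise before storing: drop any field that was never agreed."""
    allowed = REQUIRED_FIELDS | OPTIONAL_FIELDS
    store[subject_id].append({k: v for k, v in data.items() if k in allowed})

def subject_access_request(subject_id):
    """Return everything held on a data subject, ready to hand over."""
    return store[subject_id]

record_input("cust-42", {"height_cm": 170, "weight_kg": 65, "shoe_size": 7})
print(subject_access_request("cust-42"))
# [{'height_cm': 170, 'weight_kg': 65}] - shoe_size was never stored
```

The design choice worth noting is that minimisation happens on the way in, not as a later clean-up: data that is never collected never has to be secured, disclosed or deleted.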
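Second, on security: there is no single right answer, but one common technical measure is encrypting personal data at rest, so that a leaked datastore does not expose plaintext records. This sketch uses the third-party cryptography package and deliberately simplifies key management, which in practice would sit in a dedicated secrets store:

```python
from cryptography.fernet import Fernet  # pip install cryptography

# In practice the key would be loaded from a key-management service,
# never generated and held alongside the data it protects.
key = Fernet.generate_key()
cipher = Fernet(key)

record = b'{"customer": "cust-42", "height_cm": 170}'
token = cipher.encrypt(record)          # store the ciphertext, not the record
assert cipher.decrypt(token) == record  # recoverable only with the key
```

Encryption at rest is only one layer; access controls, audit logging and breach response plans sit alongside it.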
