Artificial Intelligence has firmly established itself as a transformative technology, capable of revolutionising everything from mundane tasks to the development of smart cities. However, the rapid advancement and widespread adoption of AI have also sparked concerns about its inherent risks.

At xCHANGE 2024, we heard from Bryony Long (Partner, Lewis Silkin), JJ Shaw (Managing Associate, Lewis Silkin), Dr Erin Young (Alan Turing Institute), Nick Barron (MHP Group), and Gideon Spanier (Campaign) about how these risks can be managed by building trust, resilience and authenticity in AI.

Building trust in AI

For AI to be successfully integrated into business strategies, it is essential to build and maintain trust. During the panel discussion, Nick Barron highlighted three key elements of trust in AI: 

  1. Competence, which involves starting with a clear problem to be solved and focusing on the long-term benefits of an AI tool to solve that particular problem.
  2. Integrity, which requires companies to be transparent about their AI processes, show their thinking, and work openly, all while avoiding fads and trends.
  3. Benevolence, which emphasises the importance of being socially responsible and ensuring that AI systems are safe, sustainable, and respectful of employees.

Employee trust is particularly crucial, as many workers fear that AI will replace their jobs. To build trust, the panel agreed that businesses should engage employees in open discussions and involve them in AI-related decisions.

"Security, explainability, transparency – this is all important for trust, but also to get ahead of nascent AI and digital regulation that we know will come into force in the next few years. This could be done by setting up an AI ethics committee – engaging the workforce, providing an ethical review and AI evaluation of whatever systems you're putting in place, and making sure these are operationalised visibly with humans at the centre" – Dr Erin Young, The Alan Turing Institute

Additionally, supporting employees through any job transitions that may arise due to AI implementation is vital, alongside broader conversations on AI literacy and AI upskilling and re-skilling.

Transparency in AI

Transparency is another critical factor in building trust but should focus on the consequential aspects of AI, ensuring that stakeholders understand the implications of AI decisions without overwhelming them with unnecessary details. 

"Transparency is really, really important but only for the consequential stuff. And I think it's always something to bear in mind when thinking about communications and systems design and so on, it's also really important to remember that the trust lies in the brand and not in the technology" – Nick Barron, MHP Group

Nick went on to advise:

  1. Don’t deceive people about the nature of your content.
  2. Don’t betray your brand values. Think about the role AI plays and how it’s consistent with your brand proposition.
  3. Don’t discriminate. Be careful about AI making decisions that seem unfair or capricious.
  4. Don’t over-promise on AI capabilities. Disappointment is the greatest betrayal of all.

Ultimately, trust is rooted in the brand rather than the technology itself. It’s essential to focus on building long-term trust in your brand, rather than treating transparency as solely a technology-driven issue.

Fostering resilience in AI

Resilience in AI is about ensuring that AI systems can withstand and adapt to various challenges. One such challenge is the inherent biases which are often built into AI technologies. These biases need to be addressed using a proactive approach and the panel emphasised the importance of diversity of thought and viewpoint in building resilience. 

In addition, businesses can also foster resilience by being prepared for upcoming regulatory changes, such as the EU AI Act. JJ and Bryony often advise companies on the steps they need to take to comply with this new legislation, which, as JJ explained, involves understanding the current and future uses of AI within the organisation and how those uses fit within the legislative framework.

Ensuring authenticity in AI

Authenticity in AI is about maintaining the genuine nature of AI interactions. The panel advised against anthropomorphising chatbots and other AI tools; keeping AI as robots rather than attempting to humanise them helps manage user expectations and maintains the authenticity of AI interactions.

In conclusion, the discussion provided valuable insights into the critical aspects of trust, resilience, and authenticity in AI. By focusing on competence, integrity, and benevolence, businesses can build trust in their AI strategies. Encouraging diversity of thought and preparing for legislative changes will help foster resilience. Finally, maintaining the authenticity of AI interactions ensures that users have realistic expectations of AI capabilities. As AI continues to evolve, these principles will be essential in navigating the complex landscape of AI adoption and integration.

This is an extract from the panel discussions at xCHANGE 2024. You can listen to the full discussions here.

Download the report