At xCHANGE 2024, we heard from Bryony Long (Partner, Lewis Silkin), JJ Shaw (Managing Associate, Lewis Silkin), Dr Erin Young (Alan Turing Institute), Nick Barron (MHP Group), and Gideon Spanier (Campaign) about how these risks can be managed by building trust, resilience, and authenticity in AI.
Building trust in AI
For AI to be successfully integrated into business strategies, it is essential to build and maintain trust. During the panel discussion, Nick highlighted three key elements of trust in AI:
- Competence, which involves starting with a clear problem to be solved and focusing on the long-term benefits of using an AI tool to solve it.
- Integrity, which requires companies to be transparent about their AI processes, show their thinking, and work openly, all while avoiding fads and trends.
- Benevolence, which emphasises the importance of being socially responsible and ensuring that AI systems are safe, sustainable, and respectful of employees.
Employee trust is particularly important, as many workers fear that AI will replace their jobs. To build trust, the panel agreed that businesses should engage employees in open discussions and involve them in AI-related decisions. As Erin explained, this can be achieved by setting up AI ethics committees, conducting ethical reviews and evaluations of AI systems, and operationalising these processes visibly. Additionally, supporting employees through any job transitions that may arise from AI implementation is vital.
Transparency is another critical factor in building trust, but it should focus on the consequential aspects of AI, ensuring that stakeholders understand the implications of AI decisions without being overwhelmed by unnecessary detail.
Ultimately, trust lies in the brand rather than the technology itself. Companies must avoid deceiving people, betraying brand values, discriminating, or over-promising on AI capabilities.
Fostering resilience in AI
Resilience in AI is about ensuring that AI systems can withstand and adapt to various challenges. One such challenge is the bias that is often built into AI technologies. Addressing these biases requires a proactive approach, and the panel emphasised the importance of diversity of thought and viewpoint in building resilience.
Businesses can also foster resilience by preparing for upcoming regulatory changes, such as the EU AI Act. JJ and Bryony often advise companies on the steps they need to take to comply with this new legislation. As JJ explained, this involves mapping the current and future uses of AI within an organisation and assessing how those uses fit within the legislative framework.
Ensuring authenticity in AI
Authenticity in AI is about maintaining the genuine nature of AI interactions. The panel advised against anthropomorphising chatbots and other AI tools; presenting AI as the technology it is, rather than attempting to humanise it, helps manage user expectations and keeps those interactions authentic.
In conclusion, the xCHANGE 2024 AI panel provided valuable insights into the critical aspects of trust, resilience, and authenticity in AI. By focusing on competence, integrity, and benevolence, businesses can build trust in their AI strategies. Encouraging diversity of thought and preparing for legislative changes will help foster resilience. Finally, maintaining the authenticity of AI interactions ensures that users have realistic expectations of AI capabilities. As AI continues to evolve, these principles will be essential in navigating the complex landscape of AI adoption and integration.