How do organisations align technical teams, legal experts and business leaders to create effective AI governance and responsible deployment without slowing down innovation or effective decision-making?

As AI adoption accelerates across every sector, leadership teams are being pushed to shift from experimentation to structured, organisation-wide governance. Effective AI use now requires genuine cross-functional collaboration across legal, procurement, technology, HR, compliance and creative teams, rather than isolated or siloed approaches.

Legal stewardship is also becoming fundamental to AI strategy. With AI transforming workflows, content creation, procurement and workplace decision-making, legal teams have a central role in managing risks around data, IP ownership, bias, discrimination and transparency, as well as keeping up with fast-evolving regulatory regimes.

Start with strategy

Effective AI governance begins with a strategic anchor. Start by asking questions such as: What is your AI north star? What principles should your AI governance framework follow? What are your strategic priorities when it comes to using AI? What is your tolerance for risk? Only once you have your north star and guiding principles, as well as an understanding of your risk tolerance, can you begin to build your framework effectively.

Build on existing frameworks

Good AI governance is essentially the effective management of AI risk. Where organisations have existing risk management frameworks in place, they should, where possible, leverage these so that AI risk can be managed alongside other business risks in a way that is familiar to the business.

However, because AI regulation and risk are continually evolving and use cases for AI are expanding at pace, organisations need to be able to quickly identify and remedy any gaps where existing frameworks fail to manage AI risk effectively. In other words, you need the ability to manoeuvre rather than continue working within a rigid framework that is not fit for purpose and stifles innovation.

Establish clear accountability

Ambiguity around who is responsible generally compounds AI risk. Organisations should build clear lines of responsibility into their structure, including ownership of policy, process and compliance. While some organisations assign a single person to oversee AI governance, we typically see organisations establishing cross-functional AI committees comprising legal, procurement, data protection, IT, HR, compliance and audit.

While decision-making by committee can slow things down, multi-disciplinary input helps ensure decisions are considered from every angle, which is essential when assessing and mitigating AI risk. If you adopt a committee approach, one way to avoid bottlenecking decision-making is to escalate to the committee only those decisions that carry high risk or significant business impact, delegating all other decisions in accordance with an agreed framework.

In practice, we often see operational teams handle day-to-day risk, while the committee or senior executive approves guidance and controls around decision making as well as high-risk use cases. 
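This kind of delegation framework can be expressed as a simple routing rule. The sketch below is purely illustrative: the risk tiers, impact labels and escalation thresholds are assumptions for the example, not a prescribed standard, and a real framework would be agreed by the organisation's own committee.

```python
# Illustrative risk-based escalation rule. The tiers ("low"/"medium"/"high"),
# impact labels and routing logic are hypothetical assumptions for this sketch.
from dataclasses import dataclass


@dataclass
class AIUseCase:
    name: str
    risk_level: str       # e.g. "low", "medium", "high"
    business_impact: str  # e.g. "minor", "significant"


def route_decision(use_case: AIUseCase) -> str:
    """Escalate only high-risk or high-impact decisions to the AI committee;
    everything else is delegated to the operational team."""
    if use_case.risk_level == "high" or use_case.business_impact == "significant":
        return "AI committee"
    return "operational team"


print(route_decision(AIUseCase("meeting summariser", "low", "minor")))
# operational team
print(route_decision(AIUseCase("CV screening tool", "high", "significant")))
# AI committee
```

The point of encoding the rule, even informally, is that delegation stops being a matter of individual judgement and becomes an agreed, auditable policy.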

Empower your people

Clear, well-communicated policies are essential to ensure responsible AI use and avoid unauthorised "shadow AI". Making your policies user-friendly, accessible and tailored to the audience will encourage successful AI adoption. 

Alongside easy-to-use policies, it is important (and a legal requirement in the EU) to have appropriate training programmes in place to ensure AI literacy across the workforce. An AI-literate workforce will be aware of the risks as well as the rewards and, importantly, will understand the limitations of AI tools. This knowledge will in turn help ensure responsible deployment within your organisation.

Encouraging your staff to provide feedback on the tools and share use cases not only helps ensure the tools are being used as intended, it also helps unlock the tools’ full potential across your organisation as teams learn from each other. 

Know your AI estate

You cannot govern what you cannot see. A central inventory of all AI systems helps you map what is in place, determine what level of risk assessment each system needs, and identify your high-risk areas. This in turn allows you to adopt suitable policies and processes, and to implement relevant AI assurance protocols proportionate to the nature and risk of each system.

While the inventory requires time and effort to create, once it is up and running it should reduce duplication of effort: a quick cross-check will show whether a particular AI system has already been reviewed and approved.
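In its simplest form, the inventory and cross-check can be sketched as below. The field names, risk tiers and example systems are illustrative assumptions, not a mandated schema; in practice this would live in a shared register or governance tool rather than in code.

```python
# Minimal sketch of a central AI inventory. System names, owners and
# risk tiers here are hypothetical examples, not real entries.
inventory = {
    "contract-review-tool": {"owner": "Legal", "risk_tier": "high", "approved": True},
    "meeting-summariser":   {"owner": "IT",    "risk_tier": "low",  "approved": True},
}


def already_reviewed(system_name: str) -> bool:
    """Quick cross-check: has this system been reviewed and approved before?"""
    entry = inventory.get(system_name)
    return bool(entry and entry["approved"])


print(already_reviewed("meeting-summariser"))   # True  -> no duplicate review needed
print(already_reviewed("new-image-generator"))  # False -> trigger risk assessment
```

Whatever the format, the key design choice is that the inventory is the single place every proposed AI system is checked against before procurement or deployment.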

Carry out appropriate risk assessments and put in place necessary mitigations

It is important that where the need for risk assessments has been identified, these risk assessments are carried out and the necessary safeguards are put in place, e.g. vendor due diligence and ongoing management, contractual assurances and monitoring, as well as staff training and robust assurance mechanisms (see further below).

Build proportionate assurance

Assurance helps ensure that AI systems behave as intended. Assurance techniques include risk assessments, bias audits, scorecards, impact assessments and compliance audits, as well as user testing and feedback. Depending on the nature of the AI system and the context in which it is used, multiple assurance approaches may be needed for a single system. Assurance is not a one-off activity; it requires ongoing monitoring throughout the AI lifecycle.

Prepare for incidents

While good AI governance should mitigate the risk of things going wrong, you should still have an incident response process in place. AI incidents extend beyond outages to include model misbehaviour, harmful outputs and performance drift. A documented incident response process ensures you are prepared when something does go wrong and can act quickly to limit the damage.

So what should you do now?

Design your AI governance for clarity, evidence and adaptability. This way you will be positioned not only to meet today's expectations but also to deal with tomorrow's challenges, while seizing the opportunities that AI brings.

  1. Articulate your AI governance strategy and, where possible, integrate it with existing risk frameworks.
  2. Assign clear accountability to a senior executive and/or establish a cross-functional AI committee, and ensure decision-making is appropriately delegated where required to avoid bottlenecks.
  3. Empower your people by providing training and ensuring easy-to-use policies and processes are in place to encourage responsible adoption. Encourage feedback.
  4. Conduct an audit of your current AI estate and build a centralised inventory.
  5. Carry out risk assessments and implement appropriate mitigations.
  6. Develop proportionate assurance mechanisms.
  7. Document and test your incident response plan before you need it!
  8. Finally, create an accessible AI portal as a single source of truth for policies, approved systems and guidance, so people know where to find more information when they need it.


Authors