On 18 March 2026, we marked the publication of our Commercial, Technology and Regulatory Annual Handbook 2026 with a panel event in Manchester attended by in-house lawyers from leading businesses in the city and beyond.
Below, we set out some of the main themes of the discussion that followed the panel, along with some key takeaways to help in-house lawyers manage the issues attendees raised. All discussions were held under the Chatham House Rule.
Everyone is a lawyer now
The use of publicly available AI tools means that many of your non-legal colleagues could be dabbling in the law and relying on answers that are often wrong or lack the necessary nuance.
We set out below some practical tips on how you can deal with this risk.
Practical steps for in-house teams
- Build AI literacy: run a short session showing what AI gets wrong. Use real examples from your sector. Nothing changes behaviour faster than seeing a confident, plausible, but incorrect answer about a regulation your colleagues thought they understood. Check out Lewis Silkin's AI Literacy solution here;
- Name the risk plainly: AI tools don't know your contracts or your business' risk appetite. Make clear that AI tools shouldn't be used for legal questions: the errors they produce create more work for the in-house team, which ultimately leaves less time to help colleagues with work that really matters;
- Create practical playbooks: playbooks help, but only if they're specific and actually used. Focus them on the decisions people make most often, and where mistakes cost real money;
- Make legal input easier, not harder: if colleagues turn to AI because legal feels slow or unapproachable, that's a signal. Consider a revamped legal 'front door' or FAQs for common questions. Reduce friction, and you'll reduce the temptation to self-serve on matters that need a qualified eye; and
- Audit what's happening: ask procurement, HR, and commercial teams how they're using AI. You may find it's already embedded in decision-making without anyone telling you. Better to know now than during a dispute.
Our legal operations services can help your legal team build strategies, streamline workflows and embrace technology to drive real results and lasting impact, providing you with a competitive edge. Details of how we can help you are set out here (legal operations) and here (legal design).
Shadow AI use
A common concern was shadow AI use: ie, the use of publicly available AI models like ChatGPT, Gemini, or Claude without organisational approval.
Shadow AI exposes businesses to serious risk: when employees feed information into unapproved tools, they may breach data protection law and contractual confidentiality obligations. It also creates compliance blind spots: you can't govern what you don't know exists.
If you are just starting to look at shadow AI use, one approach some businesses take is a short, time-limited, non-punitive 'AI-use amnesty': staff are invited to disclose what tools they use and for which tasks, with clear assurances that there will be no disciplinary action for past use (save for wilful misconduct or unlawful disclosure). The results are then used to map risk hot-spots, regularise safe use through approvals, and switch off or replace anything that cannot meet policy and data protection standards.
The process of dealing with shadow AI use can be complex, involving, eg, data protection and privacy; confidentiality and trade secrets; intellectual property; contract and consumer protection; and employment (eg, monitoring rules). It can also touch sectoral regulatory duties under, eg, FCA rules. Get in touch with the team if you need any help on this.
Getting people to actually use the right AI
If shadow AI is people using the wrong AI, the next question is how to get them using the right one.
AI adoption remains a challenge for many businesses. Without a communication plan, even brilliant software risks becoming expensive shelfware.
The goal is to flatten adoption's painful J-curve: that dip in productivity before things improve.
We set out below some practical tips on how to get adoption levels up.
Practical steps for in-house teams
- plant champions inside teams: a recommendation from a colleague tends to carry more weight than an email from leadership;
- nudge, don't lecture: think short prompts, quick tutorials, and experiments where nothing is at stake;
- celebrate early wins publicly: visible success tends to build momentum more effectively than internal encouragement; and
- remove friction: every unnecessary click or confusing step loses users.
Do this well and AI use becomes routine; that is also the point at which businesses start to see actual returns on the investment.
If you have any questions about the above, please contact a member of the team.
We look forward to seeing you at the next event!
