
Legal and Compliance Teams as AI Allies, Not Blockers

Most enterprises treat legal and compliance as the AI brake pedal. The smartest ones make them co-designers. Here's how.
14 May 2025 · 7 min read
Tim Hatherley-Greene
Chief Operating Officer
I started my career in law, at Canterbury's first pro-bono legal service from a commercial firm. So I have a particular vantage point on the AI-and-legal conversation. Most enterprises treat their legal and compliance teams as gatekeepers. The team that says no. The brake pedal on innovation. This framing is wrong, and it's costing organisations the fastest path to trustworthy, scalable AI.

What You Need to Know

  • Legal and compliance teams are the most underutilised asset in enterprise AI programmes
  • Their risk expertise makes AI deployments more robust, not slower, when they're involved early
  • The "compliance as blocker" pattern is caused by late involvement, not inherent conflict
  • The organisations deploying AI fastest are the ones where legal and compliance co-design the governance framework
64% of enterprise AI initiatives experience significant delays due to compliance concerns raised after development (Source: Gartner, 2024).

The Blocker Pattern

Here's how it usually works. The AI team builds a capability. They're excited. It works well. They present it to the steering committee. The steering committee asks: "Has legal reviewed this?" Legal hasn't been involved. A review is requested.
Legal raises concerns. Data privacy. Liability for AI decisions. Regulatory compliance. Intellectual property. Fair and reasonable use of automated decision-making.
The AI team is frustrated. "They're blocking innovation." Legal is frustrated. "They built something without considering the regulatory requirements." The programme stalls for weeks or months while the issues are resolved.
This pattern is entirely preventable. It has three root causes.

Structural Separation

In most enterprises, the AI programme sits in technology or innovation. Legal sits across the corridor, metaphorically if not physically. There's no natural touchpoint until a formal review is requested.

Cultural Assumptions

AI teams assume legal will slow them down. So they delay involvement until they have something built, hoping to present a fait accompli. Legal teams assume AI teams don't understand the regulatory landscape. So they prepare for a difficult conversation.
Both assumptions become self-fulfilling prophecies.

Scope Misunderstanding

AI teams think legal review means "check for legal problems." Legal teams think their role is broader: advising on risk, compliance, governance design, and organisational liability. When the scope of legal involvement isn't defined early, both sides have different expectations.

The Co-Design Model

Not from "legal review." From discovery. When the AI programme is scoping use cases and assessing feasibility, legal should be in the room. Their input at this stage is invaluable:
  • Which use cases have the fewest regulatory constraints?
  • What data handling requirements apply?
  • What consent frameworks are needed?
  • Where does automated decision-making regulation apply?
This input shapes the programme scope before anything is built. The use cases that emerge are feasible, compliant, and buildable, because compliance was a design input, not a deployment checkpoint.
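To make that concrete, here's a minimal sketch in Python of what "compliance as a design input" can look like: a structured intake record that every candidate use case carries from discovery onward. The field names and the triage rule are hypothetical illustrations, not a prescribed standard.

```python
from dataclasses import dataclass, field
from enum import Enum


class AdmImpact(Enum):
    """How automated decision-making rules might apply. Illustrative only."""
    NONE = "no automated decisions about individuals"
    ADVISORY = "AI suggests, a human decides"
    AUTOMATED = "AI decides with legal or significant effect"


@dataclass
class UseCaseIntake:
    """Compliance posture captured at discovery, not at deployment."""
    name: str
    personal_data: bool    # would trigger Privacy Act 2020 review
    consent_basis: str     # e.g. "existing terms", "new consent required"
    adm_impact: AdmImpact
    regulatory_notes: list[str] = field(default_factory=list)

    def needs_legal_codesign(self) -> bool:
        # Conservative, hypothetical rule: any personal data or any
        # automated effect pulls the legal rep into design reviews.
        return self.personal_data or self.adm_impact is not AdmImpact.NONE


# Triaging a candidate use case during discovery
triage = UseCaseIntake(
    name="invoice coding assistant",
    personal_data=False,
    consent_basis="not applicable",
    adm_impact=AdmImpact.ADVISORY,
)
print(triage.needs_legal_codesign())  # True: even advisory ADM warrants review
```

The point isn't the code; it's that the discovery questions become fields someone has to fill in before anything gets built.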
Beyond discovery, assign a legal/compliance representative to the AI programme team. Not necessarily full-time, but with regular attendance at design reviews and a clear mandate to advise proactively.
Their role: "Here's what you can do within the regulatory framework, and here's how to design it so it's compliant by default." This is fundamentally different from "here's why you can't do that."
With my law background, I've seen both sides of this. The best legal professionals don't want to block innovation. They want to enable it safely. Give them a seat at the design table and they'll find ways to make things work within the rules. Exclude them, and they'll find problems with what you've built.

Build Governance Frameworks Together

AI governance (the policies, processes, and controls that determine how AI is used in the organisation) should be co-designed by the AI programme team and the legal/compliance team.
The AI team brings: technical understanding of what the AI can and can't do, practical experience with deployment, and knowledge of how users interact with the system.
The legal/compliance team brings: regulatory expertise, risk assessment capability, policy drafting skills, and understanding of the organisation's liability exposure.
Together, they produce governance frameworks that are both technically accurate and legally sound. Neither team can produce this alone.

The NZ/AU Context

New Zealand doesn't have AI-specific regulation yet. But existing frameworks apply:
  • Privacy Act 2020: Any AI system processing personal information needs to comply. Automated decision-making carries specific considerations.
  • Consumer protection law: AI-generated outputs presented to consumers need to meet fairness standards.
  • Employment law: AI used in hiring, performance assessment, or workforce management has specific legal dimensions.
  • Treaty obligations: AI systems used in government or health contexts may have Te Tiriti implications.
Australian developments, including the AI Ethics Framework and ongoing regulatory consultations, signal where NZ regulation is likely headed.
The organisations that build AI governance now, informed by legal expertise, will be ahead when regulation arrives. The ones that treat compliance as a future problem will face expensive retrofitting.

What Changes

When legal and compliance are co-designers rather than gatekeepers:
Speed increases. Compliance concerns are resolved during design, not after build. No multi-week review delays.
Quality improves. AI systems are designed with governance embedded, not bolted on. This produces more robust, trustworthy systems.
Trust builds. When the organisation can say "our AI governance was co-designed by our legal team," it carries credibility with boards, regulators, and clients.
Risk reduces. Legal input during design catches risks that the AI team wouldn't identify. Better to discover a privacy concern during design than after deployment.

If your legal and compliance teams are the brake pedal on AI, the problem isn't legal. It's the timing of their involvement. Move them from gatekeepers to co-designers. The AI programme gets faster, safer, and more trustworthy. And you get governance frameworks that actually work, because the people who understand the rules helped build the system.