Those three numbers tell a story that no one in financial services wants to own publicly. Nearly every major institution is deploying AI. Almost none of them have the governance infrastructure to manage what happens when it goes wrong.

I've spent a significant part of the last two years in rooms with CDOs, chief risk officers, and compliance leads who are genuinely trying to get this right. What I keep encountering is not carelessness — it's a structural mismatch. The teams moving fastest on AI deployment are rarely the same teams responsible for the risk frameworks that govern it. And the risk frameworks themselves were never designed for what generative AI actually is.

This is not a minor update problem. It is an architecture problem. And the financial institutions treating it as the former will find themselves on the wrong side of a regulatory or reputational incident that their existing controls couldn't catch.

"Embedding compliance at the core of agentic AI shouldn't be an afterthought."

Deloitte, Agentic AI in Banking, 2025

Let me be specific about what I mean — and what the data shows about how wide the gap actually is.

The Governance Gap Is Not Small

When ACA Group surveyed more than 200 financial services compliance leaders in 2024, the findings were striking: only 32% had established a dedicated AI committee or governance group, and just 12% had adopted a formal AI risk management framework. Put differently, nearly seven in ten institutions lack even a dedicated oversight body, and almost nine in ten are deploying AI without a formal risk framework to govern it.

The Financial Stability Oversight Council sharpened this concern in its 2024 Annual Report, explicitly identifying the increasing reliance on AI as both an extraordinary opportunity and a mounting systemic risk — elevating it as a specific area of regulatory focus for the first time.

Why It Matters Now

State lawmakers introduced nearly 700 AI-related bills in 2024, of which 113 were signed into law. The regulatory pace is accelerating faster than most institutions' internal review cycles.

Why Traditional AI Controls Don't Transfer

Traditional AI models in financial services were purpose-built for narrow, well-bounded tasks: scoring credit, flagging suspicious transactions, forecasting demand. Their inputs, outputs, and failure modes could be enumerated in advance, which is precisely what existing model risk management assumes. Generative AI is categorically different. It produces novel outputs, reasons across multi-step processes, and introduces hallucination risk into high-stakes environments where a confident wrong answer is worse than no answer at all.

"Building trust into the transformation roadmap is critical."

KPMG, Intelligent Banking Report, 2025

Four Risk Domains That Demand a Structural Response

1. Oversight Architecture

Most institutions route all gen AI oversight through a single committee. That model no longer works: one body cannot review every use case at the pace business units generate them, so it becomes either a bottleneck that teams quietly route around or a rubber stamp that catches nothing. The workable alternative is tiered oversight, where the depth of review scales with the risk of the use case, as the sketch below illustrates.
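To make tiering concrete, here is a minimal sketch of risk-based routing expressed as configuration. The tier names, committee names, and default behavior are illustrative assumptions, not a recommended organizational design:

```python
# Illustrative tiered oversight routing: review depth scales with risk,
# so the central committee sees only what genuinely warrants its attention.
OVERSIGHT_ROUTES: dict[str, list[str]] = {
    "high":   ["Model Risk Committee", "Compliance", "Legal"],  # e.g., customer-facing, regulated decisions
    "medium": ["Business-Unit AI Council", "Compliance (sampled review)"],
    "low":    ["Business-Unit AI Council"],                     # e.g., internal drafting aids
}

def reviewers_for(tier: str) -> list[str]:
    """Return the bodies that must sign off on a use case at a given tier.
    Unknown tiers deliberately default to the strictest review path."""
    return OVERSIGHT_ROUTES.get(tier, OVERSIGHT_ROUTES["high"])
```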

2. Model Risk in a Generative Context

Validation regimes built for deterministic models do not transfer to systems that can give two different answers to the same question. For high-stakes use cases, retrieval-augmented generation (RAG) is the most practical mitigation available: it grounds the model's answers in verified internal documents and makes the supporting sources auditable. RAG reduces hallucination risk; it does not eliminate it, which is why retrieval quality itself belongs under model risk review.
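As a concrete sketch of the grounding mechanism, the fragment below uses a naive term-overlap retriever as a stand-in for a production vector index and constrains the prompt to cited, retrieved sources. The function names and prompt wording are illustrative assumptions, not a reference implementation:

```python
from dataclasses import dataclass

@dataclass
class Document:
    doc_id: str
    text: str

def retrieve(query: str, corpus: list[Document], k: int = 3) -> list[Document]:
    """Rank documents by naive term overlap with the query.
    A production system would use a vector index, not this."""
    q_terms = set(query.lower().split())
    ranked = sorted(corpus,
                    key=lambda d: len(q_terms & set(d.text.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_grounded_prompt(query: str, sources: list[Document]) -> str:
    """Constrain the model to answer only from retrieved documents
    and to cite the document IDs it relied on."""
    context = "\n".join(f"[{d.doc_id}] {d.text}" for d in sources)
    return ("Answer using ONLY the sources below, citing source IDs for every claim. "
            "If the sources do not contain the answer, say so.\n\n"
            f"Sources:\n{context}\n\nQuestion: {query}")
```

The design point is the constraint, not the retriever: if every claim must carry a document ID, reviewers can verify grounding instead of trusting it.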

3. Data Provenance and IP Liability

Institutions often underestimate exposure tied to data lineage and IP ownership: whether the data that trained or grounds a model was actually licensed for that use, whether it contains personal information, and whether anyone can demonstrate either answer after the fact. An output you cannot trace back to data you had the rights to use is a liability you cannot quantify.
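One practical pattern is to attach an immutable provenance record to every dataset a model trains on or retrieves from. The fields below are an illustrative minimum, not a complete lineage schema:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)  # frozen: lineage records should never change after ingestion
class ProvenanceRecord:
    dataset_id: str
    source: str               # originating system or vendor
    license_terms: str        # the usage rights actually granted, not assumed
    contains_pii: bool        # drives masking and retention requirements downstream
    approved_for_genai: bool  # explicit sign-off that this data may train or ground a model
    ingested_at: datetime
```

With a record like this attached at ingestion, the later question of whether the institution had the right to use the data becomes a lookup instead of an investigation.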

4. Explainability and Ethical Compliance

Explainability is now a regulatory expectation, not a technical preference. Supervisors increasingly expect institutions to reconstruct why a model produced a given output: which model version ran, what input it received, which sources it drew on, and who reviewed the result. For generative systems, that means capturing an audit trail at inference time rather than reconstructing one after a complaint arrives.
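A minimal version of that audit trail is a structured record written at every inference. The field names below are assumptions for illustration; the substance is that the record is created when the output is produced, not afterward:

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_record(prompt: str, output: str, model_version: str,
                 source_ids: list[str], reviewer: str | None = None) -> str:
    """Serialize one inference event for the audit log. Hashing the prompt
    keeps sensitive input text out of the log while still allowing
    exact-match checks during later reconstruction."""
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "prompt_sha256": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
        "output": output,
        "grounding_source_ids": source_ids,  # ties the output back to provenance records
        "human_reviewer": reviewer,          # None until a reviewer signs off
    })
```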

A Risk Scorecard That Actually Works

A structured risk scorecard helps prioritize governance investments across AI use cases. Instead of debating each proposal from scratch, score every use case on a fixed set of dimensions (data sensitivity, decision impact, model autonomy, and regulatory exposure, for example) and let the total determine which oversight tier it enters.
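Here is a deliberately simple sketch of what such a scorecard could look like in code. The four dimensions, the 1-to-5 scales, and the tier thresholds are illustrative assumptions that each institution would calibrate for itself:

```python
from dataclasses import dataclass

@dataclass
class UseCaseScore:
    name: str
    data_sensitivity: int     # 1 = public data .. 5 = material non-public information
    decision_impact: int      # 1 = internal draft .. 5 = directly customer-affecting
    model_autonomy: int       # 1 = human reviews everything .. 5 = fully automated
    regulatory_exposure: int  # 1 = unregulated activity .. 5 = active examiner focus

    def total(self) -> int:
        return (self.data_sensitivity + self.decision_impact
                + self.model_autonomy + self.regulatory_exposure)

    def tier(self) -> str:
        score = self.total()
        if score >= 16:
            return "high"    # full model risk review plus mandatory human sign-off
        if score >= 10:
            return "medium"  # sampled review plus continuous monitoring
        return "low"         # standard platform controls

# Example: a client-facing chat assistant lands in the high tier.
assistant = UseCaseScore("client chat assistant", 4, 5, 3, 5)
print(assistant.tier())  # -> high
```

The numbers matter less than the discipline: every use case gets scored the same way, and the score, not the sponsor's seniority, decides the review path.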

Four Controls That Work Together

Business Controls

Decide at the business level which use cases are in scope, who owns each one, and what approval it needs before deployment. The goal is to govern deployment structure without blocking innovation: clear lanes, not closed gates.

Procedural Controls

Update model risk management standards for generative behavior. Validation that assumes deterministic outputs will not catch a model that hallucinates intermittently, so pre-deployment testing has to be paired with continuous, automated evaluation in production.
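As one illustration of what a continuous check might look like, the proxy below measures how much of an output is traceable to its retrieved sources. It is deliberately crude (a real MRM standard would rely on calibrated evaluations, not token overlap), but it shows the shape of an automated, repeatable test:

```python
def groundedness_ratio(output: str, sources: list[str]) -> float:
    """Fraction of output tokens that also appear in the retrieved sources.
    A falling ratio on a monitored use case is a signal to escalate,
    not a verdict: it flags drift for human review."""
    source_terms = set(" ".join(sources).lower().split())
    output_terms = output.lower().split()
    if not output_terms:
        return 0.0
    return sum(term in source_terms for term in output_terms) / len(output_terms)
```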

Technical Controls

Build guardrails into the platform itself: input filtering, output moderation, role-based access to models and data, and logging that is on by default rather than opt-in.

Manual Controls

Keep a human in the loop for high-stakes decisions. Automation can draft, retrieve, and summarize; a person accountable for the outcome approves anything that touches a customer, a regulator, or the balance sheet.
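In code terms, the manual control is a gate, not a suggestion: high-tier outputs are held in a review queue, and nothing reaches a customer without an approval event. The queue and release functions below are illustrative stubs standing in for a real case-management system:

```python
REVIEW_QUEUE: list[dict] = []  # stand-in for a real case-management system

def release(output: str) -> str:
    """Lower-tier outputs flow straight through standard platform controls."""
    return output

def route_output(tier: str, output: str) -> str | None:
    """Hold high-tier outputs for mandatory human approval;
    release everything else under standard controls."""
    if tier == "high":
        REVIEW_QUEUE.append({"output": output, "approved": False})
        return None  # nothing goes to the user until a reviewer approves
    return release(output)
```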

"Early-stage involvement of compliance experts can reduce regulatory risk by over 70%."

PwC, Financial Services AI Study, 2024

What This Actually Requires of Leadership

Treat governance as an operating capability, not a compliance document: a named executive owner, a standing budget, and review cadences that match deployment cadences. Done well, governance is the difference between competitive advantage and regulatory exposure; done as paperwork, it is neither.

The Governance Imperative

The pattern we see is consistent: institutions with structured AI governance frameworks catch problems earlier, remediate faster, and face fewer regulatory penalties than those retrofitting controls after deployment.

Ready to assess your gen AI governance posture?

We work with financial services organizations to build governance frameworks alongside the technology.

Request a Governance Assessment