The plug-and-play myth
The pitch for AI tools in the enterprise (Copilot for Microsoft 365, SharePoint agents, whatever comes next) tends to follow a familiar arc. You already have the licence, so just turn that bad boy on and watch productivity soar.
This is not how it works. The Copilot risks nobody mentions are not in the model but in the information environment the model draws upon.
What the pitch skips is that Microsoft 365 AI governance is not something Microsoft handles for you. AI tools are only as good as the information environment they operate in. The model is not the product. The model plus your content estate plus your governance layer is the product. And for most organisations, the content estate is not in good shape, and the governance layer does not yet exist. These are the AI implementation challenges that derail deployments that looked straightforward on paper.
Turning on Copilot in an unmanaged information environment does not unlock productivity. It unlocks a new, faster, more confident channel for serving up whatever your content estate contains, including the outdated, the contradictory, the regionally incorrect, and the things that should never have been in scope in the first place. Ignoring governance at this stage carries a real cost: it is considerably more expensive to fix after the fact than to address before deployment. Nor are these edge-case AI governance challenges; they are the normal state of large organisations that move fast on AI without first moving on the information layer.
"But we already have AI governance"
If your organisation has invested in AI governance, that is genuinely good. A framework, a policy, a named accountable owner, perhaps a Chief AI Officer or an AI Governance Lead: these things matter. But there is a specific and important gap in what most of that governance actually covers.
The frameworks that have shaped enterprise AI governance (the EU AI Act, ISO/IEC 42001, GDPR's Article 22, NIST's AI Risk Management Framework) are predominantly calibrated at a different layer of the problem. They concern themselves with model risk: AI making consequential decisions about people's lives, automated processing of personal data, high-risk applications in credit, healthcare, and the criminal justice system. The roles built in response are mostly lawyers, compliance specialists, and risk managers.
That layer of governance is necessary, but it is not the same as governing whether the information your AI is actually retrieving is coherent, authoritative, and current.
Think of it this way: a security guard controlling access to a building checks credentials and enforces the policy, thorough and professional. But they have no idea what is stored inside: whether two departments hold contradictory versions of the same policy, whether the current version is three years out of date, or whether its author left long ago. Controlling access gives you no information about the quality of what is inside the actual policy.
Most enterprise AI is not being deployed in high-risk decision-making contexts. It is being deployed as a knowledge tool: employees asking an AI assistant about HR policies, expense procedures, travel allowances or supplier onboarding. In all these cases, information is being retrieved, synthesised, and presented as authoritative, at scale and continuously, to employees who have no way of knowing whether the answer they just received is current, accurate, or drawn from a source superseded eighteen months ago.
Conventional AI governance frameworks are largely blind to this. They catch whether the system has access to something it should not. They do not catch whether the information is outdated, contradictory, or never written clearly enough to mean one thing reliably. Governing an AI system is not the same as governing what the AI knows.
This is the gap that will produce the most damage.