
Recently, the Big Four accountancy firms have started offering audits to verify that organisations’ AI products are compliant and effective. We have also seen insurance companies provide AI liability cover to protect companies from risk. These are clear indicators that AI is maturing and that customer-facing use cases are becoming widespread. There is also clearly an appetite for organisations to protect themselves amid regulatory change and reputational concerns.
But audits and insurance alone will not fix the underlying issue. They are an effective safety net and an added line of protection against AI going wrong, but by the time auditors discover an error, or an organisation makes an insurance claim, the damage may already have occurred. In most cases, it is data and infrastructure that continue to hold organisations back from using AI safely and effectively, and that is the challenge that needs to be addressed.
AI amplifying data issues
Large organisations handle huge volumes of highly sensitive data—whether it’s payroll records, customer information, or intellectual property. Keeping oversight of this data is already a major challenge.
As AI adoption spreads across teams and departments, the associated risks become more distributed. It gets significantly harder to monitor and govern where AI is being used, who’s using it, what it’s being used for, what it’s producing, and how accurate its outputs are. Losing visibility over just one of these areas can lead to potentially serious consequences.
For example, data could be leaked via public AI models—as we saw in the early days of GenAI deployment. AI models can also end up accessing data they shouldn’t, generating outputs that are biased or influenced by information that was never meant to be used.
The risks for organisations are twofold. First, customers are unlikely to trust companies that can’t demonstrate their AI is safe and reliable. Second, regulatory pressure is growing. Laws like the EU AI Act are already in force, with other regions expected to introduce similar rules in the coming months and years. Falling short of compliance won’t just damage reputation—it could also trigger major financial penalties that have the potential to impact the entire business. For instance, the EU has the power to impose fines of €35m or 7% of an organisation’s global turnover—whichever is higher—under the AI Act.
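To put that "whichever is higher" rule into concrete terms, here is a minimal arithmetic sketch; the turnover figure is purely illustrative, not taken from any real organisation.

```python
def max_ai_act_penalty(global_turnover_eur: float) -> float:
    """Upper bound on an EU AI Act fine: EUR 35m or 7% of global
    annual turnover, whichever is higher (illustrative sketch only)."""
    return max(35_000_000.0, 0.07 * global_turnover_eur)

# Hypothetical organisation with EUR 2bn in global annual turnover:
print(f"EUR {max_ai_act_penalty(2_000_000_000):,.0f}")  # EUR 140,000,000
```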

While AI liability insurance might help recover some of the financial fallout from AI errors, it can’t win back lost customers. Audits may spot potential governance issues, but they can’t undo past mistakes. Without proper guardrails, organisations are essentially gambling with AI risk—introducing fragility and unnecessary complexity that distorts outcomes and erodes trust in AI-driven decisions.
Protection via private AI
One way to protect against AI-related errors is to regain control through private AI. This approach allows organisations to build and run AI models, applications, and agents entirely within their chosen environment, whether on-premises or in the cloud, ensuring data stays secure and contained. Private AI safeguards two critical assets: proprietary data that’s unique to the business, and the intellectual property that gives it a competitive edge.
Open-source AI models form the foundation of private AI, meaning organisations can avoid relying on potentially risky public models and instead build their own trusted versions, trained exclusively on their own data. However, for private AI to deliver accurate and trustworthy outcomes, it must be fed a complete set of proprietary data; otherwise, its outputs will be skewed by whatever subset of data it does see.
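As a minimal sketch of what this looks like in practice, assuming the open-source Hugging Face transformers library and a model already downloaded into the organisation's own environment (the local path and prompt are hypothetical):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical path to an open-source model stored entirely inside
# the organisation's own environment; local_files_only=True prevents
# any call out to a public model hub.
MODEL_PATH = "./models/private-llm"

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH, local_files_only=True)
model = AutoModelForCausalLM.from_pretrained(MODEL_PATH, local_files_only=True)

# Sensitive prompts and data never leave the local environment.
prompt = "Summarise this quarter's payroll anomalies:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because nothing leaves the local environment, the data-leakage route described earlier is closed off at the infrastructure level rather than by policy alone.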
To make this possible, organisations need a modern data architecture underpinned by a unified data platform. This ensures private AI has access to the full range of data it requires. It also enables consistent governance across all environments, wherever the data resides, helping organisations stay compliant as regulations evolve.
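As a rough illustration of what "consistent governance across all environments" can mean in code, here is a single policy rule applied to every dataset regardless of where it lives; every name, classification, and location below is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class DataAsset:
    name: str
    classification: str   # e.g. "public", "internal", "restricted"
    location: str         # e.g. "on-prem", "cloud-eu", "cloud-us"

# One policy, applied identically wherever the data resides.
ALLOWED_FOR_TRAINING = {"public", "internal"}

def approve_for_training(asset: DataAsset) -> bool:
    """Gate every dataset through the same rule, in every environment."""
    return asset.classification in ALLOWED_FOR_TRAINING

assets = [
    DataAsset("customer_profiles", "restricted", "cloud-eu"),
    DataAsset("product_docs", "internal", "on-prem"),
]
training_set = [a for a in assets if approve_for_training(a)]
print([a.name for a in training_set])  # ['product_docs']
```

The point of the sketch is the design choice: because the policy is defined once on a unified platform rather than per environment, a restricted dataset is excluded from training whether it sits on-premises or in any cloud.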
Audits and insurance as a backstop
The rise of AI audits and insurance cover signals that organisations are moving beyond experimentation and starting to deploy AI in real, customer-facing scenarios. It’s a positive step—but with such high stakes, progress must be matched with proper oversight. Robust checks and balances are essential to ensure AI is deployed safely.
The Big Four firms and insurers can play a supporting role, but the duty to deliver responsible AI does not sit with them: they’re a backstop, not a solution. Ultimately, accountability for safe AI lies with the organisations building and using it. By putting the right data architecture in place to support private AI, businesses can strike the right balance between innovation and security.