AI adoption is creating hidden operational risks and costs

Key points

  • Rapid and poorly governed adoption of AI is creating hidden risks and costs within organisations, undermining return on investment and reducing company value.

  • These risks can largely be mitigated through AI-related governance, policies, and controls, but assessing the effectiveness of these measures is challenging.

  • Quantifying the effectiveness of AI adoption and risk mitigations is a critical requirement for M&A due diligence with a material bearing on target organisation valuation.


Struggling to quantify downstream costs of AI?

Our Sovereignty and Security Due Diligence service provides independent assurance of AI adoption risk in transactional contexts.

Contact us for a consultation.


Drivers of rapid adoption

In the previous article in this series [LINK] we argued that commercial pressures and competitive dynamics were accelerating the deployment of AI ahead of governance and integration maturity. ‘Fear Of Missing Out’ on AI is driving a rush to adoption that is accumulating operational and security risk, alongside unquantified downstream costs.

Companies are under intense pressure to adopt AI to improve productivity and reduce costs. At Tyburn, we have seen this play out in multiple ways. Private equity-owned businesses are coming under pressure from equity backers to innovate using AI to drive growth. Similarly, companies are switching to ‘AI-enabled’ providers for security-critical capabilities, partly because of lower prices but also to address internal demands for AI adoption.

These and other pressures are contributing to AI ‘FOMO’, driving companies into poorly planned, implemented, or governed AI adoption. Determining the effectiveness and return on investment of these AI solutions is often a secondary consideration.

Assessing effectiveness of AI controls is critical

Effective AI adoption is a demanding process of business transformation that brings considerable risk. Organisations can mitigate these risks by adopting appropriate governance, policies, and controls. In the insurance broker example described below, rationalising the AI models in use alongside updating AI governance and data anonymisation delivered significant cost savings and reduced risk.

Assessing the design and operational effectiveness of AI-related governance, policies, and controls is therefore a critical component of M&A due diligence. Given the rapid and unprecedented pace of AI development, this requires a specialised approach that goes beyond traditional assurance programmes. For example, periodic ‘AI discovery’ processes will be necessary to map the systems in use and to detect drift from past mappings.

The potential for downstream risk from AI adoption to ‘silently’ accumulate underlines the importance of a long-term view. Investors therefore require an assessment of AI-related risks and likely remediation costs both around the current financial event and over the longer-term (five-year-plus) timeframe of a likely exit.

Key questions

  • Is your organisation accurately quantifying ROI for AI adoption? Does your assessment cover potential downstream costs, such as rising service costs and responses to security or data protection incidents?

  • How is your organisation assessing financial and strategic risk of non-adoption of AI, including potential damage to market valuation?

  • How is your organisation assessing the design and operational effectiveness of existing AI-related controls and other risk mitigation measures?

  • What governance, policies, and controls does your organisation have in place to mitigate risks from non-homogeneous adoption and ‘shadow AI’?

  • Could you switch off your AI implementation if required, and what would be the costs of doing so?

  • What is your organisation’s strategy for managing data protection in AI adoption, and what would the likely costs of response (including remediation, fines, and crisis communications and PR management) be in the event of an incident?

Adoption risk

Companies are rapidly adopting AI systems to increase human productivity, automate tasks, and build new systems for internal use or as commercial services. AI adoption can provide these benefits, yet it also creates strategic, systemic, structural, operational, and ethical risks [LINK].

AI adoption amplifies existing risks around security and data protection. It also introduces novel risks, such as potential downstream increases in running costs for web-based services or exposure to political pressure.

Failure to implement appropriate oversight, reporting structures, and controls for business transformation, and to regulate the use of AI in business-as-usual operations, is another risk driver.

These risks are exacerbated when companies adopt AI rapidly and under pressure. Adoption may occur organically, driven by ambitious staff, or in a top-down manner, driven by ‘digital innovation’ teams under pressure to ‘move fast and break things’. The latter tend to have performance metrics aligned with rapid disruption, and can regard security and risk governance as obstacles to be overcome rather than enablers.

Examples of realised risk include: 

Lack of return on investment

  • Speed of adoption does not automatically translate into efficiency gains or return on investment. Failure to realise anticipated productivity gains or cost reductions is a serious risk in AI adoption, particularly if rising downstream costs (tokens, power consumption) are not accurately assessed.

    • IBM’s 2025 Global C-suite CEO Study found that only 25% of CEOs surveyed said that AI initiatives had delivered expected ROI [LINK].

    • Fintech company Klarna announced in May 2025 that it was rolling back AI-adoption driven job cuts and hiring people to improve the quality of customer support [LINK].

  • Organisations also frequently fail to quantify downstream costs (tokens for hosted models, the cost of responding to security issues and data breaches), undermining the investment case for AI adoption.

  • ROI can also be undermined by downward pressure on pricing from customers, who will not fund increased supplier margins on an ongoing basis.

    • Auditor KPMG demanded a price reduction of more than 10% from its accountant on the basis of cost savings from AI adoption, according to a February 2026 report, likely shifting the ROI of that company’s AI adoption [LINK].

Procurement and integration challenges

  • The process of selecting and integrating AI solutions brings its own risks, and can fail. Risk accrues when organisations adopt multiple competing systems, achieve only partial adoption across the business, or see widely varied utilisation rates and use cases across divisions.

    • We are familiar with a case in which an international insurance broker inadvertently procured several AI solutions to solve the same business challenge across several divisions.

  • Organisations may experience the growth of ‘shadow AI’. Personnel may adopt AI independently, either deliberately or unintentionally when service providers implement AI features in existing products and services; this unmanaged adoption is a key driver of risk.

Personnel security and insider threat issues

  • Organisation-wide AI integrations pose information management and security risks that can be particularly problematic during periods of increased uncertainty or sensitivity, such as financial events, reorganisation, and redundancies.  

    • We are aware of cases of large companies switching off internal AI systems after discovering that employees were interrogating AI systems to extract sensitive information.

Data protection breaches

  • AI integration is leading to data breaches affecting personal and organisational data, bringing reputational, legal, and financial risk. 

    • In the case of the insurance broker described above, the broker’s innovation team was unintentionally sharing sensitive client details with external AI SaaS solutions through ineffective redaction techniques. 

    • In another case, WotNot, an Indian company specialising in developing custom chatbots for clients, left over 350,000 sensitive files, including passport scans, medical records, and other personal information, unsecured and accessible on the internet. This data breach exposed the company to potential fines and to contractual remedies or cancellations by clients.

Creation of security vulnerabilities

  • Rolling out AI initiatives without a strong awareness of an organisation’s existing digital estate, and without appropriate security policies and controls, is reckless. AI introduces new risks and can exacerbate existing weaknesses in an organisation’s information management and security posture.

    • Consultancy McKinsey reportedly experienced a security breach in February 2026, after a cyber security company gained external access to the organisation’s in-house AI platform. The researchers identified sensitive data, internal communications, and information about the AI platform’s model configurations and prompts.

Tech sovereignty, supply chain dependencies, and geopolitics

  • The integration of models operated by large tech companies into organisations creates a raft of risks, exposures, and dependencies. Tech companies have become consequential geopolitical actors, at times associated with national governments and at times at odds with them. Using those companies’ models exposes an organisation to geopolitical and supply chain risks.

Risks from non-adoption of AI

Analyses of AI adoption risk can be inappropriately bearish, focusing solely on the downside risk of adoption. This neglects the risks inherent in non-adoption of any potentially transformative technology; organisations cannot avoid risk by sitting on the fence.

These risks include:

Damage to market perception and valuation

  • Investors are pricing in significant increases in productivity and profitability from AI adoption. 

  • Companies that cannot demonstrate effective adoption risk a decline in perceived future profitability. 

  • Non-adoption may also be read as a proxy for a lack of technical capability or evidence of significant tech and operational debt.

Loss of operational edge

  • Beyond market perceptions, there is a risk that companies that do not identify effective uses of AI will lose out to more AI-savvy competitors. 

  • Organisations should assess the risk of being undercut by competitors who can significantly lower their cost base and pass some of those savings on to customers.

  • The applicability of this risk will vary widely by sector and use case.

Lack of organisational transformation

  • AI has already substantially transformed aspects of recruitment, personnel and financial management, IT operations and security, among others. 

  • Companies that miss out on these transformations now may find themselves playing catch-up in the future.

These are assessments of risk; they will not materialise for all organisations in all circumstances. However, all organisations should weigh them when setting their strategy for AI adoption.


Need to go beyond standard M&A due diligence?

We provide AI Due Diligence through our Sovereignty and Security Due Diligence service.

Contact us for a consultation.


At Tyburn, we specialise in countering evolving threats to risk-sensitive organisations. Our experts bring experience from government, the military, and academia to bear in delivering solutions to challenging problems.
