Boards Are Demanding AI ROI Answers - And Legal Teams Are in the Hot Seat

A new global CIO survey finds board pressure on AI ROI has reached near-universal levels, with legal and compliance teams facing heightened scrutiny over AI governance and spend.

Key points:

  • Board pressure on AI ROI has hit near-universal levels: 98% of CIOs globally report increased scrutiny since 2024, with legal and compliance teams now squarely in the accountability crosshairs.
  • A new Dataiku/Harris Poll survey of 600 enterprise CIOs finds 74% believe their role will be at risk within two years if their organisations cannot demonstrate measurable AI gains — and 71% expect AI budgets to be cut or frozen by mid-2026 if targets slip.
  • Shadow AI and explainability gaps pose mounting governance risks for legal departments, with 82% of CIOs admitting employees are building AI tools faster than IT can govern them.

Corporate legal and compliance teams are entering a more demanding phase of artificial intelligence accountability, as boards accelerate their shift from tolerating AI experimentation to requiring hard proof of business value. The era in which pilot programmes and promising anecdotes satisfied boardroom curiosity is over, according to a major new survey of 600 chief information officers worldwide, conducted by The Harris Poll on behalf of Dataiku between December 2025 and January 2026.

The findings land at a particularly sensitive moment for legal operations leaders. As Corporate Counsel reports, legal and tech teams overseeing AI deployments are increasingly being judged not only on whether AI tools work, but on whether they produce defensible, measurable outcomes — and whether the risks they introduce are being actively managed.

Ninety-eight percent of CIOs surveyed said board pressure to demonstrate measurable AI return on investment has increased since 2024, with 76% characterising that increase as moderate or significant. The pressure is not abstract: 71% of CIOs say it is likely their AI budget will be cut or frozen if performance targets are not met by the end of the first half of 2026. Nearly three-quarters (74%) believe their own role will be at risk if their organisation fails to deliver measurable business gains from AI within two years. Eighty-five percent expect their compensation to be explicitly linked to AI outcomes.

For general counsel and chief legal officers, these findings have direct operational relevance. Legal departments are frequent owners — or co-owners — of AI governance frameworks, and they are increasingly asked to validate AI investments from a risk and compliance perspective. Yet the survey reveals a structural gap between AI ambition and accountability infrastructure that legal leaders should find troubling.

Explainability is emerging as the central bottleneck. Eighty-five percent of CIOs globally report that traceability or explainability gaps have already delayed or stopped AI projects from reaching production. Nearly three in ten (29%) say they have been asked six or more times in the past year to justify an AI outcome they could not fully explain. Seventy percent of CIOs believe new formal AI audit and explainability requirements will arrive within the next twelve months — a timeline that sits squarely within current budget and planning cycles.

The legal implications are compounding. Shadow AI — employees using unsanctioned tools without IT or legal oversight — is already widespread. Fifty-four percent of CIOs report discovering employees using unsanctioned AI applications for work tasks, and 82% agree that employees are creating AI agents and applications faster than IT can govern them. For legal teams that are simultaneously managing privilege concerns, data privacy obligations, and vendor contracting, ungoverned AI usage is not merely a technology problem; it is a legal exposure.

The governance challenge is underscored by separate data. Thomson Reuters' 2026 AI in Professional Services Report found that while more than 80% of organisations now use generative AI weekly, only 18% actively measure ROI — and 40% do not know whether their organisation tracks any AI performance metrics at all. The gap between adoption and measurement is precisely where legal teams face the greatest exposure when boards begin asking pointed questions.

Vendor selection decisions are also drawing increasing scrutiny. The Dataiku survey found that 74% of CIOs regret at least one major AI vendor or platform selection made in the past eighteen months, and 62% say their CEO has questioned or challenged AI vendor decisions at least once in the past year. Forty percent say vendor lock-in or pricing volatility is having a major impact on their AI budget. For legal departments that negotiated those vendor contracts — or are being asked to renegotiate them — this creates both immediate workload and reputational pressure.

The accountability shift is reshaping what it means to be a legal technology leader. General counsel who positioned AI tools as efficiency investments must now contend with a board expectation that those investments produce traceable, auditable returns. Those who have not yet built governance frameworks that can withstand external scrutiny face a compressed timeline: the Dataiku survey found that 61% of CIOs expect legal and regulatory exposure within the next year, rising to 71% within two years.

The practical implication for in-house legal teams is straightforward, if not simple: AI governance can no longer be a background workstream. Boards that are pressing CIOs for performance answers will shortly press their general counsel for governance answers. Legal departments that can demonstrate clear AI policies, vendor accountability structures, data handling protocols, and measurable risk mitigation will be better positioned — both with their boards and, when required, with regulators.
