Why Your AI Vendor's ROI Promise Is Probably Wrong
Gartner's April 2026 survey found that only 28% of AI projects fully deliver ROI. Three systematic errors explain why vendor projections collapse against actual results: baseline neglect, cost exclusion, and attribution confusion. This article offers a five-question framework for testing AI vendor ROI claims against your actual operations before you commit, covering baseline measurement, total cost accounting, comparable evidence, attribution methodology, and outcome-based pricing.
AI vendor ROI claims are the projected return-on-investment figures that consulting firms, SaaS platforms, and automation vendors present during sales. They express expected savings or revenue gains as percentages or dollar amounts. The gap between these projections and actual measured results is the central measurement problem in enterprise AI adoption.
Gartner data published on April 7, 2026, from a survey of 782 infrastructure and operations leaders, shows that only 28% of AI projects fully succeed and meet ROI expectations; one in five fails outright (Gartner, April 2026).
McKinsey's 2026 Global AI Survey tells a parallel story: 88% of organizations now use AI in at least one function, but only 39% report any impact on enterprise-wide EBIT (AI Governance Today, citing McKinsey 2026). Adoption is widespread, but measured returns remain scarce.
AI works. The vendor ROI projections surrounding it are structurally flawed: they contain three systematic errors that inflate expected returns before a single line of code runs.
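The arithmetic behind two of those errors is easy to sketch. The figures below are purely hypothetical illustrations, not data from any vendor or survey; they show how baseline neglect (counting savings that would have happened anyway) and cost exclusion (quoting license fees but not integration, retraining, or monitoring) can together turn a headline ROI into a measured loss:

```python
# Hedged sketch: how baseline neglect and cost exclusion inflate a
# vendor ROI projection. All dollar figures are hypothetical.

def roi(gain: float, cost: float) -> float:
    """ROI as a percentage: (gain - cost) / cost * 100."""
    return (gain - cost) / cost * 100

# Vendor pitch: $500k in annual savings on a $200k license.
vendor_savings = 500_000
license_cost = 200_000
projected = roi(vendor_savings, license_cost)

# Adjusted view:
# 1. Baseline neglect: suppose $150k of those savings would have
#    happened anyway through improvements already underway.
# 2. Cost exclusion: suppose integration, retraining, and monitoring
#    add $180k the quote leaves out.
incremental_savings = vendor_savings - 150_000
total_cost = license_cost + 180_000
actual = roi(incremental_savings, total_cost)

print(f"Projected ROI: {projected:.0f}%")  # 150%
print(f"Adjusted ROI:  {actual:.0f}%")     # -8%
```

The projection and the measurement use the same formula; only the inputs differ. That is why the framework's first two questions target the baseline and the total cost, before any discussion of the model itself.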