The AI cost calculator your vendor doesn't want you to see
Three real client engagements. Actual costs, actual returns — including the one that lost money. We're showing you all three because the vendor who only shares the wins is the vendor you should not trust with your capital.
Client A: The clear win
A professional services firm with a six-person outbound sales team. They wanted AI-assisted lead qualification. We built an agent that scored inbound leads, drafted personalised outreach, and flagged high-priority prospects for human follow-up. Build cost: £28,000. Monthly operating cost: £1,400. Monthly senior sales time recovered: 84 hours. At their billing rate, that's £12,600 per month of senior time freed for higher-value work. Net of running costs, that is £11,200 per month, which repays the build inside three months. This is the case study vendors show you. It's real — and it is not typical.
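Client A's payback arithmetic can be sketched as a one-line model. The figures are the ones above; the function name is ours, not the client's:

```python
def payback_months(build_cost: float, monthly_op_cost: float, monthly_value: float) -> float:
    """Months until cumulative net benefit covers the build cost."""
    net_monthly = monthly_value - monthly_op_cost
    if net_monthly <= 0:
        return float("inf")  # the project never pays back
    return build_cost / net_monthly

# Client A: £28,000 build, £1,400/month to run, £12,600/month of senior time recovered
print(payback_months(28_000, 1_400, 12_600))  # → 2.5
```

The raw 2.5 lands in the third month of operation, which is where the three-month payback quoted above comes from.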
Client B: The break-even
A retailer wanted AI-powered demand forecasting to reduce overstock and stockouts. Build cost: £45,000. Monthly operating cost: £2,800. Inventory cost reduction in year one: £38,000. Against a total first-year cost of £78,600 (build plus twelve months of operation), that is a £40,600 shortfall: not profitable in year one. In year two, with the build cost behind them, the position turns positive. We modelled this upfront and they proceeded, correctly: the forecast model compounds value as it accumulates data. But year one looks bad on paper. Expect stakeholder questions, and answer them with the multi-year model you agreed before you started.
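Client B's per-year picture can be reproduced with a small model. We hold the year-two saving flat at £38,000 as a conservative floor; given the compounding claim, the real figure should be higher. The function name is illustrative:

```python
def yearly_net(build_cost: float, monthly_op_cost: float, yearly_savings: list) -> list:
    """Net position for each year; the build cost lands in year one only."""
    nets = []
    for year, saving in enumerate(yearly_savings, start=1):
        cost = monthly_op_cost * 12 + (build_cost if year == 1 else 0)
        nets.append(saving - cost)
    return nets

# Client B: £45,000 build, £2,800/month to run, £38,000 inventory saving (held flat)
print(yearly_net(45_000, 2_800, [38_000, 38_000]))  # → [-40600, 4400]
```

Even at the flat floor, year two is positive; the case for proceeding rests on the savings growing from there.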
Client C: The loss — and the lesson
A financial services firm wanted AI-generated regulatory reports to replace a contracted compliance function. Build cost: £62,000. It worked. Then their regulator changed the reporting format. Rebuild: £22,000. Then a key data source changed schema without notice. Another £8,000. Total spend: £92,000. Expected annual saving: £30,000. Break-even: somewhere around year four. The client cancelled at month 18. The lesson: AI systems in regulated environments with frequently changing requirements carry a hidden maintenance cost that most vendors fail to model — and that we under-modelled in this engagement.
Honest conclusion
The three conditions for positive ROI
AI pays off when: (1) the workflow is stable, (2) the data is clean, (3) the output is measurable. When any of these three is absent, the economics are harder than the pitch deck suggests. Ask your vendor for their worst-performing case study. If they don't have one, they haven't shipped enough to know what failure looks like.
How to model it properly
We use a four-part framework before committing to any AI build. First: identify the measurable output — not 'productivity improvement' but specific hours, error rates, or revenue figures. Second: model the maintenance cost — plan for 20 to 30% of build cost per year for ongoing changes and schema updates. Third: run a sensitivity analysis — what if data quality is 20% worse than expected? What if adoption is 60% of target? Fourth: agree the success metrics before you start, in writing, with sign-off from the stakeholder who will judge the outcome.
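The framework's second and third steps can be combined into one sketch: a first-year return function with the maintenance charge built in, run across downside scenarios. The inputs below reuse Client A's figures purely for illustration; the function and parameter names are ours:

```python
def first_year_roi(monthly_value: float, build_cost: float, monthly_op_cost: float,
                   maintenance_rate: float = 0.25,
                   data_quality: float = 1.0, adoption: float = 1.0) -> float:
    """First-year net return under the framework's assumptions.

    maintenance_rate: annual maintenance as a fraction of build cost
                      (the 20-30% guidance above; 25% as midpoint).
    data_quality, adoption: multipliers applied to the expected monthly value,
                            used for the sensitivity scenarios.
    """
    realised_value = monthly_value * data_quality * adoption * 12
    total_cost = build_cost + monthly_op_cost * 12 + build_cost * maintenance_rate
    return realised_value - total_cost

# Base case, then the two downside scenarios named in the sensitivity step
base = first_year_roi(12_600, 28_000, 1_400)
worse_data = first_year_roi(12_600, 28_000, 1_400, data_quality=0.8)   # data 20% worse
low_adoption = first_year_roi(12_600, 28_000, 1_400, adoption=0.6)     # 60% of target
print(round(base), round(worse_data), round(low_adoption))  # → 99400 69160 38920
```

The point of the exercise is the spread, not any single number: if the downside scenarios still clear zero, the build survives bad news.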
The framework
Six questions to ask before you commit
'What is the specific metric that changes? What is its current value? What is our target? What is the value of that change per month? What is total expected cost including three years of maintenance? What are the three most likely failure modes?' If a vendor cannot answer all six before you sign, they are guessing. So are you.
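The six questions map naturally onto a checklist with one field per answer; anything left blank is a guess. A minimal sketch (the class and field names are ours):

```python
from dataclasses import dataclass, fields
from typing import Optional

@dataclass
class VendorDueDiligence:
    """One field per question; a vendor who cannot fill all six is guessing."""
    metric: Optional[str] = None             # what specific metric changes?
    current_value: Optional[float] = None    # its value today
    target_value: Optional[float] = None     # the agreed target
    monthly_value: Optional[float] = None    # value of that change per month
    three_year_cost: Optional[float] = None  # total cost incl. three years of maintenance
    failure_modes: Optional[list] = None     # the three most likely failure modes

    def unanswered(self) -> list:
        """Names of the questions the vendor has not answered."""
        return [f.name for f in fields(self) if getattr(self, f.name) is None]

# A typical pitch deck answers two of the six
pitch = VendorDueDiligence(metric="senior hours recovered", monthly_value=12_600)
print(pitch.unanswered())
# → ['current_value', 'target_value', 'three_year_cost', 'failure_modes']
```

If `unanswered()` returns anything, the contract is not ready to sign.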