There is a predictable arc to failed FinOps programs. An organization decides it has a cloud cost problem. Leadership approves budget for tooling — Apptio Cloudability, CloudHealth, native AWS Cost Explorer, or one of half a dozen alternatives. A team is stood up, dashboards are configured, and within a few months the organization has more data about its cloud spend than it has ever had. It can see spend by account, by service, by tag, by region. It can see trend lines. It can see anomalies.

And then someone asks: which business unit is responsible for this cost? Which application? Which product? And the whole thing falls apart.

The tool didn't fail. The tool was working exactly as designed. What failed was the cost model — the set of decisions about how technology costs should be categorized, attributed, and allocated to the parts of the organization that consume them. That model either existed and was wrong, or it didn't exist at all. Either way, the result is the same: reports that generate arguments instead of action, dashboards that finance doesn't trust, and a FinOps program that produces a lot of output and very little decision support.

What a cost model actually is — and why most organizations don't have one

A cost model is not a report. It is not a dashboard. It is the structural logic that sits underneath all of those things — the taxonomy of how technology spend is organized, the rules that govern how shared costs are distributed, and the definitions that make it possible to compare numbers across time and across the organization.

The Technology Business Management (TBM) standard defines this taxonomy in terms of towers and sub-towers — IT cost categories organized in a hierarchy from raw infrastructure through applications to business capabilities. The TBM Council, which developed this standard, provides the reference model that most mature ITFM implementations are built on. The value of the standard is not the taxonomy itself — it's the shared definitions. When Finance asks what infrastructure costs, and IT can answer with a number that both functions agree is calculated the same way every time, in a structure that maps to how the organization actually operates, that's the output of a working cost model.

Most organizations don't have this. They have a chart of accounts. They have a cost center structure. They have tags — some of them, inconsistently applied. They have vendor invoices. What they don't have is a deliberate set of decisions about how those inputs should be organized and allocated to produce numbers that are defensible, consistent, and useful for the decisions that actually matter.

The core distinction

A cost allocation exercise takes existing cost data and distributes it across a predefined structure. A cost model defines that structure — and the principles governing how costs should move through it. Allocation without a model produces numbers. A model produces insight.

The three allocation decisions that determine whether a model works

Every cost model, regardless of the organization or the tooling, requires three categories of deliberate decision. Most programs either make these decisions implicitly — encoding assumptions into configuration without surfacing them for review — or skip them entirely and attempt to paper over the absence with technology. Neither approach produces defensible results.

1. How shared infrastructure costs are distributed

Most technology environments have significant shared infrastructure — data centers, network, shared platforms, security tooling, identity management. These costs are real and they are not trivially attributable to any single application or business unit. The question is how to distribute them.

The options are a spectrum from arbitrary to precise: equal-share allocation (politically untenable the moment any business unit believes it's subsidizing another), manual percentages (negotiated annually, usually encoding last year's politics rather than this year's consumption), and consumption-based allocation (precise but requiring the underlying consumption data to actually exist). Most organizations use some combination. The important thing is not which method is chosen — it's that the method is explicit, documented, consistently applied, and understood by the stakeholders who will be held accountable for the resulting numbers.
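The difference between the ends of that spectrum is easy to see in miniature. The sketch below (hypothetical figures, with `consumption` standing in for whatever metered driver the organization actually has, such as compute-hours) distributes one shared cost pool under the two bookend methods:

```python
# Illustrative sketch (hypothetical figures): distributing a shared
# infrastructure cost pool across business units under two methods.

SHARED_POOL = 1_200_000  # annual shared infrastructure cost, USD

# Consumption data (e.g., metered compute-hours) -- assumed to exist
# and be trustworthy, which is the real prerequisite for this method.
consumption = {"retail": 5_400, "wholesale": 2_100, "corporate": 1_500}

def equal_share(pool, units):
    """Every unit pays the same slice, regardless of what it consumes."""
    return {u: pool / len(units) for u in units}

def consumption_based(pool, usage):
    """Each unit pays in proportion to its measured consumption."""
    total = sum(usage.values())
    return {u: pool * v / total for u, v in usage.items()}

print(equal_share(SHARED_POOL, consumption))
# equal share: every unit pays 400,000
print(consumption_based(SHARED_POOL, consumption))
# consumption-based: retail 720,000; wholesale 280,000; corporate 200,000
```

Under equal share, retail is subsidized by corporate; under consumption-based allocation, retail carries most of the pool. Neither result is wrong in itself, but only one of them can survive a dispute, and only if the rule that produced it is written down.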

When allocation logic is buried in configuration and nobody can articulate the rules, the first time Finance or a business unit leader disputes a number, there is no defensible answer. The number is correct by accident of configuration, not by deliberate design. That is not a position any ITFM program can sustain.

2. How IT's own operational costs are handled

IT organizations consume technology too. The infrastructure that supports the service desk, the platforms the development teams work on, the tools IT uses to manage the environment — these are real costs that appear in the same cost pool as the infrastructure that serves the business. How they are handled is consequential.

If IT's own consumption is allocated back to the business units alongside the services those units consume, IT appears cheaper than it is. If it's held separately as an IT overhead category, business unit costs look artificially low. If it's excluded entirely from the model, the numbers don't reconcile to the financial statements. Every approach has implications. The right one depends on what the organization is trying to accomplish with the data — are they building a showback model for awareness, a chargeback model for financial accountability, or a cost-per-service model for unit economics? The answer changes how IT's internal consumption should be treated.
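The arithmetic behind those trade-offs is simple enough to sketch. With hypothetical figures, here is how two of the treatments change what each business unit sees while both still reconcile to the same total:

```python
# Illustrative sketch (hypothetical figures): two treatments of IT's
# own technology consumption and their effect on business-unit numbers.

bu_direct = {"retail": 600_000, "wholesale": 400_000}  # directly attributed
it_internal = 200_000  # infrastructure consumed by IT itself

def allocate_back(direct, overhead):
    """Spread IT's own consumption across units pro rata to direct cost."""
    total = sum(direct.values())
    return {u: c + overhead * c / total for u, c in direct.items()}

def hold_separately(direct, overhead):
    """Keep IT's consumption as a visible overhead category of its own."""
    return {**direct, "it_overhead": overhead}

print(allocate_back(bu_direct, it_internal))
# retail 720,000; wholesale 480,000 -- unit numbers now include cost
# the units cannot directly control
print(hold_separately(bu_direct, it_internal))
# retail 600,000; wholesale 400,000; it_overhead 200,000 -- unit
# numbers stay clean, and overhead is a line someone must own
```

Both versions sum to 1,200,000, so both reconcile to the financial statements; excluding `it_internal` entirely is the one option that does not.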

3. The definition of a "service" and the granularity of the model

This is the decision most organizations make too late — after months of configuration work — and then have to revisit. How finely should the cost model be sliced? Should "cloud infrastructure" be a single tower, or should it be broken into compute, storage, networking, and database separately? Should each application have its own cost category, or should applications be grouped by platform or business function?

The answer is almost always: less granular than the technology team wants, more granular than Finance thinks is necessary, and precisely as granular as the organization can maintain with the data quality it actually has. A model that is theoretically precise but practically unmaintainable degrades immediately. A model that is too high-level produces numbers that are accurate in aggregate but useless for decisions about specific services or applications. The calibration is one of the most consequential early decisions in any ITFM program.
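One practical way to test that calibration before committing to it is to measure whether the data can support the granularity at all. A minimal sketch, using hypothetical resource records: if only a minority of spend carries the tag a per-application model would key on, the model will mis-state the majority of it.

```python
# Illustrative sketch: a tag-coverage check before committing to
# per-application granularity. Resource records are hypothetical.
resources = [
    {"cost": 120.0, "tags": {"app": "checkout"}},
    {"cost": 300.0, "tags": {"app": "search"}},
    {"cost": 80.0,  "tags": {}},                # untagged
    {"cost": 500.0, "tags": {"env": "prod"}},   # tagged, but no app tag
]

def coverage(records, key):
    """Fraction of total spend attributable at the given tag key."""
    tagged = sum(r["cost"] for r in records if key in r["tags"])
    total = sum(r["cost"] for r in records)
    return tagged / total

# If only 42% of spend carries an 'app' tag, a per-application model
# mis-states the other 58% -- a strong signal to start at a coarser
# grain and tighten tagging discipline first.
print(f"{coverage(resources, 'app'):.0%}")  # prints 42%
```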

Why tooling selection happens before model design — and why this is backwards

The sequence of events in most FinOps and ITFM implementations reflects how procurement works, not how program design should work. Budget is approved for a platform. The platform is selected. Implementation begins. The cost model emerges from configuration decisions that are made under deadline pressure, by implementation consultants who have a template and a go-live date and a limited appetite for the organizational conversations that model design actually requires.

This is backwards. The cost model should be designed before a platform is selected, or at minimum before implementation begins. The reason is that platform configuration encodes the model. Once Apptio is configured with a particular allocation hierarchy and a set of allocation rules, changing that hierarchy is not a minor adjustment — it requires reconfiguration and retesting, and it produces historical discontinuities that make trend analysis unreliable. The cost of getting the model wrong after implementation is disproportionately high.

The conversations that produce a workable cost model are not technology conversations. They are organizational conversations: what decisions do we actually need this data to support? Who is accountable for what costs? What does Finance need to be able to reconcile to? What level of granularity can our tagging discipline actually support? These questions require the participation of Finance, IT leadership, and business unit stakeholders. They typically require someone with enough ITFM methodology experience to facilitate the conversation toward decisions rather than let it meander through competing preferences.

The most expensive configuration work in any ITFM implementation is not the work that was done. It's the rework required when the model that was built doesn't match the decisions the organization actually needs to make.

What a working model produces that a broken one doesn't

The difference is not cosmetic. Organizations with a working cost model — one where the taxonomy is deliberate, the allocation logic is documented, and the definitions are stable — can answer questions that organizations without one cannot.

They can answer the unit economics question: what does it actually cost to deliver this service, and how has that cost changed over the last three years? They can answer the comparison question: is this business unit's technology spend proportionate to what it's getting, or is there a subsidy flowing in one direction or the other? They can answer the budget defense question: here is what we spent, here is why, and here is what we would need to cut or grow to change that number by 10%.

These are not exotic questions. They are the questions that every CIO eventually gets asked by Finance, by the CFO, by a board member evaluating a technology investment decision. The difference between being able to answer them confidently and needing two weeks and a spreadsheet to produce a number nobody fully trusts is, almost entirely, a function of whether the cost model was designed deliberately before the tools were configured.

That is the allocation model problem. It's not a technology problem. It's a sequence-of-decisions problem — and it's fixable, but only if the right conversations happen before the configuration work begins.


Matter + Energy's Technology Spend Intelligence practice is built on the TBM Council standard — our founding-level involvement in that framework means our ITFM methodology is built directly from the standard itself, not reverse-engineered from someone else's interpretation of it. If your cost model needs to be rebuilt or your FinOps program is producing data that Finance doesn't trust, start with a conversation →