Technology consulting has two failure modes that pull in opposite directions, and both of them are expensive. The first is over-engineering: arriving at an engagement with a preferred architecture, a flagship platform, or a methodology that has worked before, and fitting the client's problem to that solution rather than the other way around. The second is under-engineering: defaulting to the simplest billable scope, avoiding the organizational conversations that the problem actually requires, and delivering something technically correct that does not address the real issue.

Both failure modes share a common root. They substitute the consultant's convenience — the familiar solution, the manageable scope — for the client's actual need. And because technology programs are complex enough that it takes months or years for the gap between a solution and its problem to become undeniable, both failure modes can survive long enough to be called successful before the evidence catches up.

The principle that the best solution is the one that fits the problem sounds obvious. It is not, in practice, how most technology programs are designed. This piece is about what it actually takes to operate that way.

The over-engineering trap and why smart consultants fall into it

Over-engineering is not usually the product of bad intent. It is the product of genuine expertise misapplied. A consultant who has built a successful ITFM (IT financial management) program at three Fortune 500 companies knows what a mature cost model looks like, what the platform configuration decisions are, and what the change management requirements will be. That knowledge is valuable. The trap is applying it before understanding whether the client is ready for a mature ITFM program — or needs one at all in the form those prior engagements took.

The diagnostic questions that distinguish the right scope from the preferred scope are not complicated, but they require the discipline to ask them before recommending anything. What decisions does the client need this program to support, and in what timeframe? What data quality and organizational readiness do they actually have, as opposed to what they say they have? What is the cost of getting a good-enough answer in 90 days versus a perfect answer in 18 months? What happens to the program if the executive sponsor changes or the budget is cut before implementation is complete?

These questions sometimes produce an answer that looks smaller than what the consultant knows how to build. A phase-one scope that delivers a working cost allocation model without the full TBM (Technology Business Management) tower hierarchy. An ITFM implementation that produces reliable showback in year one rather than full chargeback. An AI deployment that automates three high-value workflows instead of orchestrating the entire enterprise process landscape. Smaller is not worse. A solution that gets used and trusted is worth more than a solution that is technically superior and organizationally inert.

The under-engineering trap and why it is harder to see

Under-engineering is more insidious because it presents as pragmatism. Scope is reduced to what is achievable. Organizational complexity is treated as out of scope. Hard conversations about data quality, process ownership, and accountability are deferred to a later phase that never quite arrives. The deliverable is technically complete. The problem is not solved.

The most common version in ITFM and FinOps work is an allocation exercise that produces a cost report without addressing the cost model underneath it. The numbers are computed. They are probably wrong in ways nobody can easily identify. Finance does not trust them. Business units dispute their allocations. The report gets produced every month and ignored at every quarterly review. The program is technically running; it is not producing the decision support it was supposed to provide.

The most common version in AI work is a pilot deployment that demonstrates a capability without establishing the governance, measurement baseline, or organizational accountability structure required to expand it. The pilot succeeds on its own terms. It does not scale. The program office reports completion of the pilot milestone and moves on, and the capability never reaches the users it was supposed to serve.

In both cases, the under-engineering was visible from the beginning to anyone willing to look. The scope did not include the organizational work. The organizational work was the hard part. Avoiding it was a choice, usually a comfortable one, and the client paid for it later.

The question we ask before scoping anything

What decision does this program need to support, and who is the person who will be held accountable for making that decision? If the answer is clear, the scope follows from it. If the answer is unclear, scoping before answering it is how programs get designed for the wrong problem.

What it actually takes to find the right scope

Finding the right scope requires three things that are in tension with the commercial incentives of most technology consulting engagements.

The first is a genuine diagnostic before any recommendation. Not a discovery phase that produces a report confirming the solution the consultant already had in mind, but an honest assessment of what the organization is trying to accomplish, what constraints it is operating under, and what level of organizational readiness it actually has. This sometimes means telling a client that they are not ready for the platform they want to buy, or that the problem they described is not the problem that needs solving first.

The second is the willingness to recommend a smaller scope when a smaller scope is right. This is commercially uncomfortable. A well-scoped phase-one engagement that delivers measurable value in 90 days and earns the right to phase two is a better outcome than an over-scoped program that takes 18 months and disappoints. But it requires accepting a smaller initial engagement and trusting that demonstrable results earn the next conversation. That trust is not universal in consulting relationships.

The third is the organizational honesty to surface the hard problems, not just the technical ones. Most technology program failures are organizational failures — accountability gaps, data quality problems, process ownership disputes, change management failures — that manifest as technology failures because the technology is the visible artifact. A consultant who scopes around these problems is not solving them. They are scheduling them for later, when they will be more expensive and more entrenched.

The discipline of fitting the solution to the problem rather than the problem to the solution is what separates programs that deliver from programs that report. Reporting completion is easy. Delivering the decision support, the cost visibility, or the workflow automation that the organization actually needed is harder — and it starts before any configuration work begins.

Why this is a competitive position, not just a principle

At Matter + Energy, we do not scope before we understand the problem. That is not a methodology claim; it is a precondition for not wasting a client's time and budget on the wrong solution. The diagnostic always comes before the recommendation, whether the engagement is ITFM, FinOps, or AI.

Organizations that have been through over-engineered implementations that did not deliver, or under-scoped engagements that produced technically correct outputs nobody used, recognize this approach quickly. They have seen what happens when a consultant arrives with a solution already in hand. They know what a scoping conversation that is actually a scoping conversation feels like, as opposed to one that is a proposal dressed up as discovery.

The best solution is the one that fits the problem. That statement is a competitive position because most of the market does not operate that way. Finding the right scope requires intellectual honesty about what the client needs, what they are ready for, and what will actually get used. It is not complicated. It is just uncommon.


We do not recommend a solution before understanding the problem. If you want to have that diagnostic conversation before committing to a scope, start a conversation →