Who We Are
Matter + Energy brings together deep domain expertise, active AI research, and a broad network of TBM practitioners, software engineers, and delivery specialists — all organized around a founding conviction: the solution should fit the problem, not the other way around.
The same problem — two very different right answers
Federated cost model across 14 business units, 3 clouds, legacy mainframe
Requires layered allocation logic, a custom taxonomy, and six months of data normalization. A simpler, more elegant model would break at this scale.
Single-tenant SaaS with one cost center and clean billing data
A simple, well-configured model delivers the same visibility. Adding complexity here would waste budget and slow the team down.
We size the solution to the actual problem — not to the size of the fee, and not to the complexity that looks most impressive in a slide deck.
How We Think
There's a version of technology consulting that works like a vending machine: describe your problem, receive the standard solution. It's efficient. It's also wrong roughly half the time — because it conflates problems that share a category label but have almost nothing else in common.
An IT financial management problem at a 40,000-person federal agency with seventeen funding streams and a legacy mainframe is not the same problem as ITFM at a 2,000-person SaaS company with clean cloud billing and a single cost center. They get the same label. They require completely different approaches. We notice that difference — and we think it matters enough to build our entire methodology around it.
The same is true in AI adoption. A document-processing use case for a procurement team is not the same problem as an agentic reasoning system for operational decision-making. One calls for a well-configured workflow. The other calls for rigorous evaluation, governance architecture, and a much more careful conversation about what happens when it's wrong. Treating them the same isn't just inefficient — it's how organizations end up with solutions that are either dangerously over-engineered or embarrassingly underbuilt.
How our approach varies across problem types
Well-defined · Bounded
Single-entity cost model with clean data
Simple configuration → fast value
A well-configured Apptio instance with standard taxonomy. No custom allocation logic, no multi-layer hierarchy. In production in weeks with minimal change management.
Structured · Multi-layer
Enterprise with hybrid cloud and multiple business units
Custom taxonomy + phased rollout
Requires a cost model that reflects how the business actually operates. Allocation logic built bottom-up from real billing data. Phased by business unit to manage change.
Ambiguous · High-stakes
Federal agency with appropriated funding, IG oversight, legacy systems
Compliance-first architecture
Compliance is architectural, not cosmetic. Allocation methodology must survive IG scrutiny. Reporting structures mapped to appropriation categories, not just cost centers.
Complex · Novel
Multi-agency portfolio with cross-fund cost sharing and OTA structures
Custom methodology + extended discovery
No standard template applies. Requires a dedicated discovery engagement to map the cost structure before any platform work begins. At this stage, the discovery itself is the deliverable.
The best solution to a problem is the one that solves that specific problem — not the most sophisticated solution we know how to build, and not the simplest one we can justify billing for. We find the distinction genuinely interesting to work through. That's not a line. It's why the work doesn't get boring.
Matter + Energy — how we approach every new problem
Automation & Innovation
Innovation in automation doesn't always arrive wearing a large language model. Sometimes it's a well-designed workflow that eliminates three manual steps. Sometimes it's a trigger-based integration that connects two systems that have always been disconnected. Sometimes — and this is genuinely important to say clearly — a Python script, a scheduled job, and a well-structured API call will outperform a generative AI solution on cost, reliability, latency, and auditability. Combined.
We are genuinely excited about agentic AI, about the ways autonomous systems are changing what's possible in operations and decision support, and about the governance frameworks that make those systems trustworthy. We are also committed to choosing the right tool. That commitment sometimes means recommending something less novel — and we think that's a sign of intellectual honesty, not lack of ambition.
Trigger-based workflows, conditional routing, scheduled processes. These approaches are fast to implement, easy to audit, and produce deterministic results. For high-volume, well-defined processes, they frequently outperform AI alternatives on every relevant metric — cost, speed, error rate, and explainability. We build them without apology.
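As an illustration of what "deterministic, easy to audit" means in practice, here is a minimal sketch of a trigger-based routing rule in Python. The function name, queue names, and threshold are hypothetical, not a real client configuration; the point is that the same input always yields the same route.

```python
def route_invoice(invoice: dict) -> str:
    """Route an incoming invoice to a work queue using fixed rules.

    Deterministic: identical inputs always produce identical routes,
    so every decision can be replayed and audited after the fact.
    """
    if invoice.get("missing_fields"):
        return "manual-review"        # incomplete data: a human looks first
    if invoice["amount"] >= 25_000:
        return "approval-required"    # high-value: needs explicit sign-off
    if invoice["vendor_verified"]:
        return "auto-pay"             # the well-defined happy path
    return "vendor-verification"      # unknown vendor: verify before paying

# A clean, low-value invoice from a known vendor goes straight through.
print(route_invoice({"amount": 1_200, "vendor_verified": True,
                     "missing_fields": []}))  # auto-pay
```

No model inference, no retries, no prompt: for a process this well defined, the rule set itself is the documentation.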
Much of what passes for an "automation problem" is actually a data connectivity problem. The right intervention is a well-designed integration layer — APIs, event streams, transformation pipelines — that makes information available where and when it's needed. Often this unlocks more operational value than any AI overlay could.
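A transformation pipeline in that integration layer can be as small as a single mapping function. The sketch below assumes an invented source schema (`cc_code`, `amt`, `ts`) and target schema; real engagements derive both from the actual billing data.

```python
from datetime import datetime, timezone

def transform(record: dict) -> dict:
    """Normalize one source billing record onto a target system's schema.

    Illustrative field names only: trims and uppercases the cost-center
    code, coerces the amount to a rounded float, and converts a Unix
    timestamp to an ISO-8601 UTC date.
    """
    return {
        "cost_center": record["cc_code"].strip().upper(),
        "amount_usd": round(float(record["amt"]), 2),
        "billed_at": datetime.fromtimestamp(
            record["ts"], tz=timezone.utc).date().isoformat(),
    }

print(transform({"cc_code": " it-ops ", "amt": "1203.456", "ts": 1735689600}))
```

Once records arrive in a consistent shape, downstream reporting, reconciliation, or an eventual AI layer all become simpler; the connectivity work pays off regardless of what sits on top.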
When the process involves ambiguous inputs, multi-step reasoning, or decisions that can't be pre-encoded in a rule set, agentic AI becomes genuinely valuable. We design, deploy, and govern these systems through our AI Adoption practice — with our agentic automation platform for workflow orchestration and our AI governance platform to ensure they remain auditable and controllable.
The number that matters
of AI initiatives scaled
to enterprise-wide deployment
IBM Institute for Business Value, 2025 · 2,000 CEOs
Governance gaps, unclear success criteria, no baseline measurement before deployment, integration failures, change management that was never scoped. These are not AI problems. They are program management, data architecture, and organizational design problems that happen to involve an AI component. Our team has seen this pattern enough times to build around it — and we find the challenge of solving it well genuinely interesting.
The Team
Our core team brings together credentials that are genuinely rare in combination: founding-level involvement in the TBM standards that define enterprise IT financial management, active AI research, McKinsey-caliber strategic thinking, and hands-on delivery experience across complex enterprise and federal programs.
Behind that core is a broad network — TBM practitioners, software engineers, delivery specialists — who extend our reach across programs without diluting the quality of what gets delivered. We draw on that network deliberately: the right expertise for the specific problem, not the nearest available resource.
The TBM Council established the cost model taxonomy and measurement standards that now underpin enterprise IT financial management globally. Having a founding member on the team means our ITFM methodology isn't derived from the standard — it helped write it. That's a different kind of depth.
The analytical rigor, stakeholder communication discipline, and structured decomposition of complex problems that McKinsey develops over years of enterprise engagements are embedded in how we approach every problem, from initial discovery through the conversation that closes the work.
AI governance, agentic system design, and responsible deployment require an understanding of how these models actually work — their failure modes, their limitations, their susceptibility to specific kinds of misuse — not just familiarity with the APIs. Our AI practice is grounded in research, which makes our governance frameworks substantive rather than performative.
The gap between a technically sound solution and one that actually gets adopted is almost always a program management problem. Our delivery team brings experience running complex, multi-stakeholder technology programs — the kind where scope changes, budget conversations get hard, and organizational dynamics threaten the outcome. We've navigated those situations. We know what they look like early.
The combination of founding-level standards expertise, active research, and a broad delivery network means every engagement gets the right depth — without the overhead of an organization built to look large rather than perform well. That's the model we've built, and it's deliberate.
Certifications
Our certifications span the full range of disciplines our practice areas demand — from financial management frameworks to AI governance, cloud architecture, and federal security compliance.
SAFe
Scaled Agile Framework
Agile
Certified Practitioner
FinOps
FinOps Framework Certified
ITFM
IT Financial Management
AI Governance
AI Governance Professional
AI Platform
IBM watsonx Platform
AWS
Cloud & Solutions Architecture
CMMC
Cybersecurity Maturity Model Certification
NIST
NIST Frameworks — AI RMF · CSF
Registrations
WOSB · EDWOSB · SDB · SWaM · DBE
UEI
KFTRUXU6KYA4
CAGE
9TFF7
Explore Further
The pages below go deeper into the platforms we work with, the roles we're hiring for, and how to start a conversation.
We work on problems that matter for organizations that can't afford to get it wrong. If you bring specific depth in TBM, software delivery, AI, or enterprise IT — and want to work on problems that are genuinely hard — we'd like to hear from you.
Whether you have a specific opportunity, a procurement question, a press inquiry, or you want to understand whether there's a fit — this is the right place to start.