Insights

What we know about the problems you're working on.

Perspectives on technology financial management, AI adoption, federal procurement, and the intersection of cost intelligence and measurable outcomes — written for practitioners and leaders who need substance, not filler.


Written by practitioners.
For people doing the work.

These are not trend pieces. They are working arguments about problems we encounter repeatedly — on cost modeling, AI governance, federal acquisition, and the operational discipline that separates programs that deliver from programs that report.

5 min read
AI Governance

The Governance Gap Is Not a Technology Problem

84% of AI initiatives fail to scale enterprise-wide. The technical capability exists. The models work. What's missing is the accountability infrastructure that makes deployment defensible to leadership, auditors, and oversight bodies.

Read the piece
7 min read
Federal Acquisition

OTA Prototype Authority: What Program Managers Get Wrong

Most program offices treat OTA as a contracting workaround. It isn't. It's a congressionally designed acquisition strategy for exactly the kind of commercial AI and software capabilities agencies are trying to deploy now.

Read the piece
4 min read
Strategy

The Best Solution Is the One That Fits the Problem

Not the most sophisticated solution we know how to build. Not the simplest one we can justify billing for. The calibration between those two poles is where most technology programs succeed or fail before a line of configuration is written.

Read the piece
5 min read
ITFM · Federal

What the FITARA Scorecard Measures — and What It Misses

Federal agencies have spent years chasing FITARA grades. The scorecard measures CIO authority, data center consolidation, and cyber hygiene. It does not measure whether leadership can answer a basic question: what does our technology portfolio actually cost?

Read the piece
4 min read
Automation · AI

Automation Without AI: The Case for Keeping It Simple When Simple Works

Not every automation problem needs an AI component. Some of the highest-value workflow improvements are built on rule-based logic and conditional routing — no model required (see the sketch below). The compulsion to add AI where it isn't needed is its own form of gold-plating.

Read the piece
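
To make the distinction concrete, here is a minimal sketch of the kind of rule-based routing the piece describes. It is illustrative only; the ticket fields, queue names, and thresholds are hypothetical, not drawn from any client workflow.

    from dataclasses import dataclass

    @dataclass
    class Ticket:
        category: str      # e.g. "invoice", "access-request"
        amount: float      # dollar value, where relevant
        priority: str      # "normal" or "urgent"

    # Plain conditional routing: every rule is readable, testable, and auditable.
    def route(ticket: Ticket) -> str:
        if ticket.priority == "urgent":
            return "on-call-queue"
        if ticket.category == "invoice" and ticket.amount > 10_000:
            return "finance-review"
        if ticket.category == "invoice":
            return "auto-approve"
        return "general-queue"

    # A $25,000 invoice routes to finance review, with no model in the loop.
    print(route(Ticket(category="invoice", amount=25_000, priority="normal")))

The point is not sophistication. Logic like this is cheap to build, easy to explain to an auditor, and sufficient for a large share of routing problems.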
6 min read
AI Adoption · Federal

Why AI Pilots Don't Scale: The Program Management Argument

The 84% failure-to-scale rate isn't a model problem. It isn't a data problem. It is almost entirely a program management, organizational design, and governance problem — and the fix doesn't start with the technology.

Read the piece

What this class of work
produces.

The cases below illustrate outcomes from ITFM and AI programs built on the platforms and methodology Matter + Energy deploys. Client names are withheld. Original M+E case studies will be published as engagements complete and clients approve their stories.

ITFM · TBM

DoD Combatant Command

From Spreadsheets to a Single Source of Truth for a $20B IT Portfolio

A global warfighting command managing over $20 billion in IT assets and 90 mission-critical services had no defensible cost model. Budget reviews required 10–20 hours of manual spreadsheet work per service manager per week. Variances of $5 million or more in unfunded resource requests were routine. The command implemented TBM-standard cost modeling and IT financial management tooling to create a single authoritative source for all financial planning and execution data.

Budget review prep reduced from days to a few hours per cycle
Monthly and annual leadership reviews shortened from two weeks to three days
Budget variance reduced 20–30% with real-time obligation tracking
Engineering time redirected from reporting to mission delivery
Discuss a similar engagement

ITFM · TBM

Federal Civilian Agency

TBM as a Budget Defense Tool Under Declining Funding

A federal agency facing decreasing appropriations while increasing technology dependency needed to justify its IT portfolio to mission partners and oversight bodies. Spending more than half its budget sustaining legacy systems, it had no structured way to make the modernization case or defend investment decisions. TBM taxonomy implementation gave Finance and IT a common language, automated previously manual reporting, and created the cost transparency required for credible budget defense.

95% reduction in time spent on asset refresh monitoring and reporting
Real-time cost monitoring across 100+ cloud-migrated applications
Standardized IT spend reporting legible to OMB and oversight agencies
TBM taxonomy used for active investment decision support
Discuss a similar engagement

AI Adoption · Agentic Automation

Large Enterprise Organization

Agentic AI Deployed at Scale Across the Enterprise Workforce

A large enterprise deployed an agentic AI platform to automate multi-step workflows across business functions — eliminating repetitive manual tasks and routing work that previously required human coordination. The deployment was designed for production from the start: governance architecture in place before launch, measurement baseline established before the first interaction, operational ownership assigned to business units rather than an IT pilot team. The result was adoption at a scale that most AI programs never reach.

300,000+ monthly automated interactions at steady state
16,000+ hours of employee time recovered per month
73% workforce adoption — sustained, not just launched
Full audit trail and governance controls from day one
Discuss a similar engagement

AI Governance · watsonx

Global Technology Organization

Governing 2,700+ AI Use Cases Without Slowing Deployment Velocity

A global technology organization scaling AI across functions and geographies faced the governance problem that stops most programs: oversight requirements that create bottlenecks rather than enabling scale. By deploying AI governance tooling as infrastructure rather than compliance overlay — automated lifecycle monitoring, bias detection, model performance tracking, and policy enforcement built into the deployment architecture — the organization achieved operational efficiency gains while maintaining the accountability chain required by leadership and regulators (a sketch of the pattern follows below).

150% increase in operational efficiency across AI-governed workflows
2,700+ AI use cases processed with consistent governance controls
Bias and drift monitoring automated across the full model portfolio
Governance deployed as infrastructure, not post-deployment retrofit
Discuss a similar engagement
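
For readers who want a sense of what "governance built into the deployment architecture" can mean in practice, here is a minimal sketch of a pre-deployment policy gate. The metric names and thresholds are hypothetical; they are not drawn from this engagement or from any specific watsonx capability.

    from dataclasses import dataclass

    @dataclass
    class ModelReport:
        name: str
        accuracy: float         # evaluation accuracy on a held-out set
        bias_disparity: float   # outcome gap across protected groups
        drift_score: float      # distance between live and training inputs

    # Hypothetical thresholds, owned by the governance board rather than the model team.
    POLICY = {"min_accuracy": 0.85, "max_bias_disparity": 0.10, "max_drift": 0.25}

    def deployment_gate(report: ModelReport) -> list[str]:
        """Return policy violations; an empty list means the model may proceed."""
        violations = []
        if report.accuracy < POLICY["min_accuracy"]:
            violations.append(f"{report.name}: accuracy below policy floor")
        if report.bias_disparity > POLICY["max_bias_disparity"]:
            violations.append(f"{report.name}: bias disparity above policy ceiling")
        if report.drift_score > POLICY["max_drift"]:
            violations.append(f"{report.name}: drift above policy ceiling")
        return violations

Run as a required pipeline step, a check like this is the difference between governance as infrastructure and governance as retrofit: the control executes on every deployment, and the audit trail writes itself.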

These outcomes reflect patterns from deployments on platforms Matter + Energy implements — our ITFM platform for ITFM/TBM work and IBM watsonx for AI adoption and governance. Original M+E case studies will be published as engagements complete and clients approve their stories. Client names are withheld by policy.

Curated research from the sources
that inform how we work.

These are the frameworks, reports, and reference materials we return to — from standards bodies, government oversight agencies, and research organizations. Each one is annotated with why it matters and what it means for the problems our clients are solving.

ITFM & FinOps

AI Governance & Adoption

NIST

AI Governance · Risk Management · Framework

AI Risk Management Framework (AI RMF 1.0)

NIST's framework for identifying, assessing, and managing AI risk across the development and deployment lifecycle. The de facto governance reference for federal AI programs — and an increasing number of enterprise deployments. The four core functions (Govern, Map, Measure, Manage) provide the accountability structure that most organizations are missing when AI pilots stall.

Download Framework

DoD CDAO

AI Governance · Defense · Responsible AI

DoD Responsible AI Strategy & Implementation Pathway

The DoD's authoritative framework for responsible AI — covering the five principles (responsible, equitable, traceable, reliable, governable) and the implementation requirements defense AI programs must satisfy. Directly relevant to any agency AI initiative that requires IG, CDAO, or congressional accountability. The governance gap that prevents most defense AI from scaling lives in the distance between these principles and actual program architecture.

Download Strategy

IBM Institute for Business Value

AI Adoption · Enterprise · Research

CEO Study: The Enterprise Guide to AI Agents

IBM IBV research tracking enterprise AI adoption across thousands of organizations globally. The source for the 16% enterprise scale rate and 68% governance-to-ROI correlation. Reading the data carefully reveals that the gap between organizations that scale AI and those that don't is almost entirely explained by governance maturity — not model capability or budget.

Read Research

Federal Acquisition & Oversight

GAO

Federal IT · Oversight · High Risk

High-Risk Series: IT Acquisitions and Operations

GAO's ongoing assessment of the federal government's highest-risk programs. IT acquisition and operations has appeared on this list every cycle since 2015. The diagnostic framework identifies exactly where federal technology programs break down — requirements definition, cost estimation, schedule management, and contractor oversight. The patterns are consistent. The solutions are not simple.

View Series

DoD

OTA · Prototyping · Acquisition

DoD Guide to Other Transactions (2023 Edition)

The authoritative DoD reference for OTA prototype and production authority — eligibility requirements, participation structures, IP considerations, cost-sharing arrangements, and the follow-on production pathway. Essential reading for any program office considering OTA for a commercial technology or AI acquisition. Most misreadings of OTA authority that slow or derail programs are answered in this document.

Download Guide

McKinsey Global Institute

Digital Transformation · Program Management

Delivering Large-Scale IT Projects on Time, on Budget, and on Value

The research behind the 70% transformation failure rate that appears throughout our work and the work of most serious ITFM practitioners. McKinsey's analysis identifies the distinguishing factors: scope discipline, executive sponsorship, agile execution, and — critically — the decision to measure value delivery against outcomes, not milestones. The gap between what programs report and what they deliver is the central problem this research quantifies.

Read Research

External resources are linked directly to their original sources. Matter + Energy does not reproduce copyrighted material. Annotations reflect our editorial interpretation of each source's relevance — not endorsement of all findings or positions. Links are verified periodically; if a link is broken, contact us and we'll update it.

Sessions worth
showing up for.

Webinars, briefings, and working sessions on the problems our clients are actively navigating. We don't run events on a schedule — we run them when we have something specific worth saying.

Q2 2025
Webinar · Federal & Defense

From Prototype to Production: Navigating OTA Authority for AI Programs

A practical session for federal program managers and contracting officers on structuring OTA prototype agreements for AI and software capabilities — participation requirements, IP considerations, success criteria definition, and the follow-on production pathway. Concrete. No slide decks that restate the statute.

Register Interest
Additional events are scheduled as topics develop. If there is a specific subject you'd like us to address in a working session or briefing, let us know →