Turning Analytics Into Decisions

A Decision Science framework for executives.

By Michael Bagalman — VP of Business Intelligence and Data Science at Starz, Professor of Practice at the University of Oklahoma.

Framework Summary — At a Glance

Decision Science is the discipline of identifying which questions are worth answering with data and building the systems that translate findings into action.

The gap between analytics capability and business impact is not technical. It is conceptual. Organizations that close this gap outperform those that don't, regardless of how sophisticated their models are.

What Is Decision Science?

This framework builds on the broader philosophy of Decision Science described here. What follows is the practical, step-by-step implementation.

Decision Science is the discipline of identifying which questions are worth answering with data and building the frameworks that translate analytical findings into better organizational decisions.

Data Science tells you how fast the car can go. Decision Science asks whether you're driving toward a gold mine or a cliff.

Most organizations invest heavily in the first and almost nothing in the second. The result is analytically sophisticated teams producing work that describes what happened last quarter without meaningfully improving what will happen next quarter.

The gap is not technical incompetence. It is a conceptual starting-point problem.

Why Analytics Projects Fail to Change Decisions

The failure mode is consistent across industries, company sizes, and analytical maturity levels. Analytics projects fail to change decisions for one or more of the following reasons:

They start with data, not decisions

Most analytics projects begin with the questions: What data do we have? What can we analyze? Can we build a model?

The right starting point: What specific decision are we trying to improve? What is the cost of being wrong? What uncertainty is currently blocking action?

Starting with data produces analysis. Starting with decisions produces improvement.

They measure what's easy, not what matters

Dashboard metrics are frequently chosen for availability and impressiveness rather than decision relevance. The result is measurement systems that trend in directions leadership likes but say nothing about what to do differently. Vanity metrics are not just harmless noise: they crowd out the analytical work that would actually improve decisions.

They produce insight without a recipient

An analytical finding without a pre-specified decision it supports is a fact in search of a use case. Organizations that commission analysis without first defining who will act on it, under what conditions, and by when, reliably produce expensive reports that get filed and ignored.

They optimize for rigor, not trust

A statistically rigorous model that executives don't understand and can't interrogate is less useful than a simpler model that they trust and actually use. Complexity is a cost. If you cannot explain the model to the board, you do not understand the risk; neither do they.

The Missing Layer: Decision Rules

The mechanism that closes the loop between analysis and action is the Decision Rule.

A Decision Rule is a pre-defined framework that specifies the metric the analysis will produce, the thresholds that trigger action, who makes the decision and by when, and what the organization will do under each outcome.

Decision Rules force the conversation that most organizations skip: the conversation about what the analysis is actually for. They prevent the most common failure mode in analytics: the project that produces a correct answer to a question nobody was going to act on.

Building a Decision Rule before building a model is the single highest-leverage habit change for an analytics organization. It filters out low-value work before resources are committed, and it ensures that high-value work has a clear path from finding to action.
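A Decision Rule can be made concrete in code. The sketch below is illustrative only; the field names and the example figures are assumptions about how a team might encode one, not part of the framework itself:

```python
from dataclasses import dataclass

@dataclass
class DecisionRule:
    """A pre-committed mapping from an analytical result to an action."""
    metric: str            # what the analysis will measure
    threshold: float       # the pre-agreed trigger point
    decision_maker: str    # who acts on the result
    deadline: str          # by when the decision must be made
    action_if_above: str   # response if the result meets or exceeds the threshold
    action_if_below: str   # response if it does not

    def decide(self, result: float) -> str:
        """Return the pre-committed action for a given result."""
        return self.action_if_above if result >= self.threshold else self.action_if_below

# Hypothetical example: a churn-analysis rule agreed on before the model is built.
rule = DecisionRule(
    metric="projected quarterly churn (%)",
    threshold=5.0,
    decision_maker="VP, Subscriber Marketing",
    deadline="next quarterly planning cycle",
    action_if_above="shift retention budget to win-back campaigns",
    action_if_below="hold current allocation",
)
print(rule.decide(6.2))  # -> shift retention budget to win-back campaigns
```

Committing the rule to a shared artifact, even one this small, forces the thresholds and response scenarios to be agreed before the model exists.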

A Practical Framework: Five Steps

Step 1 — Start with the decision

Every analytics project should begin with a decision statement: "We are trying to decide [X]. The stakes are [Y]. The decision will be made by [Z] by [date]." If you cannot complete that sentence, you are not ready to start the analysis.

Step 2 — Define "different"

Before any data is pulled, answer: What would we do if the result comes back high? What would we do if it comes back low? If the answer to both questions is "we'd do the same thing," the analysis is not decision-relevant. Abandon it or reframe it.

Step 3 — Quantify the cost of uncertainty

Most organizations frame the analytics decision as: "Is this model good enough?" The better question is: "What is the cost of acting without this analysis versus the cost of delay?" Uncertainty is manageable. The failure to manage it is optional, and expensive.
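One way to make "cost of acting without this analysis versus cost of delay" concrete is a simple expected-cost comparison. The probabilities and dollar figures below are invented for illustration; in practice they come from the stakeholders who own the decision:

```python
# Hypothetical figures for a $2M reallocation decision.
p_wrong_without = 0.40      # chance the unaided decision backfires
p_wrong_with = 0.15         # chance it backfires after the analysis
cost_of_wrong = 2_000_000   # downside of a bad reallocation
cost_of_delay = 150_000     # revenue lost while waiting for the analysis
cost_of_analysis = 50_000   # analyst time, data, tooling

expected_cost_act_now = p_wrong_without * cost_of_wrong
expected_cost_wait = p_wrong_with * cost_of_wrong + cost_of_delay + cost_of_analysis

print(f"Act now: ${expected_cost_act_now:,.0f}")   # Act now: $800,000
print(f"Analyze: ${expected_cost_wait:,.0f}")      # Analyze: $500,000
```

The analysis is worth commissioning only when the second number is lower than the first; otherwise the uncertainty is cheaper to accept than to resolve.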

Step 4 — Build the Decision Rule first

Specify the thresholds, the decision-maker, and the response scenarios before building the model. This prevents scope creep, ensures stakeholder alignment, and creates accountability for acting on results.

Step 5 — Close the loop

Track whether the decision actually changed. This is the step most organizations skip, and it's the only way to know whether the analytics investment is generating return. If analysis consistently fails to change decisions, the problem is usually upstream: the Decision Rule was not defined, or the wrong question was being answered.
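Closing the loop does not require tooling. A minimal version is a log of analyses and whether each one changed a decision; the project names and fields below are hypothetical:

```python
from collections import namedtuple

Project = namedtuple("Project", ["name", "decision_rule_defined", "decision_changed"])

# A hypothetical quarter's worth of analytics projects.
log = [
    Project("Churn model", decision_rule_defined=True, decision_changed=True),
    Project("Dashboard refresh", decision_rule_defined=False, decision_changed=False),
    Project("Ad budget reallocation", decision_rule_defined=True, decision_changed=True),
    Project("Engagement deep-dive", decision_rule_defined=False, decision_changed=False),
]

acted_on = sum(p.decision_changed for p in log)
print(f"{acted_on}/{len(log)} analyses changed a decision")  # 2/4 analyses changed a decision
```

Reviewing this log quarterly tends to surface the upstream pattern the text describes: the projects that never changed a decision are usually the ones that never had a Decision Rule.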

What This Looks Like in Practice

At Starz, a subscriber growth target created a budget allocation question: where should several million dollars in digital advertising go? The president wanted the analysis before committing to the reallocation.

The decision was clear. The analysis question followed directly from it: which platforms had the greatest remaining penetration into the qualified prospect market? The finding was specific. The budget moved. That is Decision Science in practice, not because the analysis was especially sophisticated, but because it was built backward from a decision that needed to be made.


What Good Looks Like

An organization practicing Decision Science at the executive level exhibits these behaviors: analyses begin with a decision statement, not a dataset; Decision Rules are agreed before models are built; metrics are chosen for decision relevance rather than availability; and whether each analysis actually changed a decision is tracked and reviewed.

Most organizations have some of these. Very few have all of them. The gap between "some" and "all" is the difference between an analytics function that generates cost and one that generates competitive advantage.

This framework is not theoretical. Here are examples of how it has been applied in practice across streaming analytics, pharmaceutical marketing, and experimentation systems.

Further Reading

The full philosophical argument behind this framework is in What Is Decision Science? And Why Most Companies Get It Wrong.

If you want to see how this framework plays out in practice, the Impact page has real examples from streaming analytics, pharmaceutical marketing, and consumer goods — each structured as problem, approach, and outcome.

Related writing in the Data Science Rabbit Hole publication covers the specific failure modes (dashboard theater, vanity metrics, A/B testing culture) in more depth.