Framework Summary — At a Glance
Decision Science is the discipline of identifying which questions are worth answering with data and building the systems that translate findings into action.
The gap between analytics capability and business impact is not technical. It is conceptual. Organizations that close this gap outperform those that don't, regardless of how sophisticated their models are.
- Step 1: Start with the decision, not the data.
- Step 2: Define what "different" looks like: what would change if the answer came back high vs. low.
- Step 3: Quantify the cost of uncertainty, not just the cost of being wrong.
- Step 4: Build a Decision Rule before building a model.
- Step 5: Close the loop: measure whether the decision actually changed.
What Is Decision Science?
This framework builds on the broader philosophy of Decision Science (see Further Reading). What follows is the practical, step-by-step implementation.
Decision Science is the discipline of identifying which questions are worth answering with data and building the frameworks that translate analytical findings into better organizational decisions.
Data Science tells you how fast the car can go. Decision Science asks whether you're driving toward a gold mine or a cliff.
Most organizations invest heavily in the first and almost nothing in the second. The result is analytically sophisticated teams producing work that describes what happened last quarter without meaningfully improving what will happen next quarter.
The gap is not technical incompetence. It's a conceptual starting point problem.
Why Analytics Projects Fail to Change Decisions
The failure mode is consistent across industries, company sizes, and analytical maturity levels. Analytics projects fail to change decisions for one or more of the following reasons:
They start with data, not decisions
Most analytics projects begin by asking: What data do we have? What can we analyze? Can we build a model?
The right starting point: What specific decision are we trying to improve? What is the cost of being wrong? What uncertainty is currently blocking action?
Starting with data produces analysis. Starting with decisions produces improvement.
They measure what's easy, not what matters
Dashboard metrics are frequently chosen for availability and impressiveness rather than decision relevance. The result is measurement systems that trend in directions leadership likes, and tell you nothing about what to do differently. Vanity metrics are not just harmless noise. They crowd out the analytical work that would actually improve decisions.
They produce insight without a recipient
An analytical finding without a pre-specified decision it supports is a fact in search of a use case. Organizations that commission analysis without first defining who will act on it, under what conditions, and by when, reliably produce expensive reports that get filed and ignored.
They optimize for rigor, not trust
A statistically rigorous model that executives don't understand and can't interrogate is less useful than a simpler model that they trust and actually use. Complexity is a cost. If you cannot explain the model to the board, you do not understand the risk; neither do they.
The Missing Layer: Decision Rules
The mechanism that closes the loop between analysis and action is the Decision Rule.
A Decision Rule is a pre-defined framework that specifies:
- What question the analysis is answering
- What evidence threshold would change the decision
- Who makes the call
- What the response will be across different outcome scenarios
- When the decision needs to be made
Decision Rules force the conversation that most organizations skip: the conversation about what the analysis is actually for. They prevent the most common failure mode in analytics: the project that produces a correct answer to a question nobody was going to act on.
Building a Decision Rule before building a model is the single highest-leverage habit change for an analytics organization. It filters out low-value work before resources are committed, and it ensures that high-value work has a clear path from finding to action.
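As a concrete illustration, a Decision Rule can be written down as a small structured record before any modeling starts. The sketch below is hypothetical: the field names, the churn question, the dollar figures, and the thresholds are all invented to show the five elements listed above, not a prescribed schema.

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class DecisionRule:
    """A pre-commitment, written before any model is built."""
    question: str              # what the analysis is answering
    evidence_threshold: str    # what result would change the decision
    decision_maker: str        # who makes the call
    responses: dict[str, str]  # outcome scenario -> pre-agreed action
    decide_by: date            # when the decision must be made


# Hypothetical example: a churn-driven retention budget decision.
rule = DecisionRule(
    question="Is churn among annual-plan subscribers above 5% per quarter?",
    evidence_threshold="Churn estimate above 5%, with the interval excluding 5%",
    decision_maker="VP of Subscriber Marketing",
    responses={
        "above threshold": "Shift $2M from acquisition to retention offers",
        "below threshold": "Keep the current budget split; revisit next quarter",
        "inconclusive": "Extend the measurement window by one month, then decide",
    },
    decide_by=date(2025, 3, 31),
)
```

If any field cannot be filled in, the conversation the rule is meant to force has not happened yet, and the project is not ready to start.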
A Practical Framework: Five Steps
Step 1 — Start with the decision
Every analytics project should begin with a decision statement: "We are trying to decide [X]. The stakes are [Y]. The decision will be made by [Z] by [date]." If you cannot complete that sentence, you are not ready to start the analysis.
Step 2 — Define "different"
Before any data is pulled, answer: What would we do if the result comes back high? What would we do if it comes back low? If the answer to both questions is "we'd do the same thing," the analysis is not decision-relevant. Abandon it or reframe it.
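One way to make this test mechanical: write down the planned action for each plausible outcome and check whether any of them differ. The function and scenarios below are a hypothetical sketch of that check, not part of any particular tool.

```python
def is_decision_relevant(planned_actions: dict[str, str]) -> bool:
    """True if at least two outcome scenarios lead to different actions."""
    return len(set(planned_actions.values())) > 1


# Hypothetical example: the analysis matters only if a "high" and a "low"
# result would be acted on differently.
actions = {
    "result comes back high": "reallocate budget to Platform A",
    "result comes back low": "hold the current allocation",
}
print(is_decision_relevant(actions))  # True -> worth running

same_either_way = {
    "result comes back high": "proceed with the launch",
    "result comes back low": "proceed with the launch",
}
print(is_decision_relevant(same_either_way))  # False -> abandon or reframe
```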
Step 3 — Quantify the cost of uncertainty
Most organizations frame the analytics decision as: "Is this model good enough?" The better question is: "What is the cost of acting without this analysis versus the cost of delay?" Uncertainty itself is manageable; failing to manage it is a choice, and an expensive one.
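A rough way to frame that comparison numerically: weigh the expected cost of deciding now, under current uncertainty, against the expected cost of deciding after the analysis, including the cost of the delay itself. Every figure below is invented for illustration.

```python
# All figures are hypothetical, in dollars.
p_wrong_now = 0.40         # chance the no-analysis decision is wrong
cost_if_wrong = 1_000_000  # downside of the wrong call
p_wrong_after = 0.15       # chance of a wrong call after the analysis
analysis_cost = 50_000     # cost of the analysis itself
delay_cost = 120_000       # opportunity lost while waiting for the answer

expected_cost_now = p_wrong_now * cost_if_wrong
expected_cost_after = p_wrong_after * cost_if_wrong + analysis_cost + delay_cost

print(f"Act now:           ${expected_cost_now:,.0f}")    # $400,000
print(f"Wait for analysis: ${expected_cost_after:,.0f}")  # $320,000
# The analysis is worth commissioning only if it beats acting now by a
# margin that justifies the added delay and effort.
```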
Step 4 — Build the Decision Rule first
Specify the thresholds, the decision-maker, and the response scenarios before building the model. This prevents scope creep, ensures stakeholder alignment, and creates accountability for acting on results.
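Specifying the rule first also means the response to any given result is mechanical once the analysis arrives. A minimal, self-contained sketch of that pre-agreed mapping, with invented thresholds and actions:

```python
def pre_agreed_action(estimated_churn: float) -> str:
    """Map an analysis result to the response agreed before modeling began.

    The thresholds and actions here are hypothetical; in practice they are
    set by the decision-maker when the Decision Rule is written.
    """
    if estimated_churn > 0.05:
        return "Shift $2M from acquisition to retention offers"
    if estimated_churn < 0.03:
        return "Keep the current budget split; revisit next quarter"
    return "Extend the measurement window by one month, then decide"


print(pre_agreed_action(0.062))  # above threshold -> reallocate
print(pre_agreed_action(0.041))  # ambiguous zone -> extend the window
```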
Step 5 — Close the loop
Track whether the decision actually changed. This is the step most organizations skip, and it's the only way to know whether the analytics investment is generating return. If analysis consistently fails to change decisions, the problem is usually upstream: the Decision Rule was not defined, or the wrong question was being answered.
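Closing the loop can be as simple as keeping a log of completed analyses and what happened to the decision afterward, then reviewing the rate at which decisions actually changed. The entries below are invented for illustration.

```python
# Hypothetical decision log: one entry per completed analysis.
decision_log = [
    {"analysis": "Platform penetration model", "decision_changed": True},
    {"analysis": "Quarterly churn deep-dive", "decision_changed": False},
    {"analysis": "Promo lift test", "decision_changed": True},
    {"analysis": "Vanity dashboard refresh", "decision_changed": False},
]

changed = sum(entry["decision_changed"] for entry in decision_log)
rate = changed / len(decision_log)
print(f"{changed}/{len(decision_log)} analyses changed a decision ({rate:.0%})")

# A persistently low rate points upstream: missing Decision Rules,
# or questions nobody was going to act on.
```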
What This Looks Like in Practice
At Starz, a subscriber growth target created a budget allocation question: where should several million dollars in digital advertising go? The president wanted the analysis before committing to the reallocation.
The decision was clear. The analysis question followed directly from it: which platforms had the greatest remaining penetration into the qualified prospect market? The finding was specific. The budget moved. That is Decision Science in practice, not because the analysis was especially sophisticated, but because it was built backward from a decision that needed to be made.
What Good Looks Like
An organization practicing Decision Science at the executive level exhibits these behaviors:
- Analytics requests begin with a decision statement, not a data request
- Stakeholders can articulate what they would do differently under different analytical outcomes, before the analysis runs
- Model complexity is calibrated to the decision, not to impressiveness
- Dashboards are organized around decisions, not metrics
- The analytics team tracks decision outcomes, not just model accuracy
- The question "compared to what?" gets asked before any result is presented
Most organizations have some of these. Very few have all of them. The gap between "some" and "all" is the difference between an analytics function that generates cost and one that generates competitive advantage.
This framework is not theoretical. The examples linked under Further Reading show how it has been applied in practice across streaming analytics, pharmaceutical marketing, and experimentation systems.
Further Reading
The full philosophical argument behind this framework is in What Is Decision Science? And Why Most Companies Get It Wrong.
If you want to see how this framework plays out in practice, the Impact page has real examples from streaming analytics, pharmaceutical marketing, and consumer goods — each structured as problem, approach, and outcome.
Related writing in the Data Science Rabbit Hole publication covers the specific failure modes (dashboard theater, vanity metrics, A/B testing culture) in more depth.