Writing

Anti-hype, practitioner-tested, and written for people who want clarity over comfort.

I publish through the Data Science Rabbit Hole on Medium, a regular column at All Things Insights, and a newsletter alongside frequent posts on LinkedIn. The through-line: stripping away the performative complexity from data science and AI to ask what these tools actually do, and what they don't.

Selected Essays

A few pieces you might find interesting. The full archive is on Medium.

Future Outlook • Medium • 2025

2026 AI Predictions: When the Science Fair Closes and the Factory Turns the Lights On

The AI hype cycle is transitioning from "look what it can do" to "does it actually work at scale?" This piece argues that 2026 marks the shift from experimentation theater to operational discipline, and examines what that means for organizations that have been running science fairs instead of building factories. One of the most-read pieces in the DSRH archive.

For Executives • For Practitioners
Hiring & Leadership • Medium

Hire the Old Dude

A contrarian case for experienced practitioners in an industry obsessed with novelty. Pattern recognition doesn't come from reading papers; it comes from watching the same mistakes get made in different organizations under different names for thirty years. This piece generated significant response from practitioners and hiring managers alike.

For Executives • Leadership & Culture
Industry Analysis • Medium

Netflix’s Generative Search Revolution: Now with 40% Less Fairy Dust

Moving beyond the generative AI hype to understand what Netflix's content discovery overhaul actually means: for the business, for the user, and for the industry watching from the sidelines. A concrete dissection of a real strategy change rather than another think piece about AI's unlimited potential.

For Practitioners • For Executives
Probability • Medium • Most Read

Predicting Sports Outcomes with a Simple Probability Model

The most-read piece in the DSRH archive. A worked example of how a straightforward probabilistic model can outperform intuition, using sports because the feedback loop is clean and the data is public. The underlying logic applies well beyond the scoreboard.

For Practitioners
Statistics • Medium

Battleground State Polls Are Useless (And Math Can Prove It)

A quantitative argument that the data source everyone treats as authoritative during election season is, by any rigorous standard, too noisy to support the conclusions drawn from it. The math isn't complicated. The willingness to follow it to an uncomfortable place is the point.

For Practitioners • For Executives

All Things Insights Column

My regular column at All Things Insights covers data science strategy, AI for business, and decision science for an audience of marketing and analytics executives. Selected recent columns:

Strategy • All Things Insights

The AI Renovation Playbook

Why tearing the data stack down and rebuilding is almost never the right answer, and a practical framework for executives navigating legacy systems without halting operations. The renovation analogy turns out to be remarkably precise.

For Executives
Decision Science • All Things Insights

The Missing Layer: How to Turn Analytics into Better Decisions

Most analytics investments improve the quality of information without improving the quality of decisions. This column argues that the gap between the two is structural, not technical, and identifies what organizations are typically missing.

For Executives

View all columns at All Things Insights →

Older columns at All Things Innovation →

Core Themes

Three arguments show up in nearly everything I write. Not because I planned it that way, but because they keep being the answer to whatever question I'm actually looking at. They're also the core of the Decision Science framework this site is built around.

AI Pragmatism Over AI Evangelism

The AI industry has a marketing problem. Every few years, a new capability arrives and the consensus narrative leaps straight from "this is technically interesting" to "this changes everything." The gap between what the technology demonstrably does and what the industry claims it does keeps widening — and organizations that build strategy around the claims rather than the reality pay for it in wasted investment, demoralized teams, and the quiet embarrassment of pilots that never shipped.

My argument is not that AI is overhyped in the abstract. It's that the specific thing being hyped changes constantly while the hype mechanism stays identical. GenAI is the latest instance of a pattern I've watched play out with Big Data, deep learning, blockchain for enterprise, and a handful of others. Recognizing the pattern doesn't make you a skeptic. It makes you someone who can distinguish signal from noise — which is, not coincidentally, also what good data science requires.

Read: 2026 AI Predictions — Closing the Science Fair →

Pattern Recognition as an Executive Skill

Pattern recognition is the thing you can't teach in a bootcamp or acquire from a paper. It comes from watching the same failure mode appear in five different industries under five different names. From knowing, before the slide deck is finished, which assumptions are going to fall apart in the boardroom. Experience doesn't just expand the toolkit; it changes what you see when you look at a problem.

The most valuable thing an experienced practitioner brings to an organization is not a larger toolkit. It's the ability to recognize which problems are actually new and which are familiar problems wearing new clothes. That distinction matters because the solutions are different. New problems require experimentation. Familiar problems require the wisdom not to reinvent what already works.

Dashboard Theater and the Measurement Trap

There is a category of organizational behavior I think of as analytics performance: the construction of impressive-looking measurement systems whose primary function is to signal rigor rather than produce it. Dashboards with 47 metrics. Quarterly business reviews organized around charts that everyone looks at and nobody acts on. KPIs chosen because they trend in a direction leadership likes rather than because they're connected to decisions anyone is making.

This is not a small or harmless problem. Dashboard theater is expensive, it crowds out the analytical work that would actually improve decisions, and, perhaps most damagingly, it trains organizations to equate measurement with strategy. When the measurement system is beautiful and the strategy is still blurry, everyone has been looking at the wrong problem. The fix is not a better dashboard. It's a clearer decision.

Read: What Is Decision Science? — The full argument →

Follow the Work

New pieces appear on Medium and All Things Insights, typically a few times a month. The newsletter and LinkedIn posts are the fastest way to keep up.

Book

Book • Amazon

Data Science for Decision Makers

Available on Amazon. A practitioner's guide to the discipline of decision science, written for executives who need to understand what their data teams are actually capable of, and for data scientists who need to understand what executives actually need.

For Executives • For Practitioners

Video & Lectures