Blog
What is experimentation programme management?
Programme vs platform. What it is, why it matters, and what happens when you don’t have it.
Most people know what an A/B testing platform is: Optimizely, VWO, Google Optimize (RIP), or one of the many others. They serve the variants, collect the data, and sometimes run the stats. That’s running experiments.

Experimentation programme management is different. It’s the system for managing the programme around those tests: what you’re going to test, why, what you learned, what you decided, and what you did next.
Programme vs platform
A testing platform answers: “Did variant B beat control?” It doesn’t answer: “Why did we run this test?” “What research supported it?” “What did we decide and why?” “What’s the follow-up?” “How does this fit into our roadmap?” Those questions belong to programme management. The testing tool is the engine; the programme is the workflow, the backlog, the decisions, and the institutional memory that keeps the whole thing from turning into a random list of tests.
The lifecycle
A proper experimentation programme has a clear lifecycle. It usually looks something like:
- Research: Surveys, interviews, heatmaps, session recordings. The raw material.
- Hypotheses: “If we do X, then Y will happen, because Z.” Testable, prioritised, linked to research.
- Experiments: The actual A/B (or MVT) test. Designed, run, analysed.
- Results & decisions: What happened, what you decided (ship, iterate, stop, investigate), and why.
- What you did next: Follow-up tests, iterations, or new hypotheses. The chain that turns one test into a programme.
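Prioritising the hypothesis backlog usually comes down to a simple scoring framework such as ICE (Impact, Confidence, Ease, each rated 1–10 and averaged). A minimal sketch of that idea, with hypothetical hypotheses and the averaging convention (some teams multiply the three ratings instead):

```python
def ice_score(impact: int, confidence: int, ease: int) -> float:
    """ICE: average of three 1-10 ratings. Higher score = test sooner."""
    return (impact + confidence + ease) / 3

# Hypothetical backlog entries, scored and sorted into a test order.
backlog = [
    ("Simplify checkout form", ice_score(impact=8, confidence=6, ease=7)),
    ("New hero headline",      ice_score(impact=4, confidence=5, ease=9)),
    ("Reorder pricing tiers",  ice_score(impact=7, confidence=4, ease=3)),
]
backlog.sort(key=lambda item: item[1], reverse=True)

for name, score in backlog:
    print(f"{score:4.1f}  {name}")
```

The ratings themselves are judgment calls, which is exactly why linking each hypothesis back to its research matters: the Confidence number should point at evidence, not a hunch.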
When that lifecycle lives in one place, with links from research to hypotheses, hypotheses to experiments, and experiments to decisions and follow-ups, you get iteration chains and institutional knowledge. When it’s scattered across spreadsheets, Slack, and the testing platform’s native reports, you get the opposite.
Why it matters
Without programme management, you risk:
- re-testing the same thing because nobody can find the first result;
- losing years of learnings when someone leaves;
- running tests with no clear hypothesis or no link to research;
- decisions that vanish into Slack so the next person has to guess; and
- no defensible story for ROI when leadership asks what the CRO programme is delivering.
With it, you can show what you tried, what you learned, and what you did next. You can prioritise from a backlog that’s actually connected to evidence. You can report revenue impact with an audit trail. And when someone new joins, the programme doesn’t start from zero.
What programme management software does
Purpose-built programme management ties the lifecycle together. You get:
- a research library you can link to hypotheses;
- hypothesis tracking with prioritisation (PIE, ICE, PIG, or your own);
- experiment tracking that connects to your testing platform (or stands alone);
- Bayesian analysis and stopping rules so you don’t p-hack or declare winners too early;
- decision capture with rationale and optional revenue impact; and
- iteration chains so follow-ups are first-class and the “what we did next” is never lost.
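To make the Bayesian point concrete: instead of a p-value, you estimate the probability that the variant genuinely beats control, and only call a winner past a pre-agreed threshold. A minimal sketch using Beta posteriors and Monte Carlo sampling (the conversion counts, the Beta(1, 1) prior, and the 95% threshold are illustrative assumptions, not any particular platform’s method):

```python
import random

def prob_b_beats_a(conv_a: int, n_a: int, conv_b: int, n_b: int,
                   samples: int = 100_000, seed: int = 42) -> float:
    """Monte Carlo estimate of P(rate_B > rate_A) under Beta(1,1) priors."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(samples):
        # Draw one plausible conversion rate per variant from its posterior.
        rate_a = rng.betavariate(1 + conv_a, 1 + n_a - conv_a)
        rate_b = rng.betavariate(1 + conv_b, 1 + n_b - conv_b)
        if rate_b > rate_a:
            wins += 1
    return wins / samples

# Hypothetical test: 120/2400 conversions on control vs 150/2400 on variant B.
p = prob_b_beats_a(conv_a=120, n_a=2400, conv_b=150, n_b=2400)

# Simple stopping rule: only declare a winner past a 95% threshold.
decision = "ship B" if p > 0.95 else "keep collecting data"
print(f"P(B beats A) = {p:.3f} -> {decision}")
```

The threshold is the stopping rule in miniature: agree on it before the test starts, record it with the hypothesis, and the decision stage becomes an audit-trail entry rather than a debate.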
You can approximate some of this with Airtable or a well-designed spreadsheet. But lifecycle, statistical rigour, iteration, and revenue modelling are not what those tools were built for. Sooner or later, teams that run serious programmes hit the limits. That’s when they look for something purpose-built.
Run your programme in Experiment OS
Research, hypotheses, experiments, analysis, and decisions. One system, no spreadsheet of doom.
Start free trial