How the DORA Metrics Became the Standard for Measuring Software Team Performance

Published on 28 August 2025 by Zoia Baletska

In recent years, if you’ve talked about measuring software delivery performance, you’ve probably heard of the DORA metrics—a set of four key indicators that help organisations understand how effectively their engineering teams deliver value. But where did these metrics come from? And why have they become the industry’s gold standard?
Let’s look back at the origins of DORA metrics, what they measure, and why they matter today more than ever.
📖 The Backstory: From Gut Feeling to Data
Before DORA, conversations around software team performance were often anecdotal or qualitative. Managers and teams relied on gut feeling, velocity points, or output-based metrics like lines of code or number of commits. But these approaches lacked a clear, research-backed link to business outcomes.
Around 2013, a group of researchers and engineers decided to change that.
Enter DORA—short for DevOps Research and Assessment—a research program founded by Dr. Nicole Forsgren, Jez Humble, and Gene Kim. The trio came from diverse backgrounds (academia, DevOps practice, and enterprise consulting) but shared a belief:
That software delivery performance could be measured scientifically, and that doing so could improve both engineering and business outcomes.
🧪 The Research: Looking at What Really Matters
The DORA team started a multi-year research project involving thousands of IT professionals across industries. They wanted to know: What actually makes software teams high-performing?
They tested dozens of potential indicators, from deployment frequency to test coverage to company culture. Using rigorous statistical analysis, they found a small set of metrics that actually correlated with better performance and higher organisational success.
The result? The Four DORA Metrics:
- Deployment Frequency – How often an organisation successfully releases to production
- Lead Time for Changes – How long it takes a commit to get into production
- Change Failure Rate – What percentage of changes fail in production
- Time to Restore Service – How quickly a team recovers from a production failure
These weren’t chosen arbitrarily. Each one was statistically validated to predict both technical and business success.
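To make the definitions concrete, here's a minimal Python sketch of how the four metrics could be computed from deployment records. The Deployment record shape and the dora_metrics helper are hypothetical names invented for illustration; real pipelines would pull this data from CI/CD and incident tooling, and the choices of median aggregation and a per-day frequency unit are assumptions, since DORA specifies what to measure rather than a single formula.

```python
from dataclasses import dataclass
from datetime import datetime
from statistics import median
from typing import Optional

@dataclass
class Deployment:
    """Hypothetical record shape: in practice this data comes from CI/CD and incident systems."""
    commit_at: datetime                     # when the change was committed
    deployed_at: datetime                   # when it reached production
    failed: bool                            # did this change fail in production?
    restored_at: Optional[datetime] = None  # when service was restored, if it failed

def _hours(delta):
    """Convert a timedelta to hours."""
    return delta.total_seconds() / 3600

def dora_metrics(deployments, period_days):
    """Compute the four DORA metrics over a reporting period (illustrative only)."""
    if not deployments:
        raise ValueError("need at least one deployment in the period")
    failures = [d for d in deployments if d.failed]
    return {
        "deploys_per_day": len(deployments) / period_days,
        "median_lead_time_hours": median(_hours(d.deployed_at - d.commit_at)
                                         for d in deployments),
        "change_failure_rate": len(failures) / len(deployments),
        # Assumes every failed deployment carries a restored_at timestamp
        "median_time_to_restore_hours": (median(_hours(d.restored_at - d.deployed_at)
                                                for d in failures)
                                         if failures else None),
    }

deploys = [
    Deployment(commit_at=datetime(2025, 8, 4, 9), deployed_at=datetime(2025, 8, 4, 15),
               failed=False),
    Deployment(commit_at=datetime(2025, 8, 5, 16), deployed_at=datetime(2025, 8, 6, 11),
               failed=True, restored_at=datetime(2025, 8, 6, 12)),
]
print(dora_metrics(deploys, period_days=7))
```

Medians are used here for lead time and restore time because both distributions tend to be long-tailed; a single unusually slow change would otherwise dominate an average.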
📈 Why These Four Metrics?
Let’s break down why these particular four matter:
- Deployment Frequency reflects how agile and responsive a team is.
- Lead Time shows how quickly new ideas or fixes are delivered to users.
- Change Failure Rate is a proxy for code quality and testing discipline.
- Time to Restore is about resilience—how well a team handles unexpected problems.
Together, these metrics provide a balanced view. You’re not just shipping fast; you’re doing it reliably. And critically, they’re actionable—teams can directly improve them through changes in processes, tooling, or team dynamics.

DORA Metrics in Agile Analytics
📚 From Research to Mainstream
The DORA team published their findings annually in the State of DevOps Report, which became a widely respected benchmark in the software industry.
In 2018, the research was distilled into a best-selling book: Accelerate: The Science of Lean Software and DevOps (by Forsgren, Humble, and Kim). This book made the research accessible to executives and practitioners alike and played a big role in making DORA metrics part of the mainstream conversation.

Later that year, Google acquired DORA, cementing its role in the broader DevOps ecosystem.
🚀 The DORA Metrics Today
Today, DORA metrics are more than just a set of KPIs. They’ve become a language teams use to align engineering practices with business goals. They’re used by:
- Startups trying to scale development without chaos
- Enterprises tracking transformation success
- Platform teams building internal developer tools
- CTOs communicating value to the board
And importantly, they’ve sparked a wider interest in Developer Experience (DevEx)—with DORA metrics as one part of a bigger picture that includes cognitive load, satisfaction, and internal tooling.
🔍 Criticism and Caution
Like any framework, DORA metrics aren’t perfect. Critics warn that:
- They can be gamed if used for performance reviews
- They don’t measure everything (e.g. security, innovation, team health)
- Context matters—what “good” looks like varies across teams
Still, when used wisely, they’re powerful guiding metrics, not vanity metrics.
Conclusion: A Metrics Revolution
The rise of DORA metrics marks a shift from guessing to knowing—from gut-feel engineering to evidence-based software delivery.
By grounding performance measurement in real-world research, the DORA team gave engineering leaders a way to have meaningful, data-driven conversations about improvement.
And in doing so, they didn’t just create a set of metrics. They changed how we think about building and shipping software.