Measuring Outcomes
Every experiment we introduce must include a way to test and validate a hypothesis about its desired outcome(s). The changes we have outlined in this playbook for people, process, and technology are no different. Because your cATO experiment will touch all three factors at any given time, you’ll need a balanced set of leading and lagging indicators to validate your desired outcomes. We recommend starting with the following combination of outcome types and metrics, adjusting where appropriate for your local context and ideally confirming a baseline for each:
Mission Outcomes
- Expect product teams leveraging your cATO to capture metrics for the user outcome(s) and business impact(s) they intend to deliver for their mission(s).
- User outcomes represent what the intended users (e.g., warfighters, operators, and civilians) will do with the software.
- Business impacts represent the results we expect to generate for the organization or agency.
- Together, these metrics demonstrate the actual value proposition(s) of your cATO.
cATO Outcomes
- Security/privacy incidents in production
- Time to value (e.g., Time-to-ATO and Time-to-Assessed-Task)
- Security vulnerabilities in production
- Security vulnerability Mean Time to Remediation (MTTR)
- POA&M count and aging (a calculation sketch for MTTR and POA&M aging follows this list)
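
To make the vulnerability MTTR and POA&M aging metrics concrete, the sketch below computes both from simple record lists. It is a minimal illustration, not a prescribed implementation: the data structures and field names (`opened`, `remediated`, `status`) are hypothetical placeholders for whatever your vulnerability scanner and POA&M tracking system actually export.

```python
from datetime import date
from statistics import mean

# Hypothetical vulnerability records; field names depend on your scanner's export.
vulnerabilities = [
    {"id": "CVE-2024-0001", "opened": date(2024, 1, 3), "remediated": date(2024, 1, 10)},
    {"id": "CVE-2024-0002", "opened": date(2024, 1, 5), "remediated": None},  # still open
]

# Hypothetical POA&M records.
poams = [
    {"id": "POAM-17", "opened": date(2023, 11, 1), "status": "open"},
    {"id": "POAM-21", "opened": date(2024, 2, 1), "status": "closed"},
]

def vulnerability_mttr_days(records):
    """Mean Time to Remediation in days, computed over remediated vulnerabilities only."""
    durations = [(r["remediated"] - r["opened"]).days
                 for r in records if r["remediated"] is not None]
    return mean(durations) if durations else None

def poam_count_and_aging(records, today=None):
    """Count of open POA&Ms plus the age in days of each, oldest first."""
    today = today or date.today()
    open_items = [r for r in records if r["status"] == "open"]
    ages = sorted(((today - r["opened"]).days for r in open_items), reverse=True)
    return len(open_items), ages

print(vulnerability_mttr_days(vulnerabilities))              # 7
print(poam_count_and_aging(poams, today=date(2024, 3, 1)))   # (1, [121])
```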
Workforce Happiness Outcomes
- Short surveys at the interaction points between product teams, security and privacy practitioners, and authorizing officials, measuring engagement, psychological safety, and satisfaction (a minimal scoring sketch follows this list)
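
As one way of turning those short surveys into trackable numbers, the sketch below averages responses per dimension. The question keys and the 1-5 Likert scale are assumptions; adapt them to your own survey instrument.

```python
from statistics import mean

# Hypothetical survey responses on a 1-5 scale; keys are placeholders for your own questions.
responses = [
    {"engagement": 4, "psych_safety": 5, "satisfaction": 4},
    {"engagement": 3, "psych_safety": 4, "satisfaction": 3},
    {"engagement": 5, "psych_safety": 4, "satisfaction": 5},
]

def survey_averages(responses):
    """Average score per dimension across all respondents."""
    dimensions = responses[0].keys()
    return {d: round(mean(r[d] for r in responses), 2) for d in dimensions}

print(survey_averages(responses))
# {'engagement': 4.0, 'psych_safety': 4.33, 'satisfaction': 4.0}
```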
DevOps Performance Outcomes (learn more at https://dora.dev/; a calculation sketch follows the list below)
- Lead time for changes
- Deployment Frequency
- Change Failure Rate
- Mean Time to Restore (MTTR) after an incident, outage, or service degradation
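
The sketch below shows one way the four DORA metrics could be computed from deployment and incident records. The record fields (`committed_at`, `deployed_at`, `failed`, `started_at`, `restored_at`) are hypothetical and will depend on your pipeline and incident tooling; DORA's own guidance at https://dora.dev/ remains the authoritative reference.

```python
from datetime import datetime, timedelta

# Hypothetical deployment records produced by a CI/CD pipeline.
deployments = [
    {"committed_at": datetime(2024, 5, 1, 9), "deployed_at": datetime(2024, 5, 1, 17), "failed": False},
    {"committed_at": datetime(2024, 5, 2, 10), "deployed_at": datetime(2024, 5, 3, 10), "failed": True},
    {"committed_at": datetime(2024, 5, 6, 8), "deployed_at": datetime(2024, 5, 6, 12), "failed": False},
]

# Hypothetical incident records from incident-management tooling.
incidents = [
    {"started_at": datetime(2024, 5, 3, 10), "restored_at": datetime(2024, 5, 3, 14)},
]

def lead_time_for_changes(deps):
    """Average commit-to-deploy time across deployments."""
    deltas = [d["deployed_at"] - d["committed_at"] for d in deps]
    return sum(deltas, timedelta()) / len(deltas)

def deployment_frequency(deps, window_days):
    """Deployments per day over the reporting window."""
    return len(deps) / window_days

def change_failure_rate(deps):
    """Share of deployments that caused a failure in production."""
    return sum(1 for d in deps if d["failed"]) / len(deps)

def mean_time_to_restore(incs):
    """Average time from incident start to service restoration."""
    deltas = [i["restored_at"] - i["started_at"] for i in incs]
    return sum(deltas, timedelta()) / len(deltas)

print(lead_time_for_changes(deployments))                 # 12:00:00
print(deployment_frequency(deployments, window_days=7))   # ~0.43 deployments/day
print(change_failure_rate(deployments))                   # ~0.33
print(mean_time_to_restore(incidents))                    # 4:00:00
```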