While others spend the month taking in NCAA’s March Madness, as a self-proclaimed Quality Nerd I was truly fortunate to spend a day in DC focused instead on Metric Madness. I was invited to Washington by Peggy O’Kane and her core team from NCQA, the group known for (among other things) driving value in healthcare through health plan accreditation and pioneering Primary Care Medical Home certification. But I wasn’t in town to discuss HEDIS or PCMH – instead, the NCQA team wanted to discuss one of the most pressing issues facing continuous improvement in healthcare today – measurement burden.
Before we go further it’s important that I clarify one thing – I am an enormous believer in the power of measurement. As a Chief Quality Officer, my role description quite literally requires me to spread the gospel of data-driven quality improvement 24 hours a day, 7 days a week, 365 days a year. And yet I acknowledge that the problem of over-measurement is real, and it threatens to stall the momentum of any continuous improvement flywheel.
To give you a flavor of what we covered, I’ll outline just three of the major universes of metric programs in which an integrated healthcare delivery system participates:
To begin, let’s look at just the regulator-driven metrics that hospitals and physician groups report as part of CMS’s and the Joint Commission’s various programs. These programs combine claims-based measures (metrics generated automatically from individual patient bills) with abstracted measures. Despite advances in electronic submission, clinical abstraction remains largely the work of individual people, as the promise of fully automated electronic reporting is still in its early stages. Some of these manually abstracted measures require finding no fewer than 80 discrete data points in a medical record to submit just one metric for one patient. For infection-prevention metrics, the data definitions alone can run 30 pages.
Moving into the second ring of metric reporting, we arrive at the groups of measures associated with contracts that hospitals, physician groups, and networks sign with both government and commercial payers. These payer contracts range from arrangements to join an Accountable Care Organization, to relationships with Medicare Advantage plans, to measures likely to appear in future population health contracts – like those proposed for North Carolina’s transition to managed Medicaid – that delivery systems must already be measuring internally to drive improvement.
Despite efforts to harmonize metric sets across payers, each contractual relationship carries with it different reporting requirements, different methods of submitting data, different ways of counting attributed patient populations, and even different interpretations of exactly the same metric definition. This menagerie occasionally results in the same metric being measured, analyzed, reported and graded in three totally different ways for three different payers. If that’s not a recipe for chaos, I don’t know what is.
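To make that chaos concrete, here is a minimal sketch of how the “same” metric can yield three different grades. Everything here is hypothetical – the patient data, the payer rules, and the age cutoffs and exclusions are invented for illustration, not drawn from any real contract:

```python
# Hypothetical illustration: one "blood pressure controlled" metric,
# three invented payer definitions. Same patients, three different rates.

patients = [
    # (age, bp_controlled, hospice)
    (45, True,  False),
    (70, True,  False),
    (82, False, False),
    (60, False, True),
    (55, True,  False),
]

def rate(rows):
    """Percent of the attributed population with controlled BP."""
    return round(100 * sum(1 for _, ok, _ in rows if ok) / len(rows), 1)

# Payer A (hypothetical): ages 18-75, hospice patients excluded
payer_a = [p for p in patients if 18 <= p[0] <= 75 and not p[2]]

# Payer B (hypothetical): ages 18-85, hospice patients excluded
payer_b = [p for p in patients if 18 <= p[0] <= 85 and not p[2]]

# Payer C (hypothetical): all adults, no exclusions
payer_c = [p for p in patients if p[0] >= 18]

print(rate(payer_a), rate(payer_b), rate(payer_c))  # → 100.0 75.0 60.0
```

The clinical work is identical in all three cases; only the attribution and exclusion rules differ, yet the practice looks perfect to one payer and mediocre to another – and must build and maintain three separate reporting pipelines to prove it.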
Finally, moving past the metrics we have to report brings us to the metrics that health systems choose to measure. These are the metrics systems use to drive local improvement at the hospital, unit, region, group, practice, provider and patient levels. They include an enormous universe of metrics crossing every aspect of the continuous improvement landscape, including efforts to drive clinical outcomes, operational outcomes, access to care, and patient and team member experience. These metrics are the bread and butter of the DMAIC (define, measure, analyze, improve, control) continuous improvement process and conservatively add thousands more metrics to the previous total.
Why, you may ask, should we care if health systems measure thousands of metrics across hundreds of providers and hundreds of thousands of patients? The answer is simple – measurement is only ONE step in the define, measure, analyze, improve and control process of continuous improvement. We need hospitals, health plans, providers and patients to focus their energies on EACH step in the continuous improvement process, getting past just reporting metrics and into actual, tangible outcome improvement.
Accomplishing this goal means measuring fewer things but doing more about the things we measure. It means aligning things like metric definition, interpretation, submission, and reporting requirements across all payer programs, and upgrading the electronic medical record infrastructure across all vendors to accurately and reliably report these core measures. It means connecting the “why” behind each mandatory metric to actual clinical outcomes that matter to both patients and providers, and it means seeking improvement first and accountability second from those required to measure and report them.
Even Don Berwick, the Godfather of continuous improvement in healthcare, agreed in an editorial written nearly two years ago in JAMA (emphasis added):
First, Reduce Mandatory Measurement
Era 2 has brought with it excessive measurement, much of which is useless but nonetheless mandated. Intemperate measurement is as unwise and irresponsible as is intemperate health care. Purveyors of measurement, including the Centers for Medicare & Medicaid Services (CMS), commercial insurers, and regulators, working with the National Quality Forum, should commit to reducing (by 50% in 3 years and by 75% in 6 years) the volume and total cost of measurements currently being used and enforced in health care. The aim should be to measure only what matters, and mainly for learning. With that focus, all health care stakeholders could know what they need to know with 25% of the cost and burden of today’s measurements enterprise. The CMS has, to its credit, removed many process measures from programs, but progress toward a much smaller set of outcome measures needs to be faster. Such discipline would restore to care providers an enormous amount of time wasted now on generating and responding to reports that help no one at all.
My role gives me the first-hand experience to know that it’s not easy to winnow a measure set to “only what matters.” Yet our national history is full of moments when we’ve chosen not to take the easy path. To paraphrase Kennedy, we choose to do these things:
and do the other things, not because they are easy, but because they are hard; because that goal will serve to organize and measure the best of our energies and skills, because that challenge is one that we are willing to accept, one we are unwilling to postpone, and one we intend to win.
With the energy and leadership of groups like NCQA, and the words of both Godfather Don and President Kennedy ringing in our ears, I am confident we can succeed.