Evaluation of the Multi-Payer Advanced Primary Care Practice (MAPCP) Demonstration, Second Annual Report
For the Multi-Payer Advanced Primary Care Practice (MAPCP) Demonstration, the Centers for Medicare and Medicaid Services (CMS) joined state-sponsored initiatives to promote the principles characterizing patient-centered medical home (PCMH) practices. After a competitive solicitation, eight states were selected for the MAPCP Demonstration: Maine, Michigan, Minnesota, New York, North Carolina, Pennsylvania, Rhode Island, and Vermont. [p. 1-1]
In 2011, CMS selected RTI International and its subcontractors, the Urban Institute and the National Academy for State Health Policy, to evaluate the MAPCP Demonstration. The goal of the evaluation is to identify features of the state initiatives or the participating PCMH practices that are positively associated with improved outcomes. [p. 1-2]
Overall, Year Two of the MAPCP Demonstration found state initiatives still attempting to hit their stride. … All states agreed that the benefits of the MAPCP Demonstration were not likely to be strongly visible at this time. Our quantitative analysis supported this contention by finding very few consistent, favorable changes associated with the MAPCP Demonstration across the eight states. [p. 11-6]
https://downloads.cms.gov/files/cmmi/mapcp-secondevalrpt.pdf
Comment by Kip Sullivan, J.D.
In early 2015, I posted two comments here stating that it would be impossible for anyone to explain the results of CMS’s “patient-centered medical home” (PCMH) experiments. In those comments (one on CMS’s Comprehensive Primary Care Initiative and the other on CMS’s Multi-Payer Advanced Primary Care Practice Demonstration), I argued that CMS’s PCMH demonstrations would fail to produce useful results because they test variables so poorly defined that they cannot be measured. How do evaluators measure the effects of wobbly notions like “patient-centeredness,” “whole-person focus,” “culture of improvement,” “coordination of care across the medical neighborhood,” and “patient engagement”? How do evaluators test those notions one at a time, never mind simultaneously?
The latest reports on the Multi-Payer Advanced Primary Care Practice (MAPCP) Demonstration, one of three PCMH pilots CMS has conducted, confirm my prediction. On May 11, 2016, CMS released both the second- and third-year evaluations of the MAPCP demo. The second-year evaluation (the one quoted above) focuses on Medicare expenditures and quality data while the third-year evaluation presents data on the Medicare beneficiaries involved in the demo. (Neither CMS nor the authors of the evaluations, RTI International and two subcontractors, explained why the evaluations were released together.) Because the second-year evaluation is the one that contains data on the demo’s effect on cost and quality, I focus my remarks on that report.
As it did in the first-year evaluation, RTI reported mind-boggling variation in the definition of the PCMH, both between and within states. [1] Here is RTI’s description of what CMS and the states thought they were testing: “While the expectations established by all eight state initiatives varied, states were likely to establish requirements addressing three aspects of performance: practice transformation, quality improvement, and data reporting.” (p. 2-4) I italicized words in that sentence that convey maximum abstractness and imprecision. Half that sentence is italicized. Given the elusive definition of “medical home,” is it any wonder that states “varied” in their “expectations” of PCMHs?
RTI’s report concluded that the PCMHs had not saved money for Medicare during the demo’s first two years and were having almost no effect on quality. Not surprisingly, RTI was unable to explain why the demo is failing (“failing” is my word, not RTI’s).
Here are the main findings from the second-year evaluation:
(1) Medicare costs incurred by PCMHs (as measured by claims submitted to CMS) did not differ by a statistically significant amount from those incurred by non-PCMH clinics in any state except Vermont (see Table 2-9, p. 2-38);
(2) with a few exceptions, PCMHs failed to outscore non-PCMHs on the very, very few quality measures RTI chose (diabetes, cholesterol, and hospital admissions measures) (see Tables 2-6, p. 2-29, and 2-10, p. 2-40), and when they did, the differences were tiny.
For reasons RTI declined to discuss, RTI made no attempt to determine net savings, that is, whether PCMH costs were higher than non-PCMH costs when CMS’s subsidies to the PCMHs were taken into account. [2]
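Though RTI skipped the calculation, it is easy to sketch what a net-savings test would look like. The notation below is my own illustration, not RTI’s:

\[
S_{\text{gross}} = C_{\text{comparison}} - C_{\text{MAPCP}}, \qquad
S_{\text{net}} = S_{\text{gross}} - F
\]

where \(C_{\text{comparison}}\) and \(C_{\text{MAPCP}}\) are average per-beneficiary Medicare claims spending in the comparison group and in the MAPCP PCMHs, and \(F\) is the average care coordination fee CMS paid per beneficiary. “Budget neutrality” requires \(S_{\text{net}} \geq 0\). Note that \(S_{\text{gross}}\) can be positive while \(S_{\text{net}}\) is negative – which is precisely why the omission matters.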
If you were RTI, how would you make sense of these findings? You stated in both your first- and second-year reports that PCMHs “are expected” to lower costs and improve quality. Moreover, you have promised CMS and your readers you would “identify features of the state initiatives or the participating PCMH practices that are positively associated with improved outcomes” (see excerpt above and identical language at p. 2 of the first-year report). [3]
But you have also reported that PCMHs vary on every dimension imaginable, which means the PCMHs have no uniform “features” for you to analyze. How, then, do you fulfill your promise to “identify features” of PCMHs that explain the PCMHs’ performance?
If I were in RTI’s shoes, I would not have promised to “identify features” of PCMHs for any purpose under the sun. Instead, I would have stated up front that PCMHs are maddeningly non-uniform – as a class, they are featureless – because they are defined so vaguely. Because they are defined so vaguely, the employees of PCMHs do not feel constrained to focus on any particular service or type of patient. I would have said the vague definition of “PCMH” and the glorious heterogeneity of activity that goes on in PCMHs guarantee that “homes” cannot be tested. Finally, I would have said that a reasonable explanation for why “homes” are failing is that “homes” are not focusing on any particular treatment or type of patient. They are spending their limited resources on all services and patients rather than on a small and uniformly defined subset of services or patients.
But RTI did not say that. RTI’s solution so far has been to avoid discussing the bind it has created for itself. To date RTI has not identified the features or concrete characteristics of PCMHs (the independent variables) that might have some influence over the dependent variables (cost and quality).
In the course of making several excuses for the PCMHs’ poor performance, RTI comes close to identifying factors that might conceivably be treated as independent variables. RTI clearly identifies one excuse and implies several others. The one alleged problem RTI clearly offers as an excuse is insufficient time and money for PCMHs to demonstrate their true potential. Other excuses that are only implied include (1) inaccurate data and impediments to data sharing caused by glitches in health information technology, (2) poor integration of mental health services with other medical services, and (3) insufficient use of patient “portals.”
RTI’s justifications for the PCMHs’ unimpressive performance appear primarily in two sections of the second-year evaluation: a half-page section entitled “Lessons learned” in Chapter 2 (pp. 2-9 and 2-10) and the last three pages of the final chapter, Chapter 11 (pp. 11-3 through 11-6). Here is how RTI articulated the insufficient-time-and-money and health IT excuses:
Finally, a common lesson in all states was the need for ample time and resources to bring about practice transformation, including adequate resources for program administration and oversight. [M]any interviewees believed that three years was not enough time for the MAPCP Demonstration to show positive results. … [pp. 2-9/10] The most frequent lesson, mentioned by five states … was that the demonstration was not long enough. [p. 11-5]
All eight states reported challenges with health information technology (health IT) and the quality and timeliness of data, which was associated with challenges with patient attribution and payment. [p. 2-7] Health information technology … infrastructure is an integral component of most states’ PCMH demonstrations; in fact, it was the most common challenge that MAPCP Demonstration programs faced during Year Two. Unfortunately, many states had problems operationalizing their health IT plans. [p. 11-3] [4]
I characterize RTI’s justifications for the PCMHs’ poor performance as “excuses” because RTI’s reasoning is circular. The justifications rely on assumptions that RTI does not spell out and which are not evidence-based. For example, neither RTI nor anyone else has the foggiest idea how much money and time “homes” need to demonstrate their magic.
But let us set aside the circular reasoning problem and ask whether any of the problems RTI discusses can be converted into measurable variables. Obviously money, time, health IT, and other variables could have been precisely defined prior to the onset of the demo, and in that event those variables could have been measured and examined to see how well they correlated with cost and quality outcomes. But that didn’t happen. At this late date, there is nothing RTI can do to make those variables more concrete, more uniform, and, therefore, measurable. The only solution for the future is for CMS and RTI to insist on a radically slimmed-down and more precise definition of the services “homes” are supposed to deliver.
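To illustrate what “precisely defined prior to the onset of the demo” would have made possible, here is a minimal sketch of a standard difference-in-differences regression. The specification and variable names are my own hypothetical illustration, not RTI’s model:

\[
Y_{it} = \beta_0 + \beta_1\,\text{MAPCP}_i + \beta_2\,\text{Post}_t
       + (\beta_3 + \beta_4 H_i)\,\text{MAPCP}_i \times \text{Post}_t + \varepsilon_{it}
\]

where \(Y_{it}\) is Medicare spending for beneficiary \(i\) in period \(t\), \(\text{MAPCP}_i\) and \(\text{Post}_t\) indicate demo participation and the demonstration period, and \(H_i\) is a concretely defined input measured before the demo began – say, weekly care-coordination nurse hours per 1,000 attributed patients. The demo effect is \(\beta_3 + \beta_4 H_i\), so \(\beta_4\) would tell us whether that input correlated with savings (a full specification would also include \(H_i\)’s main effect and lower-order interactions). Without variables like \(H_i\) defined up front, no such estimate is possible after the fact.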
I want to stress that RTI’s inability to draw lessons from the MAPCP demo will not hinge on the demo’s outcomes – on whether the demo ultimately shows that PCMHs have negative or positive outcomes or no impact at all. The problem is the nebulous definition of the entity being tested – the “medical home.” We can see this problem in RTI’s chapter on Vermont, the only one of the eight participating states in which PCMHs lowered costs (not counting fees paid by CMS to the PCMHs) compared with non-PCMHs. RTI makes no attempt to explain what it was about Vermont’s PCMHs that made them effective cost-cutters while PCMHs in the other seven states were not.
In my view, the most plausible explanation for Vermont’s performance is that Vermont PCMHs received more money and in-kind help than PCMHs in other states, that those additional resources did good things for patients, and that those additional resources weren’t counted as a form of spending. But even if my explanation is correct, it still wouldn’t tell us what we want to know: what the PCMHs did with the extra resources that resulted in less use of traditional medical services.
I hope it is clear to readers why we can predict with certainty that the final report on the MAPCP demo (which is due in 2017 according to one of the contractors I exchanged emails with) will fail to “identify features” of PCMHs “that are positively associated with improved outcomes.” It is conceivable RTI will report some “positive outcomes” although that looks unlikely now. What is inconceivable is that RTI will be able to link “features” of PCMHs to those outcomes. If RTI makes any link or connection between any variable and any outcome, it will be based on more circular reasoning. It will be based on pure speculation.
Notes
1. For a discussion of the factors that vary within the MAPCP demo, see my comment on the first-year evaluation here.
2. Screw your thinking cap on tightly, because I’m going to take you down a rabbit hole RTI created but did not justify. The rabbit hole has several sub-holes, so watch your step.
For reasons RTI did not explain, RTI created not one but two control groups against which to compare the performance of MAPCP PCMHs. The two comparison groups were non-PCMH clinics and PCMH clinics not in the MAPCP demo. This would make little sense even if the “medical home” notion were well defined.
To add to the mystery, RTI did this for all expenditure and quality measures with one exception: its “budget neutrality” or “net savings” measure. That measure asks: Did CMS save any money on the MAPCP PCMHs after taking into consideration the “care coordination fees” CMS paid out to them? For reasons known only to RTI and presumably CMS, RTI calculated the “budget neutrality” measure using only the non-MAPCP PCMH comparison group. By this strange method, RTI found no statistically significant increase or decrease in Medicare net spending and concluded that the demo’s “budget neutrality” requirement had been met as of year two.
The mystery doesn’t end there. RTI reported that in two states, New York and Michigan, the PCMHs in the MAPCP demo reduced total Medicare gross spending (that is, Medicare costs as measured only by claims) by statistically significant amounts compared with non-MAPCP PCMHs, but not compared with non-PCMHs. Meanwhile, the opposite outcome occurred in Vermont. In Vermont, MAPCP PCMHs lowered Medicare gross spending by a statistically significant amount when the control group was non-PCMHs, but not when the control group was PCMHs not in the MAPCP demo. RTI made no attempt to explain these baffling outcomes.
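A hypothetical illustration shows how the choice of comparison group alone can flip a result between “significant” and “not significant.” The numbers below are invented, and each denominator is treated, for simplicity, as the standard error of the difference:

\[
z_1 = \frac{\$830 - \$800}{\$10} = 3.0 \;\;(p < 0.01), \qquad
z_2 = \frac{\$815 - \$800}{\$12} = 1.25 \;\;(p \approx 0.21)
\]

Here $800 is monthly per-beneficiary spending in a state’s MAPCP PCMHs, $830 is spending in one comparison group, and $815 is spending in the other. The same MAPCP spending is significantly lower than the first comparison group’s and statistically indistinguishable from the second’s. Divergent results of the kind RTI reported in New York, Michigan, and Vermont can therefore arise from the comparison groups themselves, not from anything the PCMHs did.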
I apologize, but our tour of the rabbit hole is not over yet. I have one last tunnel to show you. RTI allowed Minnesota to be an exception to its two-comparison-group rule. For Minnesota, RTI derived its estimates for all cost and quality measures, including its “budget neutrality” or “net savings” measure, using only non-PCMHs as a control group. RTI did offer an explanation of sorts for this exception: so many PCMHs in Minnesota were in the MAPCP demo that it was impossible to construct a control group of PCMHs not in the demo.
Our tour of the rabbit hole is now over. Please ascend back to the real world slowly to prevent severe disorientation.
3. Note the bias in this articulation of RTI’s goal: the outcomes will consist only of “improved outcomes.” “Worse outcomes” cannot possibly occur.
4. The third-year evaluation sets forth the same problems that RTI describes in the second-year evaluation. For example, RTI states: “In Year Three, almost all [eight state] initiatives continued to report significant challenges related to the timeliness and quality of the data provided by the initiatives to practices. Interviewees in most states also reported payment challenges persisting from earlier years.” [p. 2-7]
Kip Sullivan, J.D. is a health policy expert and frequent blogger, and is a member of PNHP Minnesota’s legislative committee. His articles have appeared in The New York Times, The Nation, The New England Journal of Medicine, Health Affairs, the Journal of Health Politics, Policy and Law, and the Los Angeles Times.