Evaluation of CMS FQHC APCP Demonstration: Second Annual Report

By Katherine L. Kahn et al.
RAND Corporation, July 2015

In December 2009, President Barack Obama directed … the Centers for Medicare and Medicaid Services (CMS) … to implement a three-year demonstration intended to support the transformation of federally qualified health centers (FQHCs) into advanced primary care practices (APCPs) in support of Medicare beneficiaries. … For the demonstration, CMS recognizes as advanced primary care (APC) the type of care that is offered by FQHCs that have achieved Level 3 recognition as a patient-centered medical home (PCMH) from the National Committee for Quality Assurance (NCQA).

RAND is conducting an independent evaluation of the FQHC APCP demonstration for CMS. The evaluation includes studying the processes and challenges involved in transforming FQHCs into APCPs and assessing the effects of the APCP model on access, quality, and cost of care provided to Medicare and Medicaid beneficiaries currently served by FQHCs. [p. xi]

As of the end of the demonstration’s ninth quarter, we know that costs for demonstration [PCMH] sites are higher than for comparison sites. [p. xvii]

http://innovation.cms.gov/Files/reports/fqhc-scndevalrpt.pdf

In July, the RAND Corporation released a report on the second year of CMS’s three-year “medical home” experiment with federally qualified health centers (FQHCs). The report concluded that the clinics in the “medical home” arm of the experiment were spending more money than clinics in the control arm, and that this was unlikely to change by the end of Year 3.

Sad to say, we’re never going to know what it was the experimental clinics did that raised their costs. It may well be that those clinics used their “care management fees” to hire more social workers, who in turn persuaded more people living near the clinics to make appointments, which in turn led to greater utilization of medical goods and services. Or perhaps the fees were used to hire more patient educators, and the patient education induced existing patients to visit their clinic more often, which in turn caused more hospitalizations.

We’re never going to know. RAND may produce some anecdotal evidence that the hypotheses I suggested above, or any of numerous others, are accurate. But RAND will not produce empirical evidence. The primary reason is the maddeningly vague definition of “medical home.” The “features” that “medical homes” are said to possess are so poorly defined that they cannot be reduced to measurable components.

It didn’t have to be this way. When President Obama ordered CMS to study the “medical home,” CMS could have used its discretion to test a version of the “home” that was much more clearly defined than the amorphous version adopted by the Agency for Healthcare Research and Quality, the National Committee for Quality Assurance (NCQA), and other self-appointed arbiters of what the phrase means. Instead, CMS punted – it said a “medical home” would be whatever the NCQA says clinics must do to qualify as a “level 3 patient-centered medical home” (PCMH). But NCQA’s requirements for PCMH certification are as vague as the features NCQA and other PCMH advocates claim PCMHs possess, and in most cases the requirements bear no relation to the alleged features.

The RAND Corporation either could not or would not insist that CMS or NCQA define the PCMH more precisely before it signed a contract with CMS to “evaluate” the FQHC PCMH demonstration.

Thus did the impossible challenge of evaluating the amorphous, elusive PCMH bounce down from Obama to CMS to RAND.

To grasp the impossibility of the challenge RAND signed up for, run your eye over the two tables below. Table 1 lists the seven “features” of the PCMH according to NCQA. Table 2 lists NCQA’s ten “must-pass” requirements for PCMH certification. Put yourself in RAND’s shoes and ask yourself two questions. First, how would you operationalize (reduce to measurable components) the “features” listed in Table 1? Second, can you discern any relationship between the ten “must-pass elements” and the seven “features”?

Table 1: Seven “features” of the PCMH according to NCQA

(1)    Personal physician: Each patient has an ongoing relationship with a personal physician
(2)    Physician directed medical practice: The personal physician leads a team of individuals … who collectively take responsibility for ongoing patient care
(3)    Whole person orientation
(4)    Care is coordinated and integrated
(5)    Quality and safety are hallmarks of the medical home
(6)    Enhanced access to care is available through … innovative options for communication
(7)    Payment appropriately recognizes the added value provided to patients who have a patient-centered medical home

http://www.ncqa.org/Portals/0/PCMH%20brochure-web.pdf

***

Table 2: NCQA’s ten “must-pass elements” for certification as a Level 2 or 3 PCMH

(1)    Written standards for patient access and patient communication
(2)    Use of data to show standards for patient access and communication are met
(3)    Use of paper or electronic charting tools to organize clinical information
(4)    Use of data to identify important diagnoses and conditions in practice
(5)    Adoption and implementation of evidence-based guidelines for three chronic or important conditions
(6)    Active support of patient self-management
(7)    Systematic tracking of tests and follow-up on test results
(8)    Systematic tracking of critical referrals
(9)    Measurement of clinical and/or service performance
(10)  Performance reporting by physician or across the practice

http://www.ncqa.org/Portals/0/PCMH%20brochure-web.pdf

Table 1 is riddled with mushy, sometimes tautological phrases such as “ongoing relationship,” “physician-directed,” “whole person” and “hallmarks” (I have italicized amorphous phrases). How is RAND supposed to distinguish, for example:

* a doctor-patient relationship that is “ongoing” from one that is not going,
* a “personal physician” from an impersonal physician,
* a “team of individuals” from a plain-vanilla staff, or
* “whole person orientation” from, say, “half-person orientation”?

How does RAND determine when “quality and safety” have become “hallmarks”? If we stumbled upon a “hallmark” in a clinic, how would we know?

When you’re done struggling with the questions generated by the happy talk in Table 1, ask yourself whether the ten requirements in Table 2 answer any of them. The answer is no; they just add to the clutter.

There are two reasons for this. The first is that the ten requirements are almost as vaguely described as the seven “features” (I have italicized mushy phrases in Table 2). What sense, for example, is RAND supposed to make of requirement 6, the one about “active support of patient self-management”? How is RAND supposed to know:

* what a “self-managed” patient is as opposed to whatever the opposite of “self-managed” is,
* what constitutes “support,” and once that has been defined, what constitutes “active support” as opposed to (help me out here) inactive support?

What do “important diagnoses” (requirement 4) and “critical referrals” (requirement 8) mean? Why would some diagnoses be unimportant and some referrals uncritical?

The second reason for the disconnect between Tables 1 and 2 is that there is no obvious relationship between the seven “features” and most of the ten requirements. Which of the ten requirements would you say “transforms” a mere staff into a “team,” “transforms” an ordinary doctor-patient relationship into an “ongoing relationship,” or is remotely related to “appropriate payment”? Answer: None.

What I see in the ten requirements is an obsession with measuring and reporting. Nine of the ten requirements mandate measurement and/or reporting. (Only the “active support of patient self-management” requirement does not clearly require measurement, but it might. The phrase is just too vague to say for sure.)[1] It is reasonable to conclude that the most accurate “definition” of a “medical home,” according to NCQA, is a clinic that agrees to measure and report on a few vaguely defined policies and activities. This would be like defining a “cargo-centered” trucking company as one that signs a document promising the National Committee on Trucking Quality that it will obey the speed limits, stop by the side of the road now and then to be weighed, and limit the number of hours its drivers can go without sleep. These promises tell us nothing about what a “cargo-centered” trucking company does.

My characterization of NCQA’s “definition” is supported by RAND’s finding that PCMH staff felt overwhelmed by NCQA’s documentation requirements. As RAND put it on page 37:

The problem of documentation in general for the NCQA application process was mentioned as an overarching concern for a majority of the FQHC respondents. Not only was there a concern about documenting the specifics of the … NCQA PCMH standards, the respondents also described the demanding nature of documenting every task that a clinician or provider engages in during a patient encounter.

To return to the problem I raised at the outset: If NCQA’s “definition” of a PCMH is essentially a clinic that agrees to document vaguely defined standards and policies that have no obvious relationship with the “features” NCQA says PCMHs possess, how is a CMS contractor like RAND supposed to determine why PCMH clinics raised Medicare’s costs? It can’t. RAND can ask the same questions NCQA asks during its audits – for example, do PCMHs have a document on file explaining their “standards for patient communication”? But because that standard will vary by clinic (thanks to the vague definition of this NCQA requirement), and because it might not even be implemented effectively or at all, RAND has no usable data on this variable. Ditto for the other requirements. With no useful data on “standards for patient communication” and the other requirements, RAND cannot test these variables to assess their impact on the dependent variable – Medicare expenditures.
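To put the problem in statistical terms, here is a stylized sketch of the kind of equation an evaluator would need to estimate in order to attribute cost differences to specific PCMH activities. The variable names are mine, invented purely for illustration; they appear nowhere in the RAND report or in NCQA’s standards.

    Medicare spending per beneficiary at clinic i
        = b0 + b1 × (patient-communication score at clinic i)
             + b2 × (self-management-support score at clinic i)
             + … + error term

Without an agreed-upon way to score “standards for patient communication,” “active support of patient self-management,” and the rest of NCQA’s requirements, the coefficients b1, b2, and so on cannot be estimated. All the evaluator can report is the crude contrast RAND does report: demonstration sites versus comparison sites.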

RAND offers no solution. Its analysts merely state that they will compare outcomes at PCMH and non-PCMH clinics using claims data and answers from patient surveys about their “experiences with care.”[2] Claims data will tell RAND whether expenditures were higher for PCMH clinics, and patient surveys will answer general questions about patient perceptions, e.g., “satisfaction” with the timeliness of care and whether doctors were clear in their instructions or good listeners. But claims data and patient survey responses contain no information about what it is PCMHs do that makes them different from non-PCMHs.

This quandary is, of course, not peculiar to CMS’s FQHC “medical home” demo. It afflicts every test of the “medical home” that uses NCQA’s flabby definition.

The inability of RAND, CMS or anyone else to determine what it is PCMH clinics do raises another obvious problem: If we cannot determine what PCMHs do, how do we know what PCMHs do with the “care management fees” CMS pays them? I’ll address that question shortly on this blog.

Notes

1. If evidence supported NCQA’s assumption that measuring and reporting improves quality, we could at least say the requirements have some link with the “quality and safety” feature in Table 1. But the evidence on that issue is mixed.

2. Here is a quote from the RAND study: “Key Policy Question 2 focuses on differences between demonstration and comparison sites. … To answer this policy question, the evaluation focuses on metrics spanning 12 research questions … including: (1) continuity, (2) timeliness, (3) access to care, (4) adherence to evidence-based guidelines, (5) beneficiary ratings of providers, (6) effective beneficiary participation in decision making, (7) self-management, (8) patient experiences with care, (9) coordination of care, (10) disparities, (11) utilization, and (12) expenditures. Some of these metrics are evaluated using claims data, others by beneficiary report.” (p. xvi)

Kip Sullivan, J.D., is a member of the board of Minnesota Physicians for a National Health Program. His articles have appeared in The New York Times, The Nation, The New England Journal of Medicine, Health Affairs, the Journal of Health Politics, Policy and Law, and the Los Angeles Times.