Improving Patient Care | 21 December 2004

Comparison of Quality of Care for Patients in the Veterans Health Administration and Patients in a National Sample


    Abstract

    Background:

    The Veterans Health Administration (VHA) has introduced an integrated electronic medical record, performance measurement, and other system changes directed at improving care. Recent comparisons with other delivery systems have been limited to a small set of indicators.

    Objective:

    To compare the quality of VHA care with that of care in a national sample by using a comprehensive quality-of-care measure.

    Design:

    Cross-sectional comparison.

    Setting:

    12 VHA health care systems and 12 communities.

    Patients:

    596 VHA patients and 992 patients identified through random-digit dialing. All were men older than 35 years of age.

    Measurements:

    Between 1997 and 2000, quality was measured by using a chart-based quality instrument consisting of 348 indicators targeting 26 conditions. Results were adjusted for clustering, age, number of visits, and medical conditions.

    Results:

    Patients from the VHA scored significantly higher for adjusted overall quality (67% vs. 51%; difference, 16 percentage points [95% CI, 14 to 18 percentage points]), chronic disease care (72% vs. 59%; difference, 13 percentage points [CI, 10 to 17 percentage points]), and preventive care (64% vs. 44%; difference, 20 percentage points [CI, 12 to 28 percentage points]), but not for acute care. The VHA advantage was most prominent in processes targeted by VHA performance measurement (66% vs. 43%; difference, 23 percentage points [CI, 21 to 26 percentage points]) and least prominent in areas unrelated to VHA performance measurement (55% vs. 50%; difference, 5 percentage points [CI, 0 to 10 percentage points]).

    Limitations:

    Unmeasured residual differences in patient characteristics, a lower response rate in the national sample, and differences in documentation practices could have contributed to some of the observed differences.

    Conclusions:

    Patients from the VHA received higher-quality care according to a broad measure. Differences were greatest in areas where the VHA has established performance measures and actively monitors performance.

    As methods for measuring the quality of medical care have matured, widespread quality problems have become increasingly evident (1, 2). The solution to these problems is much less obvious, however, particularly with regard to large delivery systems. Many observers have suggested that improved information systems, systematic performance monitoring, and coordination of care are necessary to enhance the quality of medical care (3). Although the use of integrated information systems (including electronic medical records) and performance indicators has become more common throughout the U.S. health care system, most providers are not part of a larger integrated delivery system and continue to rely on traditional information systems (4).

    An exception is the Veterans Health Administration (VHA). As the largest integrated delivery system in the United States, the VHA has been recognized as a leader in developing a more coordinated system of care. Beginning in the early 1990s, VHA leadership instituted both a sophisticated electronic medical record system and a quality measurement approach that holds regional managers accountable for several processes in preventive care and in the management of common chronic conditions (5, 6). Other changes include a system-wide commitment to quality improvement principles and a partnership between researchers and managers for quality improvement (7).

    As Jha and colleagues (8) have shown, since these changes have been implemented, VHA performance has outpaced that of Medicare in the specific areas targeted. Nevertheless, whether this improvement has extended beyond the relatively narrow scope of the performance measures is unknown. Beyond that study, the data comparing VHA care with other systems of care are sparse and mixed. For example, patients hospitalized at VHA hospitals were more likely than Medicare patients to receive angiotensin-converting enzyme inhibitors and thrombolysis after myocardial infarction (9). On the other hand, VHA patients were less likely to receive angiography when indicated and had higher mortality rates after coronary artery bypass grafting than patients in community hospitals (10, 11). Kerr and colleagues found that care for diabetes was better in almost every dimension in the VHA system than in commercial managed care (12). More extensive comparisons, especially of outpatient care, are lacking. To address these issues, a more comprehensive assessment of quality is needed.

    Using a broad measure of quality of care that is based on medical record review and was developed outside the VHA, we compared the quality of outpatient and inpatient care between 2 samples: 1) a national sample of patients drawn from 12 communities and 2) VHA patients from 26 facilities in 12 health care systems located in the southwestern and midwestern United States (13). We analyzed performance in the years after the institution of routine performance measurement and the electronic medical record. Using the extensive set of quality indicators included in the measurement system, we compared the overall quality of care delivered in the VHA system and in the United States, as well as the quality of acute, chronic, and preventive care across 26 conditions. In addition, we evaluated whether VHA performance was better in the specific areas targeted by the VHA quality management system.

    Methods
    Development of Quality Indicators

    For this study, we used quality indicators from RAND's Quality Assessment Tools system, which is described in more detail elsewhere (14-17). The indicators in the Quality Assessment Tools system are process quality measures; compared with outcome measures, they are more readily actionable, require less risk adjustment, and follow the structure of national guidelines (18, 19). After reviewing established national guidelines and the medical literature, we chose a subset of quality indicators from the Quality Assessment Tools system that represented the spectrum of outpatient and inpatient care (that is, screening, diagnosis, treatment, and follow-up) for acute and chronic conditions and for preventive care processes, representing the leading causes of morbidity, death, and health care use among older male patients. The Appendix Table lists the full indicator set, which was determined by four 9-member, multispecialty expert panels. These panels assessed the validity of the proposed indicators using the RAND/University of California, Los Angeles–modified Delphi method. The experts rated the indicators on a 9-point scale (1 = not valid; 9 = very valid), and we accepted indicators that had a median validity score of 7 or higher. This method of selecting indicators is reliable and has been shown to have content, construct, and predictive validity (20-23). Of the 439 indicators in the Quality Assessment Tools system, we included 348 indicators across 26 conditions in our study and excluded 91 indicators that were unrelated to the target population (for example, those related to prenatal care and cesarean sections). Of the 348 indicators, 21 were indicators of overuse (for example, patients with moderate to severe asthma should not receive β-blocker medications) and 327 were indicators of underuse (for example, patients who have been hospitalized for heart failure should have follow-up contact within 4 weeks of discharge).
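
    To make the acceptance rule concrete, the sketch below applies the median-validity cutoff described above. It is a minimal illustration in Python; the indicator names and panel ratings are invented, not drawn from the Quality Assessment Tools system:

        from statistics import median

        # Hypothetical ratings: each indicator is scored by a 9-member
        # expert panel on a 9-point validity scale (1 = not valid,
        # 9 = very valid).
        panel_ratings = {
            "diabetes_oral_agent_after_diet_failure": [8, 9, 7, 8, 9, 7, 8, 9, 8],
            "asthma_no_beta_blocker": [9, 8, 9, 7, 8, 9, 9, 8, 7],
            "routine_daily_multivitamin": [3, 4, 2, 5, 3, 4, 3, 2, 4],
        }

        # Accept indicators with a median validity score of 7 or higher,
        # mirroring the RAND/UCLA-modified Delphi acceptance rule.
        accepted = [name for name, ratings in panel_ratings.items()
                    if median(ratings) >= 7]
        print(accepted)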

    Two physicians independently classified each indicator according to the type of care delivered; the function of the indicated care (screening, diagnosis, treatment, and follow-up); and whether the indicator was supported by a randomized, controlled trial, another type of controlled trial, or other evidence. Type of care was classified as acute (for example, in patients presenting with dysuria, presence or absence of fever and flank pain should be elicited), chronic (for example, patients with type 2 diabetes mellitus in whom dietary therapy has failed should receive oral hypoglycemic therapy), or preventive (for example, all patients should be screened for problem drinking). In addition, we further classified the indicators into 3 mutually exclusive categories according to whether they corresponded to the VHA performance indicators that were in use in fiscal year 1999. Twenty-six indicators closely matched the VHA indicators, 152 involved conditions that were targeted by the VHA indicators but were not among the 26 matches, and 170 did not match the VHA measures or conditions. We performed a similar process to produce a list of 15 indicators that matched contemporaneous Health Plan Employer Data and Information Set (HEDIS) performance measures (24). Table 1 shows the conditions targeted by the indicators, and Table 2 gives an example indicator for each of the conditions or types of care for which condition- or type-specific comparisons were possible.

    Table 1. Conditions and Number of Indicators Used in Comparisons
    Table 2. Example Indicators of Quality of Care
    Identifying Participants

    Patients were drawn from 2 ongoing quality-of-care studies: a study of VHA patients and a random sample of adults from 12 communities (13). The VHA patients were drawn from 26 clinical sites in 12 health care systems located in 2 Veterans Integrated Service Networks in the midwestern and southwestern United States. These networks closely match the overall Veterans Affairs system with regard to medical record review and survey-based quality measures (25, 26). We selected patients who had had at least 2 outpatient visits in each of the 2 years between 1 October 1997 and 30 September 1999. A total of 106 576 patients met these criteria. We randomly sampled 689, oversampling for chronic obstructive pulmonary disease (COPD), hypertension, and diabetes, and were able to locate records for 664 patients (a record location rate of 96%). Because of resource constraints, we reviewed a random subset of 621 of these records. Since this sample contained only 20 women and 4 patients younger than 35 years of age, we further restricted the sample to men older than 35 years of age. Thus, we included 596 VHA patients in the analysis. All of these patients had complete medical records.

    The methods we used to obtain the national sample have been described elsewhere (13) and are summarized here. As part of a nationwide study, residents of 12 large metropolitan areas (Boston, Massachusetts; Cleveland, Ohio; Greenville, South Carolina; Indianapolis, Indiana; Lansing, Michigan; Little Rock, Arkansas; Miami, Florida; Newark, New Jersey; Orange County, California; Phoenix, Arizona; Seattle, Washington; and Syracuse, New York) were contacted by using random-digit dialing and were asked to complete a telephone survey (27). To ensure comparability with the VHA sample, we included only men older than 35 years of age. Between October 1998 and August 2000, we telephoned 4086 of these participants and asked for permission to obtain copies of their medical records from all providers (both individual and institutional) that they had visited within the past 2 years. We received verbal consent from 3138 participants (77% of those contacted by telephone). We mailed consent forms and received written permission from 2351 participants (75% of those who had given verbal permission). We received at least 1 medical record for 2075 participants (88% of those who had returned consent forms). We excluded participants who had not had at least 2 medical visits in the past 2 years to further ensure comparability with the VHA sample. Thus, our final national sample included 992 persons. The rolling abstraction period (October 1996 to August 2000) substantially overlapped the VHA sampling period. The average overlap was 70%, and all records had at least 1 year of overlap. Seven hundred eight (71%) of the 992 persons in the national sample had complete medical records. On the basis of data from the original telephone survey, we determined that participants in the national sample were more likely to be older, white, and better educated; to have higher income levels; and to have less than excellent health compared with eligible nonparticipants (13).

    Chart Abstraction

    We sent photocopies of all of the medical records to 1 of 2 central areas for abstraction. For VHA patients, we abstracted data on all care received between October 1997 and September 1999; for patients in the national sample, we abstracted data on all care received in the 2 years before the date of recruitment. We used computer-assisted abstraction software built on a Microsoft Visual Basic 6.0 platform (Microsoft Corp., Redmond, Washington), which allowed us to tailor the manual chart abstraction to the specific record being reviewed and provided interactive data quality checks (consistency, range), calculations (for example, high blood pressure), and classifications (for example, drug class). Twenty trained registered nurse abstractors collected the data. To assess interrater reliability, we reabstracted charts for 4% of the participants, selected at random. According to the κ statistic, average reliability in the national sample was substantial to almost perfect (28) at 3 levels: presence of a condition (κ = 0.83), indicator eligibility (κ = 0.76), and indicator scoring (κ = 0.80) (13).
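
    As an illustration of the reliability check, the following sketch computes Cohen's κ for two abstractors' judgments on the same charts. The data are invented and the function is a minimal implementation, not the authors' abstraction software:

        def cohen_kappa(rater_a, rater_b):
            """Cohen's kappa for two raters' categorical judgments."""
            n = len(rater_a)
            categories = set(rater_a) | set(rater_b)
            # Observed agreement: fraction of items on which raters agree.
            p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
            # Expected agreement by chance, from the marginal frequencies.
            p_e = sum((rater_a.count(c) / n) * (rater_b.count(c) / n)
                      for c in categories)
            return (p_o - p_e) / (1 - p_e)

        # Hypothetical re-abstraction of the same charts:
        # 1 = indicator criterion met, 0 = not met.
        first_pass  = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1]
        second_pass = [1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1]
        print(f"kappa = {cohen_kappa(first_pass, second_pass):.2f}")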

    Statistical Analysis

    All analyses were conducted by using SAS, version 8.2 (SAS Institute, Cary, North Carolina). The unit of analysis was adherence to a given indicator in a given patient. For each indicator, we determined the criteria that made participants eligible for the process specified in the indicator (yes or no). We then determined whether participants had received the specified process each time an indication was noted in their medical record (yes, no, or proportion). We determined aggregate indicator scores for each summary category (that is, acute, chronic, and preventive care; screening; diagnosis; treatment; and follow-up) by dividing all instances in which participants received recommended care by the total number of instances in which the care should have been received. We constructed the scores as proportions ranging from 0% to 100%, adjusting for clustering of indicators within patients. Because of clustering of the data, we used the bootstrap method to estimate standard errors for all of these scores (29).
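
    The paragraph above combines two steps: pooling all indicator events into a single pass rate, and estimating uncertainty by resampling whole patients because events within a patient are correlated. A minimal sketch of both steps in Python (the original work used SAS), with invented counts rather than study data:

        import random

        # Hypothetical data: per patient, (events passed, events eligible).
        patients = [(3, 4), (5, 5), (1, 3), (4, 6), (2, 2), (0, 2), (6, 8)]

        def aggregate_score(sample):
            # Pooled pass rate: all events in which recommended care was
            # received, over all events in which it was indicated.
            return sum(p for p, _ in sample) / sum(e for _, e in sample)

        def bootstrap_se(sample, n_boot=2000, seed=0):
            # Resample whole patients (clusters) with replacement so the
            # within-patient correlation of events is preserved.
            rng = random.Random(seed)
            scores = [aggregate_score(rng.choices(sample, k=len(sample)))
                      for _ in range(n_boot)]
            mean = sum(scores) / n_boot
            return (sum((s - mean) ** 2 for s in scores) / (n_boot - 1)) ** 0.5

        print(f"score = {aggregate_score(patients):.1%}, "
              f"bootstrap SE = {bootstrap_se(patients):.1%}")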

    We applied sampling weights to represent the original populations from which the 2 samples were drawn and to adjust for nonresponse. We also used weights to standardize the patients for characteristics common among the VHA population: COPD; hypertension; diabetes; and age category (35 to 50 years, 51 to 65 years, or older than 65 years). Sampling weights were applied at the individual level; indicators were implicitly weighted on the basis of prevalence of eligibility. Although we report weighted results because we believe they are most representative, weighting did not affect the direction or significance of any reported results.
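
    As a sketch of the standardization step, the code below computes post-stratification weights so that a comparison sample's age mix reproduces the VHA age mix. The category counts are invented, and the study's actual weighting also covered nonresponse and the prevalence of COPD, hypertension, and diabetes:

        from collections import Counter

        # Hypothetical age-category counts for each sample.
        vha = Counter({"35-50": 120, "51-65": 260, ">65": 216})
        national = Counter({"35-50": 400, "51-65": 350, ">65": 242})

        vha_total = sum(vha.values())
        nat_total = sum(national.values())

        # Weight for each national patient in a cell: the VHA share of
        # that cell divided by the national share, so the weighted
        # national age distribution matches the VHA distribution.
        weights = {cat: (vha[cat] / vha_total) / (national[cat] / nat_total)
                   for cat in vha}
        for cat, w in weights.items():
            print(f"{cat}: weight = {w:.2f}")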

    We used t-tests or chi-square tests with bootstrapped standard errors to compare the standardized VHA and national samples according to population characteristics; aggregate quality of care; subsets of indicators related to acute, chronic, and preventive care; subsets of indicators related to function of care; subsets of indicators supported by randomized, controlled trials; subsets of indicators similar to those used by the VHA in its performance measurement system; and chronic conditions that affected more than 50 patients from both samples, including COPD, coronary artery disease, depression, diabetes, hyperlipidemia, headache, hypertension, and osteoarthritis. We used logistic regression to compare the rates at which the respective samples received the care specified in the indicators. This allowed us to adjust for factors beyond the standardization, including age as an integer variable, number of chronic and acute conditions, and number of outpatient visits. We calculated adjusted scores after taking into account clustering of indicators at the individual patient level. For the logistic regression models, standard errors and confidence intervals were adjusted for the clustering of indicators within patients by using the sandwich estimator (30).
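
    A minimal sketch of this adjustment on simulated data follows (the original analyses were done in SAS; Python with statsmodels is assumed here, and the effect sizes are invented): fit a logistic model at the indicator-event level and request cluster-robust (sandwich) standard errors grouped by patient.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n_patients, events = 200, 5

        # One row per (patient, indicator) event.
        patient_id = np.repeat(np.arange(n_patients), events)
        vha = np.repeat(rng.integers(0, 2, n_patients), events)
        age = np.repeat(rng.integers(36, 90, n_patients), events)

        # Simulate adherence with a VHA effect plus patient-level noise,
        # which induces clustering of events within patients.
        noise = np.repeat(rng.normal(0, 0.5, n_patients), events)
        logit = -0.5 + 0.7 * vha + 0.01 * (age - 60) + noise
        received = (rng.random(n_patients * events) <
                    1 / (1 + np.exp(-logit))).astype(float)

        X = sm.add_constant(np.column_stack([vha, age]))
        # Sandwich estimator: events are clustered within patients, so
        # naive standard errors would be too small.
        fit = sm.Logit(received, X).fit(disp=False, cov_type="cluster",
                                        cov_kwds={"groups": patient_id})
        print(fit.summary(xname=["const", "vha", "age"]))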

    To test the sensitivity of our results to geography and insurance, we also estimated models confining the national sample to the 6 communities nearest the 2 VHA regions and to respondents with insurance. To test the sensitivity of our results to completeness of documentation, we estimated models restricted to patients with complete records and to the subset of indicators with high likelihood (laboratory tests and radiology) and less likelihood (counseling and education) of complete documentation. Since the number of visits could represent an intervening variable between the comparison samples and quality, we also ran models that did not adjust for the number of visits. Finally, to test the sensitivity of our results to the type of indicator set used, we compared the adjusted performance of the VHA and the community on the subset of indicators that matched the widely accepted HEDIS indicator set.

    Role of the Funding Source

    The funding agencies (Veterans Affairs Health Services Research and Development Service, the Robert Wood Johnson Foundation, the Centers for Medicare & Medicaid Services, the Agency for Healthcare Research and Quality, and the California HealthCare Foundation) did not participate in the data collection or analysis or in interpretation of the results. Veterans Affairs officials received advance copies of the manuscript for comment.

    Results
    Characteristics of the Study Samples

    Table 3 presents the characteristics of the VHA and national samples, with and without weighting for sampling, nonresponse, and standardization for age categories and the prevalence of COPD, hypertension, and diabetes in the VHA sample. After standardization, there were no statistically significant differences in the age of the participants or the number of chronic conditions, although patients in the national sample had slightly more acute conditions. There were also no significant differences in the rates of chronic conditions between the 2 samples, with the exception that VHA patients had a somewhat higher prevalence of osteoarthritis. Patients from the VHA also had a significantly greater number of outpatient visits per year (9.2 vs. 7.9; P < 0.001).

    Table 3. Veterans Health Administration and National Sample Characteristics
    Comparisons of Quality of Care

    Table 4 presents the results of our analyses comparing the quality of care between the standardized VHA and national samples, adjusting for age and for the number of chronic conditions, acute conditions, and outpatient visits. Sixteen of the 348 indicators had no eligible patients in either sample, leaving 294 indicators and 596 patients on which to base the VHA scores and 330 indicators and 992 patients on which to base the national scores. Overall, VHA patients were more likely than patients in the national sample to receive the care specified by the indicators (67% vs. 51%; difference, 16 percentage points [CI, 14 to 18 percentage points]). Performance in the VHA outpaced that of the national sample for both chronic care (72% vs. 59%; difference, 13 percentage points [CI, 10 to 17 percentage points]) and preventive care (64% vs. 44%; difference, 20 percentage points [CI, 12 to 28 percentage points]), but not for acute care (53% vs. 55%; difference, −2 percentage points [CI, −9 to 4 percentage points]). In particular, the VHA sample received significantly better care for depression, diabetes, hyperlipidemia, and hypertension. The VHA also performed consistently better across the entire spectrum of care, including screening, diagnosis, treatment, and follow-up. These differences in quality of care held true when we considered only those indicators (n = 72) supported by randomized, controlled trials (57% vs. 45%; difference, 12 percentage points [CI, 3 to 20 percentage points]).

    Table 4. Adjusted Adherence to Indicators by Category
    Associations with Performance Measurement

    To test the association between performance and performance measurement within the VHA, we restricted the analysis of overall quality to processes and conditions specifically addressed by the VHA performance measurement set. When we restricted the analysis to specific indicators that closely matched the performance measures targeted by the VHA, VHA patients had a substantially greater chance of receiving the indicated care than did patients in the national sample (adjusted scores, 67% vs. 43%; difference, 24 percentage points [CI, 21 to 26 percentage points]). Patients from the VHA were also more likely than national patients to receive care in the conditions or areas specified by the VHA indicator set, even when the processes covered by the indicators were substantially different (70% vs. 58%; difference, 12 percentage points [CI, 10 to 15 percentage points]). The difference between VHA patients and national patients in conditions or areas not covered by the VHA performance measurement system barely reached conventional levels of statistical significance (55% vs. 50%; difference, 5 percentage points [CI, 0 to 10 percentage points]).

    Sensitivity Analyses

    Confining the analyses to patients in both samples who had complete records did not change the direction or significance of any reported results. The VHA advantage was largest in indicators most likely to have possible underdocumentation (adjusted performance for counseling and education, 45% vs. 26%; difference, 19 percentage points [CI, 14 to 30 percentage points]), but even in laboratory tests and radiology, an area that would be less sensitive to documentation differences, there was also a substantial difference (67% vs. 52%; difference, 15 percentage points [CI, 11 to 19 percentage points]). Confining the analysis to the 6 nationally sampled metropolitan areas closest to the 2 VHA regions also did not change the direction or significance of any result, nor did excluding uninsured patients from the national sample. Models that did not adjust for the number of visits had the same VHA effects as those that did adjust for number of visits. Patients from the VHA also still received more indicated care (adjusted rates, 60% vs. 39%; difference, 21 percentage points [CI, 16 to 26 percentage points]) when the analyses were confined to the overlap of our indicator set and HEDIS measures, the most commonly used national performance indicator set for managed care.

    Discussion

    Using the RAND Quality Assessment Tools broad measure of quality of care, we found that adherence to recommended processes of care in 2 VHA regions typically exceeded that in a comparable national sample in 12 communities. These findings persisted when we adjusted the samples for age, number of acute and chronic conditions, and number of outpatient visits and when we examined only processes supported by randomized, controlled trials. In addition, we found that the differences between the VHA and national sample were greatest in processes subject to the VHA performance measurement system. The “halo effect” of better VHA care extended to measures of processes in the same condition or area that were not specifically measured by the VHA performance system; however, this effect decreased greatly in unrelated areas. Acute care, COPD care, osteoarthritis care, and coronary artery disease care were exceptions to the pattern of better care in the VHA, although our power to distinguish quality differences was limited by the small number of patients with COPD in the national sample (n = 62).

    To date, the VHA has not targeted acute care or osteoarthritis care as part of its intensive performance measurement system (6). Coronary artery disease, on the other hand, has been the subject of quality improvement efforts both inside and outside the VHA, including those sponsored by the American Heart Association (31-33). Indeed, many previous comparisons between VHA and national samples outside the VHA performance set have involved patients with coronary artery disease and have yielded mixed results (10). That we found little difference between the care provided to patients with coronary artery disease in the VHA and in a national sample is consistent with other findings and could be the result of comparable quality measurement programs for this condition in the United States and in the VHA. On the other hand, predominantly outpatient-based quality improvement efforts for diabetes have also been implemented in both the VHA system and other institutions, and our analyses showed that the VHA outperformed the national sample for diabetes care. The difference may be due to more effective outpatient VHA quality improvement for diabetes, but further research is needed to investigate the roots of this discrepancy.

    Although our study is one of the most comprehensive comparisons between VHA patients and national patients, it has limitations. First, our analysis is based on a comparison of 2 different study samples. Although we used robust statistical techniques to account for any differences between the samples, we could not adjust for the somewhat different geographic distributions or abstraction periods, although there was a great deal of overlap in both areas. Furthermore, in other analyses, we have not observed any large geographic variations in the aggregate indicator scores for the national sample, and our results did not change when we confined the national sample to the 6 communities closest to the 2 Veterans Affairs regions (34). Our study also relied on patient recollection of provider visits in the national sample. It is possible that patients received care from additional providers but did not recall those visits, or that we did not receive all available charts. However, we found that confining our analyses to patients with complete records did not change the results, and persons with missing charts were likely to have higher quality scores (13). We lack data on whether patients in the national sample were also receiving care at the VHA, or vice versa. Other studies have found evidence of co-management between VHA and non-VHA providers (35). To the extent that this co-management occurred, it would probably lead to an underestimate of the differences between the 2 groups. An additional limitation of our study is that there were too few men younger than 35 years of age and too few women in our VHA sample to analyze care for these subgroups. For women, limited data from other studies indicate a VHA advantage in breast cancer screening (7). While the Quality Assessment Tools system is quite broad, it cannot represent all of medical care, and there are probably gaps in the indicator set. Finally, the evidence grading system for Quality Assessment Tools is based on a simple measure of research design. More precise evidence categories might have altered our analysis of the effect of level of evidence on the comparison between the VHA and national samples, but it is difficult to tell whether the differences would be accentuated or diminished.

    Several unmeasured patient characteristics could have biased our results. The response rate was lower in the national sample than in the VHA sample, underrepresenting ethnic minorities and the poor and thereby widening the difference in the prevalence of these groups between the national sample and the VHA population. Ethnic minorities and people with low incomes generally receive lower-quality care (36, 37), although these disparities have not yet been examined by using the Quality Assessment Tools system. If we had been able to adjust for these variables, the differences in quality of care that we observed may have been even greater. Patients from the VHA also tend to have more severe disease than patients outside the VHA, and it is possible that severity of disease influences care quality (38). However, the process indicators we used are clinically precise, and all eligible patients should have received the indicated care regardless of disease severity. In any case, our findings persisted even when we adjusted for number of conditions.

    One of the purported advantages of the electronic medical record (which was universally available in the VHA sites) is more thorough documentation. Indeed, the volume of the VHA medical records we reviewed was larger than that of the national sample; abstracting data from the VHA records took almost one and a half times as long, although some of this difference was no doubt due to the higher number of visits and conditions. Some of the observed differences may therefore be due to more thorough documentation for VHA patients rather than more thorough medical care. In constructing the indicator set, expert panelists were instructed to include indicators only where the absence of documentation itself would be evidence of poor care. Even so, 1 VHA study found gaps of only approximately 10% between documentation in the medical record and actual care provision among standardized patients (39, 40). Furthermore, the VHA patients received more care both in indicators that are sensitive to documentation practices (counseling and education) and in those that are insensitive (laboratory tests and radiology). Therefore, it seems unlikely that different documentation practices alone could account for all of the differences we observed. Instead, other aspects of the electronic medical record, such as notation templates that structure physician–patient interaction or computerized reminders targeting performance measures, may account for the difference.

    The implications of these data are important to our understanding of quality management. The VHA is the largest health care system to have implemented an electronic medical record, routine performance monitoring, and other quality-related system changes, and we found that the VHA had substantially better quality of care than a national sample. Our finding that performance and performance measurement are strongly related suggests that the measurement efforts are indeed contributing to the observed differences. Performance measurement alone seems unlikely to account for all the differences; the VHA scored better even on HEDIS measures widely applied in managed care settings (but not in other settings) outside the VHA. Our study was not designed to determine which other mechanisms might be acting to improve VHA care, but other studies have suggested that they might include computerized reminders, standing orders, improved interprovider communication, facility performance profiling, leveraging of academic affiliations, accountability of regional managers for performance, and a more coordinated delivery system (5, 6, 41, 42). More research is needed to estimate the relative effects of these practices. As more coordinated systems of medical care delivery develop, our data support the use of the types of information and quality management systems available in the VHA.

    References

    1. Schuster MA, McGlynn EA, Brook RH. How good is the quality of health care in the United States? Milbank Q. 1998;76:517-63, 509. [PMID: 9879302]
    2. Institute of Medicine Committee on Quality of Health Care in America. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academy Pr; 2001.
    3. Steinberg EP. Improving the quality of care—can we practice what we preach? [Editorial]. N Engl J Med. 2003;348:2681-3. [PMID: 12826644]
    4. Adams K, Corrigan JM, eds. Priority Areas for National Action: Transforming Health Care Quality. Institute of Medicine Committee on Identifying Priority Areas for Quality Improvement, Board on Health Care Services. Washington, DC: The National Academies Pr; 2003.
    5. Kizer KW, Demakis JG, Feussner JR. Reinventing VA health care: systematizing quality improvement and quality innovation. Med Care. 2000;38:I7-16. [PMID: 10843266]
    6. Halpern J. The measurement of quality of care in the Veterans Health Administration. Med Care. 1996;34:MS55-68. [PMID: 8598688]
    7. Demakis JG, McQueen L, Kizer KW, Feussner JR. Quality Enhancement Research Initiative (QUERI): a collaboration between research and clinical practice. Med Care. 2000;38:I17-25. [PMID: 10843267]
    8. Jha AK, Perlin JB, Kizer KW, Dudley RA. Effect of the transformation of the Veterans Affairs Health Care System on the quality of care. N Engl J Med. 2003;348:2218-27. [PMID: 12773650]
    9. Petersen LA, Normand SL, Leape LL, McNeil BJ. Comparison of use of medications after acute myocardial infarction in the Veterans Health Administration and Medicare. Circulation. 2001;104:2898-904. [PMID: 11739303]
    10. Petersen LA, Normand SL, Leape LL, McNeil BJ. Regionalization and the underuse of angiography in the Veterans Affairs Health Care System as compared with a fee-for-service system. N Engl J Med. 2003;348:2209-17. [PMID: 12773649]
    11. Rosenthal GE, Vaughan Sarrazin M, Hannan EL. In-hospital mortality following coronary artery bypass graft surgery in Veterans Health Administration and private sector hospitals. Med Care. 2003;41:522-35. [PMID: 12665716]
    12. Kerr EA, Gerzoff RB, Krein SL, Selby JV, Piette JD, Curb JD, et al. Diabetes care quality in the Veterans Affairs Health Care System and commercial managed care: the TRIAD study. Ann Intern Med. 2004;141:272-81. [PMID: 15313743]
    13. McGlynn EA, Asch SM, Adams J, Keesey J, Hicks J, DeCristofaro A, et al. The quality of health care delivered to adults in the United States. N Engl J Med. 2003;348:2635-45. [PMID: 12826639]
    14. Asch SM, Kerr EA, Hamilton EG, Reifel JL, McGlynn EA, eds. Quality of Care for Oncologic Conditions and HIV: A Review of the Literature and Quality Indicators. Santa Monica, CA: RAND Health; 2000.
    15. Malin JL, Asch SM, Kerr EA, McGlynn EA. Evaluating the quality of cancer care: development of cancer quality indicators for a global quality assessment tool. Cancer. 2000;88:701-7. [PMID: 10649266]
    16. Kerr EA, Asch SM, Hamilton EG, McGlynn EA, eds. Quality of Care for General Medical Conditions: A Review of the Literature and Quality Indicators. Santa Monica, CA: RAND Health; 2000.
    17. Kerr EA, Asch SM, Hamilton EG, McGlynn EA, eds. Quality of Care for Cardiopulmonary Conditions. Santa Monica, CA: RAND Health; 2000.
    18. McGlynn EA, Asch SM. Developing a clinical performance measure. Am J Prev Med. 1998;14:14-21. [PMID: 9566932]
    19. McGlynn EA, Kerr EA, Asch SM. New approach to assessing clinical quality of care for women: the QA Tool system. Womens Health Issues. 1999;9:184-92. [PMID: 10405590]
    20. Brook RH. The RAND/University of California, Los Angeles appropriateness method. In: McCormick KA, Moore SR, Siegel RA. Clinical Practice Guideline Development: Methodology Perspectives. Rockville, MD: U.S. Public Health Service; 1994:59-70. AHCPR publication no. 95-0009.
    21. Shekelle PG, Chassin MR, Park RE. Assessing the predictive validity of the RAND/UCLA appropriateness method criteria for performing carotid endarterectomy. Int J Technol Assess Health Care. 1998;14:707-27. [PMID: 9885461]
    22. Shekelle PG, Kahan JP, Bernstein SJ, Leape LL, Kamberg CJ, Park RE. The reproducibility of a method to identify the overuse and underuse of medical procedures. N Engl J Med. 1998;338:1888-95. [PMID: 9637810]
    23. Kravitz RL, Park RE, Kahan JP. Measuring the clinical consistency of panelists' appropriateness ratings: the case of coronary artery bypass surgery. Health Policy. 1997;42:135-43. [PMID: 10175621]
    24. National Committee for Quality Assurance. HEDIS 2000. Volume 2, Technical Specifications. Washington, DC: National Committee for Quality Assurance; 1999.
    25. National Performance Data Feedback Center for Office of Quality and Performance. 1999 Network Performance Report. Washington, DC: Office of Quality and Performance, Veterans Health Administration; 2004.
    26. Survey of Healthcare Experience of Patients (SHEP). Washington, DC: Office of Quality and Performance, Veterans Health Administration; 2004.
    27. Kemper P, Blumenthal D, Corrigan JM, Cunningham PJ, Felt SM, Grossman JM, et al. The design of the community tracking study: a longitudinal study of health system change and its effects on people. Inquiry. 1996;33:195-206. [PMID: 8675282]
    28. Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics. 1977;33:159-74.
    29. Efron B, Tibshirani R. An Introduction to the Bootstrap. New York: Chapman and Hall; 1993.
    30. Rogers WH. Regression standard errors in clustered samples. Stata Technical Bulletin. 1993;13:19-23.
    31. LaBresh KA, Gliklich R, Liljestrand J, Peto R, Ellrodt AG. Using “get with the guidelines” to improve cardiovascular secondary prevention. Jt Comm J Qual Saf. 2003;29:539-50. [PMID: 14567263]
    32. Every NR, Fihn SD, Sales AE, Keane A, Ritchie JR. Quality Enhancement Research Initiative in ischemic heart disease: a quality initiative from the Department of Veterans Affairs. QUERI IHD Executive Committee. Med Care. 2000;38:I49-59. [PMID: 10843270]
    33. Fihn SD. Does VA health care measure up? [Editorial]. N Engl J Med. 2000;343:1963-5. [PMID: 11136270]
    34. Kerr EA, McGlynn EA, Adams J, Keesey J, Asch SM. Profiling the quality of care in twelve communities: results from the CQI study. Health Aff (Millwood). 2004;23:247-56. [PMID: 15160823]
    35. Jones D, Hendricks A, Comstock C, Rosen A, Chang BH, Rothendler J, et al. Eye examinations for VA patients with diabetes: standardizing performance measures. Int J Qual Health Care. 2000;12:97-104. [PMID: 10830666]
    36. Peterson ED, Wright SM, Daley J, Thibault GE. Racial variation in cardiac procedure use and survival following acute myocardial infarction in the Department of Veterans Affairs. JAMA. 1994;271:1175-80. [PMID: 8151875]
    37. Mayberry RM, Mili F, Ofili E. Racial and ethnic differences in access to medical care. Med Care Res Rev. 2000;57 Suppl 1:108-45. [PMID: 11092160]
    38. Dedier J, Singer DE, Chang Y, Moore M, Atlas SJ. Processes of care, illness severity, and outcomes in the management of community-acquired pneumonia at academic hospitals. Arch Intern Med. 2001;161:2099-104. [PMID: 11570938]
    39. Luck J, Peabody JW. Using standardised patients to measure physicians' practice: validation study using audio recordings. BMJ. 2002;325:679. [PMID: 12351358]
    40. Peabody JW, Luck J, Glassman P, Dresselhaus TR, Lee M. Comparison of vignettes, standardized patients, and chart abstraction: a prospective validation study of 3 methods for measuring quality. JAMA. 2000;283:1715-22. [PMID: 10755498]
    41. Kizer KW. The “new VA”: a national laboratory for health care quality management. Am J Med Qual. 1999;14:3-20. [PMID: 10446659]
    42. Corrigan JM, Eden J, Smith BM, eds. Leadership by Example: Coordinating Government Roles in Improving Health Care Quality. Washington, DC: Committee on Enhancing Federal Healthcare Quality Programs; 2002.
