Featured Articles

Why “Benchmarking” Error Rates Is NEVER a Good Measure of Performance or Patient Safety

Problem: Organizations often want to know, in comparison to their peers, how they stand in achieving and maintaining an environment that promotes patient safety. Benchmarking is a process that can help meet this goal, but it is seldom fully understood. The National Institutes of Health defines benchmarking as “a strategic and analytical process of continuously measuring an organization's products, services, and practices against a recognized leader in the studied area for the purpose of improving business performance.” Benchmarking requires both performance measurements and the enablers that help achieve that performance. It is an ongoing process, more complex than a direct comparison, that provides a systematic method of understanding the specific underlying practices that result in exemplary performance.

It has been 25 years since we published our September 9, 1998 article, Benchmarking—when is it dangerous? Unfortunately, there is continued confusion about the term, perpetuating the myth that one can gauge the quality and safety of the medication-use process simply by comparing error rates, both within an organization (e.g., unit-to-unit, employee-to-employee) and externally (e.g., with other hospitals). In fact, inquiries about benchmarking medication error rates represent one of the most common categories of medication safety questions received by ISMP. Organizations often want to know if there is a national standard (benchmark) for medication error rates or reported errors to “make sure the organization has less than the benchmark.” Others want to know statistics on “medication error rates per practitioner,” or “what is the average safe number of medication orders to verify or compound in an hour.” Organizations hoping to demonstrate their commitment to safety often tell us that they have “reduced their error rate” or “have the lowest medication error rate in their health system.”

We have also received feedback from healthcare organizations who tell us that certain payers or regulators continue to embrace the practice of comparing error rates for benchmarking. They tell us they are required to track and report error rates to URAC (originally the Utilization Review Accreditation Commission). The URAC 2022 Specialty Pharmacy Performance Measurement: Aggregate Summary Performance Report describes the measure for dispensing accuracy (MP2012-06) as the percentage of prescriptions that the organization dispensed inaccurately, assessed in six parts: incorrect drug and/or product dispensed, incorrect recipient, incorrect strength, incorrect dosage form, incorrect instructions, and incorrect quantity. According to URAC, a lower reporting rate represents better performance.

We do not agree. All of the above made us realize it was time to revisit this topic.

Both ISMP and the National Coordinating Council for Medication Error Reporting and Prevention (NCC MERP) recommend that, due to differences in culture, definitions, patient populations, resources, and the types of reporting and error detection systems, medication error rates should never be used to compare one organization to another. There is no acceptable incident rate for medication errors. Also, there is rarely any information regarding how an organization achieved a level of performance from which to learn. The number of error reports is less important than the quality of the information collected, the organization’s analysis of the information, and systems improvements made to prevent patient harm. 

Large variations exist in the definition of an error, the types of errors reported, and what constitutes the threshold to report. Practitioners are more likely to report an event based on its severity or on whether it came close to or reached the patient. In addition, some practitioners report adverse events regularly, while others report less frequently. Other practitioners tell us they do not bother reporting safety issues because it takes too long or because past reports did not result in a change to the system. Remember, the easiest way to improve your error rate is to stop reporting – and that is certainly no way for organizations to learn and improve. The impact of these variables on error reporting demonstrates why error rates cannot be used as a valid measure of safety over time, and why these invalid metrics should never be used for comparisons between health systems, hospitals, and healthcare practitioners.

Safe Practice Recommendations: To ensure continuous performance improvement, consider the importance of self-comparison of organizational medication safety metrics over time. Rather than attempting to compare unreliable “benchmarking” error report data with other organizations, consider the following recommendations to ensure your organization is maximizing its opportunities as a learning organization and discovering opportunities to reduce patient harm.

Strive for increased actionable reports. The goal for error-reporting programs is not to reduce the number of reports received, but rather to increase the learning that occurs, along with the actions taken to improve the safety system. Educate practitioners, leadership, and the board of directors that the goal is to increase reporting so actions can be taken to improve system reliability (e.g., error reporting rate per patient days, with higher being better and a clear descriptor of a learning culture).
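
To illustrate how such an internal trending metric might be calculated (the per-1,000-patient-day scaling and the numbers below are an assumed example, not an ISMP-defined standard): reports per 1,000 patient days = (number of reports received ÷ total patient days) × 1,000. For instance, 45 reports during a month with 3,000 patient days would yield (45 ÷ 3,000) × 1,000 = 15 reports per 1,000 patient days. Tracked internally over time, a rising rate alongside stable or improving harm outcomes suggests a strengthening reporting culture rather than worsening safety; it is not suitable for comparison with other organizations.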

Improve reporting of close calls (good catches). Measure changes in the number of errors that are caught prior to reaching the patient (e.g., good catches, with higher number of reports being better). An increase in the number of times a practitioner stopped and escalated an unsafe situation demonstrates the development of a learning culture, where individuals see value in sharing safety issues and trying to proactively solve them. 

Encourage self-reporting. Those who receive and act on error reports must earn the trust of reporters and prove that the program is sensitive to reporters’ concerns, particularly fear of punishment or undue embarrassment for making and reporting errors. Use reports of errors and close calls to assess system performance, not staff performance. An increased number of self-reports may indicate that staff feel safe sharing what has happened to them, helping to prevent recurrence or the potential for a similar error to reach a patient the next time.

Educate reporters to include contact information. Anonymous reports can be a barrier to understanding root causes, contributing factors, and behavioral choices, since communication with the reporter for additional information is not possible. Coach staff about how anonymous reports can represent a missed opportunity. Explain to staff how the organization learns from errors to improve systems and processes, and encourage them to include their contact information to ensure a thorough investigation is completed. An increase in anonymous reporting might indicate that staff are afraid to report due to fear of a punitive response from leadership. Organizations that operate within a Just Culture have created an open, learning-oriented reporting environment in which staff are comfortable raising their hand when they have observed a hazard or cut a corner to achieve an organizational goal, and comfortable self-reporting when a mistake has been made.

Enhance safety culture survey participation. Use results from surveys of the hospital’s safety culture to gauge the level of psychological safety perceived regarding error reporting. Take the time to understand staff’s perceptions and identify an appropriate organizational response to improve the culture. Focus efforts on increasing safety culture survey response rates and improving scores.

Maximize the use of technology. Define and monitor organizational expectations as they relate to data collection for technology utilization, such as increased barcode scanning compliance (e.g., pharmacy, nursing), reduction in automated dispensing cabinet overrides, and increased use of smart pump dose error-reduction systems and pharmacy intravenous workflow management systems. Engineer these technology systems to prevent workarounds, bypasses, or alternative mechanisms that counter their effectiveness in preventing errors.

Quantify system changes. Keep track of the system-based problems that have been uncovered and the corresponding efforts and strategies employed to reduce the risk of errors and patient harm. While it may be difficult to measure risk avoidance and a reduction in patient harm, a reasonable alternative is highlighting the system changes that have been made as a result of the increased information shared through the error-reporting system. Develop a process to regularly inform staff of actions taken to make the systems safer as a direct result of reporting.

Build a medication safety dashboard. Build your organization’s targets into a medication safety dashboard to expedite the processes for analysis and to self-evaluate your medication-use system. When presenting dashboard information, identify actions that supported progress and those challenges that still exist. 

Monitor and share performance improvement. Establish a cadence for reviewing internal metrics (e.g., monthly, quarterly). Report findings to frontline staff, committees, executive leadership, and the board of directors, and gather feedback for further improvements. Help committee members and executive leadership who are seeking error rate comparisons to understand why there is no national comparison and what can be done instead to demonstrate the movement to a safe and reliable medication-use system. Communicate the meaningful impact of implemented changes that resulted from error reporting.

 

Suggested citation:

Institute for Safe Medication Practices (ISMP). Why “benchmarking” error rates is NEVER a good measure of performance or patient safety. ISMP Medication Safety Alert! Acute Care. 2023;28(23):1-3.