Featured Articles

Understanding Human Over-Reliance on Technology

The implementation of information technology in medication use systems is widely accepted as a way to reduce adverse drug events by decreasing human error.1 Technology examples include computerized order entry systems, clinical decision support systems, robotic dispensing, profiled automated dispensing cabinets (ADCs), smart infusion pumps, and barcode scanning of medications during compounding, dispensing, ADC restocking, and administration. These technologies are meant to support human cognitive processes and, thus, have great potential to combat the shortcomings of manual medication systems and improve clinical decisions and patient outcomes. This is accomplished through precise controls, automatically generated cues and recommendations to help the user respond appropriately, prompts that promote the correct sequence of work or ensure the collection of critical information, and alerts to make the user aware of potential errors. 

Information technology to support clinical decision making does not replace human activity but rather changes it, often in unintended or unanticipated ways.2 Instances of misuse and disuse, often to work around technology issues, as well as new sources of error after technology implementation, have been well documented. Errors can also be caused by over-reliance on, and trust in, the proper functioning of technology.3 The technology can occasionally malfunction, misdirect the user, or provide incorrect information or recommendations that lead the user to change a previously correct decision or follow a pathway that leads to an error. Over-reliance on technology can result in serious consequences for patients. In its recent Safety Bulletin,4 our sister organization, ISMP Canada, highlighted human over-reliance on technology based on its analysis of an event reported to a Canadian national reporting system. In the article, ISMP Canada discussed two related cognitive limitations: automation bias and automation complacency.

Incident Description

An elderly patient was admitted to the hospital with new-onset seizures. Admission orders included the anticonvulsant phenytoin (handwritten using the brand name DILANTIN), 300 mg orally every evening. Before the pharmacy closed, a pharmacy staff member entered the Dilantin order into the pharmacy computer so that the medication could be obtained from an ADC in the patient care unit overnight. In the pharmacy computer, medication selection for order entry was performed by typing the first 3 letters of the medication name (“dil” in this case) and then choosing the desired medication name from a drop-down list. The computer list contained both generic and brand names. The staff member was interrupted while entering the order. When this task was resumed, dilTIAZem 300 mg was selected instead of Dilantin 300 mg.

On the patient care unit, the order for Dilantin had been correctly transcribed by hand onto a daily computer-generated medication administration record (MAR), which was verified against the prescriber’s order and cosigned by a nurse. The nurse who obtained the medication from the unit’s ADC noticed the discrepancy between the MAR and the ADC display, but accepted the information displayed on the ADC screen as correct. The patient received one dose of long-acting dilTIAZem 300 mg orally instead of the Dilantin 300 mg as ordered. The error was caught the next morning when the patient exhibited significant hypotension and bradycardia. 

Automation Bias and Automation Complacency

The tendency to favor or give greater credence to information from technology (e.g., an ADC display) and to ignore a manual source of information that provides contradictory information (e.g., a handwritten entry on the computer-generated MAR), even if it is correct, illustrates the phenomenon of automation bias.3 Automation complacency is a closely linked, overlapping concept that refers to monitoring technology less frequently or with less vigilance because of a lower degree of suspicion of error and a stronger belief in its accuracy.2 End-users of a technology (e.g., a nurse who relies on the ADC display that lists medications to be administered) tend to forget or ignore that information from the device may depend on data entered by a person. In other words, processes that may appear to be wholly automated are often dependent upon human input at critical points and thus require the same degree of monitoring and attention as manual processes. These two phenomena can affect decision making in individuals as well as in teams and offset the benefits of technology.2

Automation bias and complacency can lead to decisions that are not based on a thorough analysis of all available information but that are strongly biased toward the presumed accuracy of the technology.2 While these effects are inconsequential if the technology is correct, errors are possible if the technology output is misleading. An automation bias omission error takes place when users rely on the technology to inform them of a problem but it does not (e.g., an expected excessive dose warning never fires); thus, they fail to respond to a potentially critical situation because they were not prompted to do so. An automation bias commission error occurs when users make choices based on incorrect suggestions or information provided by technology.3 In the Dilantin incident described above, there were two errors caused by automation bias. The first occurred when the pharmacy staff member accepted dilTIAZem as the correct drug in the pharmacy order entry system. The second occurred when the nurse identified the discrepancy between the ADC display and the MAR but trusted the information on the ADC display over the handwritten entry on the computer-generated MAR.

In recent analyses of health-related studies on automation bias and complacency, clinicians overrode their own correct decisions in favor of erroneous advice from technology between 6% and 11% of the time,3 and the risk of an incorrect decision increased by 26% if the technology output was in error.5 The rate of detecting technology failures is also low; in one study, half of all users failed to detect any of the technology failures introduced during the course of a typical work day (e.g., an important alert did not fire, the wrong information or recommendation was presented).2,6

Causes of Automation Bias and Complacency

Automation bias and complacency are thought to result from three basic human factors:2,3

  • In human decision-making, people have a tendency to select the pathway requiring the least cognitive effort, which often results in letting technology dictate the path. This factor is likely to play a greater role as people are faced with more complex tasks, multitasking, heavier workloads, or increasing time pressures—common phenomena in healthcare.
  • People often believe that the analytic capability of technology is superior to that of humans, which may lead to overestimating the performance of these technologies.
  • People may reduce their effort or shed responsibility in carrying out a task when an automated system is also performing the same function. It has been suggested that the use of technology convinces the human mind to hand over tasks and associated responsibilities to the automated system.7,8 This mental handover can reduce the vigilance that the person would demonstrate if carrying out the particular task independently.

Other conditions linked to automation bias and complacency are discussed below.

Experience. There is conflicting evidence as to the effect of experience on automation bias and complacency. While there is evidence that reliance on technology is reduced as experience and confidence in one’s own decisions increase, it has also been shown that increased familiarity with technology can lead to desensitization, which may cause clinicians to doubt their instincts and accept inaccurate technology-derived information.3 Thus, automation bias and complacency have been found in both naïve and expert users.2

Perceived reliability and trust in the technology. Although automation bias and complacency were once believed to reflect a general tendency to trust all technology, today they are believed to be influenced by the perceived reliability of a specific technology, based on the user’s prior experiences with the system.2 When automation is perceived to be reliable at least 70% of the time, people are less likely to question its accuracy.9

Confidence in decisions. Just as trust in technology increases automation bias and complacency, confidence in one’s own decisions reduces them: users are less likely to be biased if they are confident in their own decisions.3,10,11

Safe Practice Recommendations

The use of technology is considered a high-leverage strategy to optimize clinical decision making—but only if the users’ trust in the technology closely matches the reliability of the technology itself. Therefore, the following strategies to address errors related to automation bias and complacency focus on:

  • Improving the reliability of the technology itself
  • Supporting clinicians to more accurately assess the reliability of the technology, so that appropriate monitoring and verification strategies can be employed

Analyze and address vulnerabilities. Conduct a proactive risk assessment (e.g., failure mode and effects analysis [FMEA]) for new technologies to identify unanticipated vulnerabilities and address them before undertaking facility-wide implementation. Also encourage reporting of technology-associated risks, issues, and errors.

Limit human-computer interfaces. Organizations should continue working to enable their technologies to communicate with one another seamlessly, thereby limiting the need for human interaction with the technology, which can introduce errors.

Design the technology to reduce over-reliance. The design of the technology can affect the users’ attention and how they regard its value and reliability. For example, the “auto-complete” function that suggests drug names after the first few letters are entered has often led to selection of the first, but incorrect, choice provided by the technology. Requiring entry of at least 4 letters before a list of potential drug names is generated could reduce these types of errors. To cite another example, studies have found that providing too much on-screen detail can decrease the user’s attention and care, thereby increasing automation bias.3
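To make this design point concrete, the following is a minimal sketch, in Python, of a prefix-based drug-name lookup that withholds suggestions until at least 4 characters have been typed. The formulary list, function name, and threshold are illustrative assumptions, not the behavior of any specific pharmacy system.

    # Minimal sketch: a drug-name search that requires a minimum prefix length
    # before offering choices. All names and values here are illustrative.
    FORMULARY = ["DILANTIN", "dilTIAZem", "diltiazem ER"]
    MIN_PREFIX_LENGTH = 4  # a 3-letter search such as "dil" is too ambiguous

    def search_drug_names(prefix: str, formulary=FORMULARY) -> list[str]:
        """Return candidate drug names only when the prefix is long enough."""
        prefix = prefix.strip().lower()
        if len(prefix) < MIN_PREFIX_LENGTH:
            return []  # too short; force the user to keep typing
        return [name for name in formulary if name.lower().startswith(prefix)]

    print(search_drug_names("dil"))   # [] -- no list offered yet
    print(search_drug_names("dila"))  # ['DILANTIN']
    print(search_drug_names("dilt"))  # ['dilTIAZem', 'diltiazem ER']

In this sketch, the 3-letter search that contributed to the incident (“dil”) returns no list at all, while a fourth letter already separates DILANTIN from the dilTIAZem products.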

Provide training. Provide training about the technology involved in the medication-use system to all staff who utilize the technology. Include information about the limitations of such technology, as well as previously identified gaps and opportunities for error. Allow trainees to experience automation failures during the training (e.g., technology failure to issue an important alert; discrepancies between technology entries and handwritten entries in which the handwritten entries are correct; “auto-fill” or “auto-correct” errors; incorrect calculation of body surface area due to human error during input of the weight in pounds instead of kg). Experiencing technology failures during training encourages critical thinking when using automated systems and can help reduce errors due to complacency and automation bias;3 it may also increase the likelihood that staff will recognize these failures during daily work.
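As a concrete illustration of the body surface area scenario mentioned above, the short Python sketch below shows how a weight typed in pounds into a kilogram field inflates a Mosteller body surface area calculation by roughly 48%, along with the kind of simple plausibility check a training exercise could demonstrate. The function names and thresholds are hypothetical, not the logic of any particular clinical system.

    # Minimal sketch: the effect of a pounds-for-kilograms entry error on a
    # Mosteller body surface area (BSA) calculation. Values are illustrative.
    from math import sqrt

    def bsa_mosteller(height_cm: float, weight_kg: float) -> float:
        """Body surface area in m^2 using the Mosteller formula."""
        return sqrt(height_cm * weight_kg / 3600)

    height_cm = 170
    correct_weight_kg = 70    # patient's actual weight
    pounds_typed_as_kg = 154  # the same weight in pounds, entered into the kg field

    print(round(bsa_mosteller(height_cm, correct_weight_kg), 2))   # ~1.82 m^2
    print(round(bsa_mosteller(height_cm, pounds_typed_as_kg), 2))  # ~2.70 m^2 (about 48% too high)

    # One guard a training scenario could demonstrate: flag implausible adult
    # weights and require the user to confirm the unit (thresholds are arbitrary).
    def weight_requires_confirmation(weight_kg: float) -> bool:
        return weight_kg < 30 or weight_kg > 135

Because BSA scales with the square root of weight, a weight overstated by a factor of 2.2 overstates BSA (and any BSA-based dose) by about 1.48 times, which is exactly the kind of silent, plausible-looking output that automation bias makes easy to accept.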

Reduce task distraction. Although easier said than done, leaders should attempt to ensure those using technology can do so uninterrupted and are not simultaneously responsible for other tasks. Automation failures are less likely to be identified if the user is required to multitask or is otherwise distracted or rushed.2

Conclusion

Technology plays an important role in the design and improvement of medication systems; however, it must be viewed as supplementary to clinical judgement. Although its use can make many aspects of the medication-use system safer, healthcare professionals must continue to apply their clinical knowledge and critical thinking skills to use and monitor technology to provide optimal patient care.

ISMP thanks ISMP Canada for its generous contribution to the content of this article.

References

  1. Mahoney CD, Berard-Collins CM, Coleman R, Amaral JF, Cotter CM. Effects of an integrated clinical information system on medication safety in a multi-hospital setting. Am J Health Syst Pharm. 2007;64(18):1969-77.
  2. Parasuraman R, Manzey DH. Complacency and bias in human use of automation: an attentional integration. Hum Factors. 2010;52(3):381-410.
  3. Goddard K, Roudsari A, Wyatt JC. Automation bias: a systematic review of frequency, effect mediators, and mitigators. J Am Med Inform Assoc. 2012;19(1):121-7.
  4. ISMP Canada. Understanding human over-reliance on technology. ISMP Canada Safety Bulletin. 2016;16(5):1-4.
  5. Goddard K, Roudsari A, Wyatt JC. Automation bias: empirical results assessing influencing factors. Int J Med Inform. 2014;83(5):368-75.
  6. Parasuraman R, Molloy R, Singh IL. Performance consequences of automation-induced “complacency.” Int J Aviat Psychol. 1993;3(1):1-23.
  7. Coiera E. Technology, cognition and error. BMJ Qual Saf. 2015;24(7):417-22.
  8. Mosier KL, Skitka LJ. Human decision makers and automated decision aids: made for each other? In: Parasuraman R, Mouloua M, eds. Automation and Human Performance: Theory and Applications. Mahwah, NJ: Lawrence Erlbaum Associates; 1996:201-20.
  9. Campbell EM, Sittig DF, Guappone KP, Dykstra RH, Ash JS. Overdependence on technology: an unintended adverse consequence of computerized provider order entry. AMIA Annu Symp Proc. 2007:94-8.
  10. Lee JD, Moray N. Trust, control strategies and allocation of function in human-machine systems. Ergonomics. 1992;35(10):1243-70.
  11. Yeh M, Wickens CD. Display signaling in augmented reality: effects of cue reliability and image realism on attention allocation and trust calibration. Hum Factors. 2001;43(3):355-65.