Thursday, November 12, 2009

AHRQ Patient Safety Network - Glossary


Glossary
Exact definitions of medical terms in English [tool]


Active Error (or Active Failure) - The terms "active" and "latent" as applied to errors were coined by James Reason.(1,2) Active errors occur at the point of contact between a human and some aspect of a larger system (eg, a human-machine interface). They are generally readily apparent (eg, pushing an incorrect button, ignoring a warning light) and almost always involve someone at the frontline. Latent errors (or latent conditions), in contrast, refer to less apparent failures of organization or design that contributed to the occurrence of errors or allowed them to cause harm to patients.

Active failures are sometimes referred to as errors at the "sharp end," figuratively referring to a scalpel. In other words, errors at the sharp end are noticed first because they are committed by the person closest to the patient. This person may literally be holding a scalpel (eg, an orthopedist who operates on the wrong leg) or figuratively be administering any kind of therapy (eg, a nurse programming an intravenous pump) or performing any aspect of care. To complete the metaphor, latent errors are those at the other end of the scalpel—the "blunt end"—referring to the many layers of the health care system that affect the person "holding" the scalpel.

1. Reason JT. Human Error. New York, NY: Cambridge University Press; 1990. [ go to PSNet listing ]

2. Reason J. Human error: models and management. BMJ. 2000;320:768-770. [ go to PubMed ]

Adverse Drug Event (ADE) - An adverse event involving medication use.

Examples:

anaphylaxis to penicillin
major hemorrhage from heparin
aminoglycoside-induced renal failure
agranulocytosis from chloramphenicol
As with the more general term adverse event, there is no necessary relation to error or poor quality of care. In other words, ADEs include expected adverse drug reactions (or "side effects") defined below, as well as events due to error.

Thus, a serious allergic reaction to penicillin in a patient with no prior such history is an ADE, but so is the same reaction in a patient who does have a known allergy history but receives penicillin due to a prescribing oversight. To avoid having to use medication error as an outcome, some studies refer instead to potential ADEs. For instance, if a clinician ordered penicillin for a patient with a documented serious penicillin allergy, many would characterize the order as a potential ADE, on the grounds that administration of the drug would carry a substantial risk of harm to the patient.

Ignoring the distinction between expected medication side effects and ADEs due to errors may seem misleading, but a similar distinction can be achieved with the concept of preventability. All ADEs due to error are preventable, but other ADEs not warranting the label error may also be preventable.

Adverse Drug Reaction - Adverse effect produced by the use of a medication in the recommended manner. These effects range from "nuisance effects" (eg, dry mouth with anticholinergic medications) to severe reactions, such as anaphylaxis to penicillin.

Adverse Event - Any injury caused by medical care.

Examples:

pneumothorax from central venous catheter placement
anaphylaxis to penicillin
postoperative wound infection
hospital-acquired delirium (or "sundowning") in elderly patients
Identifying something as an adverse event does not imply "error," "negligence," or poor quality care. It simply indicates that an undesirable clinical outcome resulted from some aspect of diagnosis or therapy, not an underlying disease process.

Thus, pneumothorax from central venous catheter placement counts as an adverse event regardless of insertion technique. Similarly, postoperative wound infections count as adverse events even if the operation proceeded with optimal adherence to sterile procedures, the patient received appropriate antibiotic prophylaxis in the peri-operative setting, and so on. (See also iatrogenic)

Anchoring Error (or Bias) - Refers to the common cognitive trap of allowing first impressions to exert undue influence on the diagnostic process. Clinicians often latch on to features of a patient's presentation that suggest a specific diagnosis. Often, this initial diagnostic impression will prove correct, hence the use of the phrase "anchoring heuristic" in some contexts, as it can be a useful rule of thumb to "always trust your first impressions." However, in some cases, subsequent developments in the patient's course will prove inconsistent with the first impression. Anchoring bias refers to the tendency to hold on to the initial diagnosis, even in the face of disconfirming evidence.

1. Redelmeier DA. Improving patient care. The cognitive psychology of missed diagnoses. Ann Intern Med. 2005;142:115-120. [go to PubMed]

2. Croskerry P. Cognitive forcing strategies in clinical decisionmaking. Ann Emerg Med. 2003;41:110-120. [go to PubMed]

3. Croskerry P. The importance of cognitive errors in diagnosis and strategies to minimize them. Acad Med. 2003;78:775-780. [go to PubMed]

APACHE - The Acute Physiology and Chronic Health Evaluation (APACHE) scoring system has been widely used in the United States. APACHE II is the most widely studied version of this instrument (a more recent version, APACHE III, is proprietary, whereas APACHE II is publicly available); it derives a severity score from such factors as underlying disease and chronic health status.(1,2) Other points are added for 12 physiologic variables (eg, hematocrit, creatinine, Glasgow Coma Score, mean arterial pressure) measured within 24 hours of admission to the ICU. The APACHE II score has been validated in several studies involving tens of thousands of ICU patients.
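
The general structure of such a score can be sketched in a few lines of Python. This is purely illustrative: the variable set, "normal" ranges, and point weights below are invented placeholders rather than the published APACHE II coefficients; only the overall shape of the calculation (points for physiologic derangement plus points for age and chronic health status) follows the description above.

```python
# Illustrative sketch only: the variables, "normal" ranges, and point values
# are invented placeholders, not the published APACHE II weights. Only the
# overall structure -- points for physiologic derangement in the first
# 24 ICU hours, plus points for age and chronic health -- mirrors the entry.

NORMAL_RANGES = {                              # hypothetical "normal" ranges
    "mean_arterial_pressure": (70, 110),
    "creatinine": (0.6, 1.4),
    "hematocrit": (30, 46),
    "glasgow_coma_score": (15, 15),
}

def apache_like_score(worst_values, age_points, chronic_health_points):
    """Sum placeholder severity points for a handful of physiologic variables."""
    total = 0
    for variable, value in worst_values.items():
        low, high = NORMAL_RANGES[variable]
        if value < low or value > high:
            total += 2                         # placeholder weight per abnormality
    return total + age_points + chronic_health_points

worst_24h = {"mean_arterial_pressure": 55, "creatinine": 2.3,
             "hematocrit": 41, "glasgow_coma_score": 9}
print(apache_like_score(worst_24h, age_points=3, chronic_health_points=2))  # 11
```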

1. Knaus WA, Draper EA, Wagner DP, Zimmerman JE. APACHE II: a severity of disease classification system. Crit Care Med. 1985;13:818-829. [ go to PubMed ]

2. Knaus WA, Wagner DP, Zimmerman JE, Draper EA. Variations in mortality and length of stay in intensive care units. Ann Intern Med. 1993;118:753-761. [ go to PubMed ]

Authority Gradient - Refers to the balance of decision-making power or the steepness of command hierarchy in a given situation. Members of a crew or organization with a domineering, overbearing, or dictatorial team leader experience a steep authority gradient. Expressing concerns, questioning, or even simply clarifying instructions would require considerable determination on the part of team members who perceive their input as devalued or frankly unwelcome.

Most teams require some degree of authority gradient; otherwise roles are blurred and decisions cannot be made in a timely fashion. However, effective team leaders consciously establish a command hierarchy appropriate to the training and experience of team members.

Authority gradients may occur even when the notion of a team is less well defined. For instance, a pharmacist calling a physician to clarify an order may encounter a steep authority gradient, based on the tone of the physician’s voice or a lack of openness to input from the pharmacist. A confident, experienced pharmacist may nonetheless continue to raise legitimate concerns about an order, but other pharmacists might not.

Availability Bias (or Heuristic) - Refers to the tendency to assume, when judging probabilities or predicting outcomes, that the first possibility that comes to mind (ie, the most cognitively "available" possibility) is also the most likely possibility. For instance, suppose a patient presents with intermittent episodes of very high blood pressure. Because episodic hypertension resembles textbook descriptions of pheochromocytoma, a memorable but uncommon endocrinologic tumor, this diagnosis may immediately come to mind. A clinician who infers from this immediate association that pheochromocytoma is the most likely diagnosis would be exhibiting availability bias. In addition to resemblance to classic descriptions of disease, personal experience can also trigger availability bias, as when the diagnosis underlying a recent patient's presentation immediately comes to mind when any subsequent patient presents with similar symptoms. Particularly memorable cases may similarly exert undue influence in shaping diagnostic impressions.

1. Redelmeier DA. Improving patient care. The cognitive psychology of missed diagnoses. Ann Intern Med. 2005;142:115-120. [go to PubMed]

2. Croskerry P. Cognitive forcing strategies in clinical decisionmaking. Ann Emerg Med. 2003;41:110-120. [go to PubMed]

3. Croskerry P. The importance of cognitive errors in diagnosis and strategies to minimize them. Acad Med. 2003;78:775-780. [go to PubMed]




B


Bayesian Approach - Probabilistic reasoning in which test results (not just laboratory investigations, but history, physical exam, or any aspect of the diagnostic process) are combined with prior beliefs about the probability of a particular disease. One way of recognizing the need for a Bayesian approach is to recognize the difference between the performance of a test in a population vs. in an individual. At the population level, we can say that a test has a sensitivity and specificity of, say, 90%—ie, 90% of patients with the condition of interest have a positive result and 90% of patients without the condition have a negative result. In practice, however, a clinician needs to attempt to predict whether an individual patient with a positive or negative result does or does not have the condition of interest. This prediction requires combining the observed test result not just with the known sensitivity and specificity, but also with the chance the patient could have had the disease in the first place (based on demographic factors, findings on exam, or general clinical gestalt).
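
A minimal sketch of this combination, using Bayes' theorem with the 90% sensitivity and specificity figures from the example above; the pretest probabilities in the usage lines are hypothetical.

```python
# Sketch of combining a pretest (prior) probability with test sensitivity and
# specificity via Bayes' theorem; the pretest probabilities below are
# hypothetical, the 90%/90% test characteristics come from the example above.

def post_test_probability(pretest, sensitivity, specificity, positive_result=True):
    """Probability of disease after the test result is known."""
    if positive_result:
        true_pos = pretest * sensitivity
        false_pos = (1 - pretest) * (1 - specificity)
        return true_pos / (true_pos + false_pos)
    false_neg = pretest * (1 - sensitivity)
    true_neg = (1 - pretest) * specificity
    return false_neg / (false_neg + true_neg)

# The same positive result means very different things in different patients:
print(post_test_probability(0.05, 0.9, 0.9))   # low pretest probability  -> ~0.32
print(post_test_probability(0.60, 0.9, 0.9))   # high pretest probability -> ~0.93
```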

Beers criteria - Beers criteria define medications that should generally be avoided in ambulatory elderly patients, doses or frequencies of administration that should generally not be exceeded, and medications that should be avoided in older persons known to have any of several common conditions. They were originally developed using a formal consensus process for combining reviews of the evidence with expert input. The criteria for inappropriate use address commonly used categories of medications such as sedative-hypnotics, antidepressants, antipsychotics, antihypertensives, nonsteroidal anti-inflammatory agents, oral hypoglycemics, analgesics, dementia treatments, platelet inhibitors, histamine-2 blockers, antibiotics, decongestants, iron supplements, muscle relaxants, gastrointestinal antispasmodics, and antiemetics. The criteria were intended to guide clinical practice, but also to inform quality assurance review and health services research.

Benchmark - A "benchmark" in health care refers to an attribute or achievement that serves as a standard for other providers or institutions to emulate.

Benchmarks differ from other "standard of care" goals, in that they derive from empiric data—specifically, performance or outcomes data. For example, a statewide survey might produce risk-adjusted 30-day rates for death or other major adverse outcomes. After adjusting for relevant clinical factors, the top 10% of hospitals can be identified in terms of particular outcome measures. These institutions would then provide benchmark data on these outcomes. For instance, one might benchmark "door-to-balloon" time at 90 minutes, based on the observation that the top-performing hospitals all had door-to-balloon times in this range.

In the context of infection control, for example, benchmarks would typically be derived from national or regional data on the rates of relevant nosocomial infections. The lowest 10% of these rates might be regarded as benchmarks for other institutions to emulate.
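
A simplified sketch of this idea in Python: rank institutions on a hypothetical risk-adjusted infection rate and treat the average of the best-performing decile as the benchmark. The data are invented, and the simple "top 10% mean" rule is only a stand-in for the more formal benchmarking methodology discussed in the reference below.

```python
# Simplified illustration of deriving a benchmark from performance data:
# rank institutions on a hypothetical risk-adjusted infection rate and treat
# the average of the best-performing decile as the benchmark. This is only a
# sketch of the idea, not the formal methodology in the reference below.

def top_decile_benchmark(rates):
    """Return the mean rate among the best (lowest) 10% of institutions."""
    ranked = sorted(rates.values())                  # lower rate = better
    n_top = max(1, len(ranked) // 10)                # at least one hospital
    best = ranked[:n_top]
    return sum(best) / len(best)

hospital_infection_rates = {                         # hypothetical data
    f"hospital_{i:02d}": rate
    for i, rate in enumerate([1.2, 0.8, 2.5, 1.9, 0.6, 3.1, 1.4, 0.9, 2.2,
                              1.1, 0.7, 1.8, 2.9, 1.3, 1.0, 2.0, 0.5, 1.6,
                              2.4, 1.5])
}
print(f"benchmark rate: {top_decile_benchmark(hospital_infection_rates):.2f}")
```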

The article below provides an excellent discussion of the principles of benchmarking and the specific steps in using outcomes data to generate benchmarks.

Kiefe CI, Weissman NW, Allison JJ, et al. Identifying achievable benchmarks of care: concepts and methodology. Int J Qual Health Care. 1998;10:443-447. [ go to PubMed ]

Black Box Warnings - Refer to the prominent warning labels (inside "black boxes") on packages for certain prescription medications in the United States. These warnings typically arise from post-market surveillance or post-approval clinical trials that bring to light serious adverse reactions. The U.S. Food and Drug Administration (FDA) subsequently may require a pharmaceutical company to place a black box warning on the labeling or packaging of the drug.

Black box warnings tend to appear relatively soon after drug approval. Among new medications approved in the United States between 1975 and 2000, 10% either acquired a new black box warning or were withdrawn from the market, with half of these changes occurring within 7 years of drug introduction.(1) However, in some cases, major side effects that result in black box warnings have not come to light for decades.(2) Prominent examples of side effects leading to black box warnings are liver toxicity (valproic acid, ketoconazole) and increased risk of suicidal behavior (certain antidepressants in children).

Black box warnings should not be regarded as the equivalent of a "skull and crossbones." Most convey important information that clinicians should take into account when weighing benefits and risks, but do not completely contraindicate the use of the medication. Rather, the purpose of the warning is to guide safe selection of the medication (eg, not prescribing a medication with a black box warning about liver toxicity to a patient who already has problems with her liver). Interestingly, even when patients receive medications in apparent violation of black box warnings, the risk of harm appears quite low.(3) When more serious side effects come to light that truly contraindicate the use of a medication for most patients, the FDA will typically remove the medication from the market (eg, as occurred with the non-steroidal anti-inflammatory medication Vioxx [4,5]).

That said, occasionally drugs remain on the market when they clearly should have been withdrawn, so one should not disregard the potential seriousness of black box warnings, even if harm appears rare. For instance, the oral diabetic medication troglitazone was rapidly withdrawn from the European market due to concerns over liver toxicity, but enjoyed sales of more than $2 billion in the United States before it was withdrawn in March 2000. By the time of the removal, the drug had been linked to at least 90 cases of liver failure, 70 of which resulted in death or the need for liver transplantation.(6)

In summary, although medications with black box warnings often enjoy widespread use and, with cautious use, typically do not result in harm, these warnings remain important sources of safety information for patients and health care providers. They also emphasize the importance of continued, post-market surveillance for adverse drug reactions for all medications, especially relatively new ones.

1. Lasser KE, Allen PD, Woolhandler SJ, Himmelstein DU, Wolfe SM, Bor DH. Timing of new black box warnings and withdrawals for prescription medications. JAMA. 2002;287:2215-2220. [go to PubMed]

2. Ladewski LA, Belknap SM, Nebeker JR, et al. Dissemination of information on potentially fatal adverse drug reactions for cancer drugs from 2000 to 2002: first results from the research on adverse drug events and reports project. J Clin Oncol. 2003;21:3859-3866. [go to PubMed]

3. Lasser KE, Seger DL, Yu DT, et al. Adherence to black box warnings for prescription medications in outpatients. Arch Intern Med. 2006;166:338-344. [go to PubMed]

4. Horton R. Vioxx, the implosion of Merck, and aftershocks at the FDA. Lancet. 2004;364:1995-1996. [go to PubMed]

5. Waxman HA. The lessons of Vioxx—drug safety and sales. N Engl J Med. 2005;352:2576-2578. [go to PubMed]

6. Gale EA. Lessons from the glitazones: a story of drug development. Lancet. 2001;357:1870-1875. [go to PubMed]

Blunt End - The "blunt end" refers to the many layers of the health care system not in direct contact with patients, but which influence the personnel and equipment at the “sharp end” who do contact patients. The blunt end thus consists of those who set policy, manage health care institutions, design medical devices, and other people and forces, which, though removed in time and space from direct patient care, nonetheless affect how care is delivered.

Thus, an error programming an intravenous pump would represent a problem at the sharp end, while the institution’s decision to use multiple different types of infusion pumps, making programming errors more likely, would represent a problem at the blunt end. The terminology of “sharp” and “blunt” ends corresponds roughly to “active failures” and “latent conditions.”




C


Checklist - Algorithmic listing of actions to be performed in a given clinical setting (eg, Advanced Cardiac Life Support [ACLS] protocols for treating cardiac arrest) to ensure that, no matter how often performed by a given practitioner, no step will be forgotten. An analogy is often made to flight preparation in aviation, as pilots and air-traffic controllers follow pre-take-off checklists regardless of how many times they have carried out the tasks involved.

Clinical Decision Support System (CDSS) - Any system designed to improve clinical decision making related to diagnostic or therapeutic processes of care. CDSSs thus address activities ranging from the selection of drugs (eg, the optimal antibiotic choice given specific microbiologic data [1]) or diagnostic tests (2) to detailed support for optimal drug dosing (3,4) and support for resolving diagnostic dilemmas.(5)

Structured antibiotic order forms (6) represent a common example of paper-based CDSSs. Although such systems are still commonly encountered, many people equate CDSSs with computerized systems in which software algorithms generate patient-specific recommendations by matching characteristics, such as age, renal function, or allergy history, with rules in a computerized knowledge base.

The distinction between decision support and simple reminders can be unclear, but usually reminder systems are included as decision support if they involve patient-specific information. For instance, a generic reminder (eg, “Did you obtain an allergy history?”) would not be considered decision support, but a warning (eg, “This patient is allergic to codeine.”) that appears at the time of entering an order for codeine would be.
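
A minimal sketch of that distinction, with invented data structures: the generic reminder ignores patient data, while the decision-support rule matches the order against the patient's documented allergies in a small knowledge base.

```python
# Minimal sketch of the distinction drawn above, with invented data
# structures: a generic reminder fires regardless of the patient, whereas
# decision support matches patient-specific data (here, an allergy list)
# against a rule in a small knowledge base.

KNOWLEDGE_BASE = {
    # drug being ordered -> substances that should trigger an allergy warning
    "codeine": {"codeine", "opioids"},
    "ceftriaxone": {"penicillin", "cephalosporins"},
}

def generic_reminder(order):
    return "Reminder: did you obtain an allergy history?"

def decision_support(order, patient_allergies):
    conflicts = KNOWLEDGE_BASE.get(order, set()) & set(patient_allergies)
    if conflicts:
        return f"Warning: patient is allergic to {', '.join(sorted(conflicts))}."
    return None  # no patient-specific alert

print(generic_reminder("codeine"))
print(decision_support("codeine", ["codeine", "sulfa"]))
```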

1. Evans RS, Pestotnik SL, Classen DC, et al. A computer-assisted management program for antibiotics and other antiinfective agents. N Engl J Med. 1998;338:232-238.
[ go to PubMed ]

2. Harpole LH, Khorasani R, Fiskio J, Kuperman GJ, Bates DW. Automated evidence-based critiquing of orders for abdominal radiographs: impact on utilization and appropriateness. J Am Med Inform Assoc. 1997;4:511-521.
[ go to PubMed ]

3. Walton RT, Harvey E, Dovey S, Freemantle N. Computerised advice on drug dosage to improve prescribing practice. Cochrane Database Syst Rev. 2001:CD002894.
[ go to PubMed ]

4. Chertow GM, Lee J, Kuperman GJ, et al. Guided medication dosing for inpatients with renal insufficiency. JAMA. 2001;286:2839-2844.
[ go to PubMed ]

5. Friedman CP, Elstein AS, Wolf FM, et al. Enhancement of clinicians’ diagnostic reasoning by computer-based consultation: a multisite study of 2 systems. JAMA. 1999;282:1851-1856.
[ go to PubMed ]

6. Avorn J, Soumerai SB, Taylor W, Wessels MR, Janousek J, Weiner M. Reduction of incorrect antibiotic dosing through a structured educational order form. Arch Intern Med. 1988;148:1720-1724.
[ go to PubMed ]

Close Call - An event or situation that did not produce patient injury, but only because of chance. This good fortune might reflect robustness of the patient (eg, a patient with penicillin allergy receives penicillin, but has no reaction) or a fortuitous, timely intervention (eg, a nurse happens to realize that a physician wrote an order in the wrong chart). Such events have also been termed "near miss" incidents.

Competency - Having the necessary knowledge or technical skill to perform a given procedure within the bounds of success and failure rates deemed compatible with acceptable care.

Complexity Science (or Complexity Theory) - Provides an approach to understanding the behavior of systems that exhibit non-linear dynamics, or the ways in which some adaptive systems produce novel behavior not expected from the properties of their individual components. Such behaviors emerge as a result of interactions between agents at a local level in the complex system and between the system and its environment.(1,2)

At first, this may sound indistinguishable from the “systems thinking” commonly encountered in the patient safety literature. Some people probably use these terms loosely and occasionally interchangeably, but complexity theory differs importantly from systems thinking in its emphasis on the interaction between local systems and their environment (such as the larger system in which a given hospital or clinic operates). It is often tempting to ignore the larger environment as unchangeable and therefore outside the scope of quality improvement or patient safety activities. According to complexity theory, however, behavior within a hospital or clinic (eg, non-compliance with a national practice guideline) can often be understood only by identifying interactions between local attributes and environmental factors.

Another key feature of complexity theory is the emphasis on achieving deep understanding of a given problem prior to engaging in efforts to change practice. For instance, instead of simply identifying that providers’ behavior fails to comply with some target guideline and then implementing an “off the shelf” means of achieving behavior change (eg, a financial incentive), complexity theorists might identify what currently works well in a given practice and the attitudes or structures that provide the basis for what works well. This process may then reveal an important negative interaction between local values and perceptions about the national guideline. A more effective change strategy may then emerge in which the national guideline is adapted for the local setting. The alternative approach of attempting to force behavioral change may lead to no improvement or, worse, perverse collateral effects. This phenomenon is certainly familiar when the complex adaptive system in question is an ecosystem; complexity theorists advocate that we view health care systems through a similar lens and not rush into change strategies, however plausible they may seem. The two references below provide concrete examples to flesh out the ideas of complexity theory and distinguish it from other major theories of organizational behavior.(1,2)

1. Rhydderch M, Elwyn G, Marshall M, Grol R. Organisational change theory and the use of indicators in general practice. Qual Saf Health Care. 2004;13:213-217.
[ go to PubMed ]

2. Plsek PE, Wilson T. Complexity, leadership, and management in healthcare organisations. BMJ. 2001;323:746-749.
[ go to PubMed ]

Confirmation Bias - Refers to the tendency to focus on evidence that supports a working hypothesis, such as a diagnosis in clinical medicine, rather than to look for evidence that refutes it or provides greater support to an alternative diagnosis.(1,2) Suppose that a 65-year-old man with a past history of angina presents to the emergency department with acute onset of shortness of breath. The physician immediately considers the possibility of cardiac ischemia, so asks the patient if he has experienced any chest pain. The patient replies affirmatively. Because the physician perceives this answer as confirming his working diagnosis, he does not ask if the chest pain was pleuritic in nature, which would decrease the likelihood of an acute coronary syndrome and increase the likelihood of pulmonary embolism (a reasonable alternative diagnosis for acute shortness of breath accompanied by chest pain). The physician then orders an EKG and cardiac troponin. The EKG shows nonspecific ST changes and the troponin returns slightly elevated.

Of course, ordering an EKG and testing cardiac enzymes is appropriate in the work-up of acute shortness of breath, especially when it is accompanied by chest pain and in a patient with known angina. The problem is that these tests may be misleading, since positive results are consistent not only with acute coronary syndrome but also with pulmonary embolism. To avoid confirmation bias in this case, the physician might have obtained an arterial blood gas or a D-dimer level. Abnormal results for either of these tests would be relatively unlikely to occur in a patient with an acute coronary syndrome (unless complicated by pulmonary edema), but likely to occur with pulmonary embolism. These results could be followed up by more direct testing for pulmonary embolism (eg, with a helical CT scan of the chest), whereas normal results would allow the clinician to proceed with greater confidence down the road of investigating and managing cardiac ischemia.

This vignette was presented as if information were sought in sequence. In many cases, especially in acute care medicine, clinicians have the results of numerous tests in hand when they first meet a patient. The results of these tests often do not all suggest the same diagnosis. The appeal of accentuating confirmatory test results and ignoring nonconfirmatory ones is that it minimizes cognitive dissonance.(3)

A related cognitive trap that may accompany confirmation bias and compound the possibility of error is “anchoring bias”—the tendency to stick with one’s first impressions, even in the face of significant disconfirming evidence.

1. Croskerry P. The importance of cognitive errors in diagnosis and strategies to minimize them. Acad Med. 2003;78:775-780.
[ go to PubMed ]

2. Redelmeier DA. Improving patient care. The cognitive psychology of missed diagnoses. Ann Intern Med. 2005;142:115-120.
[ go to PubMed ]

3. Pines JM. Profiles in patient safety: confirmation bias in emergency medicine. Acad Emerg Med. 2006;13:90-94.
[ go to PubMed ]

Computerized Physician Order Entry or Computerized Provider Order Entry (CPOE) - Refers to a computer-based system of ordering medications and often other tests. Physicians (or other providers) directly enter orders into a computer system that can have varying levels of sophistication. Basic CPOE ensures standardized, legible, complete orders, and thus primarily reduces errors due to poor handwriting and ambiguous abbreviations. Almost all CPOE systems offer some additional capabilities, which fall under the general rubric of Clinical Decision Support System (CDSS). Typical CDSS features involve suggested default values for drug doses, routes of administration, or frequency. More sophisticated CDSSs can perform drug allergy checks (eg, the user orders ceftriaxone and a warning flashes that the patient has a documented penicillin allergy), drug-laboratory value checks (eg, initiating an order for gentamicin prompts the system to alert the user to the patient’s last creatinine), drug-drug interaction checks, and so on. At the highest level of sophistication, CDSS prevents not only errors of commission (eg, ordering a drug in excessive doses or in the setting of a serious allergy), but also of omission. (For example, an alert may appear such as, "You have ordered heparin; would you like to order a PTT in 6 hours?" Or, even more sophisticated: "The admitting diagnosis is hip fracture; would you like to order heparin DVT prophylaxis?")
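
A minimal sketch of the omission-type check described above; the diagnosis-to-order mapping is invented for illustration and is not drawn from any real CPOE product.

```python
# Sketch of the omission check described above: if the admitting diagnosis
# implies a standard preventive order that has not been entered, suggest it.
# The mapping below is invented for illustration, not a clinical rule base.

SUGGESTED_ORDERS = {
    "hip fracture": "heparin DVT prophylaxis",
    "acute MI": "aspirin",
}

def omission_alerts(admitting_diagnosis, current_orders):
    suggestion = SUGGESTED_ORDERS.get(admitting_diagnosis)
    if suggestion and suggestion not in current_orders:
        return (f"The admitting diagnosis is {admitting_diagnosis}; "
                f"would you like to order {suggestion}?")
    return None

print(omission_alerts("hip fracture", current_orders=["morphine PRN"]))
```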

Crew Resource Management - Crew resource management (CRM), also called crisis resource management in some contexts (eg, anesthesia), encompasses a range of approaches to training groups to function as teams, rather than as collections of individuals. Originally developed in aviation, CRM emphasizes the role of "human factors": the effects of fatigue, of expected or predictable perceptual errors (such as misreading monitors or mishearing instructions), and of different management styles and organizational cultures in high-stress, high-risk environments.

CRM training develops communication skills, fosters a more cohesive environment among team members, and creates an atmosphere in which junior personnel will feel free to speak up when they think something is amiss. Some CRM programs emphasize education on the settings in which errors occur and the aspects of team decision making conducive to "trapping" errors before they cause harm. Other programs may provide more hands-on training involving simulated crisis scenarios followed by debriefing sessions in which participants assess their own and others’ behavior.

Critical Incidents - A term made famous by a classic human factors study by Cooper (1) of “anesthetic mishaps,” though the term had first been coined in the 1950s. Cooper and colleagues brought the technique of critical incident analysis to a wide audience in health care but followed the definition of the originator of the technique.(2) They defined critical incidents as occurrences that are “significant or pivotal, in either a desirable or an undesirable way,” though Cooper and colleagues (and most others since) chose to focus on incidents that had potentially undesirable consequences. This definition by itself conveys little—what does “significant or pivotal” mean? It is best understood in the context of the type of investigation that follows, which is very much in the style of root cause analysis. Thus, “significant or pivotal” means that there was significant potential for harm (or actual harm), but also that the event has the potential to reveal important hazards in the organization. In many ways, it is the spirit of the expression in quality improvement circles, “every defect is a treasure.”(3) In other words, these incidents, whether close calls or disasters in which significant harm occurred, provide valuable opportunities to learn about individual and organizational factors that can be remedied to prevent similar incidents in the future.

1. Cooper JB, Newbower RS, Long CD, McPeek B. Preventable anesthesia mishaps: a study of human factors. Anesthesiology. 1978;49:399-406.
[ go to PubMed ]

2. Flanagan JC. The critical incident technique. Psychol Bull. 1954;51:327-358.
[ go to PubMed ]

3. James BC. Every defect a treasure: learning from adverse events in hospitals. Med J Aust. 1997;166:484-487.
[ go to PubMed ]



D


Decision Support - Refers to any system for advising or providing guidance about a particular clinical decision at the point of care. For example, a copy of an algorithm for antibiotic selection in patients with community acquired pneumonia would count as clinical decision support if made available at the point of care. Increasingly, decision support occurs via a computerized clinical information or order entry system. Computerized decision support includes any software employing a knowledge base designed to assist clinicians in decision making at the point of care.

Typically a decision support system responds to "triggers" or "flags"—specific diagnoses, laboratory results, medication choices, or complex combinations of such parameters—and provides information or recommendations directly relevant to a specific patient encounter. For instance, ordering an aminoglycoside for a patient with creatinine above a certain value might trigger a message suggesting a dose adjustment based on the patient’s decreased renal function.
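
A minimal sketch of that trigger; the drug list, the 1.5 mg/dL creatinine threshold, and the message wording are assumptions made for illustration.

```python
# Sketch of the trigger described above: ordering an aminoglycoside for a
# patient whose latest creatinine exceeds a threshold fires a dose-adjustment
# message. The drug list and the 1.5 mg/dL threshold are assumptions.

AMINOGLYCOSIDES = {"gentamicin", "tobramycin", "amikacin"}
CREATININE_THRESHOLD = 1.5  # mg/dL, illustrative cutoff only

def renal_dosing_trigger(drug, latest_creatinine):
    if drug in AMINOGLYCOSIDES and latest_creatinine > CREATININE_THRESHOLD:
        return (f"Latest creatinine is {latest_creatinine} mg/dL; "
                f"consider adjusting the {drug} dose for renal function.")
    return None

print(renal_dosing_trigger("gentamicin", latest_creatinine=2.1))
```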




E


Error - An act of commission (doing something wrong) or omission (failing to do the right thing) that leads to an undesirable outcome or significant potential for such an outcome. For instance, ordering a medication for a patient with a documented allergy to that medication would be an act of commission. Failing to prescribe a proven medication with major benefits for an eligible patient (eg, low-dose unfractionated heparin as venous thromboembolism prophylaxis for a patient after hip replacement surgery) would represent an error of omission.

Errors of omission are more difficult to recognize than errors of commission but likely represent a larger problem. In other words, there are likely many more instances in which the provision of additional diagnostic, therapeutic, or preventive modalities would have improved care than there are instances in which the care provided quite literally should not have been provided. In many ways, this point echoes the generally agreed-upon view in the health care quality literature that underuse far exceeds overuse, even though the latter historically received greater attention. (See definition for Underuse, Overuse, Misuse.)

In addition to commission vs. omission, three other dichotomies commonly appear in the literature on errors: active failures vs. latent conditions, errors at the "sharp end" vs. errors at the "blunt end," and slips vs. mistakes.




Error Chain - Error chain generally refers to the series of events that led to a disastrous outcome, typically uncovered by a root cause analysis. Sometimes the chain metaphor carries the added sense of inexorability, as many of the causes are tightly coupled, such that one problem begets the next. A more specific meaning of error chain, especially when used in the phrase break the error chain, relates to the common themes or categories of causes that emerge from root cause analyses. These categories go by different names in different settings, but they generally include (1) failure to follow standard operating procedures, (2) poor leadership, (3) breakdowns in communication or teamwork, (4) overlooking or ignoring individual fallibility, and (5) losing track of objectives. Used in this way, break the error chain is shorthand for an approach in which team members continually address these links as a crisis or routine situation unfolds. The checklists that are included in teamwork training programs have categories corresponding to these common links in the error chain (e.g., establish team leader, assign roles and responsibilities, monitor your teammates).




Evidence-based - Use of the phrase "evidence-based" in connection with an assertion about some aspect of medical care—a recommended treatment, the cause of some condition, or the best way to diagnose it—implies that the assertion reflects the results of medical research, as opposed to, for example, a personal opinion (plausible or widespread as that opinion might be). Given the volume of medical research and the not infrequent occurrence of conflicting results from different studies addressing the same question, the phrase "reflects the results of medical research" should be clarified as "reflects the preponderance of results from relevant studies of good methodological quality."

The concept of "evidence-based" treatments has particular relevance to patient safety, because many recommended methods for measuring and improving safety problems have been drawn from other high-risk industries, without any studies to confirm that these strategies work well in health care (or, in many cases, that they work well in the original industry).(1)

The lack of evidence supporting widely recommended (sometimes even mandated) patient safety practices contrasts sharply with the rest of clinical medicine. While individual practitioners may employ diagnostic tests or administer treatments of unproven value, professional organizations typically do not endorse such aspects of care until well-designed studies demonstrate that these diagnostic or treatment strategies confer net benefit to patients (i.e., until they become "evidence-based"). Certainly, diagnostic and therapeutic processes do not become standard of care or in any way mandated until they have undergone rigorous evaluation in well-designed studies.

In patient safety, by contrast, patient safety goals established at state and national levels (sometimes even mandated by regulatory agencies or by law) often reflect ideas that have undergone little or no empiric evaluation. Just as in clinical medicine, promising safety strategies sometimes can turn out to confer no benefit or even create new problems—hence the need for rigorous evaluations of candidate patient safety strategies just as in other areas of medicine.(2)

That said, just how high to set the bar for the evidence required to justify actively disseminating patient safety and quality improvement strategies is a subject that has received considerable attention in recent years. Some leading thinkers in patient safety argue that an evidence bar comparable to that used in more traditional clinical medicine would be too high, given the difficulty of studying complex social systems such as hospitals and clinics, and the high costs of studying interventions such as rapid response teams or computerized order entry.

A discussion of this debate can be found in the references below.(2-7)

A classic article (8) defining and describing "evidence-based medicine" in general can also be found below, as can a more recent overview, which discusses some of the recent criticism of evidence-based medicine as a paradigm for clinical medicine.(9)



1. Shojania KG, Duncan BW, McDonald KM, Wachter RM. Safe but sound: patient safety meets evidence-based medicine. JAMA. 2002;288:508-513.
[go to PubMed]



2. Auerbach AD, Landefeld CS, Shojania KG. The tension between needing to improve care and knowing how to do it. N Engl J Med. 2007;357:608-613.
[go to PubMed]



3. Berwick DM. The science of improvement. JAMA. 2008;299:1182-1184.
[go to PubMed]



4. Berwick DM. Broadening the view of evidence-based medicine. Qual Saf Health Care. 2005;14:315-316.
[go to PubMed]



5. Landefeld CS, Shojania KG, Auerbach AD. Should we use large scale healthcare interventions without clear evidence that benefits outweigh costs and harms? No. BMJ. 2008;336:1277.
[go to PubMed]



6. Wachter RM, Pronovost PJ. The 100,000 Lives Campaign: A scientific and policy review. Jt Comm J Qual Patient Saf. 2006;32:621-627.
[go to PubMed]



7. Pronovost P, Wachter R. Proposed standards for quality improvement research and publication: one step forward and two steps back. Qual Saf Health Care. 2006;15:152-153.
[go to PubMed]



8. Sackett DL, Rosenberg WM, Gray JA, Haynes RB, Richardson WS. Evidence based medicine: what it is and what it isn't. BMJ. 1996;312:71-72.
[go to PubMed]



9. Shojania KG. Evidence-Based Medicine (EBM). Google Knol (unit of knowledge). Available at:
http://knol.google.com/k/kaveh-shojania/evidence-based-medicine-ebm/WvELmat9/oST8zA




F


Face Validity - The extent to which a technical concept, instrument, or study result is plausible, usually because its findings are consistent with prior assumptions and expectations.

Failure Mode - The specific way in which a process or piece of equipment can fail. Failure modes are identified and prioritized prospectively through failure mode and effect analysis (FMEA), described in the next entry.

Failure Mode and Effect Analysis (FMEA) - Error analysis may involve retrospective investigations (as in Root Cause Analysis) or prospective attempts to predict "error modes." Different frameworks exist for predicting possible errors. One commonly used approach is failure mode and effect analysis (FMEA), in which the likelihood of a particular process failure is combined with an estimate of the relative impact of that error to produce a "criticality index." By combining the probability of failure with the consequences of failure, this index allows for the prioritization of specific processes as quality improvement targets. For instance, an FMEA analysis of the medication dispensing process on a general hospital ward might break down all steps from receipt of orders in the central pharmacy to filling automated dispensing machines by pharmacy technicians. Each step in this process would be assigned a probability of failure and an impact score, so that all steps could be ranked according to the product of these two numbers. Steps ranked at the top (ie, those with the highest "criticality indices") would be prioritized for error proofing.
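
A minimal sketch of the ranking step described above: each process step's probability-of-failure score is multiplied by its impact score to give a criticality index, and steps are sorted so the highest indices surface first. The steps and scores are invented.

```python
# Sketch of the FMEA ranking described above: each step gets a probability-of-
# failure score and an impact score; their product is the criticality index,
# and steps are ranked so the highest indices are addressed first.
# The steps and scores below are invented for illustration.

dispensing_steps = [
    # (step, probability-of-failure score, impact score), both on a 1-10 scale
    ("order received in central pharmacy", 2, 4),
    ("pharmacist verifies order", 3, 8),
    ("technician fills automated dispensing machine", 5, 9),
    ("nurse retrieves medication from machine", 4, 7),
]

ranked = sorted(
    ((step, probability * impact) for step, probability, impact in dispensing_steps),
    key=lambda item: item[1],
    reverse=True,
)

for step, criticality_index in ranked:
    print(f"{criticality_index:3d}  {step}")
```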

Failure to Rescue - "Failure to rescue" is shorthand for failure to rescue (ie, prevent a clinically important deterioration, such as death or permanent disability) from a complication of an underlying illness (eg, cardiac arrest in a patient with acute myocardial infarction) or a complication of medical care (eg, major hemorrhage after thrombolysis for acute myocardial infarction). Failure to rescue thus provides a measure of the degree to which providers responded to adverse occurrences (eg, hospital-acquired infections, cardiac arrest or shock) that developed on their watch. It may reflect the quality of monitoring, the effectiveness of actions taken once early complications are recognized, or both.

The technical motivation for using failure to rescue to evaluate the quality of care stems from the concern that some institutions might document adverse occurrences more assiduously than other institutions.(1,2) Comparing hospitals on their rates of in-hospital complications alone may therefore simply reward those with poor documentation. However, if the medical record indicates that a complication has occurred, the response to that complication should provide an indicator of the quality of care that is less susceptible to charting bias.

Initial studies of mortality and complication rates after surgical procedures indicated that lower rates of failure to rescue correlated with other plausible quality measures.(1,2) Rates of failure to rescue have since served as outcome measures in prominent studies of the impacts of nurse-staffing ratios (3,4) and nurse educational levels (5) on the quality of care. Examples of the specific "rescue-able" adverse occurrences in such studies include pneumonia, shock, cardiac arrest, upper gastrointestinal bleeding, sepsis, and deep venous thrombosis.(4) Death after any of these in-hospital occurrences would count as failure to rescue, on the view that early identification by providers can influence the risk of death.

The AHRQ technical report that developed the AHRQ Patient Safety Indicators (6) reviews the evidence supporting failure to rescue as a measure of the quality and safety of hospital care. Although failure to rescue made the final set of approved indicators, the expert panels that reviewed each candidate indicator identified some unresolved concerns about its use. For instance, patients with advanced illnesses may be particularly difficult to rescue from complications such as sepsis and cardiac arrest. Moreover, patients with advanced illness may not wish "rescue" from such complications. The initial studies that examined failure to rescue focused on surgical care, where these issues may not be as problematic. Nonetheless, the concept of failure to rescue is an important one and finds increasing application in studies of health care quality and safety.
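
A minimal sketch of how such a rate might be computed, assuming the common operationalization of failure to rescue as deaths among patients who developed a qualifying complication divided by all patients who developed one; the complication list and patient records are illustrative only.

```python
# Minimal sketch of the measure described above: among patients who developed
# one of the qualifying in-hospital complications, what fraction died?
# The complication list and patient records are illustrative only.

QUALIFYING_COMPLICATIONS = {"pneumonia", "shock", "cardiac arrest",
                            "gi bleeding", "sepsis", "dvt"}

def failure_to_rescue_rate(patients):
    """patients: iterable of dicts with 'complications' (set) and 'died' (bool)."""
    with_complication = [p for p in patients
                         if p["complications"] & QUALIFYING_COMPLICATIONS]
    if not with_complication:
        return None  # measure undefined when no one had a complication
    deaths = sum(p["died"] for p in with_complication)
    return deaths / len(with_complication)

cohort = [
    {"complications": {"sepsis"}, "died": True},
    {"complications": {"dvt"}, "died": False},
    {"complications": set(), "died": False},
    {"complications": {"cardiac arrest", "shock"}, "died": True},
]
print(failure_to_rescue_rate(cohort))  # 2 deaths among 3 with complications
```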

1. Silber JH, Williams SV, Krakauer H, Schwartz JS. Hospital and patient characteristics associated with death after surgery. A study of adverse occurrence and failure to rescue. Med Care. 1992;30:615-629.
[ go to PubMed ]


2. Silber JH, Rosenbaum PR, Schwartz JS, Ross RN, Williams SV. Evaluation of the complication rate as a measure of quality of care in coronary artery bypass graft surgery. JAMA. 1995;274:317-323.
[ go to PubMed ]


3. Aiken LH, Clarke SP, Sloane DM, Sochalski J, Silber JH. Hospital nurse staffing and patient mortality, nurse burnout, and job dissatisfaction. JAMA. 2002;288:1987-1993.
[ go to PubMed ]


4. Needleman J, Buerhaus P, Mattke S, Stewart M, Zelevinsky K. Nurse-staffing levels and the quality of care in hospitals. N Engl J Med. 2002;346:1715-1722.
[ go to PubMed ]


5. Aiken LH, Clarke SP, Cheung RB, Sloane DM, Silber JH. Educational levels of hospital nurses and surgical patient mortality. JAMA. 2003;290:1617-1623.
[ go to PubMed ]


6. McDonald KM, Romano PS, Geppert J, et al. Measures of Patient Safety Based on Hospital Administrative Data—The Patient Safety Indicators. Rockville, MD: Agency for Healthcare Research and Quality; 2002. AHRQ Publication No. 02-0038. Available at: http://www.ahrq.gov/clinic/evrptfiles.htm#psi.


The Five Rights - The "Five Rights"—administering the Right Medication, in the Right Dose, at the Right Time, by the Right Route, to the Right Patient—are the cornerstone of traditional nursing teaching about safe medication practice.

Although no one would disagree with these goals, framing them as an effective and comprehensive basis for safe practice can be misleading because they miss crucial aspects of modern thinking about patient safety.(1) In fact, regarding them as the standard for nursing practice may have the perverse effect of perpetuating the traditional focus on individual performance rather than on system improvement. They may also encourage administrators to penalize competent frontline practitioners for expected human errors that are beyond their control—situations in which harm to patients truly reflects system failings.

For instance, while the Five Rights represent goals of safe medication administration, they contain no procedural detail. What procedures for identifying the Right Patient or Right Medication will avoid perceptual errors with regard to similar looking drug names or the inevitable human errors that result when practitioners "see" what they expect to see? Consider this common scenario: It's Mr. Jones' room and a quick glance at his wristband confirms that. Even when the quick glance is followed up with verbal confirmation, without concrete procedures that in fact do identify the Right Patient, the nurse might simply ask, "You're Mr. Jones, right?" This leading question could inappropriately elicit an affirmative answer from the wrong patient: a Mr. James who is drowsy or in pain, or a Mr. Anybody who is delirious, hard of hearing, or does not speak English well. A better question would be "What is your name?"

The Five Rights also ignore human factor and systems design issues (such as workload, ambient distractions, poor lighting, problems with wristbands, ineffective double check protocols, etc.) that can threaten or undermine even the most conscientious efforts to comply with the Five Rights. In the end, the Five Rights remain an important goal for safe medication practice, but one that may give the illusion of safety if not supported by strong policies and procedures, a system organized around modern principles of patient safety, and a robust safety culture.

1. The "five rights." ISMP Medication Safety Alert! Acute Care Edition. April 7, 1999. Available at: http://www.ismp.org/Newsletters/acutecare/articles/19990407.asp.

Forcing Function - An aspect of a design that prevents a target action from being performed or allows its performance only if another specific action is performed first. For example, automobiles are now designed so that the driver cannot shift into reverse without first putting her foot on the brake pedal. Forcing functions need not involve device design. For instance, one of the first forcing functions identified in health care is the removal of concentrated potassium from general hospital wards. This action is intended to prevent the inadvertent preparation of intravenous solutions with concentrated potassium, an error that has produced small but consistent numbers of deaths for many years.
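
The same idea can be expressed in software. In this hypothetical sketch of the automobile example above, the gear selector cannot be shifted into reverse without a token that can only be obtained by pressing the brake; the required argument, not the programmer's vigilance, is what blocks the unsafe action.

```python
# A software analogue of the forcing function described above: the gear
# selector cannot be shifted into reverse unless it is handed proof that the
# brake is pressed. The classes are hypothetical illustrations of the idea.

class BrakePedal:
    def __init__(self):
        self.pressed = False
    def press(self):
        self.pressed = True
        return BrakeEngaged(self)

class BrakeEngaged:
    """Token that can only be obtained by actually pressing the brake."""
    def __init__(self, pedal):
        if not pedal.pressed:
            raise ValueError("brake pedal is not pressed")

class GearSelector:
    def shift_to_reverse(self, proof: BrakeEngaged):
        # Requiring the token is the forcing function: there is no way to
        # call this method without first obtaining a BrakeEngaged instance.
        return "in reverse"

pedal, gears = BrakePedal(), GearSelector()
print(gears.shift_to_reverse(pedal.press()))
```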




H

Health Literacy - Individuals’ ability to find, process, and comprehend the basic health information necessary to act on medical instructions and make decisions about their health.(1)

1. Ad Hoc Committee on Health Literacy for the Council on Scientific Affairs, American Medical Association. Health literacy: report of the Council on Scientific Affairs. JAMA. 1999;281:552-557.
[ go to PubMed ]

Heuristic - Loosely defined or informal rule often arrived at through experience or trial and error (eg, gastrointestinal complaints that wake patients up at night are unlikely to be functional). Heuristics provide cognitive shortcuts in the face of complex situations, and thus serve an important purpose. Unfortunately, they can also turn out to be wrong.


High Reliability Organizations (HROs) - High reliability organizations are organizations or systems that operate in hazardous conditions but have fewer than their fair share of adverse events.(1,2) Commonly discussed examples include air traffic control systems, nuclear power plants, and naval aircraft carriers.(3,4) It is worth noting that, in the patient safety literature, HROs are considered to operate with nearly failure-free performance records, not simply better than average ones. This shift in meaning is somewhat understandable given that the “failure rates” in these other industries are so much lower than rates of errors and adverse events in health care. This comparison glosses over the difference in significance of a “failure” in the nuclear power industry compared with one in health care. The point remains, however, that some organizations achieve consistently safe and effective performance records despite unpredictable operating environments or intrinsically hazardous endeavors. Detailed case studies of specific HROs have identified some common features, which have been offered as models for other organizations to achieve substantial improvements in their safety records. These features include:

Preoccupation with failure—the acknowledgment of the high-risk, error-prone nature of an organization’s activities and the determination to achieve consistently safe operations.
Commitment to resilience—the development of capacities to detect unexpected threats and contain them before they cause harm, or bounce back when they do.
Sensitivity to operations—an attentiveness to the issues facing workers at the frontline. This feature comes into play when conducting analyses of specific events (eg, frontline workers play a crucial role in root cause analyses by bringing up unrecognized latent threats in current operating procedures), but also in connection with organizational decision making, which is somewhat decentralized. Management units at the frontline are given some autonomy in identifying and responding to threats, rather than adopting a rigid top-down approach.
A culture of safety, in which individuals feel comfortable drawing attention to potential hazards or actual failures without fear of censure from management.


1. Weick KE, Sutcliffe KM. Managing the Unexpected: Assuring High Performance in an Age of Complexity. San Francisco, CA: Jossey-Bass; 2001.

2. Reason J. Human error: models and management. BMJ. 2000;320:768-770.
[ go to PubMed ]

3. LaPorte TR. The United States air traffic control system: increasing reliability in the midst of rapid growth. In: Mayntz R, Hughes TP, eds. The Development of Large Technical Systems. Boulder, CO: Westview Press; 1988.

4. Roberts KH. Managing high reliability organizations. Calif Manage Rev. 1990;32:101-113.


Hindsight Bias - In a very general sense, hindsight bias relates to the common expression “hindsight is 20/20.” This expression captures the tendency for people to regard past events as expected or obvious, even when, in real time, the events perplexed those involved. More formally, one might say that after learning the outcome of a series of events—whether the outcome of the World Series or the steps leading to a war—people tend to exaggerate the extent to which they had foreseen the likelihood of its occurrence.

In the context of safety analysis, hindsight bias refers to the tendency to judge the events leading up to an accident as errors because the bad outcome is known. The more severe the outcome, the more likely that decisions leading up to this outcome will be judged as errors. Judging the antecedent decisions as errors implies that the outcome was preventable. In legal circles, one might use the phrase “but for,” as in “but for these errors in judgment, this terrible outcome would not have occurred.” Such judgments return us to the concept of “hindsight is 20/20.” Those reviewing events after the fact see the outcome as more foreseeable and therefore more preventable than they would have appreciated in real time.

Psychologist Baruch Fischhoff drew attention to the importance of this problem in a classic paper published in 1975 (1), since which time multiple examples of the impacts of this bias have been explored in the psychology literature.

The impact of hindsight on judgments by peer reviewers regarding the quality of clinical care in medicine has also been demonstrated.(2) One of the case-based discussions in “Quality Grand Rounds,” published in Annals of Internal Medicine, provides a detailed exploration of the extent to which difficult decisions are cast as errors after an undesirable outcome occurs.(3)

1. Fischhoff B. Hindsight ≠ foresight: the effect of outcome knowledge on judgment under uncertainty [reprint of Fischhoff B. Hindsight does not equal foresight: the effect of outcome knowledge on judgment under uncertainty. J Exp Psychol Hum Percept Perform. 1975;1:288-299.]. Qual Saf Health Care. 2003;12:304-311.
[ go to PubMed ]

2. Caplan RA, Posner KL, Cheney FW. Effect of outcome on physician judgments of appropriateness of care. JAMA. 1991;265:1957-1960.
[ go to PubMed ]

3. Hofer TP, Hayward RA. Are bad outcomes from questionable clinical decisions preventable medical errors? A case of cascade iatrogenesis. Ann Intern Med. 2002; 137:327-333.
[ go to PubMed ]


The Health Insurance Portability and Accountability Act (HIPAA) - The Health Insurance Portability and Accountability Act of 1996 (HIPAA) contains new federal regulations intended to increase privacy and security of patient information during electronic transmission or communication of "protected health information" (PHI) among providers or between providers and payers or other entities.

"Protected health information" (PHI) includes all medical records and other individually identifiable health information. "Individually identifiable information" includes data that explicitly linked to a patient as well as health information with data items with a reasonable potential for allowing individual identification.

HIPAA also requires providers to offer patients certain rights with respect to their information, including the right to access and copy their records and the right to request amendments to the information contained in their records.

Administrative protections specified by HIPAA to promote the above regulations and rights include requirements for a Privacy Officer and staff training regarding the protection of patients’ information.


Human Factors (or Human Factors Engineering) - Refers to the study of human abilities and characteristics as they affect the design and smooth operation of equipment, systems, and jobs. The field concerns itself with considerations of the strengths and weaknesses of human physical and mental abilities and how these affect system design. Human factors analysis does not necessarily involve designing or redesigning equipment. For instance, the now generally accepted recommendation that hospitals standardize equipment such as ventilators, programmable IV pumps, and defibrillators (ie, that each hospital pick a single type, so that different floors do not have different defibrillators) is an example of a very basic application of a heuristic from human factors: that equipment be standardized within a system wherever possible. In general, human factors engineering examines a particular activity in terms of its component tasks and then considers each task in terms of physical demands, skill demands, mental workload, and other such factors, along with their interactions with aspects of the work environment (eg, adequate lighting, limited noise, or other distractions), device design, and team dynamics.




I


Iatrogenic - An adverse effect of medical care, rather than of the underlying disease (literally "brought forth by healer," from Greek iatros, for healer, and gennan, to bring forth); equivalent to adverse event.

Incident Reporting - Refers to the identification of occurrences that could have led, or did lead, to an undesirable outcome. Reports usually come from personnel directly involved in the incident or events leading up to it (eg, the nurse, pharmacist, or physician caring for a patient when a medication error occurred) rather than, say, floor managers.

Incident reporting represents a species of the more general activity of surveillance for errors, adverse events, or other quality problems. From the perspective of those collecting the data, incident reporting counts as a passive form of surveillance. It relies on those involved in target incidents choosing to provide the desired information. More active methods of surveillance range from activities such as going to gatherings of frontline workers and asking if any recent incidents have occurred (1) to retrospective medical record review (2) to direct observation.(3) Compared with medical record review and direct observation, incident reporting captures only a fraction of incidents.(3,4)

Despite their low yield, spontaneous incident reporting systems have some advantages, including their relatively low cost and the involvement of frontline personnel in the process of identifying important problems for the organization. The involvement of frontline workers, however, also raises the issue of confidentiality. Because incident reports tend to come from personnel involved in the incidents, these personnel may have legitimate concerns about the effects reporting will have on their performance records. To encourage reporting, some organizations make incident reporting anonymous. In other words, personnel can report an incident without identifying themselves.

Absent anonymity, some incident reporting systems assure confidentiality regarding the identity of individuals who submit reports. The Aviation Safety Reporting System (http://asrs.arc.nasa.gov) represents a confidential reporting system. As long as the persons reporting incidents have not committed any breaches of professional conduct, their identities remain in strict confidence and play no role in the investigations.


1. Weingart SN, Ship AN, Aronson MD. Confidential clinician-reported surveillance of adverse events among medical inpatients. J Gen Intern Med. 2000;15:470-477.
[ go to PubMed ]

2. Bates DW, Cullen DJ, Laird N, et al. Incidence of adverse drug events and potential adverse drug events. Implications for prevention. ADE Prevention Study Group. JAMA. 1995;274:29-34.
[ go to PubMed ]

3. Flynn EA, Barker KN, Pepper GA, Bates DW, Mikeal RL. Comparison of methods for detecting medication errors in 36 hospitals and skilled-nursing facilities. Am J Health Syst Pharm. 2002;59:436-446.
[ go to PubMed ]

4. Cullen DJ, Bates DW, Small SD, Cooper JB, Nemeskal AR, Leape LL. The incident reporting system does not detect adverse drug events: a problem for quality improvement. Jt Comm J Qual Improv. 1995;21:541-548.
[ go to PubMed ]

Informed Consent - Refers to the process whereby a physician informs a patient about the risks and benefits of a proposed therapy or test. Informed consent aims to provide sufficient information about the proposed treatment and any reasonable alternatives so that the patient can exercise autonomy in deciding whether to proceed.

Legislation governing the requirements of, and conditions under which, consent must be obtained varies by jurisdiction. Most general guidelines require patients to be informed of the nature of their condition, the proposed procedure, the purpose of the procedure, the risks and benefits of the proposed treatments, the probability of the anticipated risks and benefits, alternatives to the treatment and their associated risks and benefits, and the risks and benefits of not receiving the treatment or procedure.

Although the goals of informed consent are irrefutable, consent is often obtained in a haphazard, pro forma fashion, with patients having little true understanding of procedures to which they have consented. Evidence suggests that asking patients to restate the essence of the informed consent improves the quality of these discussions and makes it more likely that the consent is truly "informed."

[ Procedures For Obtaining Informed Consent ]


J


Just Culture - The phrase “just culture” was popularized in the patient safety lexicon by a report (1) that outlined principles for achieving a culture in which frontline personnel feel comfortable disclosing errors—including their own—while maintaining professional accountability. The examples in the report relate to transfusion safety, but the principles clearly generalize across domains within health care organizations.

Traditionally, health care’s culture has held individuals accountable for all errors or mishaps that befall patients under their care. By contrast, a just culture recognizes that individual practitioners should not be held accountable for system failings over which they have no control. A just culture also recognizes that many individual or “active” errors represent predictable interactions between human operators and the systems in which they work. However, in contrast to a culture that touts “no blame” as its governing principle, a just culture does not tolerate conscious disregard of clear risks to patients or gross misconduct (eg, falsifying a record, performing professional duties while intoxicated).

In summary, a just culture recognizes that competent professionals make mistakes and acknowledges that even competent professionals will develop unhealthy norms (shortcuts, “routine rule violations”), but has zero tolerance for reckless behavior.

1. Marx D. Patient Safety and the “Just Culture”: A Primer for Health Care Executives. New York, NY: Columbia University; 2001. Available at: http://www.mers-tm.org/support/Marx_Primer.pdf


L

Latent Error (or Latent Condition) - The terms "active" and "latent" as applied to errors were coined by James Reason.(1,2) Latent errors (or latent conditions) refer to less apparent failures of organization or design that contributed to the occurrence of errors or allowed them to cause harm to patients. For instance, whereas the active failure in a particular adverse event may have been a mistake in programming an intravenous pump, a latent error might be that the institution uses multiple different types of infusion pumps, making programming errors more likely. Thus, latent errors are quite literally "accidents waiting to happen."

Latent errors are sometimes referred to as errors at the "blunt end," referring to the many layers of the health care system that affect the person "holding" the scalpel. Active failures, in contrast, are sometimes referred to as errors at the “sharp end,” or the personnel and parts of the health care system in direct contact with patients.

1. Reason JT. Human Error. New York, NY: Cambridge University Press; 1990.
[ go to PSNet listing ]

2. Reason J. Human error: models and management. BMJ. 2000;320:768-770.
[ go to PubMed ]

Learning Curve - The acquisition of any new skill is associated with the potential for lower-than-expected success rates or higher-than-expected complication rates. This phenomenon is often known as a "learning curve." In some cases, this learning curve can be quantified in terms of the number of procedures that must be performed before an operator can replicate the outcomes of more experienced operators or centers.

While learning curves are almost inevitable when new procedures emerge or new providers are in training, minimizing their impact is a patient safety imperative. One option is to perform initial operations or procedures under the supervision of more experienced operators. Surgical and procedural simulators may play an increasingly important role in decreasing the impact of learning curves on patients, by allowing acquisition of relevant skills in laboratory settings.




M


Magnet Hospital Status - Refers to a designation by the Magnet Hospital Recognition Program administered by the American Nurses Credentialing Center. The program has its genesis in a 1983 study conducted by the American Academy of Nursing that sought to identify hospitals that retained nurses for longer than average periods of time. The study identified institutional characteristics correlated with high retention rates, an important finding in light of a major nursing shortage at the time.(1) These findings provided the basis for the concept of “magnet hospital” and led 10 years later to the formal Magnet Program.

Without taking anything away from the particular hospitals that have achieved Magnet status, the program as a whole has its critics. In fact, at least one state nurses’ association (Massachusetts) has taken an official position critiquing the program, charging that its perpetuation reflects the financial interests of its sponsoring organization and the participating hospitals more than the goals of improving health care quality or improving working conditions for nurses.(2)

Regardless of the particulars of the Magnet Recognition Program and the lack of persuasive evidence linking magnet status to quality, to many the term “magnet hospital” connotes a hospital that delivers superior patient care and, partly on this basis, attracts and retains high-quality nurses.

1. Magnet hospitals. Attraction and retention of professional nurses. Task Force on Nursing Practice in Hospitals. American Academy of Nursing. ANA Publ. 1983;(G-160):i-xiv, 1-135.
[ go to PubMed ]

2. Position Statement On the "Magnet Recognition Program for Nursing Services in Hospitals" and Other Consultant-Driven Quality Improvement Projects that Claim to Improve Care [Massachusetts Nurses Association Web site]. November 2004.
Available at: http://www.massnurses.org/pubs/positions/magnet.htm.


Medical Emergency Team - The concept of medical emergency teams (also known as rapid response teams) is that of a cardiac arrest team with more liberal calling criteria. Instead of just frank respiratory or cardiac arrest, medical emergency teams respond to a wide range of worrisome, acute changes in patients’ clinical status, such as low blood pressure, difficulty breathing, or altered mental status. In addition to less stringent calling criteria, the concept of medical emergency teams de-emphasizes the traditional hierarchy in patient care in that anyone can initiate the call. Nurses, junior medical staff, or others involved in the care of patients can call for the assistance of the medical emergency team whenever they are worried about a patient’s condition, without having to wait for more senior personnel to assess the patient and approve the decision to call for help.

Medication Reconciliation - Patients admitted to a hospital commonly receive new medications or have changes made to their existing medications. As a result, the new medication regimen prescribed at the time of discharge may inadvertently omit needed medications that patients have been receiving for some time.(1) Alternatively, new medications may unintentionally duplicate existing medications. For example, a physician might prescribe a calcium channel blocker to a patient who has hypertension but is already taking another medication from the same drug class.

Such unintended inconsistencies in medication regimens may occur at any point of transition in care (e.g., transfer from an intensive care unit to a general ward), not just hospital admission or discharge. Medication reconciliation refers to the process of avoiding such inadvertent inconsistencies across transitions in care by reviewing the patient’s complete medication regimen at the time of admission/transfer/discharge and comparing it with the regimen being considered for the new setting of care.
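
To make the comparison step concrete, the sketch below is an illustrative example only, not part of the glossary definition; the medication lists and the reconcile helper are hypothetical simplifications of a process that in practice requires clinical judgment about dose, route, and intent. It shows one way a pre-admission medication list might be compared with a proposed inpatient regimen to flag candidate omissions and additions for pharmacist or physician review.

```python
# Hypothetical sketch: flag candidate discrepancies between a patient's
# pre-admission medication list and the regimen proposed for the new care
# setting. Real reconciliation also weighs dose, route, indication, and intent.

def reconcile(home_meds, new_orders):
    """Return candidate omissions and additions for pharmacist/physician review."""
    home = {m.lower() for m in home_meds}
    new = {m.lower() for m in new_orders}
    return {
        "possible_omissions": sorted(home - new),  # home meds not re-ordered
        "new_or_changed": sorted(new - home),      # orders with no home counterpart
    }

if __name__ == "__main__":
    home_meds = ["lisinopril", "metformin", "levothyroxine"]
    new_orders = ["amlodipine", "metformin", "enoxaparin"]
    print(reconcile(home_meds, new_orders))
    # {'possible_omissions': ['levothyroxine', 'lisinopril'],
    #  'new_or_changed': ['amlodipine', 'enoxaparin']}
```

Each flagged item would then be reviewed against the clinical record to decide whether the discrepancy was intentional (eg, a deliberate substitution) or an inadvertent omission or duplication.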

In July 2004, the Joint Commission on Accreditation of Healthcare Organizations (JCAHO) announced 2005 National Patient Safety Goal #8 to "accurately and completely reconcile medications across the continuum of care."(2) The JCAHO does not stipulate the details of the reconciliation process or who should perform it. While most hospitals cannot afford to hire pharmacists to take on this role, it is worth noting that the more rigorous positive studies of medication reconciliation have tended to involve pharmacists performing the medication history and reconciliation process.(3-5)

1. Tam VC, Knowles SR, Cornish PL, Fine N, Marchesano R, Etchells EE. Frequency, type and clinical importance of medication history errors at admission to hospital: a systematic review. CMAJ. 2005;173:510-515.
2. Using medication reconciliation to prevent errors. Sentinel Event Alert. Issue 35 - January 25, 2006. Available at: http://www.jointcommission.org/SentinelEvents/SentinelEventAlert/sea_35.htm. Accessed May 15, 2006.
3. Schnipper JL, Kirwin JL, Cotugno MC, et al. Role of pharmacist counseling in preventing adverse drug events after hospitalization. Arch Intern Med. 2006;166:565-571.
4. Coleman EA, Smith JD, Raha D, Min SJ. Posthospital medication discrepancies: prevalence and contributing factors. Arch Intern Med. 2005;165:1842-1847.
5. Cornish PL, Knowles SR, Marchesano R, et al. Unintended medication discrepancies at the time of hospital admission. Arch Intern Med. 2005;165:424-429.

Mental Models - Mental models are psychological representations of real, hypothetical, or imaginary situations. Scottish psychologist Kenneth Craik (1943) first proposed mental models as the basis for anticipating and explaining events (ie, for reasoning). Though easiest to conceptualize in terms of mental pictures of objects (eg, a DNA double helix or the inside of an internal combustion engine), mental models can also include "scripts," processes, and other properties beyond images. Mental models create differing expectations, which suggest different courses of action. For instance, when you walk into a fast-food restaurant, you invoke a different mental model than when you walk into a fancy restaurant. Based on this model, you automatically go to place your order at the counter, rather than sitting at a booth and expecting a waiter to take your order.


Metacognition - Metacognition refers to thinking about thinking—that is, reflecting on the thought processes that led to a particular diagnosis or decision to consider whether biases or cognitive short cuts may have had a detrimental effect. Numerous cognitive biases affect human reasoning.(1-3)

In some ways, metacognition amounts to playing devil’s advocate with oneself when it comes to working diagnoses and important therapeutic decisions. However, the devil is often in the details—one must become familiar with the variety of specific biases that commonly affect medical reasoning. For instance, when discharging a patient with atypical chest pain from the emergency department, you might step back and consider how much the discharge diagnosis of "musculoskeletal pain" reflects the sign out as a "soft rule out" given by a colleague on the night shift. Or, you might mull over the degree to which your reaction to and assessment of a particular patient stemmed from his having been labeled a "frequent flyer." Another cognitive bias is that clinicians tend to assign more importance to pieces of information that required personal effort to obtain (4) (eg, the additional symptom elicited by your own history taking compared with one given by a colleague, or the lab result obtained through numerous phone calls).

While metacognition refers to the general process of reflecting on the possibility of cognitive biases affecting clinical diagnoses and decisions, "cognitive forcing functions" refer to specific approaches to looking for such biases.(1,5) Just as a computer programmer might routinely check for errors during the "debugging" process, clinicians should likewise consider routinely employing cognitive strategies to check for "bugs." These should take into account the different types of biases known to affect cognition (reviewed in the articles below [1-3,5]), details of the clinical context, and even personal details (eg, recognition that you like to follow hunches or trust your initial gestalt).

1. Croskerry P. The importance of cognitive errors in diagnosis and strategies to minimize them. Acad Med. 2003;78:775-780. [ go to PubMed ]

2. Croskerry P. Achieving quality in clinical decision making: cognitive strategies and detection of bias. Acad Emerg Med. 2002;9:1184-1204. [ go to PubMed ]

3. Graber M, Gordon R, Franklin N. Reducing diagnostic errors in medicine: what’s the goal? Acad Med. 2002;77:981-992. [ go to PubMed ]

4. Redelmeier DA, Shafir E, Aujla PS. The beguiling pursuit of more information. Med Decis Making. 2001;21:376-381. [ go to PubMed ]

5. Croskerry P. Cognitive forcing strategies in clinical decisionmaking. Ann Emerg Med. 2003;41:110-120. [ go to PubMed ]


Mistakes - In some contexts, errors are dichotomized as “slips” or “mistakes,” based on the cognitive psychology of task-oriented behavior. Attentional behavior is characterized by conscious thought, analysis, and planning, as occurs in active problem solving. Schematic behavior refers to the many activities we perform reflexively or as if acting on “autopilot.” Complementary to these two behavior types are two categories of error: slips and mistakes.

Mistakes reflect failures during attentional behaviors, or incorrect choices. Rather than lapses in concentration (as with slips), mistakes typically involve insufficient knowledge, failure to correctly interpret available information, or application of the wrong cognitive “heuristic” or rule. Thus, choosing the wrong diagnostic test or ordering a suboptimal medication for a given condition represents a mistake. A slip, on the other hand, would be forgetting to check the chart to make sure you ordered the test or medication for the right patient.

Operationally, one can distinguish slips from mistakes by asking whether the error involved problem solving. Mistakes are errors that arise during problem solving; Reason further distinguishes between rule-based and knowledge-based mistakes. Using this terminology, slips are characterized as skill-based errors.(1)

Distinguishing slips from mistakes serves two important functions. First, the risk factors for their occurrence differ. Slips occur in the face of competing sensory or emotional distractions, fatigue, and stress; mistakes more often reflect lack of experience or insufficient training. Second, the appropriate responses to these error types differ. Reducing the risk of slips requires attention to the designs of protocols, devices, and work environments—using checklists so key steps will not be omitted, reducing fatigue among personnel (or shifting high-risk work away from personnel who have been working extended hours), removing unnecessary variation in the design of key devices, eliminating distractions (eg, phones) from areas where work requires intense concentration, and other redesign strategies. Reducing the likelihood of mistakes typically requires more training or supervision. Even in the many cases of slips, health care has typically responded to all errors as if they were mistakes, with remedial education and/or added layers of supervision.

1. Reason JT. Human Error. New York, NY: Cambridge University Press; 1990. [ go to PSNet listing ]


N

Near Miss - An event or situation that did not produce patient injury, but only because of chance. This good fortune might reflect robustness of the patient (eg, a patient with penicillin allergy receives penicillin, but has no reaction) or a fortuitous, timely intervention (eg, a nurse happens to realize that a physician wrote an order in the wrong chart). This definition is identical to that for close call.

Normal Accident Theory - Though less often cited than high reliability theory in the health care literature, normal accident theory has played a prominent role in the study of complex organizations. The phrase and theory were developed by sociologist Charles Perrow (1) in connection with a careful analysis of the accident at the Three Mile Island nuclear power plant in 1979, among other industrial (near) catastrophes. In contrast to the optimism of high reliability theory, normal accident theory suggests that, at least in some settings, major accidents become inevitable and, thus, in a sense, "normal."

Perrow proposed two factors that create an environment in which a major accident becomes increasingly likely over time: "complexity" and "tight coupling." The degree of complexity envisioned by Perrow occurs when no single operator can immediately foresee the consequences of a given action in the system. Tight coupling occurs when processes are intrinsically time-dependent–once a process has been set in motion, it must be completed within a certain period of time. Many health care organizations would illustrate Perrow’s definition of complexity, but only hospitals would be regarded as exhibiting tight coupling. Importantly, normal accident theory contends that accidents become inevitable in complex, tightly coupled systems regardless of steps taken to increase safety. In fact, these steps sometimes increase the risk for future accidents through unintended collateral effects and general increases in system complexity.

Approximately 10 years after normal accident theory appeared, Scott Sagan, a political scientist, conducted a detailed examination of the question of why there has never been an accidental nuclear war (2) with a view toward testing the competing paradigms of normal accident theory and high reliability theory. The results of detailed archival research initially appeared to confirm the predictions of high reliability theory. However, interviews with key personnel uncovered several hair-raising near misses. The study ultimately concluded that good fortune played a greater role than good design in the safety record of the nuclear weapons industry to date.

Even if one does not believe the central contention of normal accident theory–that the potential for catastrophe emerges as an intrinsic property of certain complex systems–analyses informed by this theory’s perspective have offered some fascinating insights into possible failure modes for high-risk organizations, including hospitals.

1. Perrow C. Normal Accidents: Living with High-Risk Technologies. Princeton, NJ: Princeton University Press; 1999. [ go to PSNet listing ]

2. Sagan SD. The Limits of Safety: Organizations, Accidents and Nuclear Weapons. Princeton, NJ: Princeton University Press; 1993. [ go to PSNet listing ]

Normalization of Deviance - The term “normalization of deviance” was coined by Diane Vaughan in her book The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA (1), in which she analyzes the interactions between various cultural forces within NASA that contributed to the Challenger disaster. Vaughan used this expression to describe the gradual shift in what is regarded as normal after repeated exposures to “deviant behavior” (behavior straying from correct [or safe] operating procedure). Corners get cut, safety checks are bypassed, and alarms are ignored or turned off, and these behaviors become normal—not just common, but stripped of their significance as warnings of impending danger. In their discussion of a catastrophic error in health care, Mark Chassin and Elise Becher used the phrase “a culture of low expectations.”(2) When a system routinely produces errors (paperwork in the wrong chart, major miscommunications between different members of a given health care team, patients in the dark about important aspects of their care), providers in the system become inured to malfunction. In such a system, what should be regarded as a major warning of impending danger is ignored as normal operating procedure.

1. Vaughan D. The Challenger launch decision: risky technology, culture and deviance at NASA. Chicago, IL: University of Chicago Press; 1996.

2. Chassin MR, Becher EC. The wrong patient. Ann Intern Med. 2002;136:826-833. [ go to PubMed ]

O


Onion - The "onion" model illustrates variables that affect the multiple levels of a hierarchal system in which a task is performed and errors occur.




P

Patient Safety - Freedom from accidental or preventable injuries produced by medical care.

Pay for Performance - (sometimes abbreviated as “P4P”) Refers to the general strategy of promoting quality improvement by rewarding providers (meaning individual clinicians or, more commonly, clinics or hospitals) who meet certain performance expectations with respect to health care quality or efficiency.

Performance can be defined in terms of patient outcomes but is more commonly defined in terms of processes of care (eg, the percentage of eligible diabetics who have been referred for annual retinal examinations, the percentage of children who have received immunizations appropriate for their age, or the percentage of patients admitted to the hospital with pneumonia who receive antibiotics within 6 hours).

Pay-for-performance initiatives reflect the efforts of purchasers of health care—from the federal government to private insurers—to use their purchasing power to encourage providers to develop whatever specific quality improvement initiatives are required to achieve the specified targets. Thus, rather than committing to a specific quality improvement strategy, such as a new information system or a disease management program, which may have variable success in different institutions, pay for performance creates a climate in which provider groups will be strongly incentivized to find whatever solutions will work for them.

A brief overview of pay for performance in general, with references and Web sites for specific programs, can be found in the reference below.

1. Pawlson LG. Pay for performance: two critical steps needed to achieve a successful program. Am J Manag Care. November 2004 (suppl).
Available at: http://www.ajmc.com/Article.cfm?Menu=1&ID=2771

Plan-Do-Study-Act - Commonly referred to as PDSA (or PDCA, for Plan-Do-Check-Act), this is the cycle of activities advocated for achieving process or system improvement. The cycle was first proposed by Walter Shewhart, one of the pioneers of statistical process control (see glossary definition for run charts), and popularized by his student, quality expert W. Edwards Deming. The PDSA cycle represents one of the cornerstones of continuous quality improvement (CQI). The components of the cycle are briefly described below:

Plan: Analyze the problem you intend to improve and devise a plan to correct the problem.
Do: Carry out the plan (preferably as a pilot project to avoid major investments of time or money in unsuccessful efforts).
Study: Did the planned action succeed in solving the problem? If not, what went wrong? If partial success was achieved, how could the plan be refined?
Act: Adopt the change piloted above as is, abandon it as a complete failure, or modify it and run through the cycle again. Regardless of which action is taken, the PDSA cycle continues, either with the same problem or a new one.
The references below discuss PDSA cycles and the interpretation of articles reporting quality improvement activities driven by the PDSA approach.

1. Walley P, Gowland B. Completing the circle: from PD to PDSA. Int J Health Care Qual Assur Inc Leadersh Health Serv. 2004;17:349-358.
[ go to PubMed ]

2. Speroff T, James BC, Nelson EC, Headrick LA, Brommels M. Guidelines for appraisal and publication of PDSA quality improvement. Qual Manag Health Care. 2004;13:33-39.
[ go to PubMed ]

3. Speroff T, O’Connor GT. Study designs for PDSA quality improvement research. Qual Manag Health Care. 2004;13:17-32.
[ go to PubMed ]

Potential ADE - A potential adverse drug event is a medication error or other drug-related mishap that reached the patient but happened not to produce harm (eg, a penicillin-allergic patient receives penicillin but happens not to have an adverse reaction). In some studies, potential ADEs refer to errors or other problems that, if not intercepted, would be expected to cause harm. Thus, in some studies, if a physician ordered penicillin for a patient with a documented serious penicillin allergy, the order would be characterized as a potential ADE, on the grounds that administration of the drug would carry a substantial risk of harm to the patient.

Production Pressure - Represents the pressure to put quantity of output—for a product or a service—ahead of safety. This pressure is seen in its starkest form in the “line speed” of factory assembly lines, famously demonstrated by Charlie Chaplin in Modern Times, as he is carried away on a conveyor belt and into the giant gears of the factory by the rapidly moving assembly line. The dark reality of production pressures was also vividly described in Fast Food Nation (1) in the section on workers in meat-packing factories. The furious pace at which they must work—standing side by side and wielding sharp knives—to keep up with the line speed often results in serious, even dismembering, injuries.

In health care, production pressure refers to delivery of services—the pressure to run hospitals at 100% capacity, with each bed filled with the sickest possible patients who are discharged at the first sign that they are stable, or the pressure to leave no operating room unused and to keep moving through the schedule for each room as fast as possible. In a survey of members of the American Society of Anesthesiologists (2), half of respondents stated that they had witnessed at least one case in which production pressure resulted in what they regarded as unsafe care. Examples included elective surgery in patients without adequate preoperative evaluation and proceeding with surgery despite significant contraindications.

Production pressure produces an organizational culture in which frontline personnel (and often managers as well) are reluctant to suggest any course of action that compromises productivity, even temporarily. For instance, in the survey of anesthesiologists (2), respondents reported pressure by surgeons to avoid delaying cases through additional patient evaluation or canceling cases, even when patients had clear contraindications to surgery.

1. Schlosser E. Fast Food Nation. Boston, MA: Houghton Mifflin; 2001.

2. Gaba DM, Howard SK, Jump B. Production pressure in the work environment. California anesthesiologists’ attitudes and experiences. Anesthesiology. 1994;81:488-500. [ go to PubMed ]


R


Rapid Response Team - See Medical Emergency Team

Read-Backs - When information is conveyed verbally, miscommunication may occur in a variety of ways, especially when the transmission is not clear (eg, by telephone or radio, or when communication occurs under stress). For names and numbers, the problem often is confusing the sound of one letter or number with another. To address this possibility, the military, civil aviation, and many other high-risk industries use protocols for mandatory "read-backs," in which the listener repeats the key information so that the transmitter can confirm its correctness.

Because mistaken substitution or reversal of alphanumeric information is such a potential hazard, read-back protocols typically include the use of phonetic alphabets, such as the NATO system ("Alpha-Bravo-Charlie-Delta-Echo...X-ray-Yankee-Zulu") now familiar to many. In health care, traditionally, read-back has been mandatory only in the context of checking to ensure accurate identification of recipients of blood transfusions. However, there are many other circumstances in which health care teams could benefit from following such protocols, for example, when communicating key lab results or patient orders over the phone, and even when exchanging information in person (eg, "sign outs" and other such handoffs).
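
As a simple illustration (hypothetical, and not from the glossary), the sketch below renders an alphanumeric identifier in the NATO phonetic alphabet mentioned above, so that it can be stated, and then read back, without ambiguity; the function name and example identifier are invented.

```python
# Hypothetical sketch: render an alphanumeric identifier in the NATO phonetic
# alphabet so that key information (eg, a specimen or order number given by
# phone) can be stated and read back unambiguously.

NATO = {
    "A": "Alpha", "B": "Bravo", "C": "Charlie", "D": "Delta", "E": "Echo",
    "F": "Foxtrot", "G": "Golf", "H": "Hotel", "I": "India", "J": "Juliett",
    "K": "Kilo", "L": "Lima", "M": "Mike", "N": "November", "O": "Oscar",
    "P": "Papa", "Q": "Quebec", "R": "Romeo", "S": "Sierra", "T": "Tango",
    "U": "Uniform", "V": "Victor", "W": "Whiskey", "X": "X-ray",
    "Y": "Yankee", "Z": "Zulu",
}

def phonetic_read_back(identifier: str) -> str:
    """Spell letters phonetically; speak digits and other characters as-is."""
    return " ".join(NATO.get(ch.upper(), ch) for ch in identifier)

print(phonetic_read_back("B12x"))  # "Bravo 1 2 X-ray"
```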

Red Rules - Rules that must be followed to the letter. In the language of non-health care industries, red rules “stop the line.” In other words, any deviation from a red rule will bring work to a halt until compliance is achieved. Red rules, in addition to relating to important and risky processes, must also be simple and easy to remember.

An example of a red rule in health care might be the following: “No hospitalized patient can undergo a test of any kind, receive a medication or blood product, or undergo a procedure if they are not wearing an identification bracelet.” The implication of designating this a red rule is that the moment a patient is identified as not meeting this condition, all activity must cease in order to verify the patient’s identity and supply an identification band.

Health care organizations already have numerous rules and policies that call for strict adherence. So what is it about red rules that makes them more than just particularly important rules? The reason that some organizations are using this new designation is that, unlike many standard rules, red rules will always be supported by the entire organization. In other words, when someone at the frontline calls for work to cease on the basis of a red rule, top management must always support this decision. Thus, when properly implemented, red rules should foster a culture of safety, as frontline workers will know that they can stop the line when they notice potential hazards, even when doing so may be inconvenient, time consuming, or costly for their immediate supervisors or the organization as a whole.

Root Cause Analysis (RCA) - A structured process for identifying the causal or contributing factors underlying adverse events or other critical incidents.(1,2) The key advantage of RCA over traditional clinical case reviews is that it follows a pre-defined protocol for identifying specific contributing factors in various causal categories (eg, personnel, training, equipment, protocols, scheduling) rather than attributing the incident to the first error one finds or to preconceived notions investigators might have about the case. For instance, in a case involving a patient who mistakenly received someone else’s invasive cardiac procedure,(3) the initial reaction of many hearing about the case might be: the nurse should have checked the wrist band. Or, how could the doctor not have looked at the face of the patient on the operating table? Traditionally, an internal review of such a case would do little more than reiterate these "first stories"(4)—typically involving errors committed by personnel at the "sharp end"—and miss the "second stories" that emerge from more detailed, open-minded investigation.

Though the definition of RCA emphasizes analysis, the single most important product of an RCA is descriptive—a detailed account of the events that led up to the incident. For instance, in the case mentioned above,(3) the detailed catalogue of events leading up to the "wrong person procedure" included 17 distinct errors, rather than one or two "so-and-so should have checked such-and-such" errors.

Root cause analysis is still a widely used term, but many now find it misleading. Critics of the term argue that there are no true "causes," so much as "contributing factors." This is not entirely a semantic distinction. As illustrated by the Swiss cheese model, multiple errors and system flaws must come together for a critical incident to reach the patient. Labeling one or even several of these factors as "causes" fosters undue emphasis on specific "holes in the cheese" rather than the overall relationships between different layers and other aspects of system design. Accordingly, some have suggested replacing the term "root cause analysis" with "systems analysis."(5)

Specific resources that facilitate carrying out RCAs or "systems analyses" can be found at:

Root Cause Analysis (RCA). Veterans Affairs National Center for Patient Safety Web site.
Available at: http://www.patientsafety.gov/rca.html.

Taylor-Adams S, Vincent C. Systems analysis of critical incidents: the London Protocol. London, UK: Clinical Safety Research Unit, Imperial College London; 2004.
Available at: http://www.csru.org.uk/downloads/SACI.pdf.

1. Wald H, Shojania KG. Root cause analysis. In: Shojania KG, Duncan BW, McDonald KM, Wachter RM, eds. Making Health Care Safer: A Critical Analysis of Patient Safety Practices. Evidence Report/Technology Assessment No. 43 from the Agency for Healthcare Research and Quality: AHRQ Publication No. 01-E058; 2001.
Available at: http://www.ahrq.gov/clinic/ptsafety/chap5.htm

2. Bagian JP, Gosbee J, Lee CZ, Williams L, McKnight SD, Mannos DM. The Veterans Affairs root cause analysis system in action. Jt Comm J Qual Improv. 2002;28:531-545.
[ go to PubMed ]

3. Chassin MR, Becher EC. The wrong patient. Ann Intern Med. 2002;136:826-833.
[ go to PubMed ]

4. Cook RI, Woods DD, Miller C. A Tale of Two Stories: Contrasting Views of Patient Safety. National Patient Safety Foundation at the AMA: Annenberg Center for Health Sciences, Rancho Mirage, CA; 1998.
Available at: http://www.npsf.org/exec/front.html.

5. Vincent CA. Analysis of clinical incidents: a window on the system not a search for root causes. Qual Saf Health Care. 2004;13:242-243.
[ go to PubMed ]

Rule of Thumb (same as heuristic) - Loosely defined or informal rule often arrived at through experience or trial and error (eg, gastrointestinal complaints that wake patients up at night are unlikely to be functional). Heuristics provide cognitive shortcuts in the face of complex situations, and thus serve an important purpose. Unfortunately, they can also turn out to be wrong.

The phrase "rule of thumb" probably has it origin with trades such as carpentry in which skilled workers could use the length of their thumb (roughly one inch from knuckle to tip) rather than more precise measuring instruments and still produce excellent results. In other words, they measured not using a "rule of wood" (old-fashioned way of saying ruler), but by a "rule of thumb."

Run Charts - A type of "statistical process control" or "quality control" graph in which some observation (eg, manufacturing defects or adverse outcomes) is plotted over time to see if there are "runs" of points above or below a center line, usually representing the average or median. In addition to the number of runs, the length of the runs conveys important information. For run charts with more than 20 useful observations, a run of 8 or more dots would count as a "shift" in the process of interest, suggesting some non-random variation.

Other key tests applied to run charts include tests for "trends" (sequences of successive increases or decreases in the observation of interest) and "zigzags" (alternation in the direction—up or down—of the lines joining pairs of dots). If a non-random change for the better, or "shift," occurs, it suggests that an intervention has succeeded. The expression "moving the dots" refers to this type of shift.
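
The shift rule described above is straightforward to check in code. The sketch below is a simplified illustration with invented monthly rates, not an official statistical process control implementation: it finds the longest run of points on one side of the median, with points that fall exactly on the median skipped by convention.

```python
# Hypothetical sketch: detect a "shift" on a run chart, ie, 8 or more
# consecutive observations on the same side of the median (points exactly
# on the median neither extend nor break a run).

from statistics import median

def longest_run(observations):
    """Length of the longest run of points strictly above or below the median."""
    center = median(observations)
    longest = current = 0
    last_side = None
    for value in observations:
        if value == center:          # skip points that land exactly on the median
            continue
        side = value > center
        current = current + 1 if side == last_side else 1
        last_side = side
        longest = max(longest, current)
    return longest

# Invented data: adverse events per 100 admissions, before and after a change.
monthly_rates = [6.0, 5.5, 7.0, 6.0, 5.5, 6.0, 7.0, 6.0,
                 3.5, 4.0, 3.5, 2.5, 4.0, 3.5, 3.5, 4.0, 2.5, 3.5, 4.0, 3.5, 3.5]
if longest_run(monthly_rates) >= 8:
    print("Shift detected: a run of 8 or more points suggests non-random change.")
```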

Further information about run charts and statistical process control can be found at:

Yeung S, MacLeod M. Using run charts and control charts to monitor quality in healthcare [NHS Scotland Web site]. May 2004.
Available at: http://www.show.scot.nhs.uk/indicators/Tutorial/TUTORIAL_GUIDE_V4.pdf

Mohammed MA. Using statistical process control to improve the quality of health care. Qual Saf Health Care. 2004;13:243-245.
[ go to PubMed ]


S


Safety Culture - Safety culture and culture of safety are frequently encountered terms referring to a commitment to safety that permeates all levels of an organization, from frontline personnel to executive management. More specifically, "safety culture" calls up a number of features identified in studies of high reliability organizations, organizations outside of health care with exemplary performance with respect to safety.(1,2) These features include:
acknowledgment of the high-risk, error-prone nature of an organization’s activities
a blame-free environment where individuals are able to report errors or close calls without fear of reprimand or punishment
an expectation of collaboration across ranks to seek solutions to vulnerabilities
a willingness on the part of the organization to direct resources for addressing safety concerns (3)
The Veterans Affairs system has explicitly focused on achieving a culture of safety, in addition to its focus on a number of specific patient safety initiatives.(4) The impact of such efforts is very difficult to assess, but some tools for quantifying the degree to which organizations differ with respect to "safety culture" have begun to emerge.(5)

1. Roberts KH. Managing high reliability organizations. Calif Manage Rev. 1990;32:101-113.

2. Weick KE. Organizational culture as a source of high reliability. Calif Manage Rev. 1987;29:112-127.

3. Pizzi L, Goldfarb N, Nash D. Promoting a culture of safety. In: Shojania KG, Duncan BW, McDonald KM, Wachter RM, eds. Making Health Care Safer: A Critical Analysis of Patient Safety Practices. Evidence Report/Technology Assessment No. 43 from the Agency for Healthcare Research and Quality: AHRQ Publication No. 01-E058; 2001.
[ Available at: http://www.ncbi.nlm.nih.gov/books/bv.fcgi?rid=hstat1.section.61719 ]

4. Weeks WB, Bagian JP. Developing a culture of safety in the Veterans Health Administration. Eff Clin Pract 2000;3:270-276.
[ go to PubMed ]

5. Singer SJ, Gaba DM, Geppert JJ, Sinaiko AD, Howard SK, Park KC. The culture of safety: results of an organization-wide survey in 15 California hospitals. Qual Saf Health Care. 2003;12:112-118.
[ go to PubMed ]

Sensemaking - A term from organizational theory that refers to the processes by which an organization takes in information to make sense of its environment, to generate knowledge, and to make decisions. It is the organizational equivalent of what individuals do when they process information, interpret events in their environments, and make decisions based on these activities. More technically, organizational sensemaking constructs the shared meanings that define the organization’s purpose and frame the perception of problems or opportunities that the organization needs to work on.

Karl Weick, at the University of Michigan Business School, has written an excellent book on the subject, titled Sensemaking in Organizations.(1) Weick also discussed a specific example of what happens when organizational sensemaking breaks down.(2) This example, the Mann Gulch fire, was subsequently brought to the attention of a wider audience by Don Berwick in his speech Escape Fire.(3)

1. Weick KE. Sensemaking in Organizations. Thousand Oaks, CA: SAGE Publications; 1995.
[ go to PSNet listing ]

2. Weick KE. The collapse of sensemaking in organizations: the Mann Gulch disaster. Adm Sci Q. 1993;38:628-652.
[ go to PSNet listing ]

3. Berwick DM. Escape Fire: Lessons for the Future of Health Care. New York, NY: The Commonwealth Fund; 2002.
[ go to PSNet listing ]

Sentinel Event - An adverse event in which death or serious harm to a patient has occurred; usually used to refer to events that are not at all expected or acceptable—eg, an operation on the wrong patient or body part. The choice of the word "sentinel" reflects the egregiousness of the injury (eg, amputation of the wrong leg) and the likelihood that investigation of such events will reveal serious problems in current policies or procedures.

Sharp End - The “sharp end” refers to the personnel or parts of the health care system in direct contact with patients. Personnel operating at the sharp end may literally be holding a scalpel (eg, an orthopedist who operates on the wrong leg) or figuratively be administering any kind of therapy (eg, a nurse programming an intravenous pump) or performing any aspect of care.

To complete the metaphor, the "blunt end" refers to the many layers of the health care system that affect the scalpels, pills, and medical devices, or the personnel wielding, administering, and operating them.

Thus, an error in programming an intravenous pump would represent a problem at the sharp end, while the institution’s decision to use multiple types of infusion pumps (making programming errors more likely) would represent a problem at the blunt end.

The terminology of “sharp” and “blunt” ends corresponds roughly to “active failures” and “latent conditions.”

Situational Awareness - Situational awareness refers to the degree to which one’s perception of a situation matches reality. In the context of crisis management, where the phrase is most often used, situational awareness includes awareness of fatigue and stress among team members (including oneself), environmental threats to safety, appropriate immediate goals, and the deteriorating status of the crisis (or patient). Failure to maintain situational awareness can result in various problems that compound the crisis. For instance, during a resuscitation, an individual or entire team may focus on a particular task (a difficult central line insertion or a particular medication to administer, for example). Fixation on this problem can result in loss of situational awareness to the point that steps are not taken to address immediately life-threatening problems such as respiratory failure or a pulseless rhythm. In this context, maintaining situational awareness might be seen as equivalent to keeping the “big picture” in mind. Alternatively, in assigning tasks in a crisis, the leader may ignore signals from a team member, which may result in escalating anxiety for the team member, failure to perform the assigned task, or further patient deterioration.

Six Sigma - Refers loosely to striving for near perfection in the performance of a process or production of a product. The name derives from the Greek letter sigma, often used to refer to the standard deviation of a normal distribution. About 95% of a normally distributed population falls within 2 standard deviations of the average (or "2 sigma"), leaving roughly 5% of observations as “abnormal” or “unacceptable.” Six Sigma targets a defect rate of 3.4 per million opportunities, the rate conventionally associated with performance 6 standard deviations from the process average.
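
For readers who want to verify the arithmetic, the short sketch below is an illustration only, not part of the glossary. Note that the familiar 3.4-per-million target follows the Six Sigma industry convention of allowing a 1.5-sigma long-term drift in the process mean, so it corresponds to the one-sided tail beyond 4.5 sigma rather than a literal 6-sigma tail.

```python
# Illustration: tail probabilities behind the sigma arithmetic quoted above.
from math import erfc, sqrt

def upper_tail(z):
    """P(Z > z) for a standard normal variable."""
    return 0.5 * erfc(z / sqrt(2))

# Roughly 95% of a normal distribution lies within 2 standard deviations:
outside_2_sigma = 2 * upper_tail(2)                    # two-sided tail
print(f"Outside 2 sigma: {outside_2_sigma:.1%}")       # ~4.6%

# Six Sigma target, using the conventional 1.5-sigma long-term shift:
dpmo_six_sigma = 1e6 * upper_tail(6 - 1.5)
print(f"Six Sigma DPMO: {dpmo_six_sigma:.1f}")         # ~3.4 defects per million

# Literal 6-sigma tail with no shift, for comparison:
print(f"Unshifted 6-sigma DPMO: {1e6 * upper_tail(6):.4f}")  # ~0.001 per million
```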

When it comes to industrial performance, having 5% of a product fall outside the desired specifications would represent an unacceptably high defect rate. What company could stay in business if 5% of its product did not perform well? For example, would we tolerate a pharmaceutical company that produced pills containing incorrect dosages 5% of the time? Certainly not. But when it comes to clinical performance—the number of patients who receive a proven medication, the number of patients who develop complications from a procedure—we routinely accept failure or defect rates in the 2% to 5% range, performance that falls orders of magnitude short of the Six Sigma target.(1)

Not every process in health care requires such near-perfect performance. In fact, one of the lessons of Reason’s Swiss cheese model is the extent to which low overall error rates are possible even when individual components have many “holes.” However, many high-stakes processes are far less forgiving, since a single “defect” can lead to catastrophe (eg, wrong-site surgery, accidental administration of concentrated potassium).

One version of Six Sigma commonly emulated in health care derives from an approach developed at General Electric (2) and consists of five phases summarized by the acronym DMAIC: Define, Measure, Analyze, Improve, and Control.(3) Although this process is somewhat reminiscent of the Plan-Do-Study-Act (PDSA) approach to continuous quality improvement, the resemblance can be misleading. Whereas PDSA seeks successive incremental improvements, Six Sigma typically strives for quantum leaps in performance, which, by their nature, often necessitate major organizational changes and substantial investments of time and resources at all levels of the institution. Thus, a clinic trying to improve the percentage of elderly patients who receive influenza vaccines might reasonably adopt a PDSA-type approach and expect to see successive, modest improvements without radically altering normal workflow at the clinic. By contrast, an ICU that strives to reduce the rate at which patients develop catheter-associated bacteremia virtually to zero will need major changes that may disrupt normal workflow.(4) In fact, the point of choosing Six Sigma is often that normal workflow is recognized as playing a critical role in the unacceptably high defect rate.

Several examples (4-6) of the successful application of Six Sigma methodology to improving patient safety are listed below.

1. Chassin MR. Is health care ready for Six Sigma quality? Milbank Q. 1998;76:565-591, 510.
[ go to PubMed ]

2. Buck CR Jr. Improving the quality of health care. Health care through a Six Sigma lens. Milbank Q. 1998;76:749-753.
[ go to PubMed ]

3. Benedetto AR. Six Sigma: not for the faint of heart. Radiol Manage. 2003;25:40-53.
[ go to PubMed ]

4. Frankel HL, Crede WB, Topal JE, Roumanis SA, Devlin MW, Foley AB. Use of corporate six sigma performance-improvement strategies to reduce incidence of catheter-related bloodstream infections in a surgical ICU. J Am Coll Surg. 2005;201:349-358.
[ go to PubMed ]

5. Castle L, Franzblau-Isaac E, Paulsen J. Using Six Sigma to reduce medication errors in a home-delivery pharmacy service. Jt Comm J Qual Patient Saf. 2005;31:319-324.
[ go to PubMed ]

6. Chan AL. Use of Six Sigma to improve pharmacist dispensing errors at an outpatient clinic. Am J Med Qual. 2004;19:128-131.
[ go to PubMed ]

Slips (or Lapses) - In some contexts, errors are dichotomized as “slips” or “mistakes,” based on the cognitive psychology of task-oriented behavior. Attentional behavior is characterized by conscious thought, analysis, and planning, as occurs in active problem solving. Schematic behavior refers to the many activities we perform reflexively or as if acting on “autopilot.” Complementary to these two behavior types are two categories of error: slips (or lapses) and mistakes.

Slips refer to failures of schematic behaviors, or lapses in concentration (eg, overlooking a step in a routine task due to a lapse in memory, an experienced surgeon nicking an adjacent organ during an operation due to a momentary lapse in concentration). Mistakes, by contrast, reflect incorrect choices: choosing the wrong diagnostic test or ordering a suboptimal medication for a given condition would be mistakes. Forgetting to check the chart to make sure you ordered the test or medication for the right patient would be a slip.

Distinguishing slips from mistakes serves two important functions. First, the risk factors for their occurrence differ. Slips occur in the face of competing sensory or emotional distractions, fatigue, and stress; mistakes more often reflect lack of experience or insufficient training. Second, the appropriate responses to these error types differ. Reducing the risk of slips requires attention to the designs of protocols, devices, and work environments—using checklists so key steps will not be omitted, reducing fatigue among personnel (or shifting high-risk work away from personnel who have been working extended hours), removing unnecessary variation in the design of key devices, eliminating distractions (eg, phones) from areas where work requires intense concentration, and other redesign strategies. Reducing the likelihood of mistakes typically requires more training or supervision. Even in the many cases of slips, health care has typically responded to all errors as if they were mistakes, with remedial education and/or added layers of supervision.

Standard of Care - What the average, prudent clinician would be expected to do under certain circumstances. The standard of care may vary by community (eg, due to resource constraints). When the term is used in the clinical setting, the standard of care is generally felt not to vary by specialty or level of training. In other words, the standard of care for a condition may well be defined in terms of the standard expected of a specialist, in which case a generalist (or trainee) would be expected to deliver the same care or make a timely referral to the appropriate specialist (or supervisor, in the case of a trainee). Standard of care is also a term of art in malpractice law, and its definition varies from jurisdiction to jurisdiction. When used in this legal sense, often the standard of care is specific to a given specialty; it is often defined as the care expected of a reasonable practitioner with similar training practicing in the same location under the same circumstances.

Structure-Process-Outcome Triad - Quality has been defined as the “degree to which health services for individuals and populations increase the likelihood of desired health outcomes and are consistent with current professional knowledge.”(1) This definition, like most others, emphasizes favorable patient outcomes as the gold standard for assessing quality. In practice, however, one would like to detect quality problems without waiting for poor outcomes to develop in sufficient numbers that deviations from expected rates of morbidity and mortality can be detected. Avedis Donabedian first proposed that quality could be measured using aspects of care with proven relationships to desirable patient outcomes.(2,3) For instance, if proven diagnostic and therapeutic strategies are monitored, quality problems can be detected long before demonstrably poor outcomes occur.

Aspects of care with proven connections to patient outcomes fall into two general categories: process and structure. Processes encompass all that is done to patients in terms of diagnosis, treatment, monitoring, and counseling. Cardiovascular care provides classic examples of the use of process measures to assess quality. Given the known benefits of aspirin and beta-blockers for patients with myocardial infarction, the quality of care for patients with myocardial infarction can be measured in terms of the rates at which eligible patients receive these proven therapies. The percentage of eligible women who undergo mammography at appropriate intervals would provide a process-based measure for quality of preventive care for women.

Structure refers to the setting in which care occurs and the capacity of that setting to produce quality. Traditional examples of structural measures related to quality include credentials, patient volume, and academic affiliation. More recent structural measures include the adoption of organizational models for inpatient care (eg, closed intensive care units and dedicated stroke units) and possibly the presence of sophisticated clinical information systems. Cardiovascular care provides another classic example of structural measures of quality. Numerous studies have shown that institutions that perform more cardiac surgeries and invasive cardiology procedures achieve better outcomes than institutions that see fewer patients. Given these data, patient volume represents a structural measure of quality of care for patients undergoing cardiac procedures.

1. Lohr KN, ed. Medicare: A Strategy for Quality Assurance. Washington, DC: National Academy Press; 1990.

2. Donabedian A. Evaluating the quality of medical care. Milbank Mem Fund Q. 1966;44 (suppl):166-206.
[ go to PubMed ]

3. Donabedian A. Explorations in Quality Assessment and Monitoring. The definition of quality and approaches to its assessment. Vol 1. Ann Arbor, MI: Health Administration Press; 1980.
[ go to PSNet listing ]

Swiss Cheese - James Reason developed the "Swiss cheese model" to illustrate how analyses of major accidents and catastrophic systems failures tend to reveal multiple, smaller failures leading up to the actual hazard.(1)
In the model, each slice of cheese represents a safety barrier or precaution relevant to a particular hazard. For example, if the hazard were wrong-site surgery, slices of the cheese might include conventions for identifying sidedness on radiology tests, a protocol for signing the correct site when the surgeon and patient first meet, and a second protocol for reviewing the medical record and checking the previously marked site in the operating room. Many more layers exist. The point is that no single barrier is foolproof. They each have "holes"; hence, the Swiss cheese. For some serious events (eg, operating on the wrong site or wrong person), the holes will align only infrequently, but even rare cases of harm (errors making it "through the cheese") are unacceptable.

While the model may convey the impression that the slices of cheese and the location of their respective holes are independent, this may not be the case. For instance, in an emergency situation, all three of the surgical identification safety checks mentioned above may fail or be bypassed. The surgeon may meet the patient for the first time in the operating room. A hurried x-ray technologist might mislabel a film (or simply hang it backwards, with a hurried surgeon not noticing); "signing the site" may not take place at all (eg, if the patient is unconscious) or, if it does take place, may be rushed and offer no real protection. In the technical parlance of accident analysis, the different barriers may have a common failure mode, in which several protections are lost at once (ie, several layers of the cheese line up). An aviation example would be a scenario in which the engines on a plane are all lost, not because of independent mechanical failure in all four engines (very unlikely), but because the wings fell off due to a structural defect. This disastrous failure mode might arise more often than the independent failure of multiple engines.

In health care, such failure modes, in which slices of the cheese line up more often than one would expect if the locations of their holes were independent of each other (and certainly more often than wings fly off airplanes), occur distressingly often. In fact, many of the systems problems discussed by Reason and others—poorly designed work schedules, lack of teamwork, variations in the design of important equipment between and even within institutions—are sufficiently common that many of the slices of cheese already have their holes aligned. In such cases, one slice of cheese may be all that remains between the patient and significant hazard.

1. Reason J. Human error: models and management. BMJ. 2000;320:768-770. [ go to PubMed ]

Systems Approach - Medicine has traditionally treated quality problems and errors as failings on the part of individual providers, perhaps reflecting inadequate knowledge or skill levels. The "systems approach," by contrast, takes the view that most errors reflect predictable human failings in the context of poorly designed systems (eg, expected lapses in human vigilance in the face of long work hours or predictable mistakes on the part of relatively inexperienced personnel faced with cognitively complex situations). Rather than focusing corrective efforts on reprimanding individuals or pursuing remedial education, the systems approach seeks to identify situations or factors likely to give rise to human error and implement "systems changes" that will reduce their occurrence or minimize their impact on patients. This view holds that efforts to catch human errors before they occur or block them from causing harm will ultimately be more fruitful than ones that seek to somehow create flawless providers.

This "systems focus" includes paying attention to human factors engineering (or ergonomics), including the design of protocols, schedules, and other factors that are routinely addressed in other high-risk industries but have traditionally been ignored in medicine. Relevant concepts defined elsewhere in the glossary include root cause analysis, active failures vs. latent conditions, errors at the "sharp end" vs. errors at the "blunt end," slips vs. mistakes, and the Swiss cheese model.


T

"Time Outs" - Refer to planned periods of quiet and/or interdisciplinary discussion focused on ensuring that key procedural details have been addressed. For instance, protocols for ensuring correct site surgery often recommend a "time out" to confirm the identification of the patient, the surgical procedure, site, and other key aspects, often stating them aloud for double-checking by other team members. In addition to avoiding major misidentification errors involving the patient or surgical site, such a time out ensures that all team members share the same “game plan” so to speak. Taking the time to focus on listening and communicating the plans as a team can rectify miscommunications and misunderstandings before a procedure gets underway.

Triggers - Refer to signals for detecting likely adverse events. For instance, if a hospitalized patient received naloxone (a drug used to reverse the effects of narcotics), the patient probably received an excessive dose of morphine or some other opiate. In the emergency department, the use of naloxone would more likely represent treatment of a self-inflicted opiate overdose, so the trigger would have little value in that setting. But among patients already admitted to the hospital, a pharmacy could use the administration of naloxone as a “trigger” to investigate possible adverse drug events.

A common setting in which triggers have been employed is the monitoring of anticoagulation with warfarin.(1-3) Triggers might consist of elevated laboratory measures of anticoagulation (eg, International Normalized Ratio [INR] > 3) or any administration of vitamin K, which reverses the effects of warfarin and therefore likely signals that clinicians were correcting a particularly worrisome level of anticoagulation.
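
As a rough illustration of how such rules might be expressed in software, the sketch below encodes the naloxone and warfarin triggers described above as simple checks over a patient-day record. Everything beyond the rules themselves, including the PatientDay structure and its field names, is a hypothetical assumption for the example, not part of any actual trigger system.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical record of one inpatient day. The class, its field names, and
# the example thresholds are invented for illustration; they are not drawn
# from any actual trigger tool described in the glossary.
@dataclass
class PatientDay:
    location: str                              # eg, "ward", "icu", "emergency"
    meds_given: List[str] = field(default_factory=list)
    inr: Optional[float] = None                # most recent INR, if measured

def fired_triggers(day: PatientDay) -> List[str]:
    """Return the triggers that should prompt a retrospective chart review."""
    triggers = []
    # Naloxone given to an already-admitted patient suggests over-sedation from
    # hospital-administered opiates; in the emergency department the same drug
    # more likely treats a self-inflicted overdose, so the rule skips that setting.
    if "naloxone" in day.meds_given and day.location != "emergency":
        triggers.append("possible opiate over-sedation (naloxone administered)")
    # Warfarin-related triggers: supratherapeutic INR or vitamin K rescue.
    if day.inr is not None and day.inr > 3.0:
        triggers.append("elevated INR (> 3)")
    if "vitamin K" in day.meds_given:
        triggers.append("vitamin K administered (possible warfarin reversal)")
    return triggers

# Example: a ward patient on warfarin with an INR of 4.2 who received vitamin K.
print(fired_triggers(PatientDay(location="ward",
                                meds_given=["warfarin", "vitamin K"],
                                inr=4.2)))
```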

In many studies, triggers alert providers involved in patient safety activities to probable adverse events so they can review the medical record to determine whether an actual or potential adverse event has occurred. When a trigger correctly identifies an adverse event, causative factors can be determined and, over time, interventions developed to reduce the frequency of particularly common causes of adverse events (such as anticoagulant problems [1-3]). In these studies, the triggers provide an efficient means of identifying potential adverse events after the fact.

The traditional use of triggers has been to generate these retrospective reviews. However, using triggers in real time has tremendous potential as a patient safety tool. In one study of real-time triggers in a single community hospital, for example, more than 1000 triggers were generated in 6 months, and approximately 25% led to physician action and would not have been recognized without the trigger.(4)

As with any alert or alarm system, the threshold for generating triggers has to balance true and false positives. The system will lose its value if too many triggers prove to be false alarms.(5) This concern is less relevant when triggers are used as chart review tools. In such cases, the tolerance of “false alarms” depends only on the availability of sufficient resources for medical record review. Reviewing four false alarms for every true adverse event might be quite reasonable in the context of an institutional safety program, but frontline providers would balk at (and eventually ignore) a trigger system that generated four false alarms for every true one.
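
The trade-off can be stated numerically. In the sketch below, the 4:1 false-alarm ratio is taken from the example above, while the daily trigger volume is an invented assumption used only to show how quickly a low positive predictive value erodes frontline tolerance.

```python
# Illustrative arithmetic only: the 4:1 false-alarm ratio comes from the
# example above, but the daily trigger volume is an invented assumption.
false_alarms_per_true_event = 4
triggers_per_day = 50                      # hypothetical volume on one service

positive_predictive_value = 1 / (false_alarms_per_true_event + 1)
true_events_per_day = triggers_per_day * positive_predictive_value

print(f"Positive predictive value: {positive_predictive_value:.0%}")   # 20%
print(f"Triggers worth acting on:  {true_events_per_day:.0f} of {triggers_per_day} per day")
# A 20% yield may be tolerable for a safety team doing retrospective chart
# review, but frontline clinicians interrupted 50 times a day for 10 real
# events will quickly learn to tune the alerts out.
```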

1. Hartis CE, Gum MO, Lederer JW Jr. Use of specific indicators to detect warfarin-related adverse events. Am J Health Syst Pharm. 2005;62:1683-1688. [ go to PubMed ]

2. Lederer J, Best D. Reduction in anticoagulation-related adverse drug events using a trigger-based methodology. Jt Comm J Qual Patient Saf. 2005;31:313-318. [ go to PubMed ]

3. Cohen MM, Kimmel NL, Benage MK, et al. Medication safety program reduces adverse drug events in a community hospital. Qual Saf Health Care. 2005;14:169-174. [ go to PubMed ]

4. Raschke RA, Gollihare B, Wunderlich TA, et al. A computer alert system to prevent injury from adverse drug events: development and evaluation in a community teaching hospital. JAMA. 1998;280:1317-1320. Erratum in: JAMA. 1999;281:420. [ go to PubMed ]

5. Edworthy J, Hellier E. Fewer but better auditory alarms will improve patient safety. Qual Saf Health Care. 2005;14:212-215. [ go to PubMed ]


U


Underuse, Overuse, Misuse - For processes of care, quality problems can arise in three ways: underuse, overuse, and misuse.

“Underuse” refers to the failure to provide a health care service when it would have produced a favorable outcome for a patient. Standard examples include failures to provide appropriate preventive services to eligible patients (eg, Pap smears, flu shots for elderly patients, screening for hypertension) and proven medications for chronic illnesses (steroid inhalers for asthmatics; aspirin, beta-blockers, and lipid-lowering agents for patients who have suffered a recent myocardial infarction).

“Overuse” refers to providing a process of care in circumstances where the potential for harm exceeds the potential for benefit. Prescribing an antibiotic for a viral infection like a cold, for which antibiotics are ineffective, constitutes overuse. The potential for harm includes adverse reactions to the antibiotics and increases in antibiotic resistance among bacteria in the community. Overuse can also apply to diagnostic tests and surgical procedures.

“Misuse” occurs when an appropriate process of care has been selected but a preventable complication occurs, so that the patient does not receive the full potential benefit of the service. Avoidable complications of surgery or medication use are misuse problems. A patient who suffers a rash after receiving penicillin for strep throat, despite having a known allergy to that antibiotic, is one example of misuse. A patient who develops a pneumothorax after an inexperienced operator attempts to insert a subclavian line is another.




W


Workaround - From the perspective of frontline personnel trying to accomplish their work, the design of equipment or the policies governing work tasks can seem counterproductive. When frontline personnel adopt consistent patterns of work or ways of bypassing safety features of medical equipment, these patterns and actions are referred to as “workarounds.” Although workarounds “fix the problem,” the system remains unaltered and thus continues to present potential safety hazards for future patients.

A case on AHRQ WebM&M (Transfusion “Slip”) describes a potentially fatal near miss in which the blood samples drawn for crossmatching from husband and wife trauma victims were inadvertently swapped. The error was caught when an alert laboratory technician noted that the wife’s blood type differed from the one recorded previously at the same hospital. A comment on the forum provides a striking example of a workaround. The reader noted that, after a similar incident had occurred at another hospital, the organization instituted a policy requiring two separately drawn samples for all transfusion crossmatches. The intention was that any mislabeled sample would produce a discrepancy with the second sample, providing a warning that would virtually eliminate the risk of transfusion errors due to mislabeled samples. However, frontline personnel at the hospital created a workaround: they routinely drew both crossmatch samples from the same needle stick, saving themselves time and sparing patients discomfort, but completely undermining the value of the second sample as a check against labeling errors.

As a second reader on the forum pointed out, the appearance of a workaround at that hospital was to be expected: the new policy doubled the work associated with a common task in order to prevent a very uncommon error, one that virtually none of the frontline staff would ever have encountered.

From a definitional point of view, it does not matter if frontline users are justified in working around a given policy or equipment design feature. What does matter is that the motivation for a workaround lies in getting work done, not laziness or whim. Thus, the appropriate response by managers to the existence of a workaround should not consist of reflexively reminding staff about the policy and restating the importance of following it. Rather, workarounds should trigger assessment of workflow and the various competing demands for the time of frontline personnel. In busy clinical areas where efficiency is paramount, managers can expect workarounds to arise whenever policies create added tasks for frontline personnel, especially when the extra work is out of proportion to the perceived importance of the safety goal.

open here to access the complete original document, of which only a proactive example is reproduced:
AHRQ Patient Safety Network - Glossary
