Monday, March 12, 2012

Cancer Screening Data Often Misunderstood By Doctors

From Medscape Medical News > Oncology


Roxanne Nelson


 

March 5, 2012 — Many primary care physicians appear to be misinterpreting cancer screening data. According to research published in the March 5 issue of the Annals of Internal Medicine, they often mistakenly interpret improved survival and increased detection with screening as evidence that screening can save lives.
In their study, the authors, led by Odette Wegwarth, PhD, senior research scientist at the Max Planck Institute for Human Development, Berlin, Germany, found that few primary care physicians were able to correctly recognize that decreased mortality in a randomized trial demonstrates evidence of the benefits of cancer screening.
Primary care physicians were more likely to recommend a screening test "supported by irrelevant evidence," such as an increased 5-year survival rate, than a screening test supported by the "relevant evidence — reduction in cancer mortality," the authors note.
"It is natural to assume that survival is the same as mortality; that is what the words imply in common language," said Dr. Wegwarth. "And it is what the statistics imply in medical settings besides screening."
Dr. Wegwarth explained to Medscape Medical News that in a randomized trial of a treatment, survival is based on the starting trial population. "If 10% of the patients die in 1 year, 90% survive," she said. "However, in the context of screening, the term survival takes on a different meaning, because the calculation of survival, in contrast to mortality, is based only on people diagnosed with cancer, not on the whole study population."
"We believe that many physicians are not aware that survival in the context of screening is different from survival in the context of a treatment trial," she added.
Confusing Survival With Mortality
In the survey, physicians were presented with information about 2 hypothetical screening tests. For the first test, they were told that the 5-year survival rate improved from 68% to 99%; for the second, they were told that the mortality rate decreased from 2.0 to 1.6 deaths per 1000 people.
Physicians were 3 times more likely to report that they would "definitely recommend" the test with improved 5-year survival than the one with lower mortality (69% vs 23%). In reality, the data were for the same test and the same disease — prostate-specific antigen screening for prostate cancer.
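For readers who want the mortality figure in absolute terms, a short calculation (derived here from the numbers quoted in the scenario; the number needed to screen is not reported in the article) shows what a drop from 2.0 to 1.6 deaths per 1000 means:

# Arithmetic on the mortality figures quoted in the survey scenario
# (2.0 vs 1.6 deaths per 1000 people); the number needed to screen is
# derived here for illustration and is not reported in the article.

deaths_per_1000_unscreened = 2.0
deaths_per_1000_screened = 1.6

absolute_reduction = deaths_per_1000_unscreened - deaths_per_1000_screened
number_needed_to_screen = 1000 / absolute_reduction

print(f"Absolute reduction: {absolute_reduction:.1f} deaths per 1000 screened")
print(f"Roughly 1 death averted for every {number_needed_to_screen:.0f} people screened")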
In addition, the majority of respondents (80%) stated that the screening test supported by irrelevant evidence (5-year survival) "saves lives from cancer," whereas only 60% felt the same way about the test supported by relevant evidence (decreased cancer mortality) (P < .001).
Statistical Confusion
"Screening for disease, especially cancer, is enthusiastically supported by patients, the public, and even health professionals," notes Virginia A. Moyer, MD, MPH, from the Baylor College of Medicine, Houston, Texas, in an accompanying editorial. However, many do not appreciate that, for a number of tests, there is not only no evidence of benefit but also evidence of potential harm.
Information is available to patients about screening and treatment, but simply "providing patients with the statistical data about screening tests does little to improve their decision making," Dr. Moyer explains.
She points out that numeracy — the ability to perform basic quantitative calculations — is an important component of health literacy, but a number of studies have shown an alarmingly low skill level among patients. "The result is poor understanding of the benefits and risks of screening," Dr. Moyer writes.
Healthcare providers are usually the most highly rated sources of health information, so the responsibility for helping patients understand the potential benefits and risks of screening falls largely to primary care physicians, she notes. The question is: Can they do it?
"The news on that front is not good," says Dr. Moyer. Medical students do not understand statistical concepts well, and this study suggests that fully trained physicians do not either.
"Physicians clearly do not understand how to interpret cancer screening statistics themselves — expecting them to communicate this information to patients is a stretch," she writes.
Dr. Wegwarth and colleagues offer potential solutions, Dr. Moyer points out, but to "temper the unbridled enthusiasm of patients for screening tests, and especially for cancer screening, we need to reach beyond medicine to the public, which of course gets a substantial amount of medical information from the media."
"Thus, educational efforts need to focus not just on physicians and medical students, but also on journalists," she explains.
"The need for high-quality, evidence-based guidelines for preventive services is widely recognized," adds Dr. Moyer. "Patients and physicians are most likely to believe and follow the resulting recommendations if they understand the statistics on which recommendations are based."
Survival in Screening Misinterpreted
Dr. Wegwarth and colleagues conducted their survey to learn whether primary care physicians in the United States understand which statistics provide evidence that screening saves lives.
In 2010, 297 physicians who practiced both inpatient and outpatient medicine responded to the survey; in 2011, 115 physicians who practiced exclusively outpatient medicine responded.
The majority incorrectly equated improved survival and early detection with lives saved by screening. About one half (47%) incorrectly reported that discovering cancer in screened, as opposed to unscreened, populations is evidence that screening saves lives.
Almost the same number of physicians believed that survival data prove that screening saves lives as believed that mortality data prove this (76% vs 81%; P = .39).
Many of the physicians appear to have mistakenly interpreted survival in screening as if it were survival in the context of a treatment trial, note the authors. After reviewing only the 5-year survival rates in the scenario provided (99% vs 68%), almost half the respondents who thought that "lives were saved" stated that there would be 300 to 310 fewer cancer deaths per 1000 people screened.
The actual reduction in cancer mortality demonstrated in the European Randomized Study of Screening for Prostate Cancer was about 0.4 in 1000 within 5 years. More than 50% of physicians correctly identified this when presented with cancer mortality rates.
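The gap between the two answers can be reproduced with simple arithmetic (an illustrative reconstruction, not a calculation reported by the authors): applying the survival difference to everyone screened yields roughly 310 fewer deaths per 1000, whereas the mortality data imply about 0.4 per 1000.

# Illustrative reconstruction of the likely misreading (the article reports the
# respondents' estimates, not how they arrived at them).

survival_screened = 0.99    # 5-year survival with screening, from the scenario
survival_unscreened = 0.68  # 5-year survival without screening, from the scenario

# Misreading: treat the survival difference as deaths averted per 1000 screened.
implied_by_survival = (survival_screened - survival_unscreened) * 1000

# Relevant evidence: the mortality figures from the same scenario.
shown_by_mortality = 2.0 - 1.6

print(f"Implied by misreading survival: ~{implied_by_survival:.0f} fewer deaths per 1000")
print(f"Shown by the mortality data:     {shown_by_mortality:.1f} fewer deaths per 1000")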
Education Needed
"There are medical organizations, physicians, and students who still tend to see statistics as inherently mathematical and clinically irrelevant for the individual patient," said Dr. Wegwarth. This attitude is reinforced at medical schools, "which mainly focus on analysis-of-variance and multiple-regression techniques. Indeed, these kind of statistics are not necessarily needed in doctors' practices."
However, other statistics are considered highly practical and relevant for informative doctor–patient communication — statistics such as survival vs mortality, relative vs absolute risk, and the predictive values of screening tests, she continued. "Excellent work has been done that shows how these statistics can be easily taught to doctors and patients with simple techniques. If medical schools picked up on these techniques, the biggest part of the problem might be solved."
It is imperative that physicians be aware that they are misinterpreting these data in the first place; research such as ours might help raise awareness, explained Dr. Wegwarth.
"However, as long as the survival statistics in the context of screening are published in high-ranked medical journals, many physicians may believe that this statistic is meaningful in that context," she added. "Medical journal editors can play an important part in preventing confusion about health statistics by carefully limiting inferences about the value of screening based on statistics other than mortality. Journals could provide explicit guidance to readers — perhaps in editors' notes — about what can and cannot be inferred from changes in survival, early-stage detection, and incidence with screening."
This study was funded by the Max Planck Institute for Human Development and the National Cancer Institute.
Ann Intern Med. 2012;156:340-349, 392-393.
