Thursday, September 16, 2010

Creating a Framework for "Best Evidence" Approaches in Systematic Reviews: Review Protocol


"Best Evidence" Approaches
Full Title: Creating a Framework for "Best Evidence" Approaches in Systematic Reviews
Evidence-based Practice Center Systematic Review Protocol

Expected Release Date: early 2011
Contents


Background and Objectives
Approach
Background and Objectives
Background

One of the major challenges facing systematic reviewers is determining the study inclusion criteria. Reviewers often employ a "best evidence" approach to address the key questions in the reviews. However, what is meant by "best" is often unclear, and several factors may influence a reviewer's decision to broaden or narrow study inclusion criteria.

The question of when to use less-than-ideal evidence (e.g., non-randomized studies, indirect comparisons) has been a major issue in comparative effectiveness reviews in the Agency for Healthcare Research and Quality (AHRQ) Effective Health Care program. Although inclusion criteria ideally should be developed a priori, in practice these criteria sometimes require modification based upon findings of the initial literature searches or even review of retrieved study data. For many topics, the best possible evidence (e.g., double-blind randomized controlled trials [RCTs] using concealment of allocation and without strict enrollment criteria) does not exist, and this may not be discovered until the reviewer scans the literature search results. If the initial inclusion criteria specified studies directly comparing specific treatments, the criteria may be modified to allow for indirect comparisons. Conversely, for other topics, overly broad inclusion criteria (e.g., allowing non-randomized studies or indirect comparisons) may be impractical within the restrictions of time and budget; these criteria may be narrowed to include only the "best" evidence.

Other examples are less straightforward than those described above. If only a few trials directly address the treatment comparison of interest, should indirect evidence from placebo or no-treatment-controlled trials be used to increase the strength of the evidence base? Alternatively, what should a reviewer do when there is direct evidence from non-randomized controlled studies and indirect evidence from RCTs? The variety of dilemmas facing systematic reviewers, some of which are unanticipated, has spawned innumerable approaches with no organizing framework.

The notion of the "best" evidence must be primarily influenced by internal validity: the degree to which a study estimate is unbiased. In the context of comparative effectiveness research, however, the "best" evidence should also be determined by external validity: the degree to which a study estimate accurately reflects effects in usual clinical practice. Thus, study enrollment criteria, patient characteristics, treatment settings, and treatment implementations should be considered carefully when formulating inclusion criteria for a systematic review of effectiveness.
Objectives

The goal of this project is to create a decision framework for "best evidence" approaches in systematic reviews. This will be Phase I of a larger project (Phase II would involve a formal evaluation of the impact of variations in inclusion criteria on a review's conclusions). This project will accomplish the following tasks:

1. Create a list of possible inclusion criteria, and for each criterion, create a list of factors that might affect a reviewer's decision to use it.
2. Create a list of evidence prioritization strategies.
3. List the ways in which evidence prioritization strategies might be formally evaluated.
4. Prepare a summary report for posting on the AHRQ Web site.


Approach

The Evidence-based Practice Center (EPC) and our direct collaborators from other EPCs will meet the first objective by creating a candidate list of inclusion variables (e.g., randomization, presence of concurrent control group, direct comparison) and a separate list of modifying factors (e.g., disease course, number of studies, possibility of unmeasured confounders). Once we achieve consensus on the two lists, they will be used to create a grid that will allow a full cross-tabular evaluation of the interaction between these variables and factors. This grid will provide a guide to help inform the decisions of EPC reviewers regarding evidence prioritization strategies.
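The variables-by-factors grid described above can be pictured as a simple cross-tabulation. The sketch below is purely illustrative: the variable and factor names are taken from the examples in this protocol, but the cell contents and any structure beyond "one cell per variable-factor pair" are assumptions, not part of the protocol.

```python
# Illustrative sketch of a cross-tabular grid of inclusion variables
# (rows) by modifying factors (columns). Names come from the protocol's
# examples; cell text is hypothetical.
inclusion_variables = ["randomization", "concurrent_control", "direct_comparison"]
modifying_factors = ["disease_course", "number_of_studies", "unmeasured_confounders"]

# One cell per variable-factor pair, holding a reviewer note on how the
# factor might bear on the decision to require that inclusion variable.
grid = {v: {f: "" for f in modifying_factors} for v in inclusion_variables}

# Example (hypothetical) entry filled in by consensus:
grid["randomization"]["unmeasured_confounders"] = (
    "High risk of unmeasured confounding may argue against relaxing "
    "the randomization requirement."
)

for variable in inclusion_variables:
    for factor in modifying_factors:
        note = grid[variable][factor] or "(to be completed by consensus)"
        print(f"{variable} x {factor}: {note}")
```

The point of the structure is only that every variable-factor interaction gets an explicit cell, so no combination is overlooked during the consensus discussion.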

To meet the second objective, the lists of factors identified above will be used to create a set of strategies for prioritizing evidence. For a given topic, each strategy may result in a different set of included studies. Some of these strategies may be so strict as to yield zero included studies, and others may yield many studies. Some of these strategies may produce studies that, considered together, permit evidence-based conclusions, whereas other strategies produce studies that do not permit conclusions. Further, some strategies may be more accurate at effect size estimation than others. A list of possible strategies will be pared down to those most likely to be implemented by EPC reviewers.
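The idea that different prioritization strategies yield different included-study sets can be sketched as filters applied to the same candidate pool. The strategies and study attributes below are hypothetical stand-ins, not the protocol's actual strategies; the sketch only illustrates how a strict strategy may yield few (or zero) studies while a broad one yields many.

```python
# Hypothetical candidate studies with two of the inclusion variables
# mentioned in this protocol (randomization, direct comparison).
studies = [
    {"id": 1, "randomized": True, "direct": True},
    {"id": 2, "randomized": True, "direct": False},
    {"id": 3, "randomized": False, "direct": True},
]

# Hypothetical prioritization strategies, expressed as inclusion predicates.
strategies = {
    "strict": lambda s: s["randomized"] and s["direct"],   # RCTs, direct only
    "allow_indirect": lambda s: s["randomized"],           # RCTs, any comparison
    "broad": lambda s: True,                               # all designs
}

for name, include in strategies.items():
    included = [s["id"] for s in studies if include(s)]
    print(f"{name}: includes studies {included}")
```

Running the same pool through each strategy makes the trade-off concrete: the strict filter admits only study 1, while the broad filter admits all three.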

The third objective will be to create a list of methods for formally evaluating these strategies. For a given outcome when comparing two treatment options, a systematic review should yield conclusions about:

1. Whether the evidence is sufficient to permit a conclusion.
2. If so, whether the conclusion favors one treatment, or the other, or indicates near equivalence.
3. If the treatments differ, the estimated size of the effect.

We will first devise a list of ways to test these outputs of a systematic review. This will set the stage for a formal evaluation of the impact of variation in evidence prioritization strategies on review conclusions or strength of evidence. A separate proposal will be developed after completion of Phase I.

All collaborating EPCs will participate in the preparation of a report summarizing all of the above steps. This report will undergo peer review, and the final revised document will be posted on the AHRQ Web site.



Current as of September 2010

Internet Citation:

Creating a Framework for "Best Evidence" Approaches in Systematic Reviews, Review Protocol. September 2010. Agency for Healthcare Research and Quality, Rockville, MD. http://www.ahrq.gov/clinic/tp/bestevtp.htm

