How to Be an Informed Consumer of Evidence Ratings: It’s in the Details
CME ACTIVITY — Volume 16 — September 5, 2019
Alison Bergum, MPA1; Lael Grigg, MPA1; Marjory L. Givens, PhD1; Bridget Booske Catlin, PhD1; Julie Willems Van Dijk, PhD2
Suggested citation for this article: Bergum A, Grigg L, Givens ML, Booske Catlin B, Willems Van Dijk J. How to Be an Informed Consumer of Evidence Ratings: It’s in the Details. Prev Chronic Dis 2019;16:190067. DOI: http://dx.doi.org/10.5888/pcd16.190067.
MEDSCAPE CME
In support of improving patient care, this activity has been planned and implemented by Medscape, LLC and Preventing Chronic Disease. Medscape, LLC is jointly accredited by the Accreditation Council for Continuing Medical Education (ACCME), the Accreditation Council for Pharmacy Education (ACPE), and the American Nurses Credentialing Center (ANCC), to provide continuing education for the healthcare team.
Medscape, LLC designates this Journal-based CME activity for a maximum of 1.00 AMA PRA Category 1 Credit(s)™. Physicians should claim only the credit commensurate with the extent of their participation in the activity.
Successful completion of this CME activity, which includes participation in the evaluation component, enables the participant to earn up to 1.0 MOC points in the American Board of Internal Medicine’s (ABIM) Maintenance of Certification (MOC) program. Participants will earn MOC points equivalent to the amount of CME credits claimed for the activity. It is the CME activity provider’s responsibility to submit participant completion information to ACCME for the purpose of granting ABIM MOC credit.
Release date: September 5, 2019; Expiration date: September 5, 2020
Learning Objectives
Upon completion of this activity, participants will be able to:
- Describe how evidence clearinghouses rate evidence, according to a report and review
- Determine lessons learned from reviewing a sample of clearinghouses’ evidence of effectiveness ratings, according to a report and review
- Identify guidance needed by public health practitioners, community leaders, and policy makers to be informed consumers of evidence clearinghouses that summarize evidence about health improvement efforts, according to a report and review
EDITOR
Ellen Taratus, MS
Editor
Preventing Chronic Disease
Disclosure: Ellen Taratus, MS, has disclosed no relevant financial relationships.
CME AUTHOR
Laurie Barclay, MD
Freelance writer and reviewer
Medscape, LLC
Disclosure: Laurie Barclay, MD, has disclosed no relevant financial relationships.
AUTHORS
Alison Bergum, MPA
Population Health Institute
University of Wisconsin–Madison
Disclosure: Alison Bergum, MPA, has disclosed no relevant financial relationships.
Lael Grigg, MPA
Population Health Institute
University of Wisconsin–Madison
Disclosure: Lael Grigg, MPA, has disclosed no relevant financial relationships.
Marjory L. Givens, PhD
Population Health Institute
University of Wisconsin–Madison
Disclosure: Marjory L. Givens, PhD, has disclosed no relevant financial relationships.
Bridget Booske Catlin, PhD
Population Health Institute
University of Wisconsin–Madison
Disclosure: Bridget Booske Catlin, PhD, has disclosed no relevant financial relationships.
Julie Willems Van Dijk, PhD
Wisconsin Department of Health Services
Madison, Wisconsin
Disclosure: Julie Willems Van Dijk, PhD, has disclosed no relevant financial relationships.
PEER REVIEWED
On This Page
- Abstract
- What Is Evidence and Why Is It Important?
- Where Can Communities Find Evidence?
- How Is Evidence Rated?
- Key Lessons in Considering Evidence of Effectiveness Ratings Provided by Evidence Clearinghouses
- Guidance for Public Health Practitioners, Community Members, and Policy Makers
- Acknowledgments
- Author Information
- References
- Tables
- Appendix
Summary
What is already known on this topic?
Public health practitioners are increasingly aware of the importance of considering evidence about effectiveness when selecting strategies for implementation to improve community health.
What is added by this report?
This report offers an inventory of evidence clearinghouses that disseminate research on evidence of effectiveness and a review of the approaches used by the subset of these clearinghouses that provide summary ratings of evidence.
What are the implications for public health practice?
Understanding the types of strategies these clearinghouses review and how they develop their summary ratings is key knowledge for public health practitioners to make informed decisions about potential strategies for implementation.
Abstract
What are evidence-based strategies and how can public health practitioners find evidence without conducting extensive literature reviews? We developed an inventory of clearinghouses and other resources that disseminate research on evidence of effectiveness. We examined differences in evidence classification among 6 evidence clearinghouses that rate the effectiveness of community-level strategies to address determinants of health. Most evidence clearinghouses clearly defined their scope, but only a few clearinghouses explicitly defined the types of strategies they assess (eg, programs, policies, practices). The term “evidence-based” was widely used, but definitions and standards were inconsistent across organizations and disciplines. Evidence clearinghouses varied in the way they used evidence rating classifications and criteria for assigning ratings. Attention to detail is important. The criteria for the top rating of some evidence clearinghouses, for example, require a more thorough literature review with more robust results than the criteria for the top rating of others. In addition, some clearinghouses report only on strategies considered to be evidence-based, whereas others also report on strategies that have no effect, mixed evidence, or no qualifying studies, demonstrating that a listing of a strategy by an evidence clearinghouse does not necessarily mean that it is effective. We conclude by providing guidance for users of evidence clearinghouses about how to interpret and effectively apply rating criteria across platforms: look closely at the details of how clearinghouses assign their ratings and be aware of similarities and differences when you are aligning potential strategies with your local priorities. We encourage communities to balance evidence with local needs, resources, and culture in strategy selection and funding decisions.
What Is Evidence and Why Is It Important?
Since the early 1990s, evidence-based decision making has gained prominence in the field of medicine, followed by the field of public health. In medicine and public health, evidence typically refers to research evidence, rather than experiential or contextual evidence (1,2). Our study examines best available research evidence in terms of both strength of evidence and effectiveness. “Strength of evidence” refers to how rigorously a program, policy, or practice has been evaluated and to the quality and quantity of evidence available to determine whether the program or policy is producing the desired outcomes. Effectiveness considers whether the outcomes observed are, in fact, a product of the program, policy, or practice itself and whether those outcomes are desirable or undesirable (2).
Systematic reviews of randomized controlled trials (RCTs) are widely recognized as the gold standard of intervention research. Such reviews follow an established process for searching, critically appraising, and summarizing results of research studies, accounting for all relevant qualifying studies and their results and establishing whether research findings are consistent and generalizable across populations and settings. Individual studies (for example, an RCT, a cohort study, a case-control study, a case series, and a case report) vary in strength of evidence. Sometimes, however, no study is available, and practitioners might turn to expert opinion (3,4). Researchers acknowledge that best evidence can exist in various forms (5), often in tandem with contextual factors such as clinical expertise, patient preference, and environmental and organizational context (6). Medical literature describes various methods for assessing evidence to support clinical practice recommendations, such as the Grading of Recommendations Assessment, Development and Evaluation (GRADE) system (7); however, these methods are primarily designed to evaluate clinical practice rather than community-based interventions.
Community leaders and practitioners have numerous approaches to finding evidence and varying criteria for considering evidence in decisions (8). Understanding the details of approaches to synthesizing and rating evidence can help practitioners harness best evidence to implement locally applicable, effective solutions. Using what has been successful through research, evidence can drive smart investments and support wise allocation of scarce dollars and other resources. And, knowing whether strategies exist to address local priorities can inform decisions about when to innovate and when to adopt strategies that have already been tested and shown to be effective. When strategies that support local priorities have strong evidence of effectiveness, practitioners have a solid starting place for action. When strategies that support local priorities do not have strong evidence of effectiveness or cannot be implemented with fidelity, which increases the likelihood of expected results (9), innovation using new untested strategies can be a better approach, especially when combined with evaluation.
Where Can Communities Find Evidence?
Searching the scientific literature for RCTs or other studies is often not feasible for public health practitioners or community members, largely because of limited time and access to scientific literature (10). Evidence clearinghouses offer registries of strategies that communities can implement to address local priorities. Some evidence clearinghouses also review and assess evidence to rate strategies on the basis of the strength of the evidence of effectiveness. All aim to help guide local strategy selection or design decisions, but approaches differ. Our use of the term “evidence clearinghouse” refers to all clearinghouses that support this aim, incorporating a spectrum of methods and content areas.
We developed a comprehensive, but not exhaustive, inventory of evidence clearinghouses and other resources that summarize evidence on strategies that address the multiple determinants of health (Table 1). We focused on clearinghouses that regularly update content and make it available through searchable web-based platforms. We identified these clearinghouses through a general internet search using terms such as “evidence ratings” and “research clearinghouses” and by reviewing inventories compiled by groups such as the Results First Initiative (14), the Bridgespan Group (15), and the Corporation for National and Community Service (16). Some clearinghouses, such as healthevidence.org and Strengthening Families Evidence Review, focus on the quality of individual studies. Others, such as The Guide to Community Preventive Services (The Community Guide), conduct systematic reviews and provide a summary rating. Clearinghouses such as What Works for Health (WWFH) (the authors’ clearinghouse, part of the County Health Rankings & Roadmaps program) consider study quality and rate intervention effectiveness. Our inventory notes 21 clearinghouses that rate intervention effectiveness (Table 1).
Each clearinghouse has its own scope of interest, methods, and rating classifications. Many clearinghouses also provide additional content to accompany evidence ratings and support effective decision making. Some, such as The Community Guide, WWFH, and Social Programs that Work (SPTW), provide cost-related information. This information ranges from The Community Guide’s economic effectiveness analysis (conducted for strategies they rate as “recommended”) to study details noted by WWFH and SPTW. Some also emphasize tools or content that can bolster efforts to increase equity or reduce disparities in health-related outcomes. WWFH, for example, assesses the likely effect of each strategy among socioeconomic, racial/ethnic, and geographic groups. Many clearinghouses that assess and rate evidence also provide examples, stories, or other action-focused resources to support implementation.
How Is Evidence Rated?
To understand how evidence clearinghouses rate evidence, we selected a sample of clearinghouses that provide evidence of effectiveness ratings for strategies that affect multiple determinants of health. Multiple determinants of health are defined in several ways, for example, “genetics, behavior, social circumstances, environmental and physical influences, and medical care” (17). Both the County Health Rankings model (18), on which WWFH is based, and this analysis exclude genetics. In selecting our sample, we excluded clearinghouses that rate the quality of individual studies about an intervention but do not assess the effectiveness of that intervention overall. We also excluded clearinghouses that indicated their content is no longer updated. We minimized the inclusion of clearinghouses that are part of the Results First Clearinghouse Database, “an online resource that brings together information on the effectiveness of social policy programs from nine national clearinghouses” (12), because Results First provides tables to help users compare and contrast these ratings (11).
Our focused review examined the work of the following 6 evidence clearinghouses: Best Evidence Encyclopedia (BEE); The Community Guide; Healthy Communities Institute (HCI); Rural Health Information Hub (RHIhub); SPTW (formerly the Coalition for Evidence-Based Policy); and WWFH.
We conducted a qualitative analysis of the scope, methods, and ratings as described on the website of each of the 6 selected clearinghouses, with particular attention to the literature assessment (eg, literature review, systematic review), the criteria used to assess the quality of individual studies, and the type and number of studies required to establish each rating. We also considered scope of interest and the types of strategies assessed. We completed reviews in September 2018 and confirmed our information in October 2018. We invited staff members from each of the 6 clearinghouses to provide feedback on the accuracy of our information.
Each evidence clearinghouse has its own scope of interest (Table 2). The types of strategies (eg, programs, policies) assessed also vary, and selection of these strategies is largely tied to scope of interest and approach to compiling and assessing the literature. BEE, SPTW, and WWFH monitor topic-relevant research to identify potential strategies for assessment; SPTW and WWFH consult with experts. The Community Guide has a set process and priority-setting criteria to determine which strategies will be assessed. HCI accepts submissions and reviews them for inclusion on all community sites; local site administrators can decide whether to include submissions that are not selected for inclusion on all HCI sites. RHIhub also accepts submissions and includes programs that address rural health issues, are implemented in a rural US community, and include a program contact.
Scope of interest and type of strategies assessed. WWFH, HCI, and The Community Guide address multiple determinants of health; the latter two also address several diseases and injuries. RHIhub focuses on programs and interventions in rural communities. SPTW focuses on social programs and BEE on education programs. Some, such as The Community Guide and WWFH, emphasize broadly defined policy, systems, and environmental change (PSE) strategies; WWFH also includes some named programs, such as Nurse Family Partnership and Reach Out and Read. Other clearinghouses, such as SPTW and HCI, focus more heavily on named programs.
Approach to compiling and assessing literature. The websites of these 6 clearinghouses indicate various approaches to compiling and assessing available literature in support of their evidence ratings. The Community Guide and SPTW conduct systematic reviews, and BEE conducts systematic reviews with meta-analysis. WWFH conducts an extensive literature review, informed by the principles of systematic review methods, to capture and assess available evidence in a shorter time frame than systematic reviews, allowing inclusion of more strategies than the aforementioned clearinghouses. HCI and RHIhub seek and accept submissions from evaluators, practitioners, and others, fostering dissemination of early practice-based results. Review criteria for submissions were not apparent in our search of these 2 websites; it was also unclear whether a formal literature review process is used to inform evidence ratings.
Study types considered. Studies vary in their ability to determine causality; reviewed clearinghouses vary in the types of studies required to support evidence rating assignments. SPTW and BEE include RCTs and strong quasi-experimental designs (QEDs) as the foundation for their rating assignments. The Community Guide and WWFH include RCTs, QEDs, and some weaker study designs in their reviews. Strong QEDs are based on sound theory, use comparison groups, and typically include multiple measurement points; weaker study designs are also based on sound theory but do not have comparison groups and might not include multiple measurement points (2). Although HCI and RHIhub require peer-reviewed studies, these clearinghouses generally do not specify the types of studies required for each rating. HCI and RHIhub describe pre–post designs for their highest rating categories and appear to assign this rating to strategies studied with or without comparison groups.
Replicability. The 6 clearinghouses also vary in their approach to replication, or demonstrations of generalizability, which is important to ensure a study’s results are valid in different settings, with different populations, or at different times (2). BEE, SPTW, and WWFH require multiple strong studies, a strong study implemented in multiple sites, or systematic review(s) of strong studies for their highest evidence ratings. The Community Guide conducts an applicability assessment process to evaluate generalizability along with the criteria used to assign their highest evidence rating. RHIhub requires successful implementation in more than 1 community via peer-reviewed program evaluations as a means to gauge replicability. HCI does not appear to require a demonstration of replication; its highest rating category can be assigned on the basis of 1 study that demonstrates program success in 1 or more locations.
Rating categories. Each of the 6 clearinghouses has a unique scale for rating evidence and a unique number of ratings (Table 3). Most ratings indicate degree of effectiveness, and some ratings indicate additional evidence is needed. Most rating categories are favorable (eg, “strong,” “recommended,” “effective”), but WWFH and The Community Guide also assign unfavorable ratings: WWFH assigns “evidence of ineffectiveness,” and The Community Guide assigns “recommended against.” WWFH is the only organization with the rating “expert opinion.” “Expert opinion” is assigned to new strategies or innovations that have limited or no qualifying research but are recommended by credible, impartial experts. Additionally, this category may be indicated for strategies with benefits that are not described in empirical literature (eg, adding a dental clinic in a rural area without dental providers improves access to oral health care for at least some residents) or are difficult to test. RCTs are not always practical, as clearly pointed out by Smith and Pell in their systematic review of studies examining parachute use (19). WWFH also differentiates between “mixed evidence” (when strategies have been tested more than once in strong studies and results are inconsistent) and “insufficient evidence” (when too few studies assess the strategy of interest), whereas other clearinghouses might not; for example, The Community Guide covers both categories under “insufficient evidence.”
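To make these differences easier to compare at a glance, the following sketch encodes them as a simple data structure (Python is used here purely for illustration). The field values are shorthand paraphrases of the review above, not the clearinghouses’ official criteria or rating labels; a real comparison should rely on Tables 2 and 3 and on each clearinghouse’s website.

```python
# Illustrative only: entries paraphrase this article's qualitative review
# (and its Tables 2 and 3); they are not the clearinghouses' official
# rubrics or rating labels. Consult each website for authoritative criteria.

from dataclasses import dataclass

@dataclass
class RatingCriteria:
    clearinghouse: str
    study_designs: str         # designs emphasized in rating assignments
    replication: str           # how replication/generalizability is handled
    reports_ineffective: bool  # lists strategies with unfavorable ratings
    has_expert_opinion: bool   # offers an "expert opinion" category

PROFILES = [
    RatingCriteria("BEE", "RCTs and strong QEDs",
                   "multiple strong studies or systematic review(s)", False, False),
    RatingCriteria("The Community Guide", "RCTs, QEDs, and some weaker designs",
                   "applicability assessment of generalizability", True, False),
    RatingCriteria("HCI", "peer-reviewed studies, including pre-post designs",
                   "no replication requirement apparent", False, False),
    RatingCriteria("RHIhub", "peer-reviewed studies, including pre-post designs",
                   "implementation in more than 1 rural community", False, False),
    RatingCriteria("SPTW", "RCTs and strong QEDs",
                   "multiple strong studies or a multisite strong study", False, False),
    RatingCriteria("WWFH", "RCTs, QEDs, and some weaker designs",
                   "multiple strong studies or systematic review(s)", True, True),
]

# A quick scan makes the article's central point concrete: a "top" rating
# does not rest on the same evidence everywhere.
for p in PROFILES:
    print(f"{p.clearinghouse:20} | designs: {p.study_designs} | "
          f"replication: {p.replication} | "
          f"reports ineffective: {p.reports_ineffective} | "
          f"expert opinion category: {p.has_expert_opinion}")
```

The same structure could be extended with each clearinghouse’s scope of interest or cost-related content when deciding where to search first.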
Key Lessons in Considering Evidence of Effectiveness Ratings Provided by Evidence Clearinghouses
Look for information about the scope of interest and types of strategies included. Most evidence clearinghouses in our review clearly define their scope of interest and outline a framework for the topics covered. However, the types of strategies assessed (eg, policy, program) might not be so well defined. Understanding the scope and types of strategies covered can help users search appropriately for strategies to address local priorities.
Ascertain what constitutes “evidence-based” for each clearinghouse, because no consensus exists. Among the clearinghouses we examined, there is no universal definition of “evidence-based.” Clearinghouses vary in the terminology they use to describe levels of evidence and effectiveness and the criteria used to assign their ratings. Although evidence clearinghouses provide a streamlined way to learn about evidence, it is important for practitioners to pay attention to how each clearinghouse defines each term used in their rating classifications.
Understand that evidence clearinghouses weight research designs differently. Some, but not all, clearinghouses give greater weight to evidence from systematic reviews, RCTs, and strong QEDs than to other study types, particularly in their highest evidence rating categories. Systematic reviews and RCTs are recognized as the gold standard for establishing effectiveness; seeking out interventions with this level of evidence can be important when a community is scaling up an intervention or investing substantial time or money, or when political stakes for success are high.
Recognize differences in evidence clearinghouses’ requirements for literature review and their considerations of study quality and quantity. Some clearinghouses search for evidence more systematically and judge study quality and design more strictly than others. Some also emphasize replicability more heavily. Yet others focus more on dissemination of early practice-based results. Understanding the breadth and replicability of studies provides practitioners with critical information as they consider deploying interventions in their own community.
Be aware that most evidence clearinghouses do not assign ratings for ineffectiveness, expert opinion, or mixed results. Only 2 clearinghouses that we examined closely include information about strategies with evidence of ineffectiveness, and WWFH is the only one that has the category “expert opinion.” Exploring evidence along the entire continuum of effectiveness can provide practitioners with information about ineffective policies or programs that might need to end, strategies with mixed evidence that may need a closer look, and strategies rated “insufficient evidence” or “expert opinion” that may especially benefit from more rigorous evaluation designs.
In general, more focus appears to be on what works rather than on what does not or is unknown. This discrepancy likely arises, at least in part, because more literature is available on what works than on what does not, a pattern driven partly by publication bias (20). This focus on what works raises 2 important caveats. First, inclusion of a strategy in an evidence clearinghouse should not be considered a recommendation for implementation, because included strategies are sometimes ineffective. Second, little is known about strategies that are not listed in evidence clearinghouses. Are they ineffective, or have they simply not been studied or reviewed for inclusion?
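For practitioners who keep a working list of candidate strategies, a minimal sketch of this caveat in practice is shown below. The strategy names, rating labels, and suggested next steps are hypothetical placeholders loosely modeled on the categories discussed in this article, not outputs of any actual clearinghouse.

```python
# Hypothetical example: a listing in a clearinghouse is not an endorsement.
# Strategy names, rating labels, and next steps below are placeholders.

from collections import defaultdict

# (strategy, rating) pairs as a practitioner might record them after
# searching one or more clearinghouses.
candidate_strategies = [
    ("Strategy A", "evidence of effectiveness"),
    ("Strategy B", "evidence of ineffectiveness"),
    ("Strategy C", "mixed evidence"),
    ("Strategy D", "insufficient evidence"),
    ("Strategy E", "expert opinion"),
    ("Strategy F", "not listed"),  # absent from the clearinghouses searched
]

# Map each point on the continuum of effectiveness to a next step so that
# unfavorable or unrated strategies are not mistaken for endorsements.
NEXT_STEPS = {
    "evidence of effectiveness": "candidate for adoption",
    "evidence of ineffectiveness": "reconsider or discontinue",
    "mixed evidence": "examine study details before deciding",
    "insufficient evidence": "pilot with rigorous evaluation",
    "expert opinion": "pilot with rigorous evaluation",
    "not listed": "unknown: unstudied, or simply not yet reviewed",
}

grouped = defaultdict(list)
for name, rating in candidate_strategies:
    grouped[NEXT_STEPS.get(rating, "check the rating definition")].append(name)

for next_step, names in grouped.items():
    print(f"{next_step}: {', '.join(names)}")
```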
Guidance for Public Health Practitioners, Community Members, and Policy Makers
What knowledge do community leaders and policy makers need to be informed consumers of evidence clearinghouses that summarize evidence about health improvement efforts? As demonstrated in our qualitative review of publicly available data and in a 2016 assessment of education-related evidence resources, “the methods used in these syntheses vary in fundamental ways” (20). In using any evidence clearinghouse, paying attention to the fine print is important. Each clearinghouse has a unique approach to assessing evidence and communicating effectiveness. In particular, the top evidence rating, which identifies the strategies considered most effective, requires a less thorough literature search and less robust results in some clearinghouses than in others. This variability reflects different choices in search methods, replication requirements, and often, the scope of strategies included. Users of such clearinghouses can consult our list of key lessons as they examine the criteria of each clearinghouse to ensure that they understand the ratings and confirm ratings align with their local expectations and goals. Going forward, evaluation is needed to ensure that selected strategies work in the local population, setting, and context, as well as to add new examples to the evidence base.
Caution should be taken in implementing strategies that are found to have no effect or mixed results; communities interested in such strategies should consider study results, possible modifications to the strategy, and implications of implementation fidelity. Strategies for which literature reviews yield no qualifying studies might simply be too new to determine likely effectiveness. In these situations, conducting a pilot or implementing a rigorous evaluation to be sure that these strategies do, in fact, achieve expected outcomes is a wise approach.
Finally, evidence clearly matters to decision making, but so do other factors. Knowledge building is a continuous process, and the creativity of local communities in addressing perplexing challenges, accompanied by a “test and see” approach, is often a source of new evidence. Local culture, potential effect on disparities, feasibility, and cost are also important considerations. Purposeful approaches to balance these factors, along with evidence of effectiveness, can best support efforts to select strategies that will appropriately address local priorities.
Acknowledgments
We are grateful for funding from the Robert Wood Johnson Foundation and the Wisconsin Partnership Program at the University of Wisconsin School of Medicine and Public Health and assistance from current and former evidence analysts and project assistants: Jessica Rubenstein, Bomi Kim Hirsch, Jessica Solcz, Jennifer Russ, Katharine Austin-Stanford, Kiersten Frobom, and Jane Sachs.
Author Information
Corresponding Author: Alison Bergum, MPA, University of Wisconsin Population Health Institute, 610 Walnut St, WARF 524, Madison, WI 53726. Telephone: 608-263-2624. Email: alison.bergum@chrr.wisc.edu.
Author Affiliations: 1Population Health Institute, University of Wisconsin–Madison, Madison, Wisconsin. 2Wisconsin Department of Health Services, Madison, Wisconsin.
References
- Jenicek M. Epidemiology, evidenced-based medicine, and evidence-based public health. J Epidemiol 1997;7(4):187–97. CrossRef PubMed
- Puddy RW, Wilkins N. Understanding evidence part 1: best available research evidence. A guide to the continuum of evidence of effectiveness. Atlanta (GA): Centers for Disease Control and Prevention; 2011. https://www.cdc.gov/violenceprevention/pdf/understanding_evidence-a.pdf. Accessed April 11, 2019.
- Akobeng AK. Understanding randomised controlled trials. Arch Dis Child 2005;90(8):840–4. CrossRef PubMed
- Briss PA, Zaza S, Pappaioanou M, Fielding J, Wright-De Agüero L, Truman BI, et al. Developing an evidence-based Guide to Community Preventive Services — methods. Am J Prev Med 2000;18(Suppl 1):35–43. CrossRef PubMed
- Braveman PA, Egerter SA, Woolf SH, Marks JS. When do we know enough to recommend action on the social determinants of health? Am J Prev Med 2011;40(Suppl 1):S58–66. CrossRef PubMed
- Satterfield JM, Spring B, Brownson RC, Mullen EJ, Newhouse RP, Walker BB, et al. Toward a transdisciplinary model of evidence-based practice. Milbank Q 2009;87(2):368–90. CrossRef PubMed
- Guyatt GH, Oxman AD, Vist GE, Kunz R, Falck-Ytter Y, Alonso-Coello P, et al. GRADE: an emerging consensus on rating quality of evidence and strength of recommendations. BMJ 2008;336(7650):924–6. CrossRef PubMed
- Brownson RC, Fielding JE, Maylahn CM. Evidence-based public health: a fundamental concept for public health practice. Annu Rev Public Health 2009;30(1):175–201. CrossRef PubMed
- Breitenstein SM, Gross D, Garvey CA, Hill C, Fogg L, Resnick B. Implementation fidelity in community-based interventions. Res Nurs Health 2010;33(2):164–73. PubMed
- Brownson RC, Eyler AA, Harris JK, Moore JB, Tabak RG. Getting the word out: new approaches for disseminating public health science. J Public Health Manag Pract 2018;24(2):102–11. CrossRef PubMed
- Pew-MacArthur Results First Initiative. Results First clearinghouse database user guide. 2015. http://www.pewtrusts.org/~/media/Assets/2015/06/Results_First_Clearinghouse_Database_User_Guide.pdf. Accessed April 11, 2019.
- The Pew Charitable Trusts. Results First clearinghouse database. 2018. http://www.pewtrusts.org/en/multimedia/data-visualizations/2015/results-first-clearinghouse-database. Accessed April 11, 2019.
- Youth.gov. Implementing evidence-based programs: program directory. https://youth.gov/evidence-innovation#program-directory. Accessed June 3, 2019.
- Davies E, Silloway T. Research clearinghouses. Evidence-Based Policymaking Collaborative; 2016. http://www.evidencecollaborative.org/toolkits/research-clearinghouses. Accessed April 18, 2019.
- Neuhoff A, Axworthy S, Glazer S, Berfond D. The what works marketplace: helping leaders use evidence to make smarter choices. The Bridgespan Group, Results for America; 2015. http://results4america.org/wp-content/uploads/2016/11/WhatWorksMarketplace-vF.pdf. Accessed April 18, 2019.
- Corporation for National & Community Service. Clearinghouses and evidence reviews for social benefit programs. 2016. https://www.nationalservice.gov/sites/default/files/documents/Clearinghouses%20and%20Evidence%20Reviews.pdf. Accessed April 18, 2019.
- McGovern L, Miller G, Hughes-Cromwick P. The relative contribution of multiple determinants to health outcomes. Health Affairs Health Policy Brief 2014. https://www.healthaffairs.org/do/10.1377/hpb20140821.404487/full/. Accessed April 11, 2019.
- Remington PL, Catlin BB, Gennuso KP. The County Health Rankings: rationale and methods. Popul Health Metr 2015;13(1):11. CrossRef PubMed
- Smith GC, Pell JP. Parachute use to prevent death and major trauma related to gravitational challenge: systematic review of randomised controlled trials. BMJ 2003;327(7429):1459–61. CrossRef PubMed
- Slavin RE. Perspectives on evidence-based research in education: what works? Issues in synthesizing educational program evaluations. Educ Res 2016;37(1):5–14. CrossRef