The importance of meta-analysis and systematic review: How research legacy can be maximised through adequate reporting
Systematic reviews are widely accepted as a ‘gold standard’ in evidence synthesis, and the meta-analyses within them provide a powerful means of looking across datasets. Neal Haddaway argues that while certain fields have embraced these reviews, there is a great opportunity for their growth in other fields. One way to encourage secondary synthesis is for researchers to ensure their data are reported in sufficient detail. Thinking carefully about legacy and future use of data is not only sensible, but should be an obligation.
How does research make its way into policy?
Many academics, my former self included, have a rather romantic view of the science-policy interface: a bedraggled researcher runs to Westminster clutching their latest research paper, ready to thrust the ground-breaking findings beneath the waiting noses of thumb-twiddling policy-makers, desperate for some form of evidence to help them to reach a decision. In reality, decision-makers use a wide variety of evidence, from constituents’ opinions to financial considerations, and from scientific research to media attention. With the exception of a technocracy, research forms only part of the picture in decision-making. That said, decisions should always be based on the best available evidence, whatever form that might take.
Whilst policy-makers may turn to science to guide their paths, they often do not have the time or training to trawl some of the 100,000+ academic journals to find relevant research evidence. Some organisations advocate the building of relationships between researchers and policy-makers, enabling research findings to be provided to those with an evidence need. Others believe that this can lead to unacceptable subjectivity where a topic is contentious or the published research is contradictory. Either way, a reliable review of research literature provides a quick and relatively low-cost means of summarising a large body of evidence and can provide an unbiased assessment where there are contradictory findings. Reviews have long been commissioned as part of the policy-making process, but the past decade has seen a rise in the number of systematic reviews being commissioned by national and international policy-makers.
Systematic reviews are, in essence, literature reviews that are undertaken in a specific way according to strict guidelines that aim to minimise subjectivity, maximise transparency and repeatability, and provide a highly reliable review of evidence pertaining to a specific topic. Methods in environmental sciences and conservation are outlined by the Collaboration for Environmental Evidence, in social sciences by the Campbell Collaboration, and in medicine by the Cochrane Collaboration. However, systematic reviews are now widely used across a plethora of fields, including construction, psychology, economics and marketing, to name a few. In brief, systematic review methods use peer-reviewed and published protocols to lay out the methods for a review; searches for studies, screening of articles for relevance and quality, and data extraction and synthesis are then undertaken according to this predetermined strategy. Where possible, meta-analysis provides a powerful means of statistically combining studies to look for patterns across the evidence base and to examine reasons for contradictory results where they occur (a minimal sketch of this combining step follows below). Systematic reviews are widely accepted as a ‘gold standard’ in evidence synthesis, but other methods, such as civil service rapid evidence assessments, have been developed that aim to offer a faster review, albeit to a lower level of rigour.
The PRISMA flow diagram, depicting the flow of information through the different phases of a systematic review (Wikimedia, GFDL)
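To make that combining step concrete, here is a minimal sketch of one standard approach, inverse-variance weighting, in which each study’s effect size is weighted by the inverse of its sampling variance. All of the numbers are hypothetical and are not drawn from any review discussed here.

```python
# Minimal sketch: fixed-effect meta-analysis via inverse-variance weighting.
# The effect sizes and variances below are hypothetical illustrations.

effects = [0.42, 0.10, 0.35, -0.05]   # per-study effect sizes (e.g. Hedges' g)
variances = [0.04, 0.09, 0.02, 0.12]  # their sampling variances

# Precise studies (large samples, low variability) get larger weights,
# so they contribute more to the pooled estimate.
weights = [1 / v for v in variances]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"Pooled effect: {pooled:.3f} (SE {pooled_se:.3f})")
```

Note that the calculation runs entirely on effect sizes and their variances: if a primary study does not report enough detail to derive these, it simply cannot contribute to the synthesis.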
Why should we care about secondary synthesis?
Systematic reviews are an attractive method for decision-makers for a number of reasons. Not only do they summarise large volumes of evidence, but they are often much cheaper than commissioning new primary research. Furthermore, they can include a wide range of evidence from diverse situations, meaning that factors such as long timescales and wide geographical ranges can be examined that would be highly challenging to cover in a single primary research project.
Systematic reviews may seem rather time-consuming and expensive (typical durations range from around 9 to 24 months, and costs are quoted at between c. £20,000 and £200,000), but these resource requirements are low relative to costly interventions that could be ineffectual or even cause more harm than good. Bat bridges, for example, cost between c. £30,000 and £300,000 each; if it were feasible, a systematic review on the topic might show that they are ineffectual at reducing road-related bat mortality, saving orders of magnitude more money than the review’s cost when used in national policy-making.
In medicine, systematic reviews have become an industry standard and are well understood by all contemporary practitioners. In the environmental sciences, however, despite their high utility and increasing acceptance by decision-makers, systematic reviews and meta-analyses are not particularly well understood. This lack of understanding has led to the present situation, where reviewers must spend considerable effort extracting data in a format usable for systematic reviews and meta-analyses, and where the information that reviews need is often simply not provided.
How can we make secondary synthesis easier?
In order to accurately assess the reliability and applicability of individual primary research, reviewers must be able to extract information relating to study design, experimental procedure and the studies’ findings. In order for study findings to be included in a meta-analysis, data must be reported either as a standard effect size or as means, and in both cases accompanied by sample sizes and measures of variability. In my experience of systematic reviews in conservation and environmental management, a shocking proportion of published research fails to provide details on experimental design or fails to report means, variability and sample size: in one systematic review currently underway in agriculture and soil science, 46% of studies failed to report any measure of variability. Without this information it is impossible to assess whether observed differences between groups are likely to be real differences or just chance variation in sampling, as the sketch below illustrates.
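To illustrate why all three quantities matter, the sketch below computes a standardised effect size (Hedges’ g) from a study’s reported summary statistics. The tillage scenario and every number are hypothetical; the point is that removing the standard deviations or the sample sizes makes the calculation impossible, not merely imprecise.

```python
import math

def hedges_g(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Standardised mean difference (Hedges' g) between a treatment and a
    control group. Means, standard deviations AND sample sizes are all
    required: omit any of them and the effect size cannot be computed."""
    # Pooled standard deviation across the two groups
    sd_pooled = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                          / (n_t + n_c - 2))
    d = (mean_t - mean_c) / sd_pooled      # Cohen's d
    j = 1 - 3 / (4 * (n_t + n_c) - 9)      # small-sample bias correction
    return d * j

# Hypothetical study: soil carbon (%) under reduced vs conventional tillage
print(hedges_g(mean_t=2.4, sd_t=0.6, n_t=12, mean_c=2.0, sd_c=0.5, n_c=12))
```

A paper that reports only “soil carbon was higher under reduced tillage (2.4% vs 2.0%)” gives a reviewer the means but none of the variability or sample-size information needed above, and so cannot be included in the quantitative synthesis.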
Some journals and publishers now require raw data to be published alongside research articles (e.g. PLOS ONE), and this will undoubtedly help to improve the rate at which primary research can be included in meta-analyses. However, until this policy becomes universal, researchers should ensure that they report in sufficient detail to allow their methods to be critically appraised and their data to be assimilated in secondary syntheses.
As decision-making moves towards secondary synthesis of existing evidence, researchers in all disciplines should be thinking about how to maximise the impact of their research. Thinking carefully about legacy and future use of data is not only sensible, but should be an obligation.
This post is based on the article A call for better reporting of conservation research data for use in meta-analyses, written by Neal Haddaway and published in Conservation Biology on 14 January 2015.
Note: This article gives the views of the author, and not the position of the Impact of Social Sciences blog, nor of the London School of Economics. Please review our Comments Policy if you have any concerns on posting a comment below.
Neal Haddaway is a conservation biologist working for MISTRA EviEM at the Royal Swedish Academy of Sciences in Stockholm. Neal has a background in aquatic conservation and ecology and for the last three years has been researching how scientists interact with decision-makers through evidence reviews.