Tuesday, 1 November 2016

MercatorNet: ‘Peer review’, a buzzword in deep trouble


Medical research, psychology, and economics are all in the grip of a 'reproducibility crisis'
Donna Laframboise | Nov 1 2016


We've all heard the buzzword. Whether it's an anti-bullying program in Finland, an alcohol awareness initiative in Texas, or climate change responses around the globe, we're continually assured that government policies are “evidence-based”. Science itself guides our footsteps.
There's just one problem: science is in deep trouble. Last year, Richard Horton, editor of The Lancet, admitted that "much of the scientific literature, perhaps half, may simply be untrue." In his words, "science has taken a turn toward darkness."
Medical research, psychology, and economics are all in the grip of a “reproducibility crisis”. A pharmaceutical company attempting to confirm the findings of 53 landmark cancer studies was successful in only six instances, a failure rate of 89 percent. In 2012, a psychology journal devoted an entire issue to reliability problems in that discipline, with one essay titled "Why science is not necessarily self-correcting." Likewise, a 2015 report prepared for the Board of Governors of the US Federal Reserve concluded that "economics research is usually not replicable." Its authors were able to verify the findings of only one-third of 67 papers published in reputable economics journals. After enlisting the help of the original researchers, the success rate rose to a still dismal 49 percent.
Government policies can't be considered evidence-based if the evidence on which they depend hasn't been independently verified. Yet the vast majority of academic research is never put to this test. Instead, something called peer review takes place. When a research paper is submitted, journals invite a couple of people to evaluate it. Known as referees, these individuals recommend that the paper be published, modified, or rejected.
If one gets what one pays for, it's worth observing that referees typically work for free. They lack both the time and the resources to perform anything other than a cursory overview. Nothing like an audit occurs. No one examines the raw data for accuracy or the computer code for errors. Peer review doesn't guarantee that proper statistical analyses were employed or that lab equipment was used properly.
Referees at the most prestigious of journals have given the green light to research that was later found to be wholly fraudulent. Conversely, they've scoffed at work that went on to win Nobel Prizes. Richard Smith, a former editor of the British Medical Journal, describes peer review as a roulette wheel, a lottery, and a black box. He points out that an extensive body of research finds scant evidence that this vetting process accomplishes much at all. On the other hand, a mountain of scholarship has identified profound deficiencies.
Peer review's random and arbitrary nature was demonstrated as early as 1982. Twelve already-published papers were assigned fictitious author and institution names before being resubmitted to the same journal 18-32 months later. The duplication was noticed in three instances, but the remaining nine papers underwent review by two referees each. Only one paper was deemed worthy of seeing the light of day the second time it was examined by the same journal that had already published it. Lack of originality wasn't among the concerns raised by the second wave of referees.
Anyone can start a scholarly journal and define peer review however they wish. No minimum standards apply and no enforcement mechanisms ensure that a journal's publicly described policies are actually followed. Some editors admit to writing up fake reviews under the cover of anonymity rather than going to the trouble of recruiting bona fide referees.
In 2014, a news story reported that 120 papers containing computer-generated gibberish had nevertheless survived the peer review process of reputable publishers.
Politicians and journalists have long found it convenient to regard peer-reviewed research as de facto sound science.
If that were the case, Nature would hardly have subtitled a February 2016 article: "Mistakes in peer-reviewed papers are easy to find but hard to fix." Over a period of 18 months, a team of researchers attempted to correct dozens of substantial errors in nutrition and obesity research. Among these was the claim that the height change in a group of adults averaged nearly three inches (7 cm) over eight weeks.
The team reported that editors "seemed unprepared or ill-equipped to investigate, take action or even respond." In Kafkaesque fashion, after months of effort culminated in acknowledgement of a gaffe, journals then demanded that the team pay US$1,700 in one instance and $2,100 in another before a letter calling attention to other people's mistakes could be published.
Which brings us back to the matter of public policy. We've long been assured that reports produced by the UN's Intergovernmental Panel on Climate Change (IPCC) are authoritative because they rely entirely on peer-reviewed, scientific literature. A 2010 InterAcademy Council investigation found this claim to be false, but that's another story.
Even if all IPCC source material did meet this threshold, the fact that one out of an estimated 25,000 academic journals conducted an unspecified and unregulated peer review ritual is no warranty that a paper isn't total nonsense.
If half of the scientific literature "may simply be untrue," then half of the climate research cited by the IPCC may also be untrue. This appalling unreliability extends to work on dietary cholesterol, domestic violence, air pollution -- in short, to all research currently being generated by the academy.
The US National Science Foundation recently reminded us that a scientific finding "cannot be regarded as an empirical fact" unless it has been "independently verified."
Peer review does not perform that function. Until governments begin authenticating research prior to using it as the foundation for new laws and huge expenditures, don't fall for the claim that policy X is evidence-based.
Donna Laframboise is the author of "Peer Review: Why skepticism is essential," a report published yesterday by the London-based Global Warming Policy Foundation.


MercatorNet

November 1, 2016

I would like to lodge a complaint with Fate, or Destiny, or That's The Way The Cookie Crumbles. Or whoever. It happens every year on the first Tuesday of November in Australia. I always have a horse in the Melbourne Cup, the race that stops the nation, and I never win anything.
This year, I backed Excess Knowledge at 60 to 1. His name appealed to me, as a person who knows too much about everything and not very much about anything. He placed 16th. You may say, "well, what did you expect?" And I will respond, well, I expected better after last year when Prince of Penzance galloped home at 100 to 1.
I wouldn't mind if it happened just once, but it happens often. In fact, Every. Single. Year. With. Out. Fail.
Sorry, I just had to get this off my chest. Editors are human too, you know. 
Canadian journalist Donna Laframboise has written the lead story today, on peer review and the reproducibility crisis in science. Last year, she reports, Richard Horton, editor of The Lancet, admitted that "much of the scientific literature, perhaps half, may simply be untrue." If this is the case in medicine and other scientific disciplines, how about the social sciences? 


Michael Cook
Editor
MERCATORNET



Election 2016, one week out
By Sheila Liaugminas
How to summarize?
Read the full article
 
 
How a reforming pope can help heal the Reformation rift
By Austen Ivereigh
The visit of Pope Francis to Sweden is a sign of a slow thaw in relations with Lutherans
Read the full article
 
 
‘Peer review’, a buzzword in deep trouble
By Donna Laframboise
Medical research, psychology, and economics are all in the grip of a 'reproducibility crisis'
Read the full article
 
 
The benefits of boredom
By Tamara El-Rahi
Keeping the kids entertained may be doing more harm than good.
Read the full article
 
 
Euthanasia tyranny expands in Canada
By Will Johnston
People who think differently are not even to be allowed into medical school.
Read the full article
 
 
IT prof’s advice: quit social media
By Michael Cook
A brilliant TED talk on how fragmented attention harms us
Read the full article
 
 
We must not surrender to ‘gross conforming stupidity’
By Campbell Markham
Generations have fought for freedom of conscience. We cannot buckle now under pressure from same-sex marriage advocates.
Read the full article
 
 
Why do women share the crush on Candy Crush Saga?
By Fabrizio Piciarelli
Perhaps loneliness is behind the game app addiction.
Read the full article
 
 
Newspaper tycoon seeks to influence election
By Jennifer Minicus
His twelve-year-old maid is on to him!
Read the full article


MERCATORNET | New Media Foundation
Suite 12A, Level 2, 5 George Street, North Strathfield NSW 2137, Australia



