Richard Smith: Medical research—still a scandal!
Richard wrote a great article on the BMJ blog reflecting on an editorial written by statistician Doug Altman 20 years ago, and how little has changed since in the quality of medical research. I often ponder this myself, not just within science and medicine, but also in education research, where I tend to hang out these days.
(From George White’s Scandals of 1934)
Altman’s original argument was that researchers feel compelled for career reasons to carry out research they are ill-equipped to perform, and nobody stops them. The ethics committees that approve research were ill-equipped to detect scientific flaws, and many journals lacked statistical expertise and published misleading research. He suggested back then that we needed less, but better quality, research.
Richard observes that 20 years on, Altman’s article could be published again unchanged: poor methodological quality, and researchers chasing careers under high pressure to publish, are still part of the problem today.
I agree with all of that, but also think this.
It is pretty safe to say that if you set a target, people will work towards that target. Research excellence frameworks are geared around publications, and there is no getting away from the fact that papers are the academic currency for research returns and for promotion. Anyone interviewed for an academic post will be asked whether they are “REFable”. I think the judgment that we are all career-crazed is a bit harsh; most researchers I know are just very passionate (and slightly mad) about their area of work and are keen to share the results by publishing. That said, I do know folk who withhold their papers from publication to meet REF targets.
What else might be part of the problem?
I can understand publishers wanting articles that report great discoveries, but this biases the literature toward positive results (part of the publication bias to which we all contribute); just as important are the things that don’t work. I can think of one of my own papers which didn’t advance our understanding of fish oil as a therapy for inflammatory bowel disease in children, but did apply novel methods of fluorescence confocal microscopy to study human intestinal biopsies structurally and functionally (based on Naftalin’s work, which we adapted for human tissue). The paper was “not interesting enough”.
Uninteresting intestine – stained for f-actin with BODIPY, Viv Rolfe CC BY SA.
What about quality assurance for research?
I’d be the last to advocate yet another series of measures or frameworks, but there is no quality assurance for research methodologies within universities or hospitals, and I feel that is not the function of the ethics committee. Lordy, I have known NHS ethics approval to take up to two years in some cases, so we don’t want to add to that process. As a researcher, if you are lucky, you can be allied to a team where much of your methodology learning can take place, but I can’t remember at any time during my PhD or postdocs being taught about experimental design or the correct statistics. Some of my best learning has been through journal peer review – that is, receiving comments from accomplished reviewers; you soon learn to be a hard nut and take it on the chin.
What about education research?
The problem for me as a novice moving from science to education research is that many of the really informative papers are locked behind paywalls. I am trying to write a research methods paper with bioscience colleagues at the moment, and a number of the seminal references are not openly available. Surely items so fundamentally important to the quality of “our business” should be openly accessible to all? Who in reality these days is inclined to wait 2–3 weeks for an article to be purchased through inter-library loans? We just don’t do it, so our learning is not made easy.
What do we teach science and medical undergraduates? On my Medical Science degree at De Montfort University I had a module on “Health Informatics” that covered data retrieval, search strategies, and appraisal of medical research. A second module on “Evidence Based Medicine” covered clinical trial design, the publishing industry, critical appraisal, systematic review and meta-analysis. Come to think of it, Medical Sciences also had a “Medical Statistics” module, and all of this culminated in a final-year project. I have never come across another EBM module taught to undergraduate scientists.
There is good news for medical research!
The good news for medical research is that at least a reasonable number of good-quality studies are published. In education research, it is a different matter. In my recent systematic review of how massive open online courses (MOOCs) support the student experience, I started with 141 potential articles on the subject, of which only 25 were empirical studies, and only 1 of those had a control group and so could be deemed of reasonable quality (Rolfe 2013).
If this were a medical systematic review, none of these studies would have passed the quality checks, although it is accepted practice to broaden the entry gates for education research to encompass the practicalities of the classroom (Evans and Benefield 2001), as many other articles have discussed. In another systematic review, by Douglas Gray and myself, 176 studies were identified through the keyword search and whittled down to just 38 empirical studies, of which 21 were then excluded for having no control group or for missing crucial data. That papers get published without even reporting the number of participants, or the results of statistical tests, is quite amazing!
Just to convince myself I’m not going mad, here is one further systematic review, led by Barbara Means in 2010 at the US Department of Education, which examined the literature comparing blended learning with face-to-face approaches and asked whether there were benefits to the learner.
The article states:
The most unexpected finding was that an extensive initial search of the published literature from 1996 through 2006 found no experimental or controlled quasi-experimental studies that both compared the learning effectiveness of online and face-to-face instruction for K–12 students and provided sufficient data for inclusion in a meta-analysis. A subsequent search extended the time frame for studies.
What can we do about it?
“My confidence that things can only get better has largely drained away” says Richard. I don’t think I’m quite at that point although the water might be starting to swirl toward the plug hole!
In our Bioscience community (which will never close), the Higher Education Academy runs research workshops and events that are always well attended, and there is an active email community for our subject. A group of us are working on a guide to getting started in research, although our reviewers suggest we aim for a higher academic and theoretical starting point than originally intended – which is exactly what we didn’t want to do, as picking up the language of research is often the difficult first step.
“OPEN” is the way forward I am sure, with more:
- Open access articles for basic methodology papers
- Open educational resources instructing researchers in how to get started
- Open educational resources and research method modules for our students
- Online communities working together
Now there is an idea! I might have to put my money where my mouth is now.
And for some of the real George White’s Scandals of 1935, go to YouTube! Now HERE is what I call a stage entrance.