As clinicians, we must be able to screen and then critically appraise research articles that will help us practice evidence-based medicine and, hopefully, take better care of our patients. More importantly, journal editors have a duty to be diligent in adequately analyzing the papers submitted to them for review. That diligence matters all the more to the rest of us, given the conflict between the limited time available for reading journals and integrating relevant articles into one's daily practice, the exploding volume of articles, journals, and other new media (like blogs!), and our often insufficient tools and skills for properly evaluating articles. Young and Solomon, in the February 2009 Nature Clinical Practice Gastroenterology & Hepatology, have published an article called "How to Critically Appraise an Article," which provides a "user-friendly" format that a busy clinician could adapt for routine use.
I like the David Letterman-style "top 10" questions (in bold) that one can use to quickly assess whether an article is worth reading more critically. The comments following the questions are mine.
- **Is the study relevant? To whom? To me!** When I first receive a journal, I quickly scan the TOC for any article that is immediately relevant: either to experience with a recent problematic case, to something that will help me teach, or to a research problem I am working on.
- **Does the study add anything new?** This one, for me, is more difficult to answer immediately. But I think if one regularly reads the standard pathology (or substitute your own specialty) journals, one can identify "hot" publishing topics. For example, a couple of years ago, it seemed that papers on columnar cell lesions of the breast were being published every month. Other recent examples are papers on serrated polyps and eosinophilic esophagitis. The authors do a nice job of pointing out the elephant in the room--that seminal, ground-breaking research papers are relatively rare--they are like home runs. However, papers that validate previous research by replicating others' findings, or by generalizing or refining previous results, would be more useful to those in practice. My experience is that people who review grant proposals are "less than enthusiastic" about funding these types of studies.
- **What type of research question does the study pose?** Most relevant to pathologists are questions concerning "frequency of events," that is, the association between a diagnosis or observation (positive or negative for such-and-such a stain, etc.) and a disease outcome or treatment response.
- **Was the study design appropriate for the research question?** Fortunately, pathologists do not have to deal with randomized controlled trials the way oncologists do, as most of the pathology literature relies on observational studies. I find it very difficult to adequately evaluate RCTs in oncology studies, but the academic oncologists are generally brilliant at this--at least the folks I listen to on my Research-to-Practice mp3s.
- **Did the study methods address the key potential sources of bias?** Again, this is critical in RCT design, but still--how often do you just blow through the "Materials and Methods" section? The proper use of controls is crucial in interpreting pathology and lab studies.
- **Was the study performed according to the original protocol?** Which patients or specimens were NOT included in the final set for analysis? That is an important source of bias.
- **Does the study test a stated hypothesis?** I wonder sometimes if the hypothesis is written after the study is done. It's as if I went fishing for crappie using minnows, happened to catch a catfish, and then reported that the a posteriori hypothesis of my fishing study was that the best bait for catching catfish is minnows.
- **Were the statistical analyses performed correctly?** Do you actually read that part of the "Methods"? Really? At a minimum, one should check that the authors state which tools they used and their rationale for that approach. The next level would require a more in-depth understanding of these tools and their appropriate use.
- **Do the data justify the conclusions?** Yeah, remember that hypothesis thing? Beware of "statistically significant" findings that demonstrate a difference too small, in reality, to be of much clinical value. On the other hand, look out for small sample sizes that may mask real differences between groups simply because the study is not powered to show statistical significance. (A quick, back-of-the-envelope way to check both of these is sketched after this list.)
- **Are there any conflicts of interest?** Given recent disclosures in the "mainstream media," this really cannot be ignored. An example highlighted at one of the recent USCAP meetings by Dr. Noel Weidner concerns the Oncotype DX assay in breast cancer. Who is really sponsoring and/or writing these articles? That's a good question to ask, but maybe that's just my own distrust of "city slickers."
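
For readers who want to go one step beyond skimming the "Methods," here is a minimal sketch of the two checks mentioned above: whether a sensible test was applied to a "frequency of events" question, and whether the study was large enough to detect a clinically meaningful difference. All of the numbers (a made-up 2x2 table of stain result versus recurrence, 50 specimens per group, 24% versus 8% recurrence rates) are hypothetical and only illustrate how one might probe a paper's statistics; the Python libraries used (scipy, statsmodels) are my own choice, not anything from Young and Solomon's article.

```python
# Hypothetical numbers only -- a sketch of two quick appraisal checks,
# not a reproduction of any published analysis.
import numpy as np
from scipy import stats
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Made-up 2x2 table: stain result versus recurrence.
#                 recurrence  no recurrence
table = np.array([[12,        38],    # stain-positive (n = 50)
                  [ 4,        46]])   # stain-negative (n = 50)

# Check 1: was the association tested sensibly? Chi-square is the usual
# default; Fisher's exact test is a common cross-check when counts are small.
chi2, p_chi2, dof, expected = stats.chi2_contingency(table)
odds_ratio, p_fisher = stats.fisher_exact(table)
print(f"chi-square p = {p_chi2:.3f}, Fisher exact p = {p_fisher:.3f}, OR = {odds_ratio:.1f}")

# Check 2: with 50 specimens per group, what is the power to detect a
# difference between recurrence rates of 24% and 8% at alpha = 0.05?
effect_size = proportion_effectsize(0.24, 0.08)   # Cohen's h
power = NormalIndPower().power(effect_size=effect_size, nobs1=50,
                               alpha=0.05, ratio=1.0)
print(f"approximate power = {power:.2f}")
```

If the computed power is well below the conventional 0.8, a "no significant difference" conclusion may say more about the sample size than about the biology--exactly the trap the question above warns about.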
What were Dr. Weidner's comments regarding the Oncotype Dx? There seems to be a growing feeling that the assay's current clinical use (and marketing) is well ahead of its science.
Posted by: Keith | August 31, 2009 at 10:51 AM