Monday, June 21, 2010

On Quality Control in Research Publication

In a comment to the previous post, Neo wrote:

Can you please discuss something about this? I would be very thankful if you could go for a separate post. The title of the article (from the Chronicle) is
"We Must Stop the Avalanche of Low-Quality Research"


Neo, thanks for the suggestion. I really wasn't planning on posting on this topic, as it already got a good and thorough analysis on FSP's blog; the post can be found here. I like FSP's blog very much in general, and my thoughts on this particular issue are well-aligned with what she wrote; there are also some very thoughtful comments there.

But I think we all need a break from my previous post, and science is always a good way to recharge, so I can try to give you a short version of what I think. My first impression was that this article in the Chronicle of Higher Ed had a fairly strange composition of authors (from English, mechanical engineering, medicine, management, and geography departments). I wonder whether any single recommendation could ever be valid for research across such a wide variety of disciplines. For instance, I really can't say anything about how people in English or management should publish their work, as I have very little knowledge of how their work is evaluated.

But as far as science and engineering go, peer-reviewed publications are the norm, and I think they are a good norm. Nowadays, you have to balance high-impact papers against the overall number of papers; moderate-impact papers are sometimes not cited much, but that doesn't mean they are worthless. There are a lot of important details to the scientific process that are not 'hot' but need to be publicly available so that someone else wouldn't have to duplicate everything. DrugMonkey wrote about that here. Most faculty do a good job of balancing high-impact work with moderate-impact work (see a related post by FSP here). We have duties to students and postdocs who have to come out with papers (e.g., I think it's unfair to keep a postdoc for many years with zero papers, holding off to get one big splashy paper in the end), and we have a duty to the scientific community and funding agencies to document both the results of the work and the process.

I felt that every paper I coauthored was a good one: we had something important to say, and we wrote it as clearly as we could. Some papers were exceptional, but all were, in my mind, solid pieces of work. If I hadn't thought so, I would not have written or coauthored them. I cannot speak for everyone, but my feeling is that the vast majority of scientists are ethical people who love their work, are serious about it, and would never publish what they felt was trivial.

How frequently something gets cited is a different issue. Papers on hot topics in prestigious journals get cited a lot, but so do papers with egregious errors, and so do review articles. So I am wary of any system where someone's worth is determined by the number of citations alone, because that immediately eliminates people who, for instance, work in not-currently-hot but important areas with difficult challenges. And it eliminates whole disciplines, because the number of citations scales with the size of the field. There are some relatively robust metrics, such as the Hirsch h-index, that tell you something about the relative merit of a scholar, but again these are strongly discipline-dependent and are not infallible.
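
For readers who haven't run into it, the h-index is simple to compute: it is the largest number h such that the scholar has h papers with at least h citations each. Here is a minimal sketch in Python; the definition is Hirsch's, but the code and the citation counts in the example are just my illustration.

def h_index(citations):
    # Sort per-paper citation counts, most-cited first, and find the
    # last rank at which the paper still has at least that many citations.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Five papers cited [10, 8, 5, 4, 1] times: there are four papers with
# at least 4 citations, but not five with at least 5, so h = 4.
print(h_index([10, 8, 5, 4, 1]))  # prints 4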

I personally really enjoy writing comprehensive articles for the Very Good Journal(s) in my discipline, with impact factors of 2-6; I always get good constructive feedback in review, and the reviewers are almost invariably well-chosen and knowledgeable. These papers do get cited per year about as much as the impact factor says, though some get cited considerably more and some less. I think every paper of mine in Very Good Journals was cited at least a few times within 2-3 years (i.e., none had zero citations at the end of year 3, though a couple of papers had only a couple of citations). Of course, this is anecdotal, but I would say it means the review process works and the papers are read, and, from a quick scan of who cited these papers, the readers are largely specialists in my discipline. Papers in Hot Journals (IF 6-15) or Super Hot Journals (IF above 15; the categorization into hot and super hot is mine, and the limits are totally approximate and based on my field's norms) have appeal beyond one's immediate discipline, a broader readership, and a higher citation potential. However, they can be a pain, as they are commonly published after long and unpleasant battles in which, in addition to the paper's technical merit, other effects play a role, such as the notoriety of the coauthors and everyone's egos. Still, there is no doubt that publications in Hot and Super Hot Journals are significant boosts to your CV and are widely read, so if you feel you have done some super hot science, don't sell yourself short.
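
As an aside, the impact factor I keep referring to is the standard two-year metric: the number of citations a journal receives in a given year to the items it published in the preceding two years, divided by the number of citable items it published in those two years. A sketch of the arithmetic, with made-up numbers for a hypothetical journal:

def impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
    # Citations in year Y to the journal's year Y-1 and Y-2 items,
    # divided by the number of citable items from years Y-1 and Y-2.
    return citations_to_prev_two_years / citable_items_prev_two_years

# Hypothetical journal: 1200 citations in 2010 to its 2008-2009 papers,
# which comprised 300 citable items, gives IF = 4.0.
print(impact_factor(1200, 300))  # prints 4.0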

Science is a creative enterprise, so imposing external quality and volume control cannot be done the same way as imposing it on, say, the quality or production volume of tomato soup. I am not sure what type of quality control in science, other than internal, we could envision without squashing certain types of disciplines or smaller group efforts. I also don't think there can be any regulation that would rein in the proliferation of papers: there are more scientists, more work is being done, and all participants are evaluated on both merit and output volume, so going for merit without regard for volume simply means ignoring the current rules of the game. A good CV shows a balance of high-impact and moderate-impact papers (the composition varies with discipline and career stage) and a number of papers per unit time that scales nonlinearly with group size and is, again, discipline-dependent.

Quality control is one of those issues where a single researcher should strive to instill good practices in his/her own sphere of influence; beyond that, it's simply an ill-posed problem with too many variables. I train my students to differentiate among good solid work, work that is more hype than substance, and work that is truly transformative; we strive to present good-quality science in every single paper we submit, flashy or not. I believe most scientists do the same, and that's the best quality control we can ask for.

3 comments:

Venkat said...

It seems that those authors consider 'low-quality' and 'less cited' research to be almost synonymous. How well/often that assumption holds (which I'm not so sure about) largely determines how meaningful their article is.

Neo said...

"There are a lot of important details to the scientific process that are not 'hot' but need to be publically available so that someone else wouldn't have to duplicate everything."

I completely agree with this. This thought alone says a lot.

Thank you so much, GMP, for the post. It's always a pleasure to hear from you. :)

Neo

GMP said...

Venkat, Neo, thanks for the comments!