Tuesday, 5 March 2013

Peer review by BMC gone really bad!

A recent article by Jean-Francois Gehanno et al. in BMC Medical Informatics and Decision Making has already gained a lot of attention, and, may I say, not in a positive way. The authors found that all references from 29 published systematic reviews could be retrieved in Google Scholar, and therefore concluded that 'if the authors of the 29 systematic reviews had used only GS, no reference would have been missed'.

When I looked into this more deeply, I ran across the peer reviewers' comments and found that they were not all that enthusiastic either.

The peer review process

I wanted to write a comment to the journal, but comments with much the same content had already been submitted, so I did not. When I looked into the process a bit further, however, I came across some interesting points in the peer review history.

On the BMC website you can review the pre-publication history of this article. One of the reviewers (Miguel García-Pérez) concludes that 'No revision (major, minor, or discretionary) will save this work'. Reviewer number 2 (Henrik von Wehrden) only comments: 'Great job on that short yet highly interesting manuscript'. Of course, the authors agree with reviewer von Wehrden and apply only the minor changes he deems necessary.

The different opinions of the reviewers can be traced back to their backgrounds. According to PubMed, MA García-Pérez has written 8 reviews; H von Wehrden has written none. If you are not experienced in writing systematic reviews, how can you have an informed opinion on this topic?

Now what would you expect an editor of the journal to do? If two reviewers disagree so strongly, you need a third, independent opinion. Instead, the editors decided to publish the article without further review.

What would a peer reviewer with extensive systematic review experience have been able to tell the editors?
  • When searching Google Scholar, your search is limited to 256 characters. Most searches for systematic reviews are long and complex and simply do not fit.
  • Because Google Scholar offers neither truncation nor a controlled vocabulary, you need far more word variants than in a regular database, so you exceed the 256-character limit even more easily (see the sketch after this list).
  • You can only view the first 1,000 hits, ordered by Scholar's relevance ranking.
  • You cannot export your complete result set, only individual records.
  • You cannot replicate your search; another reviewer might get completely different results.
  • You cannot be sure what the content was on the date you searched, because you cannot limit a Google Scholar search to a certain date.
  • etc etc etc
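
To give an impression of how quickly that 256-character limit is reached, here is a minimal sketch in Python. The three concepts and their word variants are hypothetical placeholders (not taken from the Gehanno paper or from any actual review); the point is only that spelling out every variant by hand, because truncation and MeSH terms are unavailable, pushes even a modest query past the limit:

```python
# Minimal sketch: without truncation or a controlled vocabulary, every
# word variant has to be written out, so even a small three-concept
# query grows past Google Scholar's 256-character search limit.
# The concepts and terms below are hypothetical placeholders.

def or_block(terms):
    """Join the variants of one concept into a quoted OR block."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

concepts = [
    ["exercise", "exercises", "exercising", "physical activity", "physical training"],
    ["diabetes", "diabetic", "diabetes mellitus", "type 2 diabetes",
     "type II diabetes", "NIDDM"],
    ["randomized", "randomised", "random allocation", "controlled trial", "RCT"],
]

query = " AND ".join(or_block(c) for c in concepts)
print(query)
print(f"{len(query)} characters -- Google Scholar stops reading at 256")
```

Even this toy query, with only three concepts and a handful of variants each, already comes out over the limit; real systematic review search strategies combine far more synonyms than this.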
So there is no question that this paper is not very useful or helpful. Worse: if managers of large academic hospitals read it, they might be tempted to drop the budget for expensive search engines, because 'everyone can find everything in Google'. Or reviewers might be tempted to use only Google Scholar for their searches, because 'everything is in there'.

There is only one way to test whether Google Scholar can replace all other databases: perform two systematic reviews on the same topic at the same time, one using the databases that are regularly used and one using only Google Scholar, and then compare the quality of the results.

So instead of using only Google Scholar, we information specialists stick to our primary databases: PubMed, Embase, Cochrane, etc. Only those can give a complete and systematic overview of the literature on a certain topic.