Surveys and Reviews at JoEP

At JoEP, we love Review Articles. We have a special category for them, we allow them to be longer than standard articles, and we try to expedite their handling. Yet, we are currently desk-rejecting most of the reviews/surveys we receive. Why?

There seems to be an expectations mismatch. Regrettably, many of the Review Article submissions that we currently receive follow a mechanical, mainly bibliometric approach. They address questions such as how much has been published on the topic, who has published it, the authors’ country affiliations, which keywords are used, and so on. This is very interesting for bibliometrics (which is a research field in its own right), but it is not interesting for us. At all. Sorry!

A review article for JoEP should be conceptual, identifying the key developments in a field and analyzing and explaining their conceptual, content-based relations and implications. It should provide a scientific state of the art on what we know about a topic, not on who has published more or which keywords are used. Ideally, a review article should allow a researcher to enter a new field (here is an example published in JoEP; shameless plug: here is a review I wrote). Knowing how many researchers from this or that country have published, and whether that number has increased over the last few years, is not particularly useful for this purpose.

Also, some of the review articles we receive have fallen into the bean-counting trap, using the number of papers following one approach or the other, or supporting one position or the other, as a substantive argument. Sorry, that is not a scientific position. To be blunt, the percentage of papers saying this or that lacks any scientific interest. To take it to the extreme, if 95% of papers in a field make a false statement or use a suboptimal paradigm, we are interested in the other 5%. For the purposes of a review article, not all papers are equal. The task of the authors of a review article is to tell the reader what the key developments in a field are, not how many manuscripts have been written. If I have to read all, say, 300 articles in the field myself to work that out, the review article has failed to achieve its objective.

Don’t get me wrong. Especially in empirical fields such as ours, collecting evidence from many studies, independently of each study’s individual impact and even of whether the studies are published, is very valuable. But the tool for that is not a bean-counting manuscript. It is a meta-analytical review (assuming that the studies are comparable enough to allow for a meta-analysis). Those are very welcome at JoEP! Here is an example. And here is another.

A point requiring discussion is the search strategy. Many of the reviews that we receive make a point of being “systematic reviews,” by which they mean that a standard internet-search methodology (e.g., PRISMA) has been used to collect papers. Well, that is all well and good, but it is not interesting. While we acknowledge that such methods are extremely important in large fields where it is impossible to cope with the sheer number of publications (e.g., clinical studies), this is simply not our case. We expect the authors of a survey to be experts in the field who already know all the relevant literature. If they need a search to find the key developments, then they are probably not yet qualified to write a review article for us on that topic (sorry!). Of course, conducting an additional, standardized search following a recognized methodology, to make sure you are not missing anything and are truly up to date, is both desirable and welcome. By all means, do it if you think it is necessary. But the bibliometric details of how the search was conducted, the elaborate decision trees, and the pie charts with percentages of papers here and there do not belong in the text of the article. They should be summarized as much as possible and included in an online appendix, if at all.
