Open Letter to JoEP Authors, Part III: The Big Standards

This is the third entry in a short guide for prospective authors of the Journal of Economic Psychology. The first entry contained a short overview. The second discussed which papers are appropriate for the journal. This one reviews the big “red flags,” in terms of journal policies, that we look at upon submission.

Our journal actively and systematically enforces a number of policies, as laid out in our 2020 Editorial and the Guide for Authors. Those policies translate into a number of specific criteria, which you can use as a checklist; violating any of them will typically lead to a desk-rejection. If you answer “no” to any of the following questions, and the problem cannot be fixed, it’s probably better to submit elsewhere. If the problem can be fixed, but you submit anyway without fixing it, the best-case scenario is that you will waste several weeks (and also our time) until we simply return the paper to you (with all due respect, we do not give priority to papers that ignore our policies).

Question 1: Are you using a 5% significance threshold everywhere? As clearly explained in our Guide for Authors, JoEP has a strict, immovable significance threshold set at 5%. This has been journal policy for a very long time, and we have zero tolerance here. No paper will be handled if this policy is violated, full stop. Results with p>0.05 are not “significant at 10%” or “marginally significant”; they are not significant and should not be interpreted. The very expressions “significant at 10%” and “marginally significant” should not appear anywhere in your paper. The only way to argue about what such results mean is to have an explicit ex ante power analysis for pre-specified effect sizes. And then, what you are interpreting is the fact that those results are not significant! If you are convinced that there is still something there, consider a replication with higher power (but still tell us about the original study).
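In case it helps, here is a minimal sketch of what an ex ante power calculation might look like, assuming a two-sample t-test design and using Python’s statsmodels; the effect size, significance level, and power target are illustrative assumptions, not journal prescriptions.

```python
# Minimal ex ante power calculation (illustrative sketch; values are assumptions, not requirements).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Pre-specified standardized effect size (Cohen's d), 5% significance, 80% power.
n_per_group = analysis.solve_power(effect_size=0.3, alpha=0.05, power=0.80,
                                   alternative="two-sided")
print(f"Required sample size per group: {n_per_group:.0f}")
```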

For regressions, tables, and elsewhere, the convention is “* p<.05 ** p<.01 *** p<.001.” This is written in stone, and we do not accept any “creative” deviation. Please include those codes in all tables that use the star notation. And by the way, you should never drop part of that list of codes. Again, creativity is not welcome on this point. Include all three. Even if there is no “***” anywhere in your table, the inclusion of the code “*** p<.001” still carries important information (namely, that nothing is significant at the 0.1% level!).
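Purely as an illustration (and not as part of any journal requirement), this is one way to map p-values to the star codes above when assembling your own tables in Python; the function name is ours, chosen for this example.

```python
def significance_stars(p: float) -> str:
    """Map a p-value to the convention: * p<.05, ** p<.01, *** p<.001."""
    if p < 0.001:
        return "***"
    if p < 0.01:
        return "**"
    if p < 0.05:
        return "*"
    return ""  # p >= .05: not significant, no stars


# A p-value of 0.051 gets no star, no matter how close it is to the threshold.
print(significance_stars(0.051))  # -> ""
```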

And yes, of course, you will be sorely tempted to say something about that one result with p=0.051. In the case of tests, it might occasionally be acceptable to report the exact p-value for p>.05 in the text and to state that the test “missed significance,” but this option should not be overused.

Question 2: Will your data be available? If your paper discusses new data, we expect the data to be made publicly available before acceptance. That is, the last iteration before acceptance will ask you to make the data available, and if you cannot or will not do that, all the hard work (yours, ours, the reviewers’) will have been for nothing. If you cannot make your data publicly available, please go elsewhere. And yes, it is a pity that you signed that agreement with that organization guaranteeing them that the data would remain strictly confidential (instead of simply guaranteeing anonymity). Maybe you shouldn’t have. That is not an argument for us. Sorry! We are very, very unlikely to consider exceptions to our data availability policy, unless you have a particularly extreme situation (for instance, your data includes genetic markers and guaranteeing anonymity is impossible).

Question 3: Are your studies incentivized? For experimental studies (including field data), we expect the main effects to be established in properly incentivized studies. That means pay for performance, not flat payment, and certainly not course credit. Don’t get us wrong: this is not a comment on other fields at all! The point is that incentives are hugely important for Economic Psychology, as they are for Economics. Of course, there are questions which are hard or impossible to incentivize, and there are formats (survey data, existing panels) where this is not always doable. But if you are studying standard economic tasks such as intertemporal investments, decisions under risk, or interpersonal allocations, and you use only hypothetical questions, you will most likely be desk-rejected. Sorry!

One obvious exception to that is if your research question is on the effect of incentives themselves, and you include a “flat payment” control treatment. That is very welcome. But, of course, you will have incentives in your other treatment(s), right?

Question 4: Did you avoid deception? We enforce the standard practice in behavioral economics and generally do not consider studies using deception. That means that if you lied to participants in any way or misrepresented the incentives, it is better to submit elsewhere. In principle, we would be open to studies using deception in cases of truly exceptional interest, provided that the studies could absolutely not have been carried out without deception (saving experimental costs or time is not an argument; increasing experimenter control is also not an argument). However, as of the date of this post, and since I took over as Editor in January 2019, there has not been a single exception. Our experience is that almost all deceptive designs can be made non-deceptive with a little bit of work and some thinking. If you think you have an exception, this is one of the rare instances where you should submit a cover letter explaining the reasons in detail. Just be aware that you need to have a very, very strong case.

Again, this is not a comment on other fields at all, nor is it related to ethics in the slightest (that is just a misunderstanding). The reason deception is banned in economics (and economic psychology) is simply lab reputation. If you run studies with deception in your lab, participants might (and typically will) find out. And in the next study, they (or the people they talk to) will not trust the instructions or the incentives. That might be irrelevant for the tasks typical of other disciplines, but it is absolutely essential for economic tasks. We need to know that participants are reacting to the tasks and incentives as stated. If not, the experiment is confounded. That’s all. That is why economic labs are generally separated from social psychology labs, and, most especially, why they do not share participant pools.

You might be asking what, exactly, constitutes deception. Maybe it is worth being clearer on this point, as there is some discussion in the social sciences. I use two rules of thumb. The first is lying to participants. Any direct lie is deception, no further discussion, and debriefing does not fix it. For example, telling participants that there were other people in a certain role when in reality those decisions were pre-determined or made by a computer (or a confederate). The second rule of thumb is misrepresentation of incentives in a broad sense (which goes beyond purely selfish, monetary arguments; we are the Journal of Economic Psychology!). As an extreme example, if a participant would have clearly and objectively decided otherwise had a certain piece of information on the design been known before the decision, but that information was purposefully withheld, that is deception. For instance, suppose I play a dictator game against person B and make a decision. Then (surprise!) I am told that the same person B will now play a dictator game against me, fully knowing what I decided in the previous game. Obviously, lots of people would have decided otherwise had they known that, and hence this design is deceptive.

As a side comment, implicit deception is a thing. Some people get cute and say things like “I told my participants that the outcome was random, but I never told them that I would use uniform probabilities.” Or: “I never told them that the other person in the room was also an experimental participant.” Nice try, but no cigar. If you tell people that one outcome out of four was randomly determined, you have induced the expectation that the probabilities were 1/4 each. Using probabilities of 0.9, 0.05, 0.03, and 0.02 is implicit deception. If a participant interacts with another person, the expectation is that the other person is also a participant. Of course, if you tell them that an outcome is random but that they do not know the probabilities (as you might want to do for, say, ambiguity research), that is an entirely different matter.

Continued in Part IV: Are you familiar with our journal? (or: Read the Guide for Authors!).
