Getting published. An author, reviewer, and editor perspective.
Publishing your research in an academic journal is a difficult but necessary part of being a researcher and/or an academic. Over the years I’ve observed the publishing process as an author, a reviewer, and most recently an editor. Here are some of my thoughts on that process from those perspectives, including some of the pitfalls to avoid.
Research papers published in academic journals such as the Journal of Sports Sciences go through a number of steps before they are published either online or in the paper-based journal. These are:
1. Ethical approval.
2. Study conducted.
3. Paper written that summarises the aims, methods, results and conclusions.
4. Paper submitted to a peer-reviewed journal.
5. Paper assessed by an editor for publishing suitability.
6. Paper sent to three ‘peers’ for them to review and rate.
7. Based on the comments of the reviewers, the paper is accepted as-is, accepted with minor/major corrections, or rejected.
8. If accepted, the authors make the recommended changes. Steps 7 and 8 can be repeated a number of times before proceeding to step 9.
9. Paper is published.
I would like to discuss steps 5 and 6 in relation to step 3. That is, based on my experience as an author, reviewer, and editor, what should you do as an author to keep reviewers and editors happy?
When I receive a submission to the Performance Analysis section of JSS there are a number of things I have to check. The first, and one of the most important, is that the study has been approved by an ethics committee. I've had to reject a number of papers at this early stage because they either hadn't been approved by an ethics committee or hadn't stated that the study had been approved. So, first piece of advice: get your study approved by an ethics committee prior to conducting the study and explicitly state this at the start of the Method.

I next check that the paper conforms to the Journal's word limits and referencing style. JSS has a very generous limit of 4000 words for the main manuscript and 200 words for the abstract. Staying within these limits is important for two reasons. First, publishing is an expensive business and every extra word adds to the cost. Second, abstracts are indexed in databases such as Pubmed, which has its own word limits for abstracts. If the abstract is over the word limit then it might be cut off when viewed in Pubmed, and an incomplete abstract makes it harder for those searching for your paper to assess its worth. Also make sure your referencing style conforms to the journal style. I've already returned submitted manuscripts to authors for using a different style (obviously prepared for another journal prior to submission to JSS).

If those checks are passed I then select three reviewers. Authors are required to recommend three reviewers; I will often select at least one of these but will also make my own choice as to the reviewers selected. Once selected, reviewers receive an email inviting them to review the paper. If they accept, they have 21 days to complete the review. Once I've received all three reviews I can then make a decision whether to accept or reject the paper.
So what are reviewers and editors looking for in a paper? Here are a few things:
- Importance of the study.
- Justification of the method.
- Clarity of expression (including results).
- Impact of the results.
- Read the fine print.
Importance - As a reviewer I’ve lost count of the number of times I’ve read a paper that was really well conducted and written, yet didn’t tell me why the study was needed or why it was an improvement on previous studies. The Introduction is really a sales pitch for your study. Why is this new study required, and what impact will the results have on theory or practice? Without this information studies can read like a fishing expedition, just looking to ‘see what happens’.
Justification of the method - As a reviewer of applied sport science studies I also look for justification of the method. For example, why did you choose a particular performance trial, like a 5 km cycling time trial? How does this performance trial relate to the internal and external validity of the study? Such a performance trial might have high internal validity, but if no athlete ever has to perform over such a distance then the external validity will be low. This is very important for applied projects.
Clarity of expression - Science is best when concise and clear. Too often authors use too many acronyms and too many words. Try to write in plain English. For those whose native language is not English, please use a native English speaker to proofread the paper prior to submission. If the English is poor then this makes the job of the reviewers much more difficult. Clarity also applies to the results of the study. For example, I like to see the mean difference between groups reported with 90% or 95% confidence intervals, together with a measure of effect size (such as Cohen's d). Statistical significance can also be reported, but you should report the exact p value rather than p < 0.05. An example might be written as "The mean difference in running speed between the control and experimental groups was 0.6 ± 0.1 km/h (95%CI: 0.4 - 0.8 km/h; p = 0.02; moderate effect)."
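As a sketch of how these reporting statistics fit together, the snippet below computes a mean difference with a confidence interval and Cohen's d (pooled standard deviation). The data are hypothetical, and the interval uses a normal approximation (z = 1.96); for the small samples typical of sport science studies you would use the t-distribution instead.

```python
import math
import statistics

def cohens_d(group_a, group_b):
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    s_a, s_b = statistics.stdev(group_a), statistics.stdev(group_b)
    pooled = math.sqrt(((n_a - 1) * s_a**2 + (n_b - 1) * s_b**2) / (n_a + n_b - 2))
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled

def mean_diff_ci(group_a, group_b, z=1.96):
    """Mean difference with a normal-approximation 95% CI.

    For small samples, replace z with the appropriate t critical value.
    """
    diff = statistics.mean(group_a) - statistics.mean(group_b)
    se = math.sqrt(statistics.variance(group_a) / len(group_a)
                   + statistics.variance(group_b) / len(group_b))
    return diff, diff - z * se, diff + z * se

# Hypothetical running speeds (km/h) for two groups
experimental = [14.9, 15.2, 15.0, 15.4, 15.1, 15.3]
control = [14.4, 14.6, 14.3, 14.7, 14.5, 14.6]

d = cohens_d(experimental, control)
diff, lo, hi = mean_diff_ci(experimental, control)
print(f"Mean difference: {diff:.2f} km/h (95%CI: {lo:.2f} to {hi:.2f} km/h), d = {d:.2f}")
```

Reporting the interval and effect size alongside the exact p value gives the reader both the size of the effect and the precision of the estimate, rather than a bare significant/not-significant verdict.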
Impact - 'Impact' is a difficult quality to judge. Impact on what or whom? Athletes? Coaches? Theory? Well, I guess it could be any of those. However, as researchers what we really want is for the results of our studies to be used or implemented, and not just confined to the shelves in a library. Reviewers and editors must rate how each study will have an impact on the topic area, and every journal is looking for studies that will have a high impact. As an author you probably think that your area of study is the most important! However, before you begin a research study you should ask yourself honestly how much the results will change current practice or current theory. My own opinion is that too many studies are the result of 'slicing and dicing'. That is, authors break up a larger study into smaller studies in order to increase the number of publications. This is misguided and probably lowers the impact of the study. In the UK, academic research is assessed by the Research Excellence Framework, and over the period of assessment academics are only allowed to enter four publications for consideration. Consequently it is pointless trying to artificially inflate your number of publications. What authors should focus on is conducting high-quality research. Most Nobel Prize winners receive the award for a single discovery reported in a single paper, not for a body of accumulated knowledge.
Read the fine print - As an editor it is frustrating when authors don’t follow the submission guidelines. They are there for a reason! Please, please, please, read the submission guidelines and follow them. If you do then step 5 outlined above will be passed no problem. However, if you don’t then you might find your manuscript being returned to you without entering the peer-review process.
So, as an author submitting a paper to a peer-reviewed journal, remember to be clear about why your study is important, justify the method you used, be clear when reporting results, focus on quality, not quantity, and finally, read and follow the submission guidelines. That way, you’ll have a better chance of getting that Nobel Prize.
Post-note: It's ironic in a way that I'm advocating you follow 'the rules'. The Nobel Prize-winning study that I've linked to above was initially rejected by a journal because the results suggested that the status quo thinking was incorrect. So, if you want to break the rules, then go ahead, but break the scientific rules, not the submission rules!