Each year the editors of Optical Engineering are asked to review the papers they handled during the previous year and recommend to the Kingslake Award committee those papers they feel should be candidates for the prize awarded annually for the “most noteworthy original paper” published in this journal. It is a difficult task. When an associate editor is assigned a paper, he or she scans it to understand the topic being presented. Based on this scan, a number of potential reviewers are selected and the editor lets the evaluation take its course. There are times when the editor pays added attention to a paper, either because the manuscript piques his or her interest or, at the other end of the spectrum, because it is clear that the work has already been done. In the latter case, the editor decides that it is not worth bothering a number of reviewers and declines the paper outright. But in the majority of cases, although our associate editors know their field, they cannot fully assess all of the papers assigned to them in the time they dedicate to selecting reviewers. They must, as a matter of economy, depend on the reviewers to make that assessment. So in their initial contact with the papers there is only a limited attempt to judge the relative worth and originality of each one. That is done later, at the time of the Kingslake recommendations.
Last year, when two of the associate editors left the Board of Editors, I assumed their Kingslake Award evaluations. I had to find some way to deal with a large number of published papers in a reasonable amount of time. The strategy I developed was to return to the original evaluations of the papers and look up the ratings that the reviewers had given each one. I took the numerical values assigned to each rating for journalistic criteria and for technical merit, added up the values for the two categories, and sorted the papers by total score, paying greater attention to the scores for technical merit. What I found was that the top-ranked papers were, indeed, excellent, and I strongly recommended them to the Kingslake Award committee.
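For readers who like to see the bookkeeping spelled out, here is a minimal sketch of that sorting procedure. The 1-to-5 numeric scale, the field names, and the extra weight on technical merit are illustrative assumptions; the editorial does not specify them.

```python
# Illustrative sketch of the ranking procedure described above.
# Assumptions (not specified in the editorial): a 1-5 numeric scale
# for the five labels and a hypothetical 2x weight on technical merit.

SCALE = {"Poor": 1, "Marginal": 2, "Satisfactory": 3, "Good": 4, "Excellent": 5}

def total_score(paper, merit_weight=2.0):
    """Sum the two category scores, emphasizing technical merit."""
    journalistic = SCALE[paper["journalistic"]]
    merit = SCALE[paper["technical_merit"]]
    return journalistic + merit_weight * merit

papers = [
    {"title": "Paper A", "journalistic": "Good", "technical_merit": "Excellent"},
    {"title": "Paper B", "journalistic": "Excellent", "technical_merit": "Marginal"},
    {"title": "Paper C", "journalistic": "Satisfactory", "technical_merit": "Good"},
]

# Highest total score first: these are the candidates to read first.
for paper in sorted(papers, key=total_score, reverse=True):
    print(f'{paper["title"]}: {total_score(paper):.1f}')
```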
But before wrapping up my task, I decided to look at the papers on the list that had received lower ratings. My reaction after reading a few of them was “We published that?” I was not trying to second-guess the editors on their decisions. I simply compared the papers that had received favorable reviews and high ratings to those with the lowest ratings, and found a marked difference. The quality of the lowest-rated papers concerned me. I mentioned this to one of the editors whose papers I had evaluated, and he told me that he had had some qualms about his low-rated papers. But he felt that because the reviewers had recommended publication, usually after the authors had made required revisions, he should not decline to publish them. There was, in a sense, nothing wrong with these papers. But although they might not be wrong, they were not particularly compelling either. I call them “not wrong” papers.
With the introduction of Peer X-Press, the American Institute of Physics’ browser-based manuscript handling software, it is now possible to improve the evaluation procedure for our reviewers. Previously, reviewers rated 12 aspects of a paper as Excellent, Good, Satisfactory, Marginal, or Poor, along with their substantive comments. But what constitutes an Excellent paper as opposed to a Good or Marginal one? To provide a guide for reviewers, I constructed a set of statements describing the paper for each of these aspects. For example, when a reviewer submits a review through our web-based interface and rates the originality of the paper being evaluated, he or she sees a drop-down menu with the following statements instead of the one-word labels (a small sketch of this mapping follows the table):
Originality

| Previous label | New criterion-based evaluation statement |
| Excellent | Novel contribution of fundamental importance. |
| Good | New work. I know of no comparable effort. |
| Satisfactory | Derivative work, but provides new results. |
| Marginal | This paper is very similar to the work of others. |
| Poor | This has been done before. The paper should be rejected. |
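Represented as data, such a drop-down menu is simply a lookup table keyed by the previous one-word labels. The structure below is a hypothetical sketch, not the actual Peer X-Press configuration; the statements are those shown in the table above.

```python
# Hypothetical sketch of the label-to-statement mapping for one aspect.
# This is not the actual Peer X-Press configuration; the statements are
# those in the table above, keyed by the previous one-word labels.

ORIGINALITY_STATEMENTS = {
    "Excellent":    "Novel contribution of fundamental importance.",
    "Good":         "New work. I know of no comparable effort.",
    "Satisfactory": "Derivative work, but provides new results.",
    "Marginal":     "This paper is very similar to the work of others.",
    "Poor":         "This has been done before. The paper should be rejected.",
}

def label_for(statement):
    """Recover the previous one-word label from a chosen statement."""
    for label, text in ORIGINALITY_STATEMENTS.items():
        if text == statement:
            return label
    raise ValueError("Unknown evaluation statement")
```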
By evaluating a paper through this set of criterion-based statements, we can establish its ranking relative to other papers at the point when it counts most: when a decision must be made on whether it should be published. Because the reviewers are possibly the only persons other than the authors who will examine the paper this closely, their evaluations carry considerable weight. If an associate editor finds that both reviewers recommend publication because there are no errors, but give low rankings for most aspects of the paper, he or she will review the paper, the substantive comments, and the ratings, particularly those for technical merit. If the associate editor finds that it is a “not wrong” paper, he or she will inform the author that we decline to publish it because it does not meet the standards of originality and importance set for Optical Engineering. Who sets those standards? We, the Board of Editors, do.
Donald C. O’Shea
Editor