In reviewing the qualities of a good peer review, the lecture drew primarily on Michael Tyworth's blog post 'How to Conduct a Peer Review'. As mentioned, the first quality of a good review is for the reviewer to adopt the author's 'I-want-my-paper-to-get-published' perspective. What I found most useful, though, was Prof. Galey's thought-provoking critique of this point: as researchers, our mission is not simply to get published, but to ensure that our article enriches its readership through publication. We should want the journal to succeed first and foremost as a reliable source of knowledge advancement, and not for the selfish, self-gratifying reasons that often lead authors to praise their own work and remain blind to its flaws. My analogy here is that publishing an article requires the same care and thought as erecting a monument or statue, for who would agree to subject spectators and consumers of knowledge to so permanent an object without first considering its validity, meaning, purpose and contribution? For this reason, the peer reviewer must not only play the role of critic, but also that of a coach who crafts advice in an accessible and useful way that motivates the author to improve the article.
In the spirit of Prof. Galey's critical style, a 'peer review of peer-review methods', I'd like to draw your attention to an additional critique of the questions reviewers are asked to pose. As mentioned in lecture, the reviewing process requires asking several questions about the research methods employed: the source of the data (how it was collected and analyzed), whether the research design is reliable, and whether the data has internal and external validity. This last point is quite problematic, for there is, for obvious reasons, no guaranteed way to verify that experimental data is itself entirely valid. In this sense, I agree with YAAWESOMESAUCE's post (a.k.a. Brooke Windsor) that peer review in scientific, experimental research fields requires continued review and validation beyond publication. William Y. Arms makes this clear in his 2002 article, which I've linked below. He gives the example of the Journal of the ACM, a highly regarded journal of theoretical computer science. Of a particular article he himself wrote for the journal, Arms notes that only twenty years later did he discover that a data set he had used from a prior research study was in fact 'fraudulent'. The problem here is that one cannot blame the peer reviewers, for, as he states, "[they] had no way of knowing this fact. The hypothesis in the paper has been confirmed by other experiments, but the erroneous paper — in a respectable, peer-reviewed journal — can still be found on library shelves." This illuminates the bigger problem: one cannot simply answer whether data has internal and external validity, since a reviewer would have to repeat the experiment to answer the question truly (a time-consuming, costly, and nearly impossible feat). Unfortunately, this leaves erroneous information sitting in supposedly reliable scholarly journals, open to consumption by the unsuspecting scholar.
Given all this, Arms asserts that peer review is "little more than a comment on whether the research appears to be well done". What do you think? Would you agree with Arms's assessment? Or should we all just stop worrying and accept the flaws of peer review, for, truly, how else would we ever contribute to scholarship and bring new knowledge into the world?
Arms, William Y. (2002). "What Are the Alternatives to Peer Review? Quality Control in Scholarly Publishing on the Web." The Journal of Electronic Publishing, 8(1). DOI: 10.3998/3336451.0008.103. Retrieved from: http://quod.lib.umich.edu/cgi/t/text/text-idx?c=jep;view=text;rgn=main;idno=3336451.0008.103