Monthly Archives: November 2013

Week 12: Final Thoughts

At the beginning of the course, I had no idea where my research interests lay or whether I could handle the role of researcher, even at a junior level. Although I conduct research for almost every major assignment, the task of developing my own research proposal was quite intimidating. However, proceeding incrementally and being able to discuss my research as it evolved has been very helpful – thank you, my fellow blog members, for your comments, opinions, and advice!

My research question has not changed in substance since I submitted my SSHRC program of work. It still focuses on the association between a specific, school-related kind of Facebook use and student satisfaction with university life. My biggest problem was settling on the details of the method. Finally, after considerable indecision, I decided to ask students to track their Facebook use in a diary and to assess their satisfaction with a questionnaire. Hopefully the diary will help avoid the problem of participants guesstimating the time they spend on Facebook.
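
For concreteness (and purely as a hypothetical illustration, not part of my actual proposal), here is a tiny Python sketch of how the diary totals and questionnaire scores might eventually be put side by side to look at an association; all of the names and numbers below are invented.

```python
# Purely illustrative: invented numbers, not real data from my study.
from statistics import correlation  # Pearson correlation, Python 3.10+

weekly_facebook_hours = [2.5, 6.0, 1.0, 8.5, 4.0, 3.0]  # totals from hypothetical diaries
satisfaction_scores = [4.1, 3.2, 4.5, 2.8, 3.9, 4.0]    # scores from a hypothetical questionnaire

r = correlation(weekly_facebook_hours, satisfaction_scores)
print(f"Pearson r between diary hours and satisfaction: {r:.2f}")
```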

I also developed a theoretical framework, and it has been very helpful in clarifying my thoughts. Similar to the “bedraggled daisy” exercise, drawing a diagram of the framework really helps in seeing the connections between the variables under investigation. Because of this project, I have come to appreciate the difficulties in designing research proposals. Good luck to all, and thanks again!

-Camille Johnson

6 Comments

Filed under Uncategorized

Week 11: bona fides and mala fides in peer review

Like Portia (and for exactly the same reasons: my own group presented on the same topic in the morning Foundations class), I have had peer review in the context of the debates over Open Access at the front of my mind of late. In her post, Portia brings up a very recent case, not unlike the Sokal affair: John Bohannon’s “sting” on Open Access journals, published in Science. I would like to use my blog entry to build on her commentary.

One interesting detail about Bohannon’s project is that of the 304 Open Access journals he submitted his bogus article to, he knowingly included 122 journals listed at the time on Jeffrey Beall’s list of predatory publishers (http://scholarlyoa.com/publishers/). Beall’s list has been criticized in its own right (generally for being too inclusive), but the fact remains that 40% of Bohannon’s submissions went to journals already known for, or at least suspected of, unscrupulous, exploitative, and otherwise predatory practices. 82% of the journals on Beall’s list that completed some kind of review process accepted the paper, compared with 45% of journals listed in the Directory of Open Access Journals (and there is overlap between the two lists – I couldn’t find the numbers exclusively for journals not on Beall’s list). It seems to me that the real control group for this experiment was the set of Open Access publishers not on Beall’s list, and the numbers in Bohannon’s report are more effective at indicating the validity of Beall’s list than at indicating anything about Open Access publishing in general.
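
To make the arithmetic behind these figures explicit, here is a minimal Python sketch; it is my own illustration (not Bohannon’s or Beall’s code), it only derives the 40% share from the counts above, and it simply restates the 82% and 45% acceptance rates as reported.

```python
# Sanity-checking the proportions cited above; acceptance rates are quoted, not derived.
total_submissions = 304   # Open Access journals that received the bogus paper
on_bealls_list = 122      # of those, journals on Beall's list at the time

share_on_bealls_list = on_bealls_list / total_submissions
print(f"Share of submissions to Beall's-list journals: {share_on_bealls_list:.0%}")  # ~40%

acceptance_bealls = 0.82  # Beall's-list journals that completed some review process
acceptance_doaj = 0.45    # DOAJ-listed journals (the two lists overlap)
print(f"Gap in reported acceptance rates: {acceptance_bealls - acceptance_doaj:.0%}")
```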

Science magazine’s vested interest in promoting the traditional publishing model over Open Access may have contributed to the spin on Bohannon’s results (e.g., the choice of title for the article). It’s difficult to say. In any case, the role of Science magazine in all this is rendered rather reflexive when one puts the flaws in Bohannon’s own “experiment” in the context of the journal’s own notorious misadventures in the peer-review process.

The question at the heart of Bohannon’s experimental design and Science’s role in the publication of his results is whether Bohannon and Science magazine were truly acting in good faith. And here is a thread that I’ve seen running through the discourse on peer review, made most explicit in Stanley Fish’s NY Times piece on the Sokal affair: trust. Both traditional scholarly publishing and the suggested alternative models depend to some extent on being able to trust that the various parties involved are acting in good faith.

Traditional models of peer review rest on the assumptions that authors are providing work they honestly believe in (including real data and a genuine attempt to accurately portray methodology and analysis), and that editors and peer reviewers will honestly, to the best of their knowledge and ability, determine whether an article is worth presenting to a wider audience, or whether it will waste the academic community’s time (and, with non-Open Access, money).

Alternative models of peer review put trust in slightly different areas, but being able to rely on everyone acting in good faith is no less important. The kind of open review described by Fitzpatrick depends on reviewers engaging with the process in good faith. For open peer review to work, those involved need to be both honest in their criticisms, and neither take advantage of knowing which criticisms come from whom (especially regarding young and otherwise vulnerable scholars) nor expect others to take such advantage.

I suspect that the publishing of advancements in highly specialized fields of knowledge will always depend, to some extent, on the assumption that most everyone involved is acting in good faith. As long as this is the case, peer review (no matter the business model and rights management practices of the journal involved) will always be vulnerable to theatrical stunts by the likes of Sokal and Bohannon.

Peer review, though clearly in need of some re-thinking, can only ever be one checkpoint at the start of the long road to acceptance by the scholarly community. In many ways, peer review is just the beginning of that process. The fact is that the bulk of the burden lies (and, in one way or another, will always lie) with the community at large to decide whether a bit of research is or is not bunk.

For Bohannon’s own summary and explanation, see: http://www.sciencemag.org/content/342/6154/60.full

For the most succinct criticism I’ve ever read of the “sting” operation, as well as an up-to-date list of other responses to Bohannon, see: http://svpow.com/2013/10/03/john-bohannons-peer-review-sting-against-science/

Cited:

Fitzpatrick, K. (2009). Chapter 1: Peer review. In Planned Obsolescence: Publishing, Technology, and the Future of the Academy.

3 Comments

Filed under Uncategorized

Peer-Review, or: How I Learned to Stop Worrying and Accept the Flaws

When reviewing the qualities of a good peer review, the lecture drew primarily on Michael Tyworth’s blog post ‘How to Conduct a Peer Review’. As was mentioned, the first quality of a good review is for the reviewer to bring an ‘I-want-this-paper-to-get-published’ attitude to the task. What I actually found most useful was Prof. Galey’s thought-provoking critique of this point: as researchers, our mission is not simply to get published, but rather to ensure that our article enriches its readership through publication. We should want the journal to succeed first and foremost as a reliable source of knowledge advancement and enhancement, and not for the selfish, self-serving reasons that often lead authors to praise their work and remain blind to its flaws. My analogy here is that publishing an article requires the same care and thought as erecting a monument or statue, for who would agree to subject external spectators and consumers of knowledge to such a permanent object without considering its validity, meaning, purpose and contribution? For this reason, it is important for the peer reviewer to play not only the role of critic, but also that of a coach who crafts advice in an accessible and useful way that motivates the author to improve the article.

In the spirit of Prof. Galey’s critique style, a ‘peer review of peer-review methods’, I’d like to bring to your attention an additional critique of the questions reviewers are asked to consider. As mentioned in lecture, in the reviewing process it is crucial to ask several questions about the research methods employed, the source of the data (how it was collected and analyzed), whether the research design is reliable, and whether the data have internal and external validity. The last point is quite problematic: for obvious reasons, there is no guaranteed way of verifying whether experimental research data are themselves entirely valid. In this sense, I agree with YAAWESOMESAUCE’s post (a.k.a. Brooke Windsor) that peer review in scientific, experimental research fields requires constant review and validation beyond publication. William Y. Arms makes this clear in his 2002 article, which I’ve linked below. He brings in the example of the Journal of the ACM, a highly regarded journal of theoretical computer science. Arms states of a particular article he wrote for the journal that only twenty years later did he discover that the data set he had used from a prior research study was in fact ‘fraudulent’. The problem here is that one cannot blame the peer reviewers, for as he states, “[they] had no way of knowing this fact. The hypothesis in the paper has been confirmed by other experiments, but the erroneous paper — in a respectable, peer-reviewed journal — can still be found on library shelves.” This illuminates the bigger problem that one cannot simply answer whether data have internal and external validity, for a peer reviewer would have to repeat the experiment in order to answer with any certainty (a time-consuming, costly, and nearly impossible feat). Unfortunately, this leaves erroneous information sitting in so-called reliable, scholarly journals, ready for consumption by the unsuspecting scholar.

In this case, Arms asserts that peer-review is “little more than a comment on whether the research appears to be well done”. What do you think? Would you agree with Arms’ assessment here?  Or should we all just stop worrying and accept the flaws of peer-review, for, truly, how else would we ever contribute to scholarship and bring new knowledge out to the world?

Olivia Wisniewski

Arms, W.Y. (2002). What are the alternatives to peer review? Quality control in scholarly publishing on the web. The Journal of Electronic Publishing, 8(1). DOI: 10.3998/3336451.0008.103. Retrieved from http://quod.lib.umich.edu/cgi/t/text/text-idx?c=jep;view=text;rgn=main;idno=3336451.0008.103

2 Comments

Filed under Blogging Question

Week 11 – Peer Review

I think that both The Social Text and Sokal are complicit in the degradation of scientific principles. That is a harsh statement, so I should explain: both parties had the best interests at heart. The Social Text wanted to provide an outlet for scientific exchange to help foster creativity and inspiration for further research, whereas Sokal wanted to address bias and the lack of regulation over scientific publications. Does their intent justify the impact that they had?

Sokal was right in voicing concern over practices he felt were inappropriate and in pushing for the principle of more objective scientific rigor, but he could have gone about it in a more diplomatic way. Instead of revealing that his article was a farce through another publication, he could have spoken to The Social Text directly after being published. In addition, he himself did not follow the scientific method: he did not submit the article to another peer-reviewed journal as a control, a point which was raised in class. I might understand if the incident had occurred in the present day, when information is so widely accessible (thanks to the internet) and may not be scrutinized for its validity or source.

The Social Text is also at fault for its practices in the way it published materials. I don’t believe that any material should be accepted without first having gone through some sort of review. Allowing for external input would do nothing but strengthen the credibility of the journal and of the articles published within it, and decrease the pitfall of becoming self-referential. Perhaps they were just naive and, to some extent, were being taken advantage of by those who chose to publish with them. They accepted Sokal’s work based on his academic standing and had no reason to question his integrity or motives.

I think it is good to question what is presented to us, and not to hold one idea as an absolute truth while merely humoring others. When using the scientific method, you come up with an idea and then immediately try to prove it wrong, testing it against the null hypothesis, before concluding that it may be right. (I may be paraphrasing Richard Dawkins, but I cannot recall the source.)
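
To put that in concrete terms, here is a small, entirely hypothetical sketch of testing an idea against a null hypothesis; the data are made up and the choice of an independent-samples t-test is just my illustration, not anything drawn from Dawkins or from the Sokal case.

```python
# Hypothetical illustration. The "idea": group B scores higher than group A.
# The null hypothesis: there is no real difference, any gap is just noise.
# We only start to believe the idea if the data are hard to explain under the null.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(loc=5.0, scale=1.0, size=30)  # made-up measurements
group_b = rng.normal(loc=5.4, scale=1.0, size=30)  # made-up measurements

t_stat, p_value = stats.ttest_ind(group_a, group_b)  # tests the null of equal means
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A small p-value counts against the null; it is evidence, not proof, that the idea may be right.
```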

Leave a comment

Filed under Uncategorized

Week 11 – Peer Review

I found Lovejoy, Revenson and France’s (2011) article thoroughly informative, especially for someone who is not too familiar with the peer-reviewing process. It was interesting to see that Lovejoy acknowledges that many people are unaware of the process and states that they “may never attempt these activities, despite the desire to do so, because they feel ill-equipped to conduct a review” (p. 2). I, for one, feel this way, especially since I never really had the confidence even to review my own work, let alone someone else’s. I have looked over my peers’ papers on several occasions, but I found it really difficult to do. I remember reading paragraph after paragraph, and there were times when I could not find anything to critique.

Throughout my years in university, I have often asked family members and friends to look over my work and to give me all the honest critique they can. I feel more comfortable having people I know and trust review my work, especially those who are familiar with my style of writing. They recognize the common mistakes I make when writing papers, point them out to me, and suggest better ways to approach the paper. I am less at ease when a stranger reviews my work, because I am not always confident in my own writing and I do not take criticism of it well. However, after Professor Galey’s lecture last week and the readings, I felt a bit more relaxed on hearing that we should not feel entirely discouraged or upset if a review comes back rejecting the paper or advising major changes. Lovejoy et al. (2011) also mention that papers are sometimes rejected because they would perhaps “be better suited for publication elsewhere” (p. 3). So a rejection does not necessarily mean that the paper is bad in terms of the quality of the writing. Even so, I am still a bit apprehensive about submitting my paper to a journal; yet I hope that I will eventually gain enough confidence to do so someday.

Lovejoy, T.I., Revenson, T.A., & France, C.R. (2011). Reviewing manuscripts for peer-review journals: A primer for novice and seasoned reviewers. Annals of Behavioral Medicine, 42(1), 1-13.

2 Comments

Filed under Uncategorized

Peer Review and Open Access

Peer review is something I’ve been thinking a lot about lately. In another class I’m taking right now, Foundations in LIS with Nadia Caidi, we all had to do a project on a current issue in Library and Information Science and present it to the class. My group presented on Open Access publishing.

If you aren’t aware, Open Access is a movement founded on the idea that the world is a better place when current research is freely available to the public, without the monetary barrier of the traditional model (known as toll access). Most Open Access resources are made available in one of two ways: either through institutional or subject-based repositories, or through Open Access journals.

Open Access journals face barriers to becoming recognized as legitimate. One of the primary reasons is that they are perceived as having a lax peer review process and, consequently, as being easier to publish in. This stereotype is not totally without a basis in reality.

Much like the Sokal case, there was a recent case in which a biologist named John Bohannon submitted a fake science paper to hundreds of Open Access journals. His experiment was flawed, most of all because he did not submit to any toll access journals as a control. However, the fact remains that his paper was accepted at 70% of the journals that supposedly put it through the peer review process (about 40% of the journals overall). This is unacceptable. However, as Camille pointed out in her blog post with regard to the Sokal affair, I don’t think it is useful to go about embarrassing or shaming people… those involved in the peer review process had no reason to believe that they were being tricked. As well, Bohannon is well regarded in his field, and depending on whether or not the journal in question used double-blind review, this likely had an additional impact. Unfortunately, I am sure that in some of these cases the article was accepted due to negligence, or because the journal was a predatory Open Access journal – but that is another issue entirely.

http://www.theguardian.com/higher-education-network/2013/oct/04/open-access-journals-fake-paper

portia.

3 Comments

Filed under Uncategorized

The Good and the Bad of Peer Review

Let’s start with the good when thinking about peer review.

The process of peer review can actually be a great way to identify fraudulent papers and research. The referees who read through the paper should have enough familiarity with the field to pinpoint any discrepancies, and spotting these foundational flaws can help uncover problems with the research or its overall premise. However, in many fields (especially scientific ones), peer review does not end with publication. The process continues through attempts to replicate the findings and investigate the research procedures in depth. With this added level of peer review and research replication, many fraudulent studies have been proven false (think of Andrew Wakefield and his vaccines-cause-autism hoopla).

Yet peer review obviously has its flaws, like everything else. The referees are really just people in the field themselves. They have their own biases and assumptions, which leak into their own work and can bleed into their assessments of the work of others. Seemingly original research can sometimes get the short end of the stick when it comes to peer review. The problem is exacerbated by the expectation of a multitude of supporting sources: truly original research has to draw from a limited number of foundational sources, yet it lacks the precedent that helps a paper through a peer-reviewed journal.

Overall, peer review can be a mixed bag. The type of paper should certainly be taken into account, since some forms of research are better suited to peer review than others.

Lancet retraction of Andrew Wakefield paper

http://www.theguardian.com/society/2010/feb/02/lancet-retracts-mmr-paper

Leave a comment

Filed under Uncategorized