Tag Archives: Blogging question
Here we are at the end, and what a journey it has been. At the beginning of this course I was a nervous wreck about having to create a viable research project. I wouldn’t say that I’m now fully confident in my abilities to craft research, but I am certainly more experienced. It will only get better with more practice! I now have a much stronger foundation for research grounded in the social sciences.
Looking back on my first post, I was all over the place and nowhere near certain of what I wanted to focus my research on. Slowly, my research question began to develop. Eventually, it settled on examining how open data is being used by the public in Toronto, Montreal and Ottawa to create applications and other resources for the use of other Canadian citizens. It took quite some time to decide which methods to use and what sort of sample to draw. I chose snowball sampling to select my participants, with initial contact being made at open data creation events. I also chose to email the key players on websites dedicated to using municipal open data and ask them to participate in my research. This seems to be the best sampling option to me, though I can’t help but wonder if there is a more applicable sampling technique that would have fit my research better. Any suggestions? It would be good if the sample could eventually expand to include citizens using open data from municipal open data programs all across Canada, but that simply wasn’t feasible in a study of this size.
Originally I was quite intimidated by the idea of interviewing, though I ended up using it as a data collection method in my research proposal. I had very little confidence in my ability to craft a well-constructed interview guide. Though I don’t think that any interview guide I could create now would be a masterpiece, I do feel that I could create a serviceable one thanks to my examination of some of the literature written about interviewing.
To analyze my collected data, I settled on grounded theory, beginning with open coding and progressing to selective coding to organize my information. Memos were then used to develop theories based on the themes that were uncovered during the coding process.
Whoa. It was a heck of a ride. And even though the course is over, my adventures with research methodology sure aren’t!
At the beginning of the course, I had no idea where my research interests lay or whether I could handle the role of researcher, even at a junior level. Although I conduct research for almost every major assignment, the task of developing my own research proposal was quite intimidating. However, proceeding incrementally and being able to discuss my research as it evolved has been very helpful – thank you, my fellow blog members, for your comments, opinions, and advice!
My research question has not changed in substance since I submitted my SSHRC program of work. It still focuses on the association between school-related Facebook use and student satisfaction with university life. My biggest problem was deciding on the method in detail. Finally, after considerable indecision, I settled on asking students to use a diary to track their Facebook use and a questionnaire to assess their satisfaction. Hopefully the diary will help avoid the problem of participants guesstimating the time they spend on Facebook.
I developed a theoretical framework to assist me. It has been very helpful in terms of clarifying my thoughts. Similar to the “bedraggled daisy” exercise, drawing a diagram of the framework really helps in seeing the connections between the variables under investigation. Because of this project, I have come to appreciate the difficulties in designing research proposals. Good luck to all, and thanks again!
When reviewing the qualities of a good peer review, the lecture drew primarily on Michael Tyworth’s blog post ‘How to Conduct a Peer Review’. As was mentioned, the first quality of a good review is for the reviewer to take the ‘I-want-my-paper-to-get-published’ approach to the paper under review. What I actually found most useful is when Prof. Galey made a thought-provoking critique of this point, stating that as researchers our mission is not to get published, but rather to ensure that our article enriches readership through its publication. We should want the journal to succeed first and foremost as a reliable source of knowledge advancement and enhancement, and not for the selfish, self-gratifying reasons that often lead authors to praise their work and remain ignorant of its flaws. My analogy here is that publishing an article requires the same care and thought as erecting a monument or statue, for who would agree to subject external spectators and consumers of knowledge to such a permanent object without considering its validity, meaning, purpose and contribution? For this reason, it is important for the peer reviewer to play not only the role of critic, but also that of a coach who crafts advice in an accessible and useful way that motivates the author to improve the article.
In the spirit of Prof. Galey’s critique style, a ‘peer review of peer-review methods’, I’d like to bring to your attention an additional critique of review questioning methods. As mentioned in lecture, in the reviewing process it is crucial to ask several questions about the research methods employed, the source of the data (or how the data was collected and analyzed), whether the research design is reliable, and whether the data has internal and external validity. The last point is quite problematic and worth dwelling on, for there is obviously no guaranteed way of verifying that experimental research data itself is entirely valid. In this sense, I agree with YAAWESOMESAUCE’s post (a.k.a. Brooke Windsor) that peer review in scientific, experimental research fields requires constant review and validation beyond publication. William Y. Arms makes this clear in his 2002 article, which I’ve linked below. He brings in the example of the Journal of the ACM, a highly respected journal of theoretical computer science. Arms states that only twenty years after writing a particular article for the journal did he discover that the data set he had used from a prior research study was in fact ‘fraudulent’. The problem here is that one cannot blame the peer reviewers, for as he states, “[they] had no way of knowing this fact. The hypothesis in the paper has been confirmed by other experiments, but the erroneous paper — in a respectable, peer-reviewed journal — can still be found on library shelves.” This illuminates the bigger problem that one cannot simply answer whether data has internal and external validity: to answer truly, a peer reviewer would have to repeat the experiment (a time-consuming, costly, and nearly impossible feat). Unfortunately, this leaves erroneous information in so-called reliable, scholarly journals subject to consumption by the unsuspecting scholar.
In this case, Arms asserts that peer-review is “little more than a comment on whether the research appears to be well done”. What do you think? Would you agree with Arms’ assessment here? Or should we all just stop worrying and accept the flaws of peer-review, for, truly, how else would we ever contribute to scholarship and bring new knowledge out to the world?
Arms, William Y. (2002). “What are the alternatives to peer review? Quality Control in scholarly publishing on the web.” The Journal of Electronic Publishing, 8 (1). DOI: 10.3998/3336451.0008.103. Retrieved from: http://quod.lib.umich.edu/cgi/t/text/text-idx?c=jep;view=text;rgn=main;idno=3336451.0008.103
Though I generally support double-blind peer review as a more or less viable way of ensuring the quality of research for publication in scholarly journals, I do find myself wondering whether such reviews really are blind in most situations. In fields such as Information Science it is very often the case that researchers know what other scholars are working on and the type of research they generally do. They are also likely to be familiar with the writing style of their peers. With all that knowledge about what others in the field are working on, what they normally research and how they write, is it really reasonable to assume that the scholars asked to review research will be completely unaware of who created the material under review? I think not, and in the cases where reviewers do recognize the authorship of the piece they have been asked to review, I suppose the best we can hope for is that they attempt to remain as objective as possible. Are there any ways to ensure that a reviewer is unaware of the authorship of what they have been asked to review?
On a different note, I do applaud the attempts to find alternative methods of ensuring academic quality for publishing in journals, since there are so many issues with the traditional methods. It seems that open peer review has significant potential, as displayed by the success of Shakespeare Quarterly in 2010, which we discussed in class. It will be interesting to see what further developments happen in open peer review. I would certainly consider submitting future research to an open peer review process if the results were binding for the editor. I find it very interesting how open peer review allows the author to receive so many different perspectives on a piece of research. These different perspectives, possibly from different disciplines, could result in excellent critiques that improve the research and that would otherwise never have been made.
The Robinson and Agne reading referred to Mohan J. Dutta and his paper “The Ten Commandments of Reviewing: The Promise of a Kinder, Gentler Discipline!” (2006). I found his article through the University of Toronto library website and suggest you all give it a go! I really like his positive approach and, having read his article, don’t feel that the peer review process has to be a scary one! I’ve listed the ten guidelines below:
1. Approach reviewing as a collaborative task (act as a teacher instead of opponent)
2. Put aside your ego (be thoughtful, do not be upset if your work isn’t cited!)
3. Be reflexive (reflect on and check your biases)
4. Understand the paradigms
5. Understand the limitations of the project
6. Don’t feel that you need to demonstrate how much you know (be understated, subtle)
7. Be specific in your recommendations
8. Provide feedback in a timely manner (someone is depending on you, have high commitment)
9. Encourage! (you can be evaluative, challenging yet gentle and kind)
10. Do unto others as you would have them do unto you (write reviews you would like to receive)
I have mixed feelings about the Sokal affair. This was a great read after our lecture this past week – it nicely complements the importance of peer review to the advancement of knowledge amongst academics and professionals. On the one hand, I think it is irresponsible for a journal to simply publish articles without going through the peer review process. Although I understand where Alan Sokal is coming from, I would have handled it differently. I would have sent an open letter to Social Text or presented my thoughts at a conference. Since I don’t feel anywhere near as strongly about this as Sokal did, I would not be deceitful and take the time to write a hoax article to embarrass the journal.
At the bottom of the Wikipedia article, a study by Cornell sociologist Robb Willer with student participants is described. Two groups were given Sokal’s hoax article; one group was told it was written by a student, the other that it was written by a famous academic. The results showed that the latter group looked upon the text more favourably. I probably would have fallen for this trick as well. One of Dr. Robert Cialdini’s 6 Universal Principles of Influence is Authority: we comply with those who are perceived as authorities, even though it may not always be rational (e.g., a celebrity endorsing a car manufacturer when he/she knows very little about cars).