Author Archives: jbake006
Here we are at the end, and what a journey it has been. At the beginning of this course I was a nervous wreck about having to create a viable research project. I wouldn’t say that I’m now fully confident in my ability to craft research, but I am certainly more experienced. It will only get better with more practice! Now I have a much better foundation for research grounded in the social sciences.
Looking back on my first post, I was all over the place and nowhere near certain of what I wanted to focus my research on. Slowly, my research question began to develop. Eventually, it settled on examining how open data is being used by the public in Toronto, Montreal and Ottawa to create applications and other resources for the use of other Canadian citizens. It took quite some time to decide which methods to use and what sort of sample to draw. I chose to use snowball sampling to select my participants, with initial contact being made at open data creation events. I also chose to email the key players on websites dedicated to using municipal open data and ask them to participate in my research. This seems to be the best sampling option to me, though I can’t help but wonder if there is a more applicable sampling technique that would have fit my research better. Any suggestions? It would be good if the sample could eventually expand to include citizens using data from municipal open data programs all across Canada, but that simply wasn’t feasible in a study of this size.
Originally I was quite intimidated by the idea of interviewing, though I ended up using it as a data collection method in my research proposal. I had very little confidence in my ability to craft a well-constructed interview guide. Though I don’t think that any interview guide I could create now would be a masterpiece, I do feel that I could create a serviceable one thanks to my examination of some of the literature written about interviewing.
To analyze my collected data, I ended up settling on grounded theory, beginning with open coding and progressing to selective coding to organize my information. Memos were then used to develop theories based on the themes that were uncovered during the coding process.
Whoa. It was a heck of a ride. And even though the course is over, my adventures with research methodology sure aren’t!
Though I generally support double-blind peer review as a more or less viable way of ensuring the quality of research for publication in scholarly journals, I do find myself wondering whether such reviews really are blind in most situations. In fields such as Information Science it is very often the case that researchers know what other scholars are working on and the type of research they generally do. They may well be familiar with the writing style of their peers. With all that knowledge about what others in the field are working on, what they normally research and how they write, is it really reasonable to assume that the scholars asked to review research will be completely unaware of who created the material under review? I think not, and in the cases where reviewers do recognize the authorship of the piece they have been asked to review, I suppose the best we can hope for is that they attempt to remain as objective as possible. Are there any ways to ensure that a reviewer remains unaware of the authorship of what they have been asked to review?
On a different note, I do applaud the attempts to find alternative methods of ensuring academic quality for publishing in journals, since there are so many issues with the traditional methods. Open peer review seems to have significant potential, as displayed by the success of Shakespeare Quarterly in 2010, which we discussed in class. It will be interesting to see what further developments happen in open peer review. I would certainly consider submitting future research to an open peer review process if the results were binding for the editor. I find it very interesting how open peer review allows the author to receive so many different perspectives on a piece of research. It seems to me that these different perspectives, possibly from different disciplines, could result in excellent critiques that improve the research and that would otherwise never have been made.
If I had to find another object to study and resources were unlimited, I would like to examine the pamphlets and posters about venereal diseases that were distributed to Canadian servicemen and women in the Second World War. I came across some of these little gems when I was studying women on the home front. They often contain illustrations or cartoons of “loose women” accompanied by two female companions labeled as syphilis and gonorrhea. The choices that were made in how these diseases would be portrayed to servicemen reveal a variety of viewpoints about them from the time period.
This subject is interesting to me for a ton of reasons. First of all, ensuring the health of the troops was of vital importance for obvious reasons, and the way that the Canadian Government dealt with sexually transmitted diseases is very interesting. Some notable differences in treatment can be seen by examining and comparing the pamphlets directed to each sex. The information that the government chose to disseminate, and how that information changed over time, could show what the government was concerned with, what it thought was working to prevent disease, and what wasn’t. It is also interesting to see what forms of information the government thought were most effective at informing the service people.
The posters often have really intense imagery, or were sometimes obviously intended to be humorous. These different approaches can be analyzed to understand the strategies the government took to combat venereal diseases and how those strategies varied depending on the sex of the service people.
Here’s a nice example of a poster!
I found coming up with a topic for this week’s blogging question really difficult. In the end, I decided to explore some issues and concerns about my research topic. Luckily, I am not too worried about whether the human subjects of my research will want to participate in my study. There will of course be some people who do not want to be involved; however, the majority will likely be willing to fill out surveys and/or take part in an interview.
In general, the people who work with governmental open data to create applications for the broader population tend to be politically active, especially when it comes to ideas of open access to information and accountability. These people tend to openly discuss what they are doing with the media and would likely be willing to be involved in my study. Additionally, groups that participate in hackathons and other events that involve engaging and creating with open data tend to be open to media and scholarly attention. These are active citizens who really like to discuss what they are doing, which can easily be seen just by looking at the forums in their online communities. If I explain the research that I am doing and they find it interesting or valuable, then I should have no issues finding willing participants. In addition, these groups of people are highly tech savvy and therefore very easy to contact online, either through email or other social media.
That being said, there is the distinct possibility that my perception of these groups is flawed, or that they will not find my research worth spending their time on. In that case, I would still have access to their rich archives of forum discussions and posts regarding their activities at hackathons and other events.
So, while I have remained relatively relaxed when considering participants, I am concerned about writing a survey. It seems to me that one of the keys to ensuring the quality of a survey is to have another researcher, or just an intelligent individual, look over what you have created to avoid those oh-so-embarrassing issues that we saw in Glen’s lecture about stats and surveys. Even then, creating a good survey seems to be a very time- and energy-consuming activity. Obviously the results are worth all the work if the outcome is a well-crafted survey that makes it easy for your participants to contribute and provides you with all the answers you need. Still, I find the process intimidating. Do any of you have similar concerns when it comes to the creation of surveys or interview questions?