Category Archives: Blogging Question

Research Evolution

Here we are at the end, and what a journey it has been. At the beginning of this course I was a nervous wreck about having to create a viable research project. I wouldn't say that I'm now fully confident in my ability to craft research, but I am certainly more experienced. It will only get better with more practice! I now have a much stronger foundation for research grounded in the social sciences.

Looking back on my first post, I was all over the place and nowhere near certain of what I wanted to focus my research on. Slowly, my research question began to develop. Eventually, it settled on examining how open data is being used by the public in Toronto, Montreal and Ottawa to create applications and other resources for the use of other Canadian citizens. It took quite some time to decide which methods to use and what sort of sample to draw. I chose snowball sampling to select my participants, with initial contact being made at open data creation events. I also chose to email the key players on websites dedicated to using municipal open data and ask them to participate in my research. This seems to be the best sampling option to me, though I can't help but wonder if there is a more applicable sampling technique that would have fit my research better. Any suggestions? It would be good if the sample could eventually expand to include citizens using data from municipal open data programs all across Canada, but that simply wasn't feasible in a study of this size.
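
Since snowball sampling keeps coming up in my plan, here is a purely illustrative sketch (in Python) of how the recruitment "waves" might unfold: a couple of seed contacts made at open data events refer further participants, and recruitment stops after a set number of waves. All of the names and referrals below are invented for the example.

```python
from collections import deque

# Hypothetical referral graph: each participant names other open data creators they know.
# Every name and referral here is invented purely for illustration.
referrals = {
    "seed_A": ["dev_1", "dev_2"],
    "seed_B": ["dev_2", "dev_3"],
    "dev_1": ["dev_4"],
    "dev_2": [],
    "dev_3": ["dev_5", "dev_6"],
    "dev_4": [],
    "dev_5": [],
    "dev_6": [],
}

def snowball_sample(seeds, max_waves=2):
    """Collect participants wave by wave, starting from seeds met at open data events."""
    sampled = set(seeds)
    queue = deque((seed, 0) for seed in seeds)
    while queue:
        person, wave = queue.popleft()
        if wave >= max_waves:
            continue  # stop the snowball after the chosen number of referral waves
        for contact in referrals.get(person, []):
            if contact not in sampled:
                sampled.add(contact)
                queue.append((contact, wave + 1))
    return sampled

print(sorted(snowball_sample(["seed_A", "seed_B"])))
```

The obvious limitation, which is exactly my worry above, is that the final sample is only ever as broad as the seeds' own networks.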

Originally I was quite intimidated by the idea of interviewing, though I ended up using it as a data collection method in my research proposal. I had very little confidence in my ability to craft a well-constructed interview guide. Though I don't think that any interview guide I could create now would be a masterpiece, I do feel that I would be able to create a serviceable one thanks to my examination of some of the literature written about interviewing.

To analyze my collected data, I ended up settling on grounded theory, beginning with open coding and progressing to selective coding to organize my information. Memos were then used to develop theories based on the themes that were uncovered during the coding process.
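
Grounded theory coding is interpretive work rather than computation, but as a purely illustrative sketch, here is roughly how coded interview excerpts might be tallied under core categories once selective coding has folded the open codes together, producing raw material for memo-writing. All of the excerpts, codes, and categories below are invented, and in practice this would live in a QDA tool rather than a script.

```python
from collections import Counter, defaultdict

# Hypothetical open codes attached to interview excerpts (invented for illustration).
coded_excerpts = {
    "P1: 'I built the transit app over a weekend...'": ["volunteer labour", "transit data"],
    "P2: 'The city's portal is hard to navigate...'": ["portal usability", "access barriers"],
    "P3: 'We met at a hackathon and kept going...'": ["hackathon origin", "volunteer labour"],
}

# Selective coding: fold the open codes into a few core categories (also invented).
core_categories = {
    "volunteer labour": "civic motivation",
    "hackathon origin": "civic motivation",
    "transit data": "uses of open data",
    "portal usability": "barriers",
    "access barriers": "barriers",
}

# Tally how often each core category appears and keep the excerpts behind it,
# which is the kind of evidence a memo would then try to theorize about.
category_counts = Counter()
excerpts_by_category = defaultdict(list)
for excerpt, codes in coded_excerpts.items():
    for code in codes:
        category = core_categories[code]
        category_counts[category] += 1
        excerpts_by_category[category].append(excerpt)

for category, count in category_counts.most_common():
    print(f"{category} ({count} coded segments)")
    for excerpt in excerpts_by_category[category]:
        print(f"  - {excerpt}")
```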

Whoa. It was a heck of a ride. And even though the course is over, my adventures with research methodology sure aren’t!

 


Filed under Blogging Question

Peer-Review, or: How I Learned to Stop Worrying and Accept the Flaws

When reviewing the qualities of a good peer review, the lecture drew primarily on Michael Tyworth's blog post 'How to Conduct a Peer Review'. As was mentioned, the first quality of a good review is to take the 'I-want-my-paper-to-get-published' approach into the review itself. What I actually found most useful was Prof. Galey's thought-provoking critique of this point: as researchers, our mission is not to get published, but rather to ensure that our article enriches its readership through publication. We should want the journal to succeed first and foremost as a reliable source of knowledge advancement and enhancement, and not for the selfish, self-justifying reasons that often lead authors to praise their work and remain ignorant of its flaws. My analogy here is that publishing an article requires the same care and thought as erecting a monument or statue: who would agree to subject spectators and consumers of knowledge to such a permanent object without considering its validity, meaning, purpose and contribution? For this reason, it is important for the peer reviewer not only to play the role of critic, but also to be a coach who crafts advice in an accessible and useful way that motivates the author to improve the article.

In the spirit of Prof. Galey's critique style, a 'peer review of peer-review methods', I'd like to bring to your attention an additional critique of review questioning methods. As mentioned in lecture, in the reviewing process it is crucial to ask several questions about the research methods employed, the source of the data (how it was collected and analyzed), whether the research design is reliable, and whether the data has internal and external validity. This last point is quite problematic, and it must be said that, for obvious reasons, there is no guaranteed way of verifying whether experimental research data itself is entirely valid. In this sense, I agree with YAAWESOMESAUCE's post (a.k.a. Brooke Windsor) that peer review in scientific, experimental research fields requires constant review and validation beyond publication. William Y. Arms makes this clear in his 2002 article, which I've linked below. He brings in the example of the Journal of the ACM, a highly regarded journal of theoretical computer science. Arms states that only twenty years after writing a particular article for the journal did he discover that the data set he had used from a prior research study was in fact 'fraudulent'. The problem here is that one cannot blame the peer reviewers, for as he states, “[they] had no way of knowing this fact. The hypothesis in the paper has been confirmed by other experiments, but the erroneous paper — in a respectable, peer-reviewed journal — can still be found on library shelves.” This illuminates the bigger problem that one cannot simply answer whether data has internal and external validity: to answer truly, a peer reviewer would have to repeat the experiment (a time-consuming, costly, and nearly impossible feat). Unfortunately, this leaves erroneous information in so-called reliable, scholarly journals subject to consumption by the unsuspecting scholar.

In this case, Arms asserts that peer-review is “little more than a comment on whether the research appears to be well done”. What do you think? Would you agree with Arms’ assessment here?  Or should we all just stop worrying and accept the flaws of peer-review, for, truly, how else would we ever contribute to scholarship and bring new knowledge out to the world?

Olivia Wisniewski

Arms, William Y. (2002). “What are the alternatives to peer review? Quality Control in scholarly publishing on the web.” The Journal of Electronic Publishing, 8 (1). DOI: 10.3998/3336451.0008.103. Retrieved from: http://quod.lib.umich.edu/cgi/t/text/text-idx?c=jep;view=text;rgn=main;idno=3336451.0008.103


Filed under Blogging Question

Peer Review

Though I generally support double-blind peer review as a more or less viable way of ensuring the quality of research for publication in scholarly journals, I do find myself wondering whether such reviews really are blind in most situations. In fields such as Information Science it is very often the case that researchers know what other scholars are working on and the type of research they generally do. They are also likely to be familiar with the writing style of their peers. With all that knowledge about what others in the field are working on, what they normally research and how they write, is it really reasonable to assume that the scholars asked to review research will be completely unaware of who created the material to be reviewed? I think not, and in the cases where reviewers do recognize the authorship of the piece they have been asked to review, I suppose the best we can hope for is that they attempt to remain as objective as possible. Are there any ways to ensure that a reviewer remains unaware of the authorship of what they have been asked to review?

On a different note, I do applaud the attempts to find alternative methods of ensuring academic quality in journal publishing, since there are so many issues with the traditional methods. Open peer review seems to have significant potential, as displayed by the success of Shakespeare Quarterly in 2010, which we discussed in class. It will be interesting to see what further developments happen in open peer review. I would certainly consider submitting future research to an open peer review process if the results were binding for the editor. I find it very interesting how open peer review allows the author to receive so many different perspectives on a piece of research. These different perspectives, possibly from different disciplines, could result in excellent critiques that improve the research and that might otherwise never have been made.


Filed under Blogging Question

Like others, I have been preserving my own research notes and materials for quite some time. I have kept them for my own purposes, and I have never really considered making those preserved items available to others. Even with my completed Master's research I was not overly concerned about digital preservation, since I knew that my advisor, my reader and the university had all retained a print copy of my work for preservation purposes. In retrospect, that is pretty inadequate: my work will likely remain preserved but largely inaccessible to most people unless the university chooses to digitize it (which it has permission to do).

I currently have the habit of saving all of my important digital documents on an external hard drive, on USB sticks and by emailing them to myself. I am not particularly tech savvy, though I am trying to expand my knowledge in that area since it is VITAL. Because of my relative inexperience with technology, I need to really think about how best to preserve my research. For certain projects, I create digital copies of my handwritten notes, since I like to use pen and paper to record certain information. That being said, for this research it is likely that most of my notes will be digital, since I am working with online communities in a variety of locations. My research will likely produce notes, audio and transcripts of interviews, data from a survey, the survey itself, chat logs and the digital items themselves (applications made from open data).

Some of these digital items (such as the interview audio and transcriptions) would likely have to be destroyed after the research is finished, though perhaps they could be kept with the permission of all parties involved, since the interviewees are not vulnerable members of society and the questions would likely not touch on any sensitive information. That said, I would have to ensure that all of my research files are encrypted, and that all information that needs to be destroyed after the research is finished is completely cleared from my hard drive by wiping it. To do my best to ensure that my research is preserved for the future, I will store it in the ways listed above and will continue to copy the data into newer formats and programs as they develop; that is, I will re-encode my information in new formats before the old formats become obsolete. I will also attempt to have the research exist in multiple formats, including (when possible) a print format.
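
Since I mention encrypting my research files, here is a minimal, purely illustrative sketch of what that could look like in Python with the third-party cryptography package. The file name is a placeholder, and a real project would follow institutional data-management guidance rather than this toy example.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Generate a symmetric key once; it must be stored somewhere safer than the data itself.
key = Fernet.generate_key()
fernet = Fernet(key)

# Encrypt a (hypothetical) interview transcript before copying it to external media.
with open("interview_transcript_01.txt", "rb") as infile:
    plaintext = infile.read()

with open("interview_transcript_01.txt.enc", "wb") as outfile:
    outfile.write(fernet.encrypt(plaintext))

# Later, the same key recovers the original file for analysis or format migration.
with open("interview_transcript_01.txt.enc", "rb") as infile:
    recovered = fernet.decrypt(infile.read())

assert recovered == plaintext
```

The catch, of course, is that the key then has to be backed up at least as carefully as the data, or the encryption becomes its own form of data loss.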

As for professional associations that offer suggestions or guidelines for digital preservation, the National Digital Stewardship Alliance (connected to the Library of Congress) offers its standards and best practices on the subject. Furthermore, the NDSA provides a Personal Digital Archiving Day Kit. There is also a program called Muse (Memories Using Email), run by Stanford University, that offers, amongst other things, long-term email archives. And hey, why not one more? The Digital Preservation Coalition has a Digital Preservation Handbook.


November 22, 2013 · 5:59 pm

Week 10: Preservation

I don't think that any of us can ensure, with full certainty, that anything we may produce will be preserved for any significant amount of time. Like others in this class have said in their blog posts, I'm not even confident that any file I create, or any media on which I store that file today, will be readable any truly significant amount of time from now. The age of computers so far is such a short blip in the grand scheme of human history that it seems like a fool's game to make that kind of prediction. It is near impossible to know for certain which formats will still be readable into the far future, and, more importantly, near impossible to figure out which records stored where will be prioritized for conversion to whichever new format becomes the standard (and again to whatever standard format may follow).

It's tempting to jump to the conclusion that the only reliable way to preserve our research materials for any significant amount of time would be to print everything onto acid-free paper and store the resulting mass in archival boxes in a humidity-controlled, fire-proof safe. But that seems not only hubristic and impractical (especially for any research in which truly significant amounts of data are collected); it also restricts access to those materials and (and this is the heart of the issue) would not even be capable of accurately representing the nature of digital artefacts and the ways in which the researcher made use of them. Print may be one of the most durable formats that humankind has invented, but, as Kirschenbaum discusses in relation to the print versions of Agrippa, representing digital objects in print form is an interpretive act, and does not preserve so much as convert and reinterpret the original digital artefact.

I am taking this issue to extremes, but with the proliferation and to some extent standardization of digital repositories online for universities and other organizations, and what with our ready access to affordable, large-capacity external hard-drives and the like, preservation for the immediate future does not seem too difficult.

(*edit*, it is now 3:50 pm, not 4:50, so Ben, if you’re reading this, could you maybe see if you can fix this timezone issue we’re having?)

 

Bibliography

Kirschenbaum, M.G. (2002). Editing the interface: textual studies and first-generation electronic objects. Text: An Interdisciplinary Annual of Textual Studies, 14, 15-51. [http://www.jstor.org.myaccess.library.utoronto.ca/stable/30227991]


Filed under Blogging Question, Uncategorized

Servicemen, Be Suspicious

If I had to find another object to study and resources were unlimited, I would like to examine the pamphlets and posters about venereal diseases that were distributed to Canadian servicemen and women in the Second World War. I came across some of these little gems when I was studying women on the home front. They often contain illustrations or cartoons of "loose women" accompanied by two female companions labeled as Syphilis and Gonorrhea. The choices that were made in how these diseases would be portrayed to servicemen reveal a variety of viewpoints about them from the time period.

This subject is interesting to me for a ton of reasons. First of all, ensuring the health of the troops was of vital importance for obvious reasons, and the way that the Canadian government dealt with sexually transmitted diseases is very interesting. Some interesting differences in treatment can be seen by examining and comparing the pamphlets directed to each sex. The information that the government chose to disseminate, and how that information changed over time, could show what the government was concerned with, what it thought was working to prevent disease, and what wasn't. It is also interesting to see which forms of information the government thought were most effective at informing the service people.

The posters often have really intense imagery, or were sometimes obviously intended to be humorous. These different approaches can be analyzed to understand the different strategies the government took to combat venereal diseases and how those approaches varied depending on the sex of the service people.

Here’s a nice example of a poster!



He “Picked Up” More Than a Girl : sensitive campaign against venereal disease.


Filed under Blogging Question

I’d Like to Know What’s in Your Wallet

This week, we are required to consider information questions by closely reading a particular artifact, device, designed object, or text. In a sense, the current SSHRC research question I am undertaking addresses the study of an artifact – the photo album. It is a designed object/information tool that, when used according to its intended purpose, serves as a text documenting compiled histories. By studying how immigrant families construct their photo albums, much can be learned about the information practices that surround the artifact – the ways in which identities are constructed on a deeply personal level, or forged to mask existing fractured relationships, ultimately detailing visual constructs of what family life means to the author of the album.

In the same spirit, if I had to choose a different artifact, it would be the wallet. Much can be examined from an information perspective, especially in terms of sense-making, by analyzing the organization practices inside a wallet: what people choose to keep in there and what they choose to omit. It would also be quite intriguing to discover the socio-cultural practices behind the organization of a wallet and how the ways in which people keep it ordered, or leave it a chaotic mess, serve as a reflection of their identity. But oh, how highly controversial and invasive a study this would be! The family photos, purchase receipts, business cards, loyalty cards, banking cards…heck, personal information in a variety of formats exposed only to be scrutinized and judged by a researcher. And I couldn't be expected to analyze my own wallet – no, it would have to be your wallet. How comfortable would you feel about that? I bet most of you are twisting your faces in agony, lifting an eyebrow and mouthing out "what?!" Rightfully so. In order to study the wallet, a researcher must understand that these artifacts are intrinsically tied to their human owners. As such, a study of these artifacts is invasive, bordering on the unethical. It also complicates notions of authority and ownership, since it is inevitable that the researcher will become invested in the study of a wallet and feel a sense of identification with its contents. In turn, once a wallet's secrets are unveiled to the researcher, this could have a crippling effect on its owner's morale.

After all, who in their right mind would willingly allow a stranger to rummage through and critically examine the contents of their wallet? Would such a research project even be feasible?

Olivia Wisniewski

Note: If you’re interested in the highly controversial study of texts, check out French conceptual artist Sophie Calle’s ‘The Address Book’ (originally published in 1983, later translated to English): http://sigliopress.com/book/the-address-book/

Basically, the entire affair commenced when Calle found a lost address book on a street in Paris. Before returning the address book to its rightful owner incognito, she copied down its contents and embarked on a mission to contact and interview the people within it in order to discover who the owner was without ever meeting him. This is a great example of how small-scale exploratory research on an artifact can answer larger questions about information practices: how information, once out of the owner's grasp, can lead to major surveillance and privacy issues which can potentially compromise the identity of an individual.


Filed under Blogging Question, Uncategorized

Week 9: Research Thoughts

I found coming up with a topic for this week's blogging question really difficult. In the end, I decided to explore some issues and concerns about my research topic. Luckily, I do not worry much about whether the human subjects of my research will want to participate in my study. There will of course be some people who do not want to be involved; however, the majority will likely be willing to fill out surveys and/or take part in an interview.

In general, the people who work with governmental open data to create applications for the broader population tend to be politically active, especially when it comes to ideas of open access to information and accountability. These people tend to openly discuss what they are doing with the media and would likely be willing to be involved in my study. Additionally, groups that participate in hackathons and other events that involve engaging and creating with open data tend to be open to media and scholarly attention. These are active citizens who really like to discuss what they are doing, which can easily be seen just by looking at the forums in their online communities. If I explain the research that I am doing and they find it interesting or valuable, then I should have no issues finding willing participants. In addition, these groups of people are highly tech savvy and therefore very easy to contact through their various interactions online, whether through email or other social media.

That being said, there is the distinct possibility that my perception of these groups could be flawed, or that they may not find my research worth spending their time on. In that case, I would still have access to their rich archives of forum discussions and posts regarding their activities at hackathons and other events.

So, while I have remained relatively relaxed when considering participants, I am concerned about writing a survey. It seems to me that one of the keys to ensuring the quality of a survey might be to have another researcher, or just an intelligent individual, look over what you have created to avoid those oh-so-embarrassing issues that we saw in Glen's lecture about stats and surveys. Even then, creating a good survey seems to be a very time- and energy-consuming activity. Obviously the results are worth all the work if the outcome is a well-crafted survey that makes it easy for your participants to contribute and provides you with all the answers you need. Still, I find the process intimidating. Do any of you have similar concerns when it comes to the creation of surveys or interview questions?

 


Filed under Blogging Question

Forever in Blue Jeans

I'm going to take the freedom Prof. Galey has kindly permitted this week as an opportunity to roll the dice on a random (yet relevant) topic pertaining to the realm of research. Back in 2011, reports on a rather 'dirty' research study were published extensively in Canadian and American news media. As some or all of you may recall, the study involved Josh Le, a 20-year-old University of Alberta student and raw-denim enthusiast. He shocked (and possibly – no, definitely – offended) pretty much everyone he came into contact with by wearing the same pair of jeans 330 times, without a single wash, over a 15-month time-frame. Yeah, that guy. Basically, raw denim requires a process of 'wearing into' the textile, and Le wanted to see whether it was terribly unhealthy to wear these jeans without washing them for an extended period of time. With the supervision of a faculty member from the Department of Human Ecology, Le was able to show that little change occurs in the bacteria levels of the denim from slight wear up to well beyond excessive wear.

In last week's readings, Neuman and Robson argue that experimentation (in quantitative research) is better suited to individuals or smaller groups and that, as a result, experiments can rarely generalize or answer questions on a larger societal scale (p. 185). While I cannot argue against this, it remains a point of contention for me. Le's experiment was small scale, since he was the lone participant and the relationship between his body and his jeans was the subject of study. Yet the results proved to be generalizable. The findings produced through this study are useful for health information awareness, addressing concerns pertaining to hygienic clothing wear. Stemming from this awareness are environmental benefits, since wearing clothing longer between washings reduces water usage. Both health and the environment are applicable to the larger concerns of the human population.

Therefore, is it really all that rare for small-scale experiments to produce findings that generalize to a larger population? Can you think of any other small-scale experimental findings you've encountered that could be applied for the benefit of a larger populace?

Olivia Wisniewski

(Side note: Yes, the blog title is the title of a song. Clearly Neil Diamond was onto something long before Josh Le).

References

Betkowski, B. (2011, January 1). Jeans remain surprisingly clean after a year of wear. University of Alberta [online news article]. Retrieved from http://www.news.ualberta.ca/newsarticles/2011/01/jeansremainsurprisinglycleanafterayearofwear

Neuman, W.L., Robson, K. (2012). Experimental Research. In Basics of Social Research: Qualitative and Quantitative Approaches, 3rd ed. (pp. 184-204). Toronto, ON: Pearson Canada, Inc.


Filed under Blogging Question, Uncategorized

Week 8: Stats

Last week’s question set me on a wild goose chase looking for an infographic about gun ownership and gun control that made the rounds on Tumblr soon after the shootings at Sandy Hook Elementary School. I vaguely remember this infographic as being exceptionally balanced and informative, and (most rare) politically nonpartisan.

I have yet to find this now semi-mythical infographic. With any luck I will return with a supplementary post, but in the meantime, having stubbed my toes on them whilst chasing my proverbial goose, I present the two infographics flanking this post: both on the topic of gun control, touting some similar statistics*, and telling two very different stories.

The obverse of the bad surveys Glen showed us last (last) week is bad data presentation. As Prof. Galey said last class, researchers' biases can make them blind to the proverbial gorilla strolling across their data if they aren't careful enough in formulating their questions. It also stands to reason that those biases can lead people to leap on the statistical patterns in ball-passing and yet ignore the gigantic gorilla walking plain as day through data that was collected with great care. Obviously, this effect is heightened if personal or political values are at stake, or if the goal has morphed from one of discovery to one of persuasion.

Infographics are an engaging and effective way to present research, but they also stand at a troubling confluence where both statistical literacy and visual literacy are core skills – neither of which is actively taught or fostered in mandatory schooling (unless there have been significant changes since I graduated high school). On top of this, the infographic is a form of expression which seems accessible to non-experts. Amateur efforts abound (and occasionally succeed) without the guidance of official standards. Rarely do I see an author's name on an infographic, or for that matter any other reliable way to assess the document's authority, except painstakingly taking a fine-toothed comb to the list of sources (if there is enough information to follow them).

I suspect that the selfsame factors which make infographics and similar data visualizations appealing and accessible also bear the potential to subtly obfuscate instances in which data is not being presented in good faith. Infographics often seem to trade detail and accountability for accessibility. This is where I have to respectfully disagree with Glen about the language of statistics. I am in favour of conventions which support greater specificity and clarity of communication within a specialized field. Statistical language, like the language of mathematics or physics, is intended to reduce the ambiguity of the English language so that, ideally**, person A hears and comprehends exactly what person B intended to say. It may look like English, but it's an illusion. When a physicist says "power", it is not power as in "great responsibility". What they mean is the amount of energy transferred per unit time, as measured in joules per second. I am not personally fluent in statistics, and I can't speak to the potential for abuse, but it looks like a similar situation to me; that is, a specialized mathematical language which has been designed to minimize miscommunication.
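
To give the same point a statistical rather than physical example: "power" in statistics also has a narrow technical meaning, the probability that a test detects an effect that really exists. Here is a quick, purely illustrative sketch using the statsmodels Python package; the effect size and significance level are arbitrary choices made only for the example.

```python
from statsmodels.stats.power import TTestIndPower  # pip install statsmodels

# "Power" here means the chance of detecting a real effect, not influence or wattage.
analysis = TTestIndPower()

# Sample size per group for a two-sample t-test to reach 80% power,
# assuming a medium effect size (Cohen's d = 0.5) and the conventional alpha of 0.05.
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"Roughly {n_per_group:.0f} participants per group")
```

Translated back into English, as I am arguing statisticians should be able to do: to have an 80% chance of detecting a medium-sized difference between two groups, you would need roughly 64 people in each group.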

The downsides of a field-specific language are clear: the alienation of many people; significant barriers to the spread of statistical literacy; the inclination toward an insular and cliquey atmosphere within the discipline. I don't know how to bridge this gap in accessibility, but I don't think that the answer is to dispense with specialized terminology. I will say that the burden of bridging the gap should not fall on the reader – perhaps statisticians need to learn how to translate back into English (proper English) as part of their training. Should detailed statistics reports be bilingual, available in both English and Stats? And might it help to incorporate a mandate for accessibility into their professional code of ethics [http://www.amstat.org/about/ethicalguidelines.cfm]?

 

*Similar, but not the same. As far as I can tell (the sources listed in these two infographics make me want to hug a copy of the APA style guide and whisper Shhh, it's okay, they can't get us through the internet – but I digress), it seems the two sets of references have only one point of possible overlap (they both refer to the CDC).

** Caveat: we’re all human here.


Filed under Blogging Question