Author Archives: annastandish

Week 12: Wrapping up

My research has gone through quite a few stages this term (even more in this last week, though I’m almost embarrassed to admit it). I started out with a squishy little grub of an inkling about gender and fan/producer dynamics in Doctor Who, and by the time I submitted the thing today, I was wrestling with a big Twitter-and-participatory-culture creature that bore very little resemblance to that little SSHRC larva. I held on to the Doctor Who online fandom as a focal point, but beyond that I ended up citing wildly different theorists and settled on a startlingly different methodology. In all honesty, I’m still not satisfied that I got a good enough grip on what makes for a persuasive research proposal, but I do know that I know a lot more about research methods and proposals than I did in September.

I didn’t expect to get so caught up in Twitter’s terms of service, but I did. I don’t know how I managed to miss the problems there for the earlier part of the course (I suspect I wasn’t reading current enough materials), but it turns out that over the past year, Twitter has become startlingly unfriendly to researchers. Well, when I say unfriendly, I mean that they intend to monetize researchers. I do wonder what effect this will have on scholarship in the long term.

In theory, the Library of Congress’s Twitter archive will provide the access that Twitter has been slowly choking off, but so far the reality seems to be that the LC is stuck in a technological trap, and it will be a long while yet before they are capable of (much less willing to) providing access to the Twitter archive to anyone, let alone student researchers like us.

 

It’s been a pleasure reading about all of your interests. You all have some very intriguing ideas. Have a good winter break, everyone! (and good luck on those exams and final assignments!)

As a last note on this blog, I’d like to invite all of you lovely people to click on this link and take a quick break to read a funny little comic strip. Especially those of you who posted about infographics in week 8!


Week 11: bona fides and mala fides in peer review

Like Portia (and for the exact same reason – my own group presented on the same topic in the morning Foundations class), I’ve had peer review in the context of the debates over Open Access at the front of my mind of late. In her post, Portia brings up a very recent case, not unlike the Sokal affair: John Bohannon’s “sting” on Open Access journals in Science. I would like to use my blog entry to build on her commentary.

One interesting detail about Bohannon’s project is that of the 304 Open Access journals he submitted his bogus article to, he knowingly included 122 journals listed at the time on Jeffrey Beall’s list of predatory publishers (http://scholarlyoa.com/publishers/). Beall’s list has been criticized in its own right (generally for being too inclusive), but the fact remains that 40% of Bohannon’s submissions went to journals already known for, or at least suspected of, unscrupulous, exploitative, and otherwise predatory practices. Of the journals on Beall’s list that completed some kind of review process, 82% accepted the paper – compared with 45% of journals listed in the Directory of Open Access Journals (and there is overlap between the two lists – I couldn’t find the numbers exclusively for journals not on Beall’s list). It seems to me that the real control group for this experiment was the set of Open Access publishers not on Beall’s list, and that the numbers in Bohannon’s report say more about the validity of Beall’s list than they do about Open Access publishing in general.
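Since I’m throwing percentages around, here is a quick back-of-envelope check of the figures above. The raw numbers come from Bohannon’s report as discussed in this post; the variable names and breakdown are my own illustration, not Bohannon’s code.

```python
# Sanity-checking the headline figures from the "sting".
total_submissions = 304   # Open Access journals targeted
on_bealls_list = 122      # of those, journals on Beall's list at the time

share = on_bealls_list / total_submissions
print(f"Share of targets on Beall's list: {share:.0%}")  # → 40%

# Reported acceptance rates among journals that completed some review:
acceptance_rates = {"Beall's list": 0.82, "DOAJ-listed": 0.45}
for group, rate in acceptance_rates.items():
    print(f"{group} journals: {rate:.0%} accepted the bogus paper")
```

Which is just to say: the 40% figure is simple division, and the real interpretive work is in deciding which of those groups counts as the control.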

Science magazine’s vested interest in promoting the traditional publishing model over Open Access may have contributed to the spin on Bohannon’s results (e.g. the choice of title for the article). It’s difficult to say. In any case, the role of Science magazine in all this becomes rather reflexive when one places the flaws in Bohannon’s own “experiment” in the context of the journal’s own notorious misadventures in the peer-review process.

The question at the heart of Bohannon’s experimental design and Science‘s role in the publication of his results is whether Bohannon and Science magazine were truly acting in good faith. And here’s a thread that I’ve seen running through the discourse on peer review, made most explicit in Stanley Fish’s NY Times piece on the Sokal affair: trust. Both traditional scholarly publishing and the suggested alternative models depend, to some extent, on being able to trust that the various parties involved are acting in good faith.

Traditional models of peer review rest on the assumptions that authors are providing work they honestly believe in (including real data and a genuine attempt to accurately portray methodology and analysis), and that editors and peer reviewers will honestly, to the best of their knowledge and ability, determine whether an article is worth presenting to a wider audience, or whether it will waste the academic community’s time (and, with non-Open Access, money).

Alternative models of peer review put trust in slightly different areas, but being able to rely on everyone acting in good faith is no less important. The kind of open review described by Fitzpatrick depends on reviewers engaging with the process in good faith. For open peer review to work, those involved need to be both honest in their criticisms, and neither take advantage of knowing which criticisms come from whom (especially regarding young and otherwise vulnerable scholars) nor expect others to take such advantage.

I suspect that the publishing of advancements in highly specialized fields of knowledge will always depend, to some extent, on the assumption that most everyone involved is acting in good faith. As long as this is the case, peer review (no matter the business model and rights management practices of the journal involved) will always be vulnerable to theatrical stunts by the likes of Sokal and Bohannon.

Peer review, though clearly in need of some re-thinking, can only ever be one checkpoint at the start of the long road to acceptance by the scholarly community. The fact is that the bulk of the burden lies (and, in one way or another, will always lie) with the community at large to decide whether a bit of research is or is not bunk.

For Bohannon’s own summary and explanation, see: http://www.sciencemag.org/content/342/6154/60.full

For the most succinct criticism I’ve ever read of the “sting” operation, as well as an up-to-date list of other responses to Bohannon, see: http://svpow.com/2013/10/03/john-bohannons-peer-review-sting-against-science/

Cited:

Fitzpatrick, K. (2009). Chapter 1: Peer review. In Planned Obsolescence: Publishing, Technology, and the Future of the Academy.


Another thought on week 10

As an aside to my other post on this week’s blogging question: I worry about the psychological impact of the expectation that the research process will be made public along with the final product. I think that there is something private about process work which deserves respect. The magic of note taking and rough drafts is in their presumed personal and ephemeral nature. That they are disposable is exactly what is so liberating. I find I am more creative and take more intellectual risks in my process work than I ever can with something that is public from the start. Even though the process work of historical figures can be rich and rewarding, when considering the research going on right now we need to weigh the potentially smothering anxiety that comes with producing something under the expectation of public scrutiny. There are things which I deliberately do not back up on my computer, because psychologically, I need them to be temporary.

If I had to make my research materials public, I know that whatever I made available would be heavily edited and weeded. In the end, would that still serve the same purpose for future researchers as the kinds of archived process materials (e.g. Bell’s notebooks) that Alan Galey discusses in his blog post?

(For the record I’d like to add that I take no issue whatsoever with planning to make one’s raw data public – that’s just polite!)


Week 10: Preservation

I don’t think that any of us can ensure, with full certainty, that anything we may produce will be preserved for any significant amount of time. Like others in this class have said in their blog posts, I’m not even confident that any file I create, or any medium on which I store that file today, will be readable any truly significant amount of time from now. The age of computers so far is such a short blip in the grand scheme of human history that it seems like a fool’s game to make that kind of prediction. It is near impossible to know for certain which formats will still be readable far into the future – and, more importantly, near impossible to predict which records, stored where, will be prioritized for conversion to whichever new format becomes the standard (and again to whatever standard format may follow).

It’s tempting to jump to the conclusion that the only reliable way to preserve our research materials for any significant amount of time would be to print everything onto acid-free paper and store the resulting mass in archival boxes in a humidity-controlled, fire-proof safe. But that seems not only hubristic and impractical (especially for research in which truly significant amounts of data are collected); it also restricts access to those materials, and (now this is the heart of the issue) would not even be capable of accurately representing the nature of digital artefacts and the ways in which the researcher made use of them. Print may be one of the most durable formats humankind has invented, but, as Kirschenbaum discusses with regard to the print versions of Agrippa, representing digital objects in print form is an interpretive act: it does not so much preserve the original digital artefact as convert and reinterpret it.

I am taking this issue to extremes, but with the proliferation and to some extent standardization of digital repositories online for universities and other organizations, and what with our ready access to affordable, large-capacity external hard-drives and the like, preservation for the immediate future does not seem too difficult.

(*edit*, it is now 3:50 pm, not 4:50, so Ben, if you’re reading this, could you maybe see if you can fix this timezone issue we’re having?)

 

Bibliography

Kirschenbaum, M.G. (2002). Editing the interface: Textual studies and first-generation electronic objects. Text: An Interdisciplinary Annual of Textual Studies, 14, 15-51. [http://www.jstor.org/stable/30227991]


Week 9.2: an artifact

If I had to pick one artifact to study (and it would be so very difficult to choose), I would focus on the architecture of the Royal Ontario Museum: the physical, constructed building itself, which (I would argue) defines the public’s access to and interactions with the ROM’s collections.

Even a study of the exterior surface of the building (with its palpable tension between the old ROM building and the Michael Lee-Chin Crystal) presents a fascinating case of the shifting identity politics of a cultural institution over time. In fact, I’m sure that an analysis of the main entrance alone could be revealing, given that it has been relocated twice, with each new addition to the building.

But if resources were not an issue whatsoever, I would love to analyze the entire building, inside and out. The layout, and the paths visitors are encouraged to follow, are extremely expressive; and the sharp contrast between the early 20th-century corridors and the Libeskind wings – and how the proportions of these spaces relate to the human body – makes them ideological as well as material constructions. I suspect that even the acoustics can have a significant impact on the ways in which visitors relate to the materials on display. Much has already been said about the ideological implications of a museum exhibit’s arrangement, but I think it would also be rewarding to analyze the ways in which the internal architecture encourages or even demands certain exhibition formats or arrangements over others.

I expect that such a study could result in a detailed portrait of this cultural institution’s sense of identity and purpose, of the ways in which the ROM literally constructs its relationship with the public – and how these tacit ideologies compare to the official discourse, and how they’ve changed over the last 100 years.

The layered and grafted architecture of the ROM also constitutes an extension of the historiographical enterprise that is a museum. Reading the Johns article from this week* reminded me that the construction of history is an ongoing and often tacit process, and that what we say we are now always includes a (re)constructed narrative of what we used to be. When major cultural institutions embark on large architectural projects, they can reveal a kind of institutional autobiography, and the new building can be read as a statement on this museum’s narrative and evaluation of its own past.

And on that note, I’d like to end this with a deliciously unpackable statement from the Crystal’s architect:

“Why should one expect the new addition to the ROM to be ‘business as usual’? Architecture in our time is no longer an introvert’s business. On the contrary, the creation of communicative, stunning and unexpected architecture signals a bold re-awakening of the civic life of the museum and the city.”

– Daniel Libeskind

[from http://www.rom.on.ca/en/about-us/rom/michael-lee-chin-crystal]

*Johns, A. (2012). Gutenberg and the Samurai: Or, the information revolution is history. Anthropological Quarterly, 85(3), 859-83.


Week 9: thinking about research as process

Sparked by week 8’s materials and discussion, I’ve had some lingering thoughts about experiments on which I’d like to muse for this wild card blogging question.

While procrastinating this reading week I came across this study: http://io9.com/the-people-who-can-see-in-pitch-darkness-1455790525?utm_source=twitterfeed&utm_medium=twitter&utm_campaign=Feed%3A+io9%2Ffull+(io9) (For the original scientific article referenced in the io9 bit – doi: 10.1177/0956797613497968) The cumulative findings are (in a nutshell) that in perfect darkness, around 50% of people can see visual phenomena corresponding to the movement of their own hands, and that this effect appears to be linked somehow to synaesthesia.

The findings are fascinating – but what really captured my attention about this article was this research team’s persistence, how they returned to the same question over and over with systematic permutations of the first experiment. None of their individual experiments adequately measured the phenomenon, but cumulatively, the results are persuasive.

Our readings have covered the design of a single experiment in detail, but nothing assigned (that I’ve found) deals with experiments in series. (The closest I found was in Neuman and Robson, on comparing different research projects/experiments in order to reveal the potential flaws in each project’s research design (p. 202).) Naturally, much of the point of the literature review is to find other, similar or supporting research, which makes your own research a cumulative and contextualized endeavour. But I’ve been thinking that it is also important, when designing an experiment or research project, to remember that no single study can be so well designed as to conclusively settle an issue; that we need to think of our projects as shaped and limited by our available resources; and that we need to consider how our framing of our research might enable others to build on it in the future.


Week 8: Stats

Last week’s question set me on a wild goose chase looking for an infographic about gun ownership and gun control that made the rounds on Tumblr soon after the shootings at Sandy Hook Elementary School. I vaguely remember this infographic as being exceptionally balanced and informative, and (rarest of all) politically nonpartisan.

I have yet to find this now semi-mythical infographic. With any luck I will return with a supplementary post, but in the meantime, having stubbed my toes on them whilst chasing my proverbial goose, I present the two infographics flanking this post: both on the topic of gun control, touting some similar statistics*, and telling two very different stories.

The obverse of the bad surveys Glen showed us last (last) week is bad data presentation. As Prof. Galey said last class, researchers’ biases can make them blind to the proverbial gorilla strolling across their data if they aren’t careful enough in formulating their questions. It also stands to reason that those biases can lead people to leap on the statistical patterns in the ball-passing, and yet ignore the gigantic gorilla walking plain as day through data that was collected with great care. Obviously, this effect is heightened if personal or political values are at stake, or if the goal has morphed from one of discovery to one of persuasion.

Infographics are an engaging and effective way to present research, but they also stand at a troubling confluence where both statistical literacy and visual literacy are core skills – neither of which is actively taught or fostered in mandatory schooling (unless there have been significant changes since I graduated from high school). On top of this, the infographic is a form of expression which seems accessible to non-experts: amateur efforts abound (and occasionally succeed) without the guidance of official standards. Rarely do I see an author’s name on an infographic, or for that matter any other reliable way to assess the document’s authority, short of painstakingly taking a fine-tooth comb to the list of sources (if there is even enough information to follow them).

I suspect that the selfsame factors which make infographics and similar data visualizations appealing and accessible also bear the potential to subtly obfuscate and obscure instances in which data is not being presented in good faith. Infographics often seem to trade detail and accountability for accessibility. This is where I have to respectfully disagree with Glen about the language of statistics. I am in favour of conventions which support greater specificity and clarity of communication within a specialized field. Statistical language, like the language of mathematics or physics, is intended to reduce the ambiguity of the English language so that ideally** person A hears and comprehends exactly what person B intended to say. It may look like English, but it’s an illusion. When a physicist says “power”, it is not power as in “great responsibility”. What they mean is “the amount of energy consumed per unit time”, measured in joules per second (watts). I am not personally fluent in statistics, and I can’t speak to the potential for abuse, but it looks like a similar situation to me: a specialized mathematical language which has been designed to minimize miscommunication.
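To make the physicist’s “power” example concrete, here is a toy sketch; the function name and the numbers are my own invention, purely for illustration of how the specialized definition pins down the meaning.

```python
# "Power" in the physicist's sense: energy per unit time, in watts
# (joules per second) – not power as in "great responsibility".

def power_watts(energy_joules: float, time_seconds: float) -> float:
    """Average power = energy consumed divided by elapsed time."""
    return energy_joules / time_seconds

# A device that consumes 3600 J over 60 s draws an average of 60 W:
print(power_watts(energy_joules=3600.0, time_seconds=60.0))  # → 60.0
```

The point being: once the term is defined this precisely, there is exactly one thing it can mean, which is what a specialized language buys you.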

The downsides of a field-specific language are clear: it alienates many people, raises significant barriers to the spread of statistical literacy, and inclines the discipline toward an insular, cliquey atmosphere. I don’t know how to bridge this gap in accessibility, but I don’t think that the answer is to dispense with specialized terminology. I will say that the burden of bridging the gap should not fall on the reader – perhaps statisticians need to learn how to translate back into English (proper English) as part of their training. Should detailed statistical reports be bilingual, available in both English and Stats? And might it help to incorporate a mandate for accessibility into their professional code of ethics [http://www.amstat.org/about/ethicalguidelines.cfm]?

 

*Similar, but not the same. As far as I can tell (the sources listed in these two infographics make me want to hug a copy of the APA style guide and whisper Shhh, it’s okay, they can’t get us through the internet– but I digress) it seems the two sets of references have only one point of possible overlap (they both refer to the CDC).

** Caveat: we’re all human here.
