Brainstorming the online peer review process
It’s not much of a surprise that I would keep thinking about the online peer review process we discussed last week, since that’s what I do many hours a week now. Again, whether or not it was wholly successful isn’t really the issue in my mind; what matters is that the folks at Shakespeare Quarterly and MediaCommons sought to innovate peer review and academic publishing. As I mentioned last time, I’ve always thought about production and distribution when imagining what digital media had to offer, and less about how scholarship and collegiality might also benefit. So the SQ experiment was definitely illuminating on that front.
What follows, then, are some ideas to brainstorm the next time someone tries something like this, building on what SQ and MediaCommons attempted this go-round:
Incorporating responses: One outcome of the project was that there was so much feedback that authors found processing the comments took longer, both in time spent and in page length added. According to the journal’s editor David Schalkwyk, in the piece that appeared in the Chronicle, editors and authors had to spend a good amount of time keeping track of how the discussion of the articles went, which also led to lengthier revisions. Think of it this way: Don’t you feel obligated to incorporate all the comments offered by people who’ve really taken the time to read your writing? Well, multiply that by about ten, with the suggestions being public, so that there’s a record to check your changes against. Getting input is good, but there’s a limit to it, logistically for the editor and mentally for the writer.
A closed but collaborative option: For publications that are perhaps less ambitious and iconoclastic in their objectives, what about combining the existing blind model with this open online one, where a smaller group of selected reviewers works together online in assessing a piece? You might get some of the benefits of the shared “track changes” concept of the SQ experiment, but in a more circumscribed and perhaps more manageable form. You could also keep the reviewers honest a bit, if a few of ’em had to work together on a yay-or-nay recommendation, in consultation with one another and the journal editors. Plus, you get all the comments in digital form, saving the time it takes to snail-mail things back and forth, where that’s still happening.
A living document: The last point is a more philosophical one, about the need to actually publish a “final” piece at all, when the original submission is, in fact, a dynamic, living document in and of itself. In some ways, the legitimacy conferred on a completed article by its actual printing and publication becomes anticlimactic, compared to all the attention and care it received during the vetting process. Indeed, it’s unlikely that readers of the final product will devote as much attention to it as the self-appointed reviewers did. Plus, how much larger will the audience be at that point, especially since there were probably some registered lurkers on top of the identified participants?
The project really gets at the heart of what scholarship means in light of the technologies now available, since the “rough” version worked through in the revision process likely elicited more active engagement from its readers than the final product ever will. If you think about it, even a rejected essay run through the wringer this way would inspire more discussion and interaction among scholars than pretty much any published piece in a typical print journal. Open peer review definitely makes us think about the ways that process can trump product.