Open Science: How do we track contributions across the whole internet?


I've had two discussions recently with organizations trying to implement open science-type platforms, and the conversation keeps coming back to the same thing: "How do we get people to contribute?"

Maybe this is a silly question. Clay Shirky argues that trying to explain why people contribute to Wikipedia (or other crowdsourced projects) in raw economic terms makes no sense. There is literally no reason for it, except that people have free time and would rather spend some of it creating media as well as consuming it (e.g., watching TV or surfing the web).

Still, I think when it comes to post-publication peer review, crowdsourced knowledge repositories, and the other ventures in open science, getting some amount of recognition for contributions can only help. This doesn't apply only to open science - sites as diverse as Wikipedia and Stack Overflow have their own ways of assessing reputation - but I think it becomes more critical in the academic realm. The key thing to keep in mind is that, for academics, recognition is currency. Recognition is everything. It affects hiring, tenure, promotion, pay, and reputation, which in turn is key to getting more opportunities to be recognized. Right now, recognition comes mostly in the form of authorship on papers, and one's academic worth is measured by the number and "quality" of those publications (though quality is by nature subjective and difficult to measure, so the perceived quality of the journal in which a paper appears often serves as a poor stand-in).

A very slow process of change is underway, whereby quality can be measured more directly and "contributions" can be considered viable units of research output, not just papers. Contributions could include data, code, reagents, post-publication reviews, and edits to knowledge repositories. There are many ways to contribute to knowledge creation and dissemination, and the scientific community is right to begin rewarding those that go beyond publishing in high-profile journals. It's doubtful we'll actually reach the point where contributing data or post-publication reviews is taken into account in hiring and tenure decisions, but I think that's somewhat beside the point. Academics already review papers, sit on study sections, edit journals, organize conferences, and sometimes even put a little effort into their teaching responsibilities, none of which necessarily affects their future pay or prospects.

These things (usually) go on a CV. Where should we track them in the digital age? One place might be academics' own websites, with links to all the relevant places, but (for now) this would require a large amount of maintenance (it should be noted that maintaining a CV is no small task in and of itself). Another would be ORCID, which could be considered a digital CV. A recent ORCID plugin for WordPress provides a step forward here: blog posts and even comments can now be associated with an ORCID profile, and could therefore, in theory, be tracked. I'm interested to see where this goes.
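To make that concrete: ORCID exposes a public read API, so in principle a digital CV could be assembled by machine rather than by hand. Here's a minimal Python sketch that lists the works attached to a profile. It uses 0000-0002-1825-0097, ORCID's documented example iD for a fictional researcher, and assumes anonymous reads of the public API are permitted (they may be rate-limited):

```python
import requests

orcid_id = "0000-0002-1825-0097"  # ORCID's example iD; swap in a real one
url = f"https://pub.orcid.org/v3.0/{orcid_id}/works"

resp = requests.get(url, headers={"Accept": "application/json"})
resp.raise_for_status()

# Works are grouped by shared external identifiers; each group carries
# one or more summaries with a title and a type (journal-article, etc.)
for group in resp.json().get("group", []):
    for summary in group.get("work-summary", []):
        title = summary.get("title", {}).get("title", {}).get("value", "?")
        print(f'{summary.get("type", "unknown")}: {title}')
```

If blog posts and comments start landing in records like this, "tracking contributions across the whole internet" stops being a metaphor and becomes a query.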

At another level, we'll need a way to assess whether these contributions are meaningful. A poor-quality dataset is both easy to create and of little value to anyone. A blog that no one reads is of little immediate value, although everyone has to start somewhere (please tell your friends about my blog). Do we need altmetrics for these other contributions, too? Addgene provides an interesting case study here, I believe. A search for "GFP" returns 2357 plasmids. Beyond assuring that the sequence of each of these is exactly right, which one is best for me to use? Similarly, if I deposit a plasmid that gets heavily used, should I be recognized for that in some way? Since Addgene ships the plasmids itself, it must have this usage information available.
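Because Addgene handles fulfillment, it could in principle rank plasmids by how often they're actually requested. Here's a toy sketch of what such a score might look like - the request counts and the recency weighting are entirely invented for illustration, not anything Addgene actually computes:

```python
def usage_score(requests_by_year, as_of=2015, half_life=3.0):
    """Sum yearly request counts, halving their weight every few years
    so recent demand outranks long-faded popularity."""
    return sum(
        count * 0.5 ** ((as_of - year) / half_life)
        for year, count in requests_by_year.items()
    )

# Hypothetical request histories for three GFP plasmids
plasmids = {
    "pGFP-A": {2012: 40, 2013: 35, 2014: 30},  # steady workhorse
    "pGFP-B": {2013: 5, 2014: 90},             # newly popular
    "pGFP-C": {2014: 20},                      # just deposited
}

for name in sorted(plasmids, key=lambda p: -usage_score(plasmids[p])):
    print(f"{name}: {usage_score(plasmids[name]):.1f}")
```

The same shape of metric would answer the depositor-credit question, too: whoever deposited the newly popular plasmid has clearly contributed something the community values.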

There is broad consensus that assessing value only through the glam-publication game is hurting science. We understand very well why people keep publishing there: it helps their careers. Change the metrics that matter - change what people get recognized for - and you change science.