May 28, 2014
By now most researchers have heard of the term Open Science, and there are certainly a lot of strong feelings about it across the various disciplines. Recently, hot-button issues like journal bans and replicability crises have made it all the way into mainstream news sources. Within the walls of the academy this conversation has been going on for a while - during my MA I was practically raised on campfire tales of academic misconduct.
These ghost stories recently came true for many with the indictments of fraudulent research in both the life and social sciences (note: this is unfortunately by no means an exhaustive list), and things have gotten to a point where more than talk is necessary. Researchers are dissatisfied with the way their work is being evaluated, and this has given rise to attempts to create a better system - most if not all of which fly under the banner of open science. Heck, there's even a center for it. But what does open science represent? It's a nebulous term to say the least, and in a way, the whole movement is similar to the whole Occupy Wall Street thing. Let's take a look now at those similarities, and some important differences.
Tensions are running high in research these days, and this can lead to an us-vs-them attitude, with the grads/postdocs/untenured faculty vying for dominance over senior faculty and journal publishers. Certainly, the ivory tower is more easily toppled than a Wall Street skyscraper, but an angry mob is unlikely to replace the current system with a better one. Evolution, I think, is better than revolution.
Researchers are players in a game that gives prizes for the wrong reasons. On Wall Street, nothing but greed mattered, and lack of oversight coupled with rampant dishonesty led to a massive economic downturn. In science, the lack of (in many cases statistical) oversight during the peer review process, coupled with granting agencies' tendency to favor the most sensational findings, has led to abuses of the system. As a result, published findings don't necessarily mean anything.
Research tends to be all-consuming - but you get very little out for what you give. This is particularly salient in terms of access to knowledge, which to many has a very high value. Up until recently, almost all publicly funded scientific research was stored behind paywalls, a practice which is unacceptable to many (hence the talk of boycotts). If I lost my position at a research university or decided to start my own research group, I'd no longer be able to afford access to the journal articles I need to read to stay on top of the game.
Occupy had a pretty catchy slogan - but open science does not. 'We are the 99%' is not really appropriate for us. Even if an adversarial attitude were the solution here (it isn't), I don't think the numbers shake out that way. They say that 1 in 10 postdocs manages to seal the deal on a tenure-track gig, so it'd be more like ninety than ninety-nine. Besides, we're scientists. If we did think up a slogan, it would probably be terrible.
The Occupy movement was often accused in the media of being disorganized. That may not have actually been the reality, but Open Science certainly seems that way. Different researchers are angry about different things, and everybody has different ideas on how to solve them. Examining twitter posts with the hashtag #openscience attached will lead you to projects ranging from replicability to alternative publication formats to all manner of software projects, some with rather cryptic goals. This is unlikely to change any time soon. Let's face it - people who are drawn to academic research got into it because of a desire to be independent, a trait that does not lend itself to playing well with others (if you don't believe me, try working in a lab with 12 postdocs).
One clear difference between Occupy and Open Science is that we're not perceived by the mainstream media as a bunch of freeloaders hanging around in shantytowns in public parks, playing drums and getting high (although I've got my eye on those kids from Frontiers for Young Minds, I don't know anybody who's that happy all the time). Rather than living in tent cities, most are maintaining active research programs while pushing for change. This is no easy task, since research is generally expected to dominate your life, and given this, it's quite impressive that some researchers, particularly early-career ones, are willing to risk falling behind in the research grind for a cause they believe in. Of course, the fact that we can all assemble on social media and participate in this discussion should not be ignored - the internet is our place of assembly.
Ultimately there are going to be some changes coming, and many underway. The publishing process is fixable - F1000, ArXiv and others represent early attempts to explore alternative reviewing processes, and these have managed to gain a fair bit of traction in some areas, which leads me to be relatively optimistic about my own field. Poor statistical oversight, on the other hand, is a bit tougher. Techniques are being developed to spot fraudulent data, but I'd like to believe that this can be fixed by improving the quality of undergraduate education (more on this in a later post). However, I've worked in departments where a small minority of tenured faculty didn't even understand basic statistical concepts such as p-hacking - and this might make something like teaching more stats a tough sell.
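For readers who haven't run into p-hacking before, here's a minimal, stdlib-only sketch of one common variant - optional stopping, where you keep collecting data and re-testing until the result looks "significant". Everything here (the z-test, the peeking schedule, the sample sizes) is an illustrative assumption of mine, not anything from a particular study; the point is just that peeking inflates the false-positive rate well above the nominal 5% even when there is no real effect.

```python
import math
import random

def z_test_p(sample, mu0=0.0, sigma=1.0):
    """Two-sided z-test p-value for a sample with known sigma."""
    n = len(sample)
    z = (sum(sample) / n - mu0) / (sigma / math.sqrt(n))
    # Phi(x) via the error function; p = 2 * (1 - Phi(|z|))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def run_study(p_hack, n_start=10, n_max=50, step=5, rng=random):
    """Simulate one study where the null is TRUE (data ~ N(0, 1)).
    Honest analyst: tests once, at the final sample size.
    P-hacker: peeks after every batch and stops at p < .05."""
    data = [rng.gauss(0, 1) for _ in range(n_start)]
    while True:
        if p_hack and z_test_p(data) < 0.05:
            return True  # declare "significance" and stop collecting
        if len(data) >= n_max:
            return z_test_p(data) < 0.05
        data.extend(rng.gauss(0, 1) for _ in range(step))

rng = random.Random(42)
trials = 2000
honest = sum(run_study(False, rng=rng) for _ in range(trials)) / trials
hacked = sum(run_study(True, rng=rng) for _ in range(trials)) / trials
print(f"false-positive rate, honest: {honest:.3f}")  # near the nominal 0.05
print(f"false-positive rate, hacked: {hacked:.3f}")  # clearly inflated
```

Running this, the honest analyst's false-positive rate sits near 5%, while the repeated-peeking strategy produces "significant" null results considerably more often - which is why statistical oversight of how analyses were conducted, not just whether p < .05, matters.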
But the toughest issue, I think, is the fact that the reward system appears to be malfunctioning. There's a limited amount of money to go around, and not everybody can get some. I've never participated in the grant review process, but I do know some faculty who have, and they tell me that it's an arduous process that's taken very seriously. There is a lot of material to cover in a short amount of time. Grant reviewers can't review the grants of people they know personally due to conflict of interest. In small research areas, this has the effect of sometimes forcing them to review work that's farther outside of their field than they might like. Better aggregation of scientific metadata is likely to be the solution here, and companies like Altmetric are exploring ways of ranking research outside of the conventional measures of impact factor by taking into account how much a work is being discussed in social and traditional media. Certainly an interesting stat to know, but it's more or less just a rehashing of the traditional impact factor, except the opinions of people who also watch Here Comes Honey Boo Boo are also considered, which is not necessarily an improvement. Rather, meta-information regarding the quality of theory, statistics and replicability of published work needs to be at grant reviewers' fingertips to aid them in the decision-making process.
So that's it. In some ways Open Science is like Occupy, and in some ways it's not. And while many media outlets did quite a bit of work to discount what Occupy wound up accomplishing, there was certainly an impact. Personally, I think a lot of this depends on the work of web developers. The current publishing system is a throwback to the age of print - when it made sense to charge money for a copy of an article, because it was an actual thing that had to be printed. Similarly, pre-publication review was really the only way to do it back in the print era. But we live in a different time, and software projects like The Open Journal demonstrate that we can be the architects of our own future.
If you'd like to comment on this blog, get at me on twitter.