In finance, you typically buy a stock or bond if you think it’s going to go up in value in the future, and sell if you think it’s going to go down. There’s an obvious (loose) analogy here to bandwagon-jumping in science. Someone publishes some new method or approach that promises a shortcut to insight, and others “buy”: they jump on the bandwagon by rushing out to apply the approach. And then once it starts becoming apparent that the method doesn’t live up to its initial promise, or has been pushed as far as it can go, people “sell”: they stop using the approach and move on to something else.
You can view this behavior, in both finance and science, as a form of betting. Much as with buying a stock, applying some new scientific approach amounts to a bet that the approach will work, and that others will “buy” the results (believe the results and view them as interesting and important). And ceasing to use some approach amounts to a bet that others will stop “buying” the results.
But how do you bet against a scientific idea you never bought into in the first place? How do you jump off a bandwagon that you never got on?
In finance, you can bet against a stock or bond that you don’t own by selling it short. To do this, you sell stocks or bonds that you don’t currently own, and buy them back later. If the price has fallen in the interim, you make money because you spend less on the buyback than you made on the initial sale. Mathematically, it’s equivalent to buying a negative amount of the stock or bond. In practice, this is done by borrowing stocks or bonds from someone who owns them, selling them immediately, buying them back later, and then returning the repurchased stocks or bonds to the lender while pocketing the profit (if there is a profit; the short seller loses money if the stock or bond rises in price).
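The arithmetic of a short sale is simple enough to sketch in a few lines. The function and prices below are purely illustrative (not drawn from any real market data): profit is just the initial sale proceeds minus the cost of the buyback, so it’s positive when the price falls and negative when it rises.

```python
def short_sale_profit(sell_price: float, buyback_price: float, shares: int) -> float:
    """Profit from selling borrowed shares now and buying them back later.

    Positive when the price falls in the interim; negative when it rises.
    Equivalent to holding a negative quantity of the asset.
    """
    return (sell_price - buyback_price) * shares


# Price falls from 100 to 80: the short seller pockets the difference.
print(short_sale_profit(100.0, 80.0, 10))   # 200.0

# Price rises from 100 to 120: the short seller takes a loss.
print(short_sale_profit(100.0, 120.0, 10))  # -200.0
```

Note the asymmetry this simple formula hides: a buyer’s loss is capped at the purchase price, but a short seller’s potential loss is unbounded, since there is no ceiling on how high `buyback_price` can go.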
But doing the same thing in science is more difficult. Thomas Basbøll notes that, in science, there isn’t a big “market” for criticism of ideas, especially popular ideas. I think he’s right about this (and his remarks inspired this post). Purely critical papers, or even critical technical comments, are often difficult to publish, and when they are published they’re often ignored. A lot of scientists take a dim view of criticism of the work of others, especially if it’s not accompanied by a proposed alternative approach. On this view, if you don’t want to “buy” stock A, you can go buy stock B instead (i.e., use some other approach, or work on some other topic entirely), but shorting stock A is illegitimate.
Similarly, short selling in finance is controversial. It’s been accused of increasing market volatility and giving rise to self-perpetuating market crashes, though the evidence for this is not clear-cut (according to Wikipedia, anyway).* Conversely, short selling is defended as an important price discovery mechanism that improves market efficiency. Short sellers provide a counterweight against irrationally bullish investors, may prevent financial bubbles from developing, and have played a role in uncovering fraudulent business practices (the financial equivalent of uncovering serious technical flaws in scientific papers).
As I’ve said before (though with different phrasing), I think we need more short selling in science. Short selling in science has the same benefits as short selling in finance. Even better, some of the purported downsides of short selling in finance don’t exist in science. For instance, there’s no way criticism of existing scientific ideas is ever going to lead to a self-fulfilling panic in which everyone tries to abandon all scientific ideas, analogous to a financial market crash in which everyone simultaneously tries to sell all their assets. And while in finance an individual short seller can lose massive amounts of money if the borrowed asset rises in price, in science if you criticize an idea and your criticism turns out to be wrong or gets refuted, you don’t suffer any major loss. You don’t get fired, or lose all credibility so that you can never publish again, or anything like that. For instance, back when the biodiversity-ecosystem function bandwagon was first getting rolling in 1997, Huston and Aarssen pushed back by arguing that “sampling effects” were driving the then-new experimental results. Their criticisms were refuted by additional data and new analytical techniques, and biodiversity-ecosystem function research rolled on. Had they been short sellers in finance, they’d have lost their shirts. But because they’re scientists, they didn’t suffer any “losses” at all as far as I’m aware (nor should they have).
One way in which blogs have improved scientific communication is by providing a mechanism for short selling. If journals won’t publish your criticisms of some scientific idea, you can still make the case on your blog, and you’ll still have an impact insofar as people read your criticisms and alter their scientific behavior accordingly. Rosie Redfield’s criticism of the arsenic life paper is the most prominent example, but here at Dynamic Ecology we’re no slouches at short selling either (e.g., our most-commented post is Brian’s “short sell” of the use of detection probabilities in wildlife research). This is really important, I think. Some financial bubbles (for instance, in subprime mortgages) are thought to have developed in part because, for technical reasons, there was no way to short sell them.
Scientific “bubbles” distort the allocation of scientific effort. They should be popped before they get that far. And the way to pop them is to short them.
UPDATE: On Twitter, David Wescott takes my financial analogy even further:
@boraz fascinating read.I wonder if we could securitize scientific ideas and market them a la Fannie Mae. would you buy discovery futures?
— David Wescott (@dwescott1) May 8, 2013
That snapping sound you just heard was the sound of my analogy being stretched beyond the breaking point.😉 Discovery futures?! The thought is somehow hilarious and vaguely terrifying at the same time…
*You get the background research you pay for on this blog.