No, I don’t need to propose an alternative approach in order to criticize yours

Been keeping an eye on the molecular ecology live online chat. Got to the office just as my contrarian question about the futility/pointlessness of genotype-phenotype mapping was posed to the group. Not that I actually believe it’s futile/pointless–I don’t know nearly enough to judge one way or the other–but it’s an issue that some famous people recently raised in a top journal, so I was curious what the panelists would say. It’s a nice change to be able to watch a serious debate as a curious bystander, rather than as a participant! Basically, none of the panelists really agreed with the critique. More importantly, they had a variety of cogent-sounding reasons why not, touching on both things we already know, and things we could learn in future (e.g., by combining studies of selection in natural populations with whole-genome sequencing).

But I was struck by a passing remark by one of the panelists (Loren Rieseberg) that I wanted to comment on, because I think it articulates a widely-held view with which I completely disagree. Loren said in passing that he always has a problem with people criticizing the approaches of others without proposing an alternative. (His remark wasn’t directed at me, by the way, but at the paper in Evolution that inspired my contrarian question.) As I said, I think Loren’s articulating a widely-held view here. So don’t think I’m picking on him specifically when I say that I totally disagree.

If a way of addressing some question is ineffective for whatever reason, then it’s ineffective, full stop. Not only is there nothing wrong with pointing this out, it’s absolutely essential to point this out! I say this for the following reasons:

  • Whether or not approach X is flawed is logically independent of whether or not there are any alternative approaches, and of any flaws in whatever alternative approaches might exist. If I point out flaws in approach X, I’m no more obliged to propose or discuss alternative approaches than I am to discuss other logically-unrelated matters. And if approach X is flawed, you can’t defend it by saying “But no alternative exists!” That does not make the flaws in approach X go away, any more than if you’d said “But chocolate is delicious!”
  • Surely it behooves us to have an honest understanding of the flaws of any approach, whether or not any alternative exists! I mean, would you argue that the flaws in some approach should be hushed up and swept under the rug just because no alternative approach is available? Even if for some reason we have no choice but to study some question, and to do so using some flawed approach (and such cases are rare, I think), surely it’s best if we’re aware of the flaws!
  • Alternative approaches may also be seriously flawed. In such situations, the best idea often is not to settle on the best of a bad bunch, but to stop pursuing the question entirely. It’s very rarely the case in science that we have no choice but to pursue a particular question (see also the following point).
  • Opportunity costs are ever present. The time, effort, and money being spent pursuing an ineffective approach to addressing some question could be spent addressing some other question effectively. In other words, there’s always an “alternative approach” available in science–ask some other question!
  • Ineffective approaches often are worse than nothing. What’s worse than not knowing anything about the answer to a question? Having a wrong, misleading, or biased answer that is not recognized as wrong, misleading, or biased. See also: zombie ideas.
  • One very strong motivation for developing new approaches is recognition of problems with existing approaches.

I suspect there’s an implicit assumption behind Loren’s point of view–that nobody’s approach is ever totally ineffective. So if you criticize someone’s approach without proposing an alternative, you’re basically arguing that we stop doing imperfect-but-still-valuable science. To which I’d respond in three ways. First, my point above about the need for an honest understanding of the effectiveness of any approach still applies. Second, my point above about opportunity costs still applies. Time, money, and effort spent on a somewhat-effective approach could be spent addressing some other question more effectively. I don’t claim that such judgments are easy to make–they’re not–but they’re the sorts of judgments that scientists (and our funding agencies) make all the time. They’re unavoidable. Third, the implicit assumption that every approach is at least somewhat effective needs justifying on a case-by-case basis. Scientists, even very good ones, and even lots of them at a time, sometimes do make really serious mistakes and pursue completely ineffective approaches. That’s simply an empirical fact. And when that happens, I don’t think it’s polite or respectful or professional to remain silent. Science works best when we all push each other, not when we all pat each other on the back.

p.s. The panelists have now gone into an interesting discussion of how metagenomic studies (e.g., “Here are the whole-genome sequences of all the microbes at site X and time Y”) need to move beyond description into experiments and hypothesis testing. Woohoo! That’d be expensive, but still…woohoo!

18 thoughts on “No, I don’t need to propose an alternative approach in order to criticize yours”

  1. Jeremy, you seem to have an absolutist view – an approach either works or doesn’t work. I take a much more relativist view – most approaches work a little, all approaches are flawed, most are somewhere in between, and the goal is to find the best available.

    Although, as a statistician, I am not a huge fan of model selection, on methodologies I guess I am a method selection person: take the best available method. As long as we’re learning new things, it’s not a waste of time.

    All of which leads to this: in my relativist view, critiquing my method without having a better alternative is of limited value. If the criticism is constructive and highlights something I can fix or improve, it is useful. But if it’s a claim that my method is fundamentally flawed and useless, it is counterproductive without a better alternative.

    Thus, although I think that niche modelling, for example, is deeply limited, I am careful not to make statements that would disqualify it as a tool to use in conservation until we have a better approach (which is a core research question of mine).

    • No, I don’t have an absolutist view at all. Should’ve made that clearer in the post. I would not argue for abandoning an imperfect-but-still-useful approach. What I would say is that it’s perfectly fair (and indeed, very useful) to point out the imperfections of imperfect-but-still-useful approaches even if one can’t suggest any improvements.

      What I would also say is that an essential part of the case for using an imperfect-but-still-useful approach for which no better alternative exists is the case for pursuing that question at all. This kind of gets back to my post on model systems. If the reason you’re obliged to use an imperfect-but-still-useful approach is your choice of study system, well, maybe you should’ve chosen a different system, or a different question. But on this point I freely admit that there’s wide scope for reasonable disagreement.

      All I wanted to push back against in this post is the view that criticism, unaccompanied by positive suggestions, is useless. Clearly I didn’t articulate that as well as I should have.

      It’s interesting that you’re not a fan of statistical model selection (I’m not either, except in limited circumstances), but that you are a “methods selection” fan. The two situations seem perfectly analogous to me. I take it you don’t find them analogous? So that you’re ok with being a fan of one, but not the other?

      • Thanks for the clarification – rereading your original post, I can see the interpretation you just espoused. I may be guilty of finding a hole and driving a truck through it!

        In any case, I think you are right – the core disagreement (which came up on my scaling post) is whether, when one has methods with clear flaws, one should just abandon the question or push on with shaky methods. I favor the latter. One can always leave the question and hope that a new technology will make it tractable in a couple of decades. But I suspect this is not the most common scenario in ecology. Rather, I think pushing away at a problem – however weak the methods – brings insight, and eventually the aha moment arrives that lets us leap forward and improve our approach. Progress is highly non-linear, and if you don’t try you’ll never move forward!

      • And to answer your specific question about model selection vs. “method selection”: it’s all about progress in science. Picking one model as best out of a collection of bad models does not advance the field in any way I can see – almost certainly one ought to return to the list of models and think harder.

        “Method selection” may also pick the best of a bad set of methods, but as I just argued, I think this can pay off in the long run. Of course, it can also be a function of laziness and of not thinking hard enough about alternative methods (or, per your example, a sign that you should be working on a different system). It depends, but at least there are scenarios where I think science advances by using the best of a bad set of methods.

      • I am a little puzzled by the comments regarding model selection. I can’t imagine any situation in which you don’t want to use objective methods to determine the best possible model.

  2. There are many, many flawed ideas out there. But as a scientist, I have a limited amount of time. I have to decide if I am going to spend it tearing at ideas in arenas in which I have nothing new to contribute, or in areas where I might have something new and useful to contribute.
    Here is the thought process I go through while trying to decide how to spend my time:
    At its best, science works because it is a true marketplace of ideas. Ideas are proposed, they are evaluated, we see where they work, and we try to understand where they don’t and why. We let those failures inform new ideas. If this process is to lead to greater understanding, then proposing new ideas has to be valued more than tearing at current/old ones.
    Secondly, most ideas are flawed; it’s simply easier to point out flaws in others’ ideas than it is to develop a new one. As a grad student, I went through a phase where I got good at tearing papers apart long before I got to the point where I was ready to contribute something useful and new. As a result, if I don’t point a certain flaw out, it is likely that somebody else (hopefully somebody with a new idea) will.
    Finally, it simply takes more guts to propose an idea to the market to have it evaluated than to sit back and pick at other people’s ideas.
    There are undoubtedly certain cases where it is important to tear down an old idea even in the absence of having a new one to contribute, but on the whole I try not to be a professional naysayer because, in the long run, it is (1) less valuable to science, (2) easier, and (3) more cowardly than contributing new ideas.
    In my view, being able to find the weaknesses in an argument is necessary to being a good scientist, but it is not sufficient.

    • All very good comments, Don.

      I don’t disagree nearly as much as you might think. As I said in my reply to Brian’s comment just now, my post wasn’t as clear as it should have been, which is my bad. This post isn’t one of my better efforts. All I wanted to push back against was the notion that criticism, unaccompanied by suggestions for positive improvement, is useless. That it’s somehow *inherently* nitpicky, no matter *what* the substance of the criticism.

      As I said in my reply to Brian just now, yes, all approaches have their flaws and limitations. I believe that we do need to know about those flaws and limitations, and I think pointing out those flaws and limitations is part of how a well-functioning market of ideas works. But I certainly wouldn’t argue for abandoning all imperfect approaches! I definitely do not want to make the best (especially not a hypothetical, impossible-to-achieve perfection) the enemy of the good, or of the adequate, or of the better-than-nothing. I just don’t want to see people take the attitude that “Well, all approaches are imperfect–so all criticism of all approaches is just nitpicking”.

      I don’t agree that it takes more courage to propose a new idea than to criticize an existing one. Especially not if you’re talking about criticism of a well-established or popular idea. To my mind, it takes real guts to criticize such an idea–*especially* if you don’t have an alternative to replace it with. In my experience, it’s psychologically *much* easier for most people to propose a new idea of their own than to criticize an idea of someone else’s. Isn’t this one big reason why peer review is anonymous–to give people a space in which to say critical things they’d otherwise be reluctant to say? For these reasons, I disagree that criticizing others’ ideas–even if that’s mostly what one does–is “cowardly”.

      Re: time allocation to thinking up new ideas vs. criticizing existing ideas, that’s a great point. It’s something I actually do think about a lot, hard as it may be to believe sometimes from reading this blog. For instance, there’s a paper in a leading journal which took an idea of mine and applied it in a new context. Or rather, misapplied it–the paper is seriously, fatally flawed. But I didn’t bother to write a comment to the journal (and I won’t bother to blog about it), because I decided that nobody else was likely to follow the paper’s example and make the same mistake. One thing I like about blogging is that it’s fast. It gives me the option to put ideas out there–both critical and positive–that I wouldn’t otherwise bother to put out there. It changes the reward-per-time-invested calculus.

      Positive contributions may in the aggregate be more valuable to science than negative contributions in the long run. But surely the best science needs both. And in any case I’m not sure that considering what’s best for science as a whole helps with the individual time allocation decisions that you quite rightly raise. It’s not as if I *personally* would be producing lots more really good, positive ideas if I wasn’t devoting any of my time to knocking down zombie ideas. My personal rate of positive ideas production is not strongly time-limited in that way. But perhaps this is not true for others.

      I completely agree that it’s important to exercise good judgment about which flaws and limitations are important enough to be worth criticizing. Like everyone, I’ve had the experience of sitting in grad student reading groups, picking papers apart, and going away despairing that anyone can ever do any science worth doing. Like everyone, I got over that, although you may not be able to tell from reading this blog. As I noted in my old post on how to blog, posts that say “I too agree with the ‘marketplace’ that idea X is a good idea well worth pursuing” are really boring. What I post on the blog is a highly non-random subset of all the ideas I have. It’s skewed towards the (hopefully important and interesting) criticisms because those are what I think make for more interesting posts that will be worth reading.

      Because I tend to like discussion and arguments, and because posts that start discussion and arguments often are critical, I’m certainly very conscious of the risks of becoming a professional nitpicker. And there have been times in my blogging when I’ve fallen into someone-is-wrong-on-the-internet-and-I-must-prove-it territory. Perhaps this is even one of those times. But I hope I don’t make those kinds of mistakes too often. There are a few things that help me strike a balance. One is working on my own science, and working with my students and collaborators on their science. I spend a *lot* more time doing that than blogging! Another is comments and feedback from readers–for which thanks!

      • ” This post isn’t one of my better efforts.”
        Yes it is!

        It’s one thing if paper authors are consistently clear about the drawbacks of approach x, y, or z with respect to question A. Fine, then we have in the forefront of our minds at all times that these methods have various weaknesses and accordingly result in uncertainties and biases of known characteristics. It’s another thing altogether to either (1) ignore such problems, or (2) be more or less unaware or only vaguely aware of them, sometimes while pretending to be aware of them.

        The idea that pointing out problems comes from a lack of courage just steams me big time. Rather, people continually sweeping known analytical problems under the rug and failing to be forthcoming about the effects thereof on the results of their studies–now *there’s* a lack of courage.

  3. I’m with you one thousand percent on this Jeremy.

    When are scientists going to come out and say that hey, we mess up sometimes, we’re not always right. And when faulty work is done, it needs to be pointed out, as step one of the process for self-correction. And if there ain’t self-correction, then there ain’t ANY correction.

    I’m getting really, really tired of all the small, and not-so-small, ways that science and scientists have of avoiding and deflecting criticisms. Some of these people need to attend an AA (Alcoholics Anonymous) meeting and find out that admitting mistakes is the first step on the road to improvement. No, not smooth-talking your way around admitting mistakes, with various excuses and red herrings and whatnot, but actually coming right out and saying “We don’t have this right.”

    I mean I am getting **REAL** tired of some of the attitudes I see in science, over and over again.

  4. You probably weren’t thinking about this at all, but this is why I’m not a physicist: I used to be really interested in theoretical high energy physics, but in my opinion the field has been stagnant for a while. There are some interesting questions out there, but not ones for which we currently have productive ways of finding answers. I looked at the energy being spent on string theory and decided my energies were better spent elsewhere.

  5. Pingback: We need more “short selling” of scientific ideas | Dynamic Ecology

  6. Pingback: John Stuart Mill on the value of contrarianism | Dynamic Ecology

  7. Pingback: Lost causes in science | Dynamic Ecology

  8. Pingback: Ask us anything: how do you critique the published literature without looking like a jerk? | Dynamic Ecology

  9. Pingback: Are there Buddy Holly ideas? (or, has any line of ecological or evolutionary research ever been prematurely abandoned?) | Dynamic Ecology

  10. Pingback: Does any field besides ecology use randomization-based “null” models? | Dynamic Ecology

  11. Well, I was searching about this and went through your entire piece. I agree with the point that it’s more logical to pursue a different question rather than wasting time on a flawed approach. But do you think this idea is restricted to the field of science only (by science I mean fields where things can be tested to their limits)? In life we face questions which are important not just to us but to a majority of our community, such as an alternative to a political ideology, or an idea of safety, etc. Such issues, when critiqued, should be accompanied by alternatives, since those are questions which can’t be avoided, as there is no better question.
