In a couple of recent posts on statistical machismo, it has become increasingly clear to me that there is disagreement about how common statistical machismo even is. Which is an irresistible invitation to produce a poll (as also suggested by a commentor). So please take the following poll. Results will be published after Thanksgiving (week of Nov 28).

Click here to leave Dynamic Ecology and enter the unframed google survey (or link to share): https://docs.google.com/forms/d/e/1FAIpQLSea-c6lbTGCPgj0iJ2rgk38X6HQ6bo2LQ8gj1rsZQyvWKZVUQ/viewform?usp=sf_link

Or take the poll directly here:


Rather than just asking how common it really is, I am much more interested in the effect. If reviewers push a method that is, for example, more complex but still gets the same result (even if it was unnecessary), I don’t mind, as long as it doesn’t mean no one understands, reads, or cites my paper anymore. Once that happens, though, it greatly harms research, which isn’t what anyone should want.

Hopefully some of the questions I asked will get at the effect, or at least at perceptions of the effect. I suppose you’d really have to do a bibliometric analysis to fully answer your question.

Hi Brian,

Interesting poll, and I’m curious to see the results. But as I was filling it out, all I kept thinking was, “It depends on the question.” If you ask an interesting ecological question, conduct a good study, and use appropriate stats, you’ll probably get the paper in a good journal. (In fact, my impression is that Science and Nature ecology papers tend to have the simplest analyses, wrapped around really interesting questions.) I guess the problem is that “appropriate stats” is subjective, which is where machismo can come in. In the end, though, I think whether one publishes in a high profile journal has much more to do with the question being asked than the stats — and this comes from someone who likes fancy stats (when necessary).

I’m sure someone already brought up this point in the comments of other posts.

You are right that everything is context specific and also in the eye of the beholder. Which is probably why there are such diverse experiences and opinions on this. I’m just curious to get a broad brush sense at this point after all the conflicting comments I have seen.

The poll missed an increasingly common situation I see, which is (non-ecological, bench-based) reviewers pushing for much simpler methods because they have no idea of the kinds of adjustments necessary for confounding, the possibility of nonlinear interactions, etc. I was recently asked to give a citation for multivariate regression.

The poll does ask about reviewers pushing for simpler methods. Can you clarify why you don’t think those questions cover the situation you describe?

No, you’re right, that’s fair. But in this case they are asking for raw correlations without any kind of adjustment in lieu of simple regressions, which they struggled to interpret. If regressions constitute machismo, heaven help us.

Regarding this question: “How often do you think the choice to use simple, traditional versus complex, advanced statistics actually changes the ecological or scientific conclusions (in a qualitative, not just quantitative way)?”

Complicating matters a little bit, I think there’s also a tendency among some ecologists to conflate complex statistics with mathematical theory. After all, these are often described in succession in the analysis section of a paper. As an example, using a Bayesian framework to fit a non-linear model could look like overly complex statistics to some reviewers. “Why are they going through all this trouble? Just do an ANOVA!” But if the objects of interest really are the parameters in the non-linear model, perhaps with strong rationale for its use, then the statistics are in service of the question. Arguably they are as simple as they can be. Or vice versa, a reviewer might make the case that a GLM being used to analyse the data in a manuscript has a poor correspondence with current ecological theory, which perhaps is based on a non-linear function. In either case it may look like complicated statistics are being pushed. But actually the statistics aren’t the issue. Am I making any sense?
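The scenario above can be sketched in a few lines. This is a minimal, hypothetical illustration (the data, the Michaelis-Menten response form, and all parameter values are assumptions, not from the post): when the scientific question is about the parameters of a non-linear model, fitting that model directly is arguably the simplest appropriate analysis, even though it looks fancier than an ANOVA, which would not estimate those parameters at all.

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(x, vmax, k):
    # vmax: asymptotic response; k: half-saturation constant.
    # These two parameters are the actual objects of scientific interest.
    return vmax * x / (k + x)

# Simulated data standing in for, e.g., uptake rate vs resource concentration.
rng = np.random.default_rng(42)
x = np.linspace(0.1, 10, 50)
true_vmax, true_k = 3.0, 1.5
y = michaelis_menten(x, true_vmax, true_k) + rng.normal(0, 0.1, x.size)

# Least-squares fit of the non-linear model; a Bayesian fit would target
# the same two parameters, just with priors and a full posterior.
popt, pcov = curve_fit(michaelis_menten, x, y, p0=[1.0, 1.0])
perr = np.sqrt(np.diag(pcov))  # approximate standard errors

print(f"vmax = {popt[0]:.2f} +/- {perr[0]:.2f}")
print(f"k    = {popt[1]:.2f} +/- {perr[1]:.2f}")
```

An ANOVA on binned concentration groups could tell you the response differs among groups, but it could never return an estimate of vmax or k, which is the point of the comment: the model, not the machinery, dictates the statistics.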

Yes, we got a little bit into that on the last statistical machismo post. There are complex methods. But there are also people not understanding, and/or pushing, different inferential frameworks than the author intended.

I thought the question “How do you think being known as a person who can and does use complex, advanced statistics affects one’s professional reputation?” was interesting. My perception is that the number of positions for “quantitative ecologists,” which often include using advanced statistics, is increasing more than in more traditional subfields of ecology. So it definitely seems like there are some professional incentives to use these methods and to have a reputation for doing so.

I agree. In my experience, when students meet with job candidates in ecology and natural resource departments, they always ask the candidate about their statistical skills. Students seem to favor candidates who can teach a Bayesian stats course, a genomics course, a mark-recapture course, etc. There is a lot of incentive for young researchers to demonstrate that they can use these statistics, perhaps even when they aren’t required.

re: the number of advertised positions for “quantitative” ecologists, see this old post for a bit of data and some commentary: https://dynamicecology.wordpress.com/2017/09/13/ask-us-anything-the-demand-for-field-biologists/

tl;dr: advertised N. American asst. prof. positions in ecology and allied fields that require or encourage quantitative expertise beyond the ability to teach intro biostats to undergrads are still only a small minority of all advertised positions. And they’re still vastly outnumbered by positions that require or encourage field-based research and/or the ability to teach field courses.

Edit: I should add that I agree that a greater proportion of faculty job ads in ecology are for “quantitative” ecologists than used to be the case, say, 20-40 years ago (though I haven’t actually checked! Not sure how you would…). But given how low the current proportion is, I don’t see much current reason to be worried about the rate of change in that proportion. There may well be reasons to worry that ecology as a field is collectively over-valuing technical quantitative expertise and rigor; this poll asks about some of those reasons. But “ads for quantitative ecologists are taking over the faculty job market in ecology” is not among those reasons. Well, unless you consider any rate of change >0 to be worrisome, which I don’t think you should.

Very interesting poll. I am curious to see the results. To me, the choice of statistics depends on what the goal is. I don’t specifically choose more complex or simpler ones. Instead, choosing the proper methods that solve the problem seems to be the key, so the choice of simple or complex methods is context dependent. If a reviewer pushes methods I think are unnecessarily complex or simple, I would respond with more detail on the reasons for the current method, instead of just following the reviewer.

There also seems to be an interesting contrast between ecologists and statisticians. I have seen that fellow ecology graduate students tend to prefer fancy/complex methods. But in the few statistical consulting sessions I observed, the consultants suggested simpler methods more often. My personal speculation is that a better understanding of statistics, especially the theory behind the methods, may play a role here. A complex method is probably not that complex once you fully understand it; thus you have less tendency to fancy these methods.

I think the key question is probably: is there ever only one proper method? And which considerations go into “proper”? Is being well understood by a majority of the audience part of proper? Is being able to communicate what you actually did in 5,000-7,000 words part of proper?

Via Twitter:

Now this was interesting! What I found most interesting were the questions about whether using complex stats changes the chance of a paper being published and the quality of its conclusions. My impression is that it increases the chances of publication, as the reviewer may assume that someone using complex stats knows what s/he is doing, but decreases overall quality, as complex stats are often used to correct for flaws in sampling design. Wondering if more people think this way…

Looking forward to the results. Another situation I’ve experienced a few times is reviewers recommending rejection of conceptual/review papers because they don’t include data or a meta-analysis.

Related old post: https://dynamicecology.wordpress.com/2016/01/11/how-not-to-influence-the-direction-of-your-field/

RE: The question asking about whether or not you skip the methods section, don’t understand the stats, etc when a paper has advanced stats…

When I am highly interested in a paper and I do not understand the stats, I research the methods and come back to the paper to assess the results/conclusions and usually also to see if the methods may help me in my work. I put some time into it. But, of course, when my interest/motivation is lower this does not happen and I suspect that is a common response.

Pingback: Poll results on statistical machismo | Dynamic Ecology