Friday links: side projects > main projects, tuning your scientific bogosity detector, and more (UPDATED)

Also this week: Canadian research funding can’t go home again, what should I put in my NSF annual report?, Axios Review explained, impressions of #ESA100, new data on the prevalence of p-hacking, and more.

From Meg:

This post from the NSF DEBrief blog has useful information on what should go into annual and final reports. This will definitely help me figure out what they are looking for in those reports.

scitrigrrl had a post at Tenure, She Wrote on how academia helped her with triathlons. Her reason: “Mental toughness matters as much as anything else [in academia and sports].” Indeed!

Here’s an old post from Terry McGlynn that people on the job market this year might be interested in, where he asks: are teaching universities the farm league for R1 universities? In it, he says he doesn’t think he has a single ideal job. I’d say the same is true for me. I love doing research, but I also love teaching.

I was going to link to Stephen Heard’s side projects post and the ESA poll about #ESA100, but see Jeremy has those already. So, just keep on reading for those. (Jeremy adds: “Those are my links! Mine, I say!” [grabs links, runs away])

From Jeremy:

Arjun Raj argues that you’ll only confuse yourself by reading too much of the literature, because a lot of it is wrong. I don’t know that I’d say that, exactly, but I would say something similar: that you’ll get confused if you don’t read critically. So I really like how Raj follows up with suggestions on how to tell if a paper or subfield is bogus. Some of these suggestions are specific to his own field of molecular biology, but others are more broadly applicable or have analogues in other fields. Here are condensed/paraphrased versions of some of the more broadly-applicable ones:

  • If some obvious next-step observation or experiment is missing, be suspicious.
  • New methods should be validated on data from systems in which the correct answer is already well-known, and/or on simulated data generated by known processes.
  • Dig carefully into the supplementary material. That’s often where the bodies are buried.

Perhaps at some point I should post on my own bogosity detection heuristics. I also agree 110% with this passage near the end (emphasis added):

Making decisions based on the literature means deciding what avenues not to follow up on, and I think that most good molecular biologists learn this early on. Even more importantly, they develop the social networks to get the insider’s perspective on what to trust and what to ignore. As a beginning trainee, though, you typically will have neither the experience nor the network to make these decisions. My advice would be to pick a PI who asks these same sorts of questions. Then keep asking yourself these questions during your training. Seek out critical people and bounce your ideas off of them. At the same time, don’t become one of those people who just rips every paper to shreds in journal club. The point is to learn to exhibit sound judgement and find a way forward, and that also means sifting out the good stuff and threading it together across multiple papers.

Stephen Heard’s side projects seem to have as much impact as his main projects, or even more. He muses on his mixed feelings about this.

Sticking with Stephen: here he explains why you might want to be an associate editor at a journal. As a former associate editor at Oikos, I’d add: it’s a real feather in your cap in the eyes of department heads, deans, and colleagues. It’s a chance to have some influence on the direction of the field (especially at leading selective journals). And it gives you an early look at a broad range of the latest work in your field, so helps keep you in touch with what others are thinking and doing. Being an associate editor is a win-win for the field as a whole and for you personally. If you’re invited to do it (at a journal you care about), you should probably do it.

An interview with Tim Vines, founder of the Axios Review peer review service. I’m an editor for Axios Review; here and here are posts explaining why I support it.

Alex Usher of Higher Education Strategy Associates argues that the new federal government Canadians might well have in the fall probably won’t just return to the status quo ante on science funding policy. A new government isn’t simply going to unwind the current government’s obsession with subsidizing short-term industrial R&D and restore what came before; new governments want new ideas and new policies. But I confess I have no idea what those ought to be, since I liked the status quo ante. (ht Worthwhile Canadian Initiative)

Sticking with Alex Usher, here’s why it’s hard for governments to steer universities by using money as an incentive:

Outside of terrorist cells, universities are about the most loosely-coupled organizations on earth.

Jacquelyn Gill’s impressions of #ESA100. I too have noticed that the meeting attendees have become more diverse over the years. As for her wish that some meeting locations not be seen as much more desirable than others: hey, I want the meeting to go back to Spokane just so I can go back to Rock City Pizza (I think that’s what it was called). It ain’t happenin’. Some cities are always going to be more popular than others; that’s life. (UPDATE: I slightly misunderstood Jacquelyn’s comments on the location–she just wishes more people would give seemingly-undesirable locations like Milwaukee and Minneapolis a chance, not that people would stop caring about location entirely or see every location as equally desirable.)

Speaking of #ESA100, here’s ESA’s poll asking attendees about their experiences at the meeting and how it could be improved. There are also a couple of questions about whether you’re an ESA-certified ecologist, which has nothing to do with the meeting and so seemed kind of odd, but whatever. Go fill it out, I did. Related: my old post on why the ESA meeting ends with a half day on Friday (short answer: it’s complicated), along with a little poll on what to do about it. Unfortunately, I don’t foresee much traction for the option I and the plurality of poll respondents favored. (ht Margaret Kosmala, via the comments)

Some striking evidence on the prevalence and importance of p-hacking: NIH started requiring preregistration of all randomized controlled clinical trials in 2000. The rules oblige preregistration of statistical analyses, which should cut down greatly on intentional or unintentional p-hacking. Result: 17/30 cardiovascular disease trials conducted before 2000 found significant benefits of the tested treatment. Only 2/25 have done so since 2000. And none of the most obvious confounding variables explain the difference. I could definitely see using this example in intro stats courses. (ht Retraction Watch)
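
The underlying mechanism is easy to demonstrate. Below is a minimal simulation in Python (my own toy example, not code or numbers from the linked study): every simulated “study” has zero true effect, but a p-hacker who gets to test five outcome measures and report whichever gives the smallest p-value has a far higher false-positive rate than a researcher locked into one preregistered test.

```python
# Toy simulation of p-hacking via outcome switching (illustrative only;
# not from the clinical trials study linked above).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_trials = 2000      # simulated "studies", all with zero true effect
n_per_group = 50     # subjects per arm
n_outcomes = 5       # outcome measures a p-hacker may choose among

def min_p_over_outcomes(k):
    """Smallest two-sample t-test p-value across k independent null outcomes."""
    treat = rng.normal(size=(k, n_per_group))
    ctrl = rng.normal(size=(k, n_per_group))
    return min(stats.ttest_ind(t, c).pvalue for t, c in zip(treat, ctrl))

prereg = np.array([min_p_over_outcomes(1) for _ in range(n_trials)])
hacked = np.array([min_p_over_outcomes(n_outcomes) for _ in range(n_trials)])

print("false-positive rate, one preregistered outcome:", (prereg < 0.05).mean())
print("false-positive rate, best of five outcomes:    ", (hacked < 0.05).mean())
# Expect roughly 0.05 vs. 1 - 0.95**5 ≈ 0.23.
```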

Speaking of p-hacking: Data Colada argues that, if you’re looking for statistical evidence of p-hacking in a large sample of papers, you should not look at the distribution of all p-values (which is what several analyses that I’ve linked to in the past have done). Instead, you want to focus only on those p-values that might be expected to be p-hacked. I usually agree with Data Colada, but I think I disagree on this one.
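
For what it’s worth, their point is easy to illustrate with a toy simulation (my own sketch, not Data Colada’s code): a standard diagnostic for p-hacking is an excess of p-values just below 0.05, and that excess gets diluted if you pool in the many p-values nobody had any incentive to hack (manipulation checks, covariate tests, and the like).

```python
# Toy illustration (my own sketch, not Data Colada's analysis) of why
# pooling all p-values dilutes the statistical signature of p-hacking.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def optional_stopping_p(n_start=10, n_max=60, step=5):
    """Null experiment hacked via optional stopping: add subjects in
    batches, testing after each batch, and stop the moment p < .05.
    The resulting significant p-values pile up just below .05."""
    a = list(rng.normal(size=n_start))
    b = list(rng.normal(size=n_start))
    p = stats.ttest_ind(a, b).pvalue
    while p >= 0.05 and len(a) < n_max:
        a.extend(rng.normal(size=step))
        b.extend(rng.normal(size=step))
        p = stats.ttest_ind(a, b).pvalue
    return p

focal = np.array([optional_stopping_p() for _ in range(1000)])
# p-values nobody bothered to hack: plain null tests, no selection.
unhacked = np.array([stats.ttest_ind(rng.normal(size=30),
                                     rng.normal(size=30)).pvalue
                     for _ in range(5000)])
pooled = np.concatenate([focal, unhacked])

def bump(p):
    """Count of p in (.04,.05) over count in (.03,.04); >1 hints at hacking."""
    return ((p > 0.04) & (p < 0.05)).sum() / ((p > 0.03) & (p < 0.04)).sum()

print("just-below-.05 bump, focal p-values only:", round(bump(focal), 2))
print("just-below-.05 bump, all p-values pooled:", round(bump(pooled), 2))
```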

Relatedly: this week’s Science includes a paper from the Open Science Collaboration, which conducted pre-registered replications of 100 cheap-to-replicate experiments published in leading psychology journals. The replications had high power to detect the originally-estimated effect sizes. It was all very carefully done, in part by involving the authors of the original studies to make sure that the original protocols were followed as closely as possible. The headline results are sobering, though not surprising, and there are some interesting nuances:

  • Only 36% of the replications were statistically significant at the 0.05 level, vs. 97% of the original studies. That’s compared to the 89% significant replications that would’ve been expected if every original study had accurately estimated the true effect.
  • The replication p-values weren’t uniformly distributed, but they were very widely scattered on the (0,1) interval.
  • The mean effect size of the replications was less than half that of the original studies, with 83% of the replications finding effect sizes smaller than those originally reported.
  • Only 41% of the replication 95% confidence intervals contained the original effect size, and only 39% of the replications were subjectively rated as having replicated the original result.
  • Some of the replications found statistically-significant effects in the opposite direction from the original studies.
  • Original and replication effect sizes were significantly positively rank-correlated (Spearman’s r=0.51).
  • The lower the p-value of the original, the more likely it was to replicate. Cognitive psychology experiments replicated much more often than social psychology experiments, though this may be due at least in part to among-field differences in typical study design. Effects rated as more “surprising” replicated less often.

Overall, the results suggest at least some psychology experiments are studying real effects, but a combination of low-powered experiments (relative to the true effect sizes), possible p-hacking (unintentional or otherwise), and publication bias results in a published literature giving a very distorted picture of the world. That this study happened, that it’s making such a splash, and that so many psychologists–including most of the original authors–are supportive rather than defensive is terrific. It’s a sign of a culture change in psychology, one that I suspect is proceeding more rapidly than it otherwise would’ve thanks in part to blogs and other online discussions. News articles about the results from Science and FiveThirtyEight. This would make a great case study for an intro stats course.
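
If you want to show students the mechanism rather than just the headline numbers, here’s a crude simulation (my own invented parameters, not the Open Science Collaboration’s data): give every study the same small true effect, run them all underpowered, and publish only the significant results. The published effect sizes come out badly inflated, and direct replications at the original sample size mostly “fail”, much as in the Science paper.

```python
# Crude sketch of low power + publication bias (invented parameters,
# not the Open Science Collaboration's data or code).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
true_d = 0.2       # small true standardized effect, same for every study
n = 30             # per group; badly underpowered for d = 0.2
n_studies = 5000

def run_study():
    """One two-group experiment; returns (p-value, estimated Cohen's d)."""
    a = rng.normal(true_d, 1, n)   # treatment group
    b = rng.normal(0, 1, n)        # control group
    p = stats.ttest_ind(a, b).pvalue
    d = (a.mean() - b.mean()) / np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    return p, d

results = [run_study() for _ in range(n_studies)]
published = [d for p, d in results if p < 0.05 and d > 0]  # file drawer eats the rest

print("true effect size d:          ", true_d)
print("mean published effect size d:", round(float(np.mean(published)), 2))

# One direct replication of each published finding, at the same sample size:
rep_p = [run_study()[0] for _ in published]
print("share of replications with p < .05:", round(np.mean([p < 0.05 for p in rep_p]), 2))
```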

A while back I wrote that we’re currently living through a “culture clash” when it comes to so-called “post-publication review”. I argued that we need, but don’t currently have, agreed norms on what it is and how to do it. There’s ongoing discussion of this issue in bioinformatics, a field in which some prominent bloggers take the view that public attacks on the scientific competence and professional integrity of others are an essential part of scientific discourse. This got me thinking about how just a few prominent online voices can set the tone and define what’s acceptable in entire subfields. I hope that we set the right tone here at Dynamic Ecology.

13 thoughts on “Friday links: side projects > main projects, tuning your scientific bogosity detector, and more (UPDATED)”

  1. It’s not so much that I think meeting locations shouldn’t be seen as more or less desirable; it’s more that I wish people would give less trendy cities a chance. I think a lot of people were pleasantly surprised by Milwaukee and Minneapolis, and I’ve talked to a number of folks who were disappointed with Portland. My point was more that I’d like us to be a little more open-minded when it comes to conference venues, because affordability and access to amenities matter, too.

    (Selfishly, I had one of the best meals of my life in Minneapolis, also!)

      • (Note from Jeremy: for some reason WordPress cocked up the threading; this comment is in response to a comment of mine below.)

        I’m curious about Denver, too!

        I’ve heard Madison isn’t big enough for ESA now, which makes sense (I lived there 2005-2012 and the conference center is a bit out of the way and smaller than what we usually go to).

        I’d love to see Boston considered. I’m not a fan of the brutally hot and humid places, myself. I’m not psyched about Ft. Lauderdale, but the theme is pretty perfect so I’ll be there!

      • My impression is that Boston is in the same category as NYC, SanFran, et al. as a conference location, but I could be wrong.

        Too bad we’ve outgrown Madison.

      • Boston is in the NYC department, for sure. Hotels are $$$ and I imagine the convention center is too.

    • When I filled out the survey, I said that I thought Baltimore, Pittsburgh, and Milwaukee were all great places to have a meeting. (I didn’t make it to Minneapolis, but imagine that would be great, too.) Cities like Portland are nice, but I’d be happy to have cities like Baltimore, Pittsburgh, and Milwaukee in the regular rotation.

      • I’ve been pleasantly surprised by many of the ESA cities (not Baltimore though), so I hope they keep rotating it around because it’s fun to see new cities that I would not have visited otherwise. I initially thought Pittsburgh and Milwaukee would be awful, but thought they were fantastic venues by the time I’d left.

      • When I filled out the survey, I suggested a hybrid strategy: return regularly to certain popular locations, but also keep trying out new ones. I go to ESA every year (and if I don’t go, it’s not because I don’t like the location), so I think I’d get bored if ESA started rotating through the same 4-5 places.

        But it’s not clear to me that ESA should care about the preferences of anyone who attends every year, as I do. If you’re trying to make sure the meeting is well-attended every year, you have to worry about the preferences of people who take location into account when deciding whether to attend.

        Since I now have an excuse to share my own personal preferences, I will. 🙂

        I tend to be fine with most locations ESA picks. I like some better than others, but there’s never been a location I disliked so much that I didn’t enjoy the meeting.

        Having said that, I hope we never go back to Memphis or Savannah. And given that both those meetings were poorly attended by ESA standards (2500 or less, if memory serves), I suspect I’ll get my wish on that.

        I liked Pittsburgh and Minneapolis. Minneapolis is a cab ride away from The Happy Monk over in St. Paul, site of the best evening I’ve ever had at an ESA, so I have especially fond memories of Minneapolis. 🙂 Madison was very good, kind of surprised we haven’t gone back there. Everyone complained about the heat, but that was bad luck–Madison isn’t normally 98 F in Aug. Austin was good, as was Montreal. Portland was quite good, but I agree it’s a bit overrated. Everyone goes for the beer, but the best brewpubs and beer bars are scattered around the city and most require a cab ride from the convention center.

        I’m not thrilled with Tucson and Albuquerque, due to the combination of heat, relative lack of good beer (with one notable exception in Albuquerque), and not being a big fan of Mexican/Southwestern food.

        I remember liking Spokane a lot more than I expected, but that was yonks ago so that’s all I recall. My first meeting was Providence, RI–too long ago for me to remember anything about it.

        I’m looking forward to Louisville, never been there or anywhere near there so curious to check it out. And really looking forward to New Orleans, never been there either. I’m a bit surprised that we can afford New Orleans, even post-Katrina. That’s the trouble with ESA being the size it is. As I understand it (happy to be corrected if I’m wrong), we’re not big enough to be able to afford going to someplace like New York, Chicago, Boston, San Francisco, Philadelphia, or San Diego. We can’t afford, and wouldn’t fill, the Javits Center in New York or whatever. But we’re also *too* big to go to major cities, because we’re too big for a single big hotel to host us. The US paleontological meeting apparently goes to places like Boston and New York because the meeting is less than a thousand people. They can all fit in a single big downtown hotel or pair of adjacent hotels.

        I know lots of people are sad we’re now too big for Snowbird to ever go back. But since I live near the mountains, the scenery at Snowbird doesn’t compensate for its various disadvantages to me. Which just goes to show how personal and idiosyncratic location preferences are. 🙂

        What about cities we’ve never been to, at least not in forever? Is ESA the right size to go to Boulder or Denver? Never seen either city, but have heard good things about both. I hear mixed things about Columbus, Cincinnati, and Kansas City–thoughts, anyone? Seattle and Vancouver would be great–are they too expensive for ESA to rent? Years ago, the ESA looked into Calgary, but it’s not gonna happen–the convention center’s too small (3500 people, max). Which I think is actually good, because honestly Calgary’s not the best city for ESA. It’s a commuter city that empties out in the evenings, so the downtown doesn’t have a critical mass of interesting bars and restaurants that are walkable from the convention center. I’d love to go to Toronto (cue eye rolling from most Canadians reading this), but I assume that would be too pricey. What about Quebec City? Are there any other college towns, like Madison, that have a convention center big enough to take us? Ok, there’s Knoxville, but we’ve been there, I’m thinking of new places. Meg, any chance the ESA could come to Ann Arbor? (we can all crash at your place, right?)

        I might well be missing ESA next year, and I’m okay with that: I’m not looking forward to Ft. Lauderdale, so if I’m going to miss a meeting, better that it be one at a location I’m not excited about. (sorry Jacquelyn!)

      • @Casey terHorst:

        “I initially thought Pittsburgh and Milwaukee would be awful, but thought they were fantastic venues by the time I’d left.”

        That’s a good illustration of Jacquelyn’s point. I know Pittsburgh a bit and knew it’d be a good pick. I remember saying so to people who didn’t know the city and who initially thought “Ugh, Pittsburgh”.

        So perhaps I should temper my skepticism about Ft. Lauderdale.

  2. Re: questions in the ESA poll about what cities should host ESA
    I wrote a blog post (http://www.ecoevolab.com/your-esa-carbon-footprint-2/) about my carbon-footprint guilt from traveling to ESA, although clearly this wasn’t a strong enough force to prevent me from going. But Jarrett Byrnes then pointed to his article (http://72.27.230.152/ojs/index.php/ebl/article/viewFile/29/27) about how to reduce the carbon footprint of academic meetings. One option is to be more strategic in terms of geography. Ft. Lauderdale seems like just about the worst option in this regard, as it’s not easy to get to for anybody outside of south Florida.
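
    (To make “more strategic in terms of geography” concrete, here’s a hypothetical back-of-the-envelope calculation in Python. The attendee counts are invented and the coordinates approximate, so this only sketches the idea of picking the host city that minimizes total distance flown; it’s not a calculation from Byrnes’s article.)

```python
# Hypothetical sketch of geographically strategic venue choice: pick the
# candidate city minimizing total great-circle person-km. Attendee counts
# are invented; coordinates are approximate.
from math import radians, sin, cos, asin, sqrt

def haversine_km(p, q):
    """Great-circle distance in km between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (*p, *q))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))

# Invented attendee head counts by home metro, keyed by (lat, lon).
attendees = {(42.3, -83.0): 400,    # Detroit/Ann Arbor
             (38.9, -77.0): 500,    # Washington, DC
             (34.1, -118.2): 300,   # Los Angeles
             (47.6, -122.3): 250,   # Seattle
             (29.8, -95.4): 200}    # Houston
candidates = {"Minneapolis": (44.98, -93.27),
              "Denver": (39.74, -104.99),
              "Ft. Lauderdale": (26.12, -80.14)}

for city, loc in candidates.items():
    total = sum(n * haversine_km(home, loc) for home, n in attendees.items())
    print(f"{city}: {total / 1e6:.2f} million person-km")
```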

  3. I notice that most of the places people are suggesting or praising are the more northerly ones. I think that’s one fairly universal opinion–we all wish the ESA didn’t go to stinkin’ hot locations so often. But I suspect there’s no changing that, since I’m sure we get good deals on convention centers and hotels by going to hot places in August. At least, I sure hope we do! (Tucson should probably pay *us* to meet there in August…)
