Friday links: #metoo in economics, Word = LaTeX, and more

Also this week: sociology vs. good news, how analyzing data is like composing music, perverse effects of (optional) double-blind review, best references section ever, and more.

From Jeremy:

Rising star economist Roland Fryer has been found guilty of serial sexual harassment by a Harvard University investigation. Commentary from Justin Wolfers and Claudia Sahm.

What would a theory of practical data analysis look like? Very interesting question, with some tentative stabs at an answer based on an analogy to musical theory:

Similarly in music, theory provides an efficient way to summarize the useful aspects of what has come before in a manner that can be fit into something like a semester-long course. It also provides a language for describing characteristics of music and for communicating similarities and differences across musical pieces. But when it comes to the creation of new music, theory can only provide the foundation; the composer must ultimately build the house.

I really like that way of putting things. It’s what I’m trying to do in the book I’m supposed to be writing.

Am Nat EiC Dan Bolnick examines whether Am Nat’s switch to double-blind review (with an option for authors to opt out) is having any effect. Unfortunately, Dan later discovered a mistake in his data: they’re data on the gender of corresponding authors of Am Nat submissions, not first authors as mistakenly stated in Dan’s post. That affects the interpretation of some of the results in the post, because women are less likely to be corresponding authors than they are to be first authors, and women first authors are less likely to be corresponding authors than men first authors are. Still, Am Nat’s data do seem to show a small but unfortunately perverse effect of the switch to optional double-blind review. The good news is that Am Nat publication decisions are gender-blind (with respect to corresponding author gender) both for papers reviewed double-blind and for papers reviewed single-blind. But male corresponding authors are slightly more likely than women to opt out of double-blind review, and it looks like papers subject to double-blind review get reviewed a bit more negatively on average than papers subject to single-blind review (aside: that last result doesn’t surprise me; as Dan notes, it has been shown before in studies of peer review). The obvious solution here is to make double-blind review mandatory. As an Am Nat editor, I’d support that switch. We’d just have to hope (or ideally, check) that reviewers don’t see through the blinding, or think they’ve seen through it, non-randomly with respect to author gender or other attributes, and that reviewers who think they’ve seen through the blinding aren’t any less critical on average than those who don’t think they have. I don’t think that would happen, but it’s not far-fetched, so it would be best to check.

“If something, like, totally great happened, would sociology cover it?” (ht Andrew Gelman) One could ask the same question about ecology: if some aspect of conservation, or of human impacts on the environment, got better (as some aspects have!), would ecologists cover it? Maybe the better (and trickier) question is, would they cover it the right amount? That’s trickier because what is the “right amount”, exactly? Related old posts from Brian and me, and me again. One could also ask the same question about the practice of ecology: is the practice of ecology getting better, and if so, do ecologists talk about that improvement the “right amount”? Brian’s post earlier this week addressed some of this, and here’s an old post from me approaching the topic from a different angle.

Economists these days increasingly use rigorous experimental and statistical methods to address narrow, tractable questions about causality in “local” contexts, rather than bigger, more important questions that aren’t directly addressable with those rigorous methods. So, with that background, discuss: could the same be said of ecology today? Or maybe the same could be said of ecology back in, say, the 1980s-90s, when field experiments in 1 m² quadrats were all the rage, but not any more thanks to remotely-sensed data, globally distributed experiments, our increasingly macroecological focus, etc.? More broadly, when is it a waste of everyone’s collective time to argue about big questions on which the evidence is inconclusive? Does rigorously answering narrow, tractable “small scale” questions contribute to answering big, broad questions? If so, how exactly? Looking forward to your comments.

“The fact that I cannot remember the last time the internet made me feel, on balance, less anxious and better about other people tells you something about how much has changed online since 1999, 2001, and even 2007.” I’m curious if this resonates with anyone much younger than me. I’m in my mid-40s, only a few years older than the author. And I’m not even on Facebook.

On criticizing profs as a group on Twitter for doing things that no (or almost no) profs actually do.

A few simple tweaks to make your Word document look just as sharp as a LaTeX document. Angry replies from LaTeX users in 3…2… 🙂 (ht @dendrezner)

They don’t make lit cited sections like they used to. 🙂 Although my favorite is still Price (1970 Nature).

13 thoughts on “Friday links: #metoo in economics, Word = LaTeX, and more”

  1. I think the disproportionate attention paid to bad news is quite a general phenomenon (beyond sociologists or ecologists). The principle that “bad is stronger than good” seems to manifest in many situations:
    http://assets.csom.umn.edu/assets/71516.pdf
    Doom-and-gloom scenarios do a good bit of the work of justifying ecologists’ existence, so there’s additional incentive to give them more weight. Good news stories are acknowledged, but most often quickly followed by “yeah, but…don’t forget about all the bad news”. For one person that might be what you call the “right amount” (i.e., there’s a reason not to dwell on good news), and for another not the right amount since it might produce a skewed view of reality.

  2. Word >>>> LaTeX??

    Oh my goodness! Where are my meds?!? Word is a sophisticated composition tool, not a “text editor!” Gasp!!! And writing in plain text with formatting commands in brackets????

    Word’s writing tools help you organize documents as you write them and make it super easy to keep track of different elements within the doc. You can format any aspect of the text: fonts, paragraphs (indenting, hanging, text spacing, whatever), tabs, columns, language, outline style and numbering, fill, and even borders. These are essential organizational tools. I can’t imagine writing without them.

    On top of managing fonts and appearance, Word’s styles:

    > automatically generate the TOC and lists of figures and tables, and regenerate each list with one click when content changes;

    > create navigation that is retained in PDF export (as Jim Pavlik pointed out);

    > allow writers to reorganize paragraphs and headings in the text super fast with Alt+Shift+Up/Down Arrow;

    > move headings and text between levels and styles easily with Alt+Shift+Right/Left Arrow;

    > apply any heading or style with keyboard shortcuts of your choice: File > Options > Customize Ribbon > Keyboard Shortcuts > Categories > (scroll all the way down) Styles.

    For headings, I find “Alt+1” and “Alt+2” for “Heading 1” and “Heading 2”, etc., easy to key and easy to remember. You can change the shortcut keys to the easiest key combinations (take that, Adobe!) for whatever you’re doing at the moment.

    I’m not sure why so many people have so much trouble with figures in Word. It can be a bit of a pain, but it’s hardly unmanageable.

    Happy holidays!

  3. About that first Kieran Healy thread (criticizing profs for things no one does)…maybe I’m missing something, but I’m not seeing the humor there. It feels reminiscent of responses to sexual harassment, when women said “Hey, this awful thing happens and it needs to stop” and were told “Don’t be ridiculous; no one actually does that, since I’VE never seen it.”

  4. Regarding the Am Nat optional double-blind review process: to me there are compelling arguments why optional double-blind is worse than either straight single- or double-blind. In a large study from NPG, only one in eight authors submitting to the Nature family of journals actually chose double-blind review when given the option. More troubling, authors from India and China chose double-blind submission at much higher rates than did authors from western countries such as France and the United States, as did authors from less prestigious institutions. Double-blind submissions were also rejected more often. In other words, optional double-blind worsens the Matthew Effect.

    I hoped Brian McGill would chime in on GEB’s initial experience with “double-blurred” review. (In “double-blurred” review, authors are not required to tediously scrub their manuscripts of giveaway statements like “in our previous work we found that… (No Longer Anonymous et al. 2016)”.) Brian called it “soft mandatory double-blind”; the approach is intended to capture the perceived benefits of double-blind review while not overburdening authors or the editorial office. I renamed the approach “double-blurred” peer review, and after laying out the options, the journal I work with is running with it (thanks, Brian). I still have ethical misgivings about obscuring potential funding bias and competing-interest statements from reviewers, but we’ll have to see how it plays out.

    So I’d argue that while Dan Bolnick is spot on about the ambiguous evidence for the superiority of single- vs. double-blind review, optional double-blind is the worst option. I gather he inherited this compromise from his predecessor.


    • I like double-blurred! At GEB it has all been seamless. I’m not sure how you would measure success; as I indicated at the time, the main motive was simply to increase the perception of fairness, and I think that has worked.

      I agree optional double-blind is ultimately a fairly troubling method. I give Am Nat a lot of credit for leading the charge to double-blind in ecology (some conservation and behavior journals got there first, but within ecology Am Nat was clearly first by a good lead). Given that, it’s understandable that they hedged their bets a bit with optional double-blind, in case some authors had a big negative reaction and to avoid concerns about too much work scrubbing certain papers. But I would suggest it’s time to move to mandatory double-blind, now that so many other journals have followed their lead with no real negative consequences from mandatory “double-blurred” review.
