Scientists have moral obligations, some of which apply to everybody, and some of which are specific to scientists. For instance, everyone agrees that scientists shouldn’t falsify their data, or abuse their authority over their students. Many scientific societies, such as the Ecological Society of America, have an official code of ethics to which all members are expected to adhere.
But precisely which scientific practices should be covered by our professional ethics? It seems to be increasingly common to argue that scientific practices that weren’t previously thought to be ethical issues actually are. For instance, here’s Mike Taylor arguing that it’s immoral to publish your papers in any venue that isn’t open access. Other practices that have been argued to be immoral include submitting to, reviewing for, or editing for for-profit journals (or even society journals which the society contracts with a for-profit publisher to publish). It’s been argued that reviewing for Faculty of 1000 is immoral. (UPDATE: Via Twitter, Casey Bergman denies that he ever said reviewing for Faculty of 1000 is a moral issue. Apologies for the misreading, although I think my reading was a natural one given his post starts with what he calls a “morality tale” and says reviewing for Faculty of 1000 violates his “principles”). I’ve had people tell me that it’s immoral of me to decide what to read by scanning the TOCs of leading journals. I’ve heard it argued that pre-publication peer review is immoral, amounting to a form of illegitimate censorship. I wouldn’t be surprised if someone out there has argued that sharing your raw data and code is not merely good for science, but a moral obligation. And I’m sure there are many other examples I haven’t thought of, to do with practices totally unrelated to the “openness” of science.
There’s a common thread to many of the above arguments. It’s claimed that scientific practices in which individual scientists engage (sometimes but not always because they have an externally-imposed incentive to do so) are immoral because of purported negative externalities for some larger group or entity (e.g., science as a whole, the general public, or some subset of the public).* But not all of the above arguments have that character. And in any case my goal with this post isn’t to debate whether or not any particular practice is indeed a matter of ethics, and if it is, what ethical practice consists of. Here I’m just curious about which practices people see as having an ethical dimension. I just want a sense of the range of opinion out there on what’s a matter of ethics and what’s not.
So I’ve set up a little poll, asking you to identify scientific practices that you think have an ethical dimension (by which I mean “ethical issues are a relevant consideration”, not necessarily “ethical issues are the only relevant consideration”).
Obviously this poll isn’t going to take a random sample of views from any well-specified population. But that’s ok, I just want to get some sense of the range of views out there. And the list of “possibly-ethical” issues in the poll may well be incomplete. If you think it’s incomplete, say so in the comments. I also recognize that the poll asks a pretty coarse question and it’s not going to capture all the nuances of people’s views. That’s ok, it’s just a conversation starter.
I’m also curious about how far people think it’s ok for scientists to go in support of their views on ethical scientific practice. But I wasn’t sure how to phrase a poll question on that, so I guess I’ll just throw it out there for discussion. Presumably, nobody has any problem with any scientist who decides on ethical grounds to share all their raw data, or only submit to open access journals, or whatever. Because while others may not share that ethical stance, nobody thinks it’s unethical to share your data, or only submit to open access journals, or whatever. And I don’t see why anyone would have a problem with anyone else explaining the ethical reasons for their own choices–explaining why they personally chose to share their data or whatever. I might not agree that your choice was ethical or wise, but why would I have any problem with you explaining why you chose as you did? But the next step beyond that is trying to convince others that there are ethical issues here that those others aren’t considering. And the next step beyond that is trying to convince others to adopt the same ethical stance as you and change their behavior accordingly. I could imagine that taking either of these two steps might turn off some people, much as some people are turned off by missionaries knocking on their door. The next step beyond that (and one that is perhaps hard to avoid taking if you’ve taken the previous step) is to tell your fellow scientists that they’re behaving unethically. And if the ethical issue is serious enough (and presumably you think it’s pretty serious if you’ve come this far), isn’t the next step to consider your fellow scientists as bad people and treat them accordingly? At least, it might be hard to convince others that you don’t consider them bad people, despite appearances to the contrary! See, e.g., this response to Mike Taylor’s editorial, and Mike’s reply.** Presumably, people’s views on this will come down to how confident they are in the rightness of their ethical stances, and how important they think it is that others adopt the same stance. Again, in raising this I’m just curious to get a sense of the range of views out there.
A request: obviously, people feel very strongly about ethical issues. So I’m more worried than usual about the comment thread getting heated. The comment thread on Mike Taylor’s piece illustrates just how heated discussions of this topic can get. It’s for that reason that I decided not to argue for my own views on any of this, instead just focusing on surveying the range of opinion out there. Indeed, I considered not allowing comments on this post at all. But in the end I decided to trust the great commenting community we have here. Let’s all bend over backwards to make sure the discussion stays productive, professional, and respectful (that applies to me as well). In order to help make sure that happens, I’m going to be taking my role as moderator especially seriously.
*As an aside, I’ll note that any negative externalities created by any given individual’s actions ordinarily are small. So if you call such actions immoral on the grounds that they create negative externalities for some larger group, you are basically trying to combat a purported “tragedy of the commons” via moral exhortation. Garrett Hardin, who first defined tragedies of the commons, famously questioned the effectiveness of attempts to resolve them via moral exhortation. And even Elinor Ostrom, who won a Nobel Prize for questioning whether the only way to resolve tragedies of the commons is via privatizing the commons, agreed that moral exhortation is ineffective unless backed by appropriate norms, rules, and institutions. Even if you think an issue is a moral issue, exhorting others on that basis isn’t necessarily the most effective way to change their behavior. For instance, this is why Owen Petchey and I suggested a rule-based system to oblige authors to review in appropriate proportion to how much they submit, rather than exhorting authors that that’s what they ought to do.
**Mike suggests that publishing behind a paywall is an immoral act that might nevertheless be justified for certain reasons, so that you're not a bad person if you have a reason. Of course, drawing a distinction between bad acts and the good people who perform them isn't much comfort to the person whose acts you're criticizing if that person doesn't draw the same distinction. Plus, as Jean Renoir said, the truly terrible thing is that everybody has their reasons. I've had to address a similar issue in arguing against zombie ideas, drawing a distinction between serious mistakes and the very good ecologists who've made them. I'm comfortable making that distinction because ecology is hard and nobody is infallible, so all ecologists, even the best ones, will sometimes make mistakes.
Interesting. So far (123 votes at the moment), hardly anyone thinks none of the issues on my list has an ethical dimension. But none of the issues has been chosen by more than 15% of the voters, even though people can vote for as many issues as they want.
But weirdly, it looks like the total number of votes is almost exactly the same as the total number of voters? Is everyone only voting for one option? I double-checked; the poll should be allowing you to vote for as many options as you want. Is it not allowing you to do that?
Hmm, I just tried it myself and it seems to have let me vote for multiple items. At least, I didn’t get an error message or anything…
That is odd – by my tally the number of votes cast exactly equals the number of voters (159), which is highly suspicious, but I definitely voted for two things and got no error message. Even if you and I were the only ones who voted for multiple choices, there should be more than 159 votes.
I think it's working, but the poll is perhaps not as popular as you thought… according to the legend, the number shown isn't total voters but total votes. Did you notice the increment when you voted? The graphs shown are terribly misleading if the legend is correct.
Crap, you're right Brian. There must be something wrong with the poll. No idea what–I've done polls before allowing multiple answers and they worked fine. I'll try to fix or replace it as soon as I get a chance, but it's already a bit late at this point; the post has been up for over an hour…
@Daniel:
Oh yeah, I bet that's it! Brian and I have been misreading the total number of "votes" as the total number of voters, when in fact it's just the total number of items everybody has selected.
Ok, false alarm, sorry for the confusion everybody.
Having said that, it would be much more useful to know how many people had voted as well as which items were most popular. Probably should’ve gone to the trouble of doing a Google Docs poll rather than just inserting a WordPress poll.
If you really want the number of individuals who voted, you can sign up for a "pro" polldaddy account – not free. That can also give you a breakdown by country and other options.
Thanks for the tip. But rather than pay the money I think I’ll just stick to setting up a Google Docs poll next time. I’ve actually done one for this blog in the past, I just didn’t bother this time.
So, so far, the only thing you can tell about the total number of voters is that at least 25 people have voted, as that's how many votes the most popular item has garnered.
So, if you put up a new poll asking only about raw data, you could infer the total number of people who have voted so far (just kidding) 😉
Yeah, next time I’m going to be less lazy and do a Google Docs poll.
Perhaps I am being pedantic about this, but I am very hesitant to apply the label "moral" to the vast majority of these issues. These seem like matters of professional ethics.
Ethics to me borders on "best practices". For all the options listed I can think of some way of making them at least vaguely ethical. I picked the one that was most clear-cut to me, which is data sharing.
No, you're not being pedantic, Pat. I totally agree that "best practices" shades into "ethics" or "morals", and that it's often hard to say where one stops and the other begins. That's something I deliberately glossed over in the post, preferring to let readers make their own judgments on this.
For the record, I never made the claim that reviewing for Faculty of 1000 is immoral. I also asked Jeremy to update this current post to correct this mischaracterization of my post. Unfortunately, Jeremy again mischaracterizes my views in his "update" (this time, I feel, deliberately), since I never made this claim and therefore cannot "deny" making it. Readers can judge for themselves whether Jeremy is accurately representing my views, honestly attempting to set the record straight, or deliberately trolling and setting up a strawman to suit his agenda.
And for the record, I’m very surprised that you have taken offense. I have no agenda. In the update, I stated my reasons for reading your post as I did. I acknowledge that my reading of your post is not the one you intended, for which I apologize. However, I stand by the claim that my original reading was a natural reading. I suggest that if you don’t mean to raise a moral issue, you should not start your post with what you call a “morality tale”.
I made an honest mistake, and I have quoted passages in your post that explain why the mistake was made. I’m afraid I don’t understand why you persist in implying that I have ulterior motives or that I am deliberately trolling. I’m not.
You are making two separate claims. One is that I misread your post. I'm happy to agree, and I've corrected the record. The other claim is about my intentions: you claim I deliberately misread your post so as to pursue some agenda. If you want to back that claim up, be my guest. But I'm going to delete any further comments attacking my motives unless you can back them up with evidence.
I read your post and I think Jeremy's statement is reasonable. From your response, though, I'm thinking that you are comfortable making a moral judgement about your own choices, but not comfortable making a statement about the morality of other people's choices. As Jeremy pointed out, it is a very short step from saying "this choice is not morally acceptable to me" to "this choice isn't morally acceptable for anyone." I agree that you did not do the latter explicitly, and it seems like you feel that Jeremy's use of the word "immoral" implies that you did.
Not having formally studied the distinction, it seems to me like the gradient from good practice -> ethical behavior -> moral behavior is a fuzzy one. I was surprised by the range of things that were regarded as having ethical dimensions in the poll, but maybe that's in part because of the linguistic ambiguity here.
Re: the gradient from good practice to matters of ethics or morals*, now I'm wondering if part of what the poll is capturing is how clear-cut people think good professional practice is. For instance, it seems to be pretty widely-agreed at this point that sharing your data and code is good professional practice. Whereas at the other end of the scale I don't think there's any general agreement on what's good professional practice in terms of what papers to read, or how to decide what papers to read. On the other hand, I don't think that's all of what's going on, since I don't know that things like "whether to fly to conferences" or "whether pre-publication peer review should exist at all" would ordinarily be considered matters of good/poor professional practice at all.
*probably should’ve noted that I deliberately used “ethics” and “morals” interchangeably in the post, since it’s not universally agreed whether there’s a clear distinction between them…
Further to this, just remembered that on her Error Statistics blog, philosopher Deborah Mayo has been talking a lot about the thin and fuzzy line between various common-but-statistically-dubious analytical practices, and outright fraud such as the Stapel case. So another case where good professional practice (here, regarding study design and statistical analysis) shades into moral and ethical issues. Where’s the line between questionable or dodgy statistical practices, and just making results up?
Interesting results so far. Some I expected and some surprise me.
Most oft-chosen so far is data/code sharing. That doesn’t surprise me. There seems to be a rapidly-developing consensus that this is at least good professional practice, to the point where many leading journals in ecology now require it of authors. And as noted by a commenter above, professional best practice shades into professional ethics.
I am surprised that the second most oft-chosen item so far is the criteria by which papers are evaluated in peer review. Curious to hear someone comment on why they consider this to be at least in part a matter of ethics. I’m pretty sure I can guess why some folks chose this one, but rather than speculate I’d prefer to just let people comment.
Also surprised to see which papers to cite as the third most oft-chosen item. Again, curious to hear someone comment on why they chose this one.
Kind of surprised that “how much to review” didn’t get even more votes. Most discussion of peer review reforms that I’ve seen over the past few years takes for granted that, over the course of their careers, people ought to review in appropriate proportion to how much they submit. That is, most discussion takes for granted that professional ethics dictates how much reviewing people should do, and then goes on to focus on what sort of reforms would help ensure that people act as they should. But apparently that shouldn’t be taken for granted, since there are at least a decent number of people out there who don’t see the amount of reviewing one does as a matter of professional ethics at all.
Also surprised to see use of proprietary software get as many votes as it has so far. In the same ballpark as things like which journals to review for or submit to, which as noted in the post often have been argued to be ethical matters over the past few years. Again, would be interested in comments from folks who picked this one. Do people see ethical issues with using proprietary software when non-proprietary substitutes are available? Or are the perceived ethical issues broader or different than that, extending to cases where there are no good substitutes for a proprietary software package?
Not surprised that only a few folks see the very existence of pre-publication peer review as an ethical issue. You do see it criticized as inefficient or ineffective, but only very rarely have I seen pre-publication peer review criticized as unethical. Similarly, I'm not surprised that very few people see ethical issues in what papers individuals decide to read, or how individuals make decisions about which papers to read. I wonder if at least some people who chose those rarely-chosen items chose them on the view that everything people do necessarily has an ethical dimension?
I'll take a shot at why people think the criteria by which papers are reviewed are a moral issue. Full disclosure: I am relatively new to reviewing and receiving reviews, so my opinions may be a little off base. I think you blogged about the issues with this before. When you get an unfavorable review, occasionally it seems like your work has been rejected based on whether the reviewer agrees with your conclusions, not based on the quality of the work. You get the impression, whether intended or not, that they are rejecting your paper because the findings are contrary to theirs. Of course with blind review you have no idea if this is the case, but one will play the reviewer guessing game. While you have no idea if it is true, it can give the impression of immoral behavior – suppressing a valid contrary opinion. This is particularly the case when you receive more than 3 reviews for a paper and there is one particularly nasty review while the others are positive. In these cases the editor did an excellent job by soliciting an extra review before making a decision.
Whoops, this should have been a reply to the post above.
Could be. I’m not sure.
My own guess was that some people might think it’s immoral to evaluate papers based on their “interest” or “likely impact”. That it’s not legitimate to evaluate mss on such “subjective” criteria, or to evaluate mss on criteria on which reviewers often disagree. But I freely admit my guess could be way off base. Certainly, one could object to reviewing mss on those grounds without seeing it as a moral issue.
I voted for this one because it seems to me that the main ethical consideration is that the tone and outcome of the review should not be motivated by personal animosities hidden behind anonymous comments.
@jeffollerton: That seems to fit with the idea of professional practice shading into ethics? A review motivated by personal animosities is really poor professional practice–so poor that it crosses over into being unethical?
Or is it the fact that personal animosities motivate the bad review that make it unethical? So that a review that was bad just because, say, the reviewer was sloppy would be bad professional practice but not unethical.
This conversation is taking me back to first-year philosophy in college. 🙂 Is morality about motivations (here, why someone did a really bad review), or about consequences (here, the fact that a bad review results in harm to the author)?
Jeremy,
Just for the record, we did argue that data sharing is an ethical obligation (although we made it abundantly clear that it is our opinion): http://figshare.com/articles/Moving_toward_a_sustainable_ecological_science_don_t_let_data_go_to_waste_/693745
See also for a slightly more extreme view: http://blogs.bmj.com/bmj-journals-development-blog/2013/05/24/publishing-articles-without-making-the-data-available-is-scientific-malpractice/ — Geoff Boulton of the Royal Society expressed the view that not sharing data is malpractice.
Thanks for the links Tim, sorry I missed them in the original post.
I like that you cover a range of different ethical arguments for data sharing.
I note that several (not all) of your ethical arguments for data sharing seem to come down to the claim that it's such good professional practice that not doing so is unethical. That seems to be a big theme in this discussion thread, to the point that I'm kind of embarrassed I didn't think to point it out in the original post–that many of our ethical obligations as scientists ultimately come down to an ethical obligation to follow, or at least not deviate too far from, best practices.
Which means my upcoming post on the downsides of data sharing is probably going to go over like a lead balloon. 🙂 (To be clear, I note that there are many unquestioned benefits to data sharing, and don’t argue that people shouldn’t do it. But I do argue that there is a potentially-important downside that hasn’t been much discussed…)
It’s hard to argue against the idea that we should not deviate from best practices 😉
But more seriously, yes, a bunch of arguments are that we should do what will allow us to produce “better” science. And if possible, we should make that “better” science in a way that is compatible with a high ethos (let’s say, technical excellence, virtue, and goodwill). Certainly sharing data is part of this ideal.
One original idea for the paper was to have a table/box laying out several arguments *against* data sharing and how we would reply to them. So I'm happy to hear about your upcoming post. I'm sure there are downsides, and I'm (almost entirely 100%) sure that the upsides far outweigh them!
I was surprised not to see one issue listed that specifically concerns us ecologists and biologists: the use of animals and plants in experiments and monitoring. It is a can of worms, and discussions about it, especially with non-scientists, can get very heated and emotional. But it clearly has an ethical dimension (well, everything we do as socially conscious beings has, as everything we do affects someone else) that is of interest to the general public, more than how to cite papers etc. I wonder why this wasn't on the list? Do you and maybe others think it is actually not an issue within science/ecology but only with respect to the public? This is a genuine question because it was, and in some way still is, my attitude: why bother with it when what we do as ecologists is so important and improves so many things that it outweighs sacrificing and using organisms? I really think that that is the way our research practices can and need to be defended, and I use the word "defended" deliberately here: regulations and laws here in the EU, lobbied in by the animal rights movement, have made it very difficult to do certain experiments, especially with vertebrates. But this was helped, I believe, by the passive if not arrogant way biologists dealt with the ethical dimension of the use of animals in research. Note that I don't say this is the only reason why we should care about the welfare of the organisms we use. We should do so, and we do so, because they are living creatures that demand respect and the best care we can give them. I just want to say that the ethical dimension of using organisms in research is an important one that needs discussing within the scientific community as well.
I didn’t list it because animal welfare is already universally agreed to be an ethical issue. Though as you say there is disagreement as to precisely what ethical practice consists of.