This post is solely my opinion, not reflecting the views of my coauthors, my university, etc., and was written in my free time at home. I am just putting my current thoughts in writing, in the hope of stimulating some discussion. The post is based on ruminations I’ve had over recent years, during which I’ve seen a lot of change in how science’s self-correcting process works and in the levels of openness in science, trends that seem likely to only get more intense.
That’s what this post ponders: where are we headed, and what does it mean for scientists and science? Please stay to the end. It’s a long read, but I hope it is worth it. I raise some points at the end that I feel strongly about, and many people (not just scientists) might agree with them or be stimulated to think about them more.
I’ve always tried to be proactive about correcting my (“my” including coauthors where relevant) papers, whether it was a publisher error I spotted or my/our own; I’ve done at least 5 such published corrections. Some of my later papers have “corrected” (by modifying and improving the methods and data) my older ones, to the degree that the older ones are almost obsolete. A key example is my 2002 Nature paper, “Tyrannosaurus rex was not a fast runner”, a well-cited paper that I am still proud of. I’ve published (with coauthors aplenty) about 10 papers since then that explore various strongly related themes, the accuracy of the assumptions and estimates involved, and new ways to approach the 2002 paper’s main question. The message of that paper remains largely the same after all those studies, but the data have changed to the extent that it would no longer be viable to use them. Not that this paper was wrong; it’s just that we found better ways to do the science in the 12 years since we wrote it.
I think that is the way that most of science works; we add new increments to old ones, and sooner or later the old ones become more historical milestones for the evolution of ideas than methods and data that we rely on anymore. And I think that is just fine. I cannot imagine it being any other way.
If you paid close attention over the past five months, you may have noticed a kerfuffle (to put it mildly) raised by former Microsoft guru/patent aficionado/chef/paleontologist Nathan Myhrvold over published estimates of dinosaur growth rates dating back to the early 2000s. His paper coincided with some emails to authors of the papers in question, and some press attention, especially in the New York Times and the Economist. I’m not going to dwell on the details of what was right or wrong about this process, especially the scientific nuances behind the argument of Myhrvold vs. the papers in question. What happened happened. And similar things are likely to happen again to others, if the current climate in science is any clue. More about that later.
But one outcome of this kerfuffle was that my coauthors and I went through (very willingly; indeed, by my own instigation) some formal procedures at our universities for examining allegations of flaws in publications. And now, as a result of those procedures, we issued a correction to this paper:
Hutchinson, J.R., Bates, K.T., Molnar, J., Allen, V., Makovicky, P.J. 2011. A computational analysis of limb and body dimensions in Tyrannosaurus rex with implications for locomotion, ontogeny, and growth. PLoS One 6(10): e26037. doi: 10.1371/journal.pone.0026037 (see explanatory webpage at: http://www.rvc.ac.uk/SML/Projects/3DTrexGrowth.cfm)
The paper correction is here: http://www.plosone.org/article/info%3Adoi/10.1371/journal.pone.0097055. Our investigations found that the growth rate estimates for Tyrannosaurus were not good enough to base any firm conclusions on, so we retracted all aspects of growth rates from that paper. The majority of the paper, about estimating body mass and segment dimensions (masses, centres of mass, inertia) and muscle sizes, as well as their changes through growth and implications for locomotor ontogeny, still stands; it was not in question.
For those (most of you!) who have never gone through such a formal university procedure checking a paper, my description of it is that it is a big freakin’ deal! Outside experts may be called in to check the allegations and the paper; you have to share all your data with them and go through the paper in great detail, retracing your steps, and this takes weeks or months. Those experts may need to get paid for their time. It is embarrassing even if you didn’t make any errors yourself and come out squeaky clean. And it takes a huge amount of your time and energy! My experience started on 16 December, reached a peak right around Xmas eve (yep…), and we finally submitted our correction to PLoS and got editorial approval on 20 March. So it involved three months of part-time but gruelling dissection of the science, and long discussions of how best to correct the problems. Many cooks! I have to admit that personally I found the process very stressful and draining.
Next time you wonder why science can be so slow at self-correction, this is the reason. The formal processes and busy people involved mean it MUST be slow– by the increasingly speedy standards of modern e-science, anyway. Just as doing science can be slow and cautious, re-checking it will be too. Should be?
My message from that experience is to get out in front of problems like this, as an author. Don’t wait for someone else to point them out. If you find mistakes, correct them ASAP. Especially if they (1) involve inaccurate data in the paper (in text, figures, tables, whatever), (2) would leave others unable to reproduce your work in any way, even if they had all your original methods and data, or (3) alter your conclusions. It is far less excruciating to do it this way than to have someone else force you to do it, which will almost inevitably involve more formality, deeper probing, exhaustion and embarrassment. And there is really no excuse that you don’t have time to do it. Especially if a formal process starts. I can’t even talk about another situation I’ve observed, which is ongoing after ~3 years and is MUCH worse, but I’ve learned more strongly than ever that you must demonstrate you are serious and proactive about correcting your work.
I’ve watched other scientists from diverse fields experience similar things– I’m far from alone. Skim Retraction Watch and you’ll get the picture. What I observe both excites me and frightens me. I have a few thoughts.
1) The drive to correct past science is a very good development and it’s what science is meant to be about. This is the most important thing!
2) The digital era, especially the trends toward open access and open data, makes corrections much easier to discover and to carry out. That is essentially good, and important, and it is changing everything about how we do science. Just watch… “we live in interesting times” captures the many layers of feeling an active researcher should have about it. I would not dare to guess what science will be like in 20 years, presumably when I’ll be near my retirement and looking back on it all!
3) The challenge comes in once humans get involved. We could all agree on the same lofty principles of science and digital data but even then, as complex human beings, we will have a wide spectrum of views on how to handle cases in general, or specific cases.
This leads to a corollary question– what are scientists? And that question is at the heart of almost everything controversial about scientific peer review, publishing and post-publication review/correction today, in my opinion. To answer this, we need to answer at least two sub-questions:
1–Are we mere cogs in something greater, meant to hunker down and work for the greater glory of the machine of science?
(Should scientists be another kind of public servant? Ascetic monks?)
2–Are we people meant to enjoy and live our own lives, making our own choices and value judgements even if they end up being not truly optimal for the greater glory of science?
(Why do we endure ~5-10 years of training, increasingly poor job prospects/security, dwindling research funds, mounting burdens of expectations [e.g., administrative work, extra teaching loads, all leading to reduced freedoms] and exponentially growing bureaucracies? How does our experience as scientists give meaning to our own lives, as recompense?)
The answer is, to some degree, yes to both of the main questions above, but how we reconcile these two answers is where the real action is. And this brew is made all the spicier by the addition of another global trend in academia: the corporatization of universities (“the business model”) and the concomitant, increasing concern of universities about public image/PR and marketing values. I will not go any further with that; I am just putting it out there; it exists.
The answer any person gives will determine how they handle a specific situation in science. You’ve reminded your colleague about possible errors in their work and they haven’t corrected it. Do you tell their university/boss or do you blog and tweet about it, to raise pressure and awareness and force their hand? Or do you continue the conversation and try to resolve it privately at any cost? Is your motive truly the greater glory of science, or are you a competitive (or worse yet, vindictive or bitter) person trying to climb up in the world by dragging others down? How should mentors counsel early career researchers to handle situations like this? Does/should any scientist truly act alone in such a regard? There may be no easy, or even mutually exclusive, answers to these questions.
We’re all in an increasingly complex new world of science. Change is coming, and what that change will be like or when, no one truly knows. But ponder this:
Open data, open science, open review and post-publication review, in regards to correcting/retracting past publications: how far down the rabbit hole do we go?
The dinosaur growth rates paper kerfuffle concerned numerous papers that date back to earlier days of science, when traditions and expectations differed from today’s. Do we judge all past work by today’s standards, and enforce corrections on past work regardless of the standards of its time? If we answer some degree of “yes” to this, we’re in trouble. We approach a reductio ad absurdum: we might logic ourselves into a corner where that great machine of science is directed to churn up the great scientific works of their time. Should Darwin’s or Einstein’s errors be corrected or retracted by a formal process like those we use today? Who would do such an insane thing? No one (I hope), but my point is this: there is a risk, carried in the vigorous winds of the rush to make science look, or act, perfect, that we dispose of the neonate along with the abstergent solution.
There is always another way. Science’s incremental, self-correcting process can be carried out quite effectively by publishing new papers that correct and improve on old ones, rather than dismantling the older papers themselves. I’m not arguing for getting rid of retractions and corrections. But, where simple corrections don’t suffice, and where there is no evidence of misconduct or other terrible aspects of humanity’s role in science, perhaps publishing a new paper is a better way than demolishing the old. Perhaps it should be the preferred or default approach. I hope that this is the direction that the Myhrvold kerfuffle leans more toward, because the issues at stake are so many, so academic in nature, and so complex (little black/white and right/wrong) that openly addressing them in substantial papers by many researchers seems the best way forward. That’s all I’ll say about that.
I still feel we did the right thing with our T. rex growth paper’s correction. There is plenty of scope for researchers to re-investigate the growth question in later papers. But I can imagine situations in which we hastily tear down our or others’ hard work in order to show how serious we are about science’s great machine, brandishing lofty ideals with zeal– and leaving unfairly maligned scientists as casualties in our wake. I am reminded of outbursts over extreme implementations of security procedures at airports in the USA, which were labelled “security theatre” for their extreme cost, showiness and inconvenience, with negligible evidence of security improvements.
The last thing we want in science is an analogous monstrosity that we might call “scientific theatre.” We need corrective procedures for and by scientists, that serve both science and scientists best. Everyone needs to be a part of this, and we can all probably do better, but how we do it… that is an interesting adventure we are on. I am not wise enough to say how it should happen, beyond what I’ve written here. But…
A symptom of scientific theatre might be a tendency to rely on public shaming of scientists as punishment for their wrongs, or as encouragement for them to come clean. I know why it’s done. Maybe it’s the easy way out: point at someone, yell at them in a passionate tone armed with those lofty ideals, and the mob mentality will back you up, and they will be duly shamed. You can probably think of good examples. If you’re on social media you probably see a lot of it. There are naughty scientists out there, much as there are naughty humans in any career, and their exploits make a good story for us to gawk at, and often after a good dose of shaming they seem to go away.
But Jon Ronson’s ponderings on the phenomenon of public shaming got me thinking (e.g., from this WTF podcast episode; go to about 1 hr 9 min): does public shaming belong in science? As Ronson said, targets of severe public shaming have described it as “the worst pain ever”, and sometimes “there’s no recourse” for them. Is this the best way to live together in this world? Is it really worth it, for scientists to do to others or to risk having done to them? What actually are its costs? We all do it in our lives sometimes, but it deserves introspection. I think there are lessons about public shaming to be learned from the dinosaur growth rates kerfuffle, and this is emblematic of problems that science needs to work out for how it does its own policing. I think this is a very, very important issue for us all to consider, in the global-audience age of the internet as well as in the context of the intense pressures on scientists today. I have no easy answers. I am as lost as anyone.
What do you think?
EDIT: I am reminded by comments below that 2 other blog posts helped inspire/coagulate my thoughts via the alchemy of my brain, so here they are:
http://dynamicecology.wordpress.com/2014/02/24/post-publication-review-signs-of-the-times/ Which considers the early days of the Myhrvold kerfuffle.
http://blogs.discovermagazine.com/neuroskeptic/2014/01/27/post-publication-cyber-bullying/ Which considers how professional and personal selves may get wounded in scientific exchanges.
I’ll be the first to admit that my first prominent publication was a note tearing down someone else’s work. That work had appeared in a major journal and caused quite a stir — but the apparent results were the product of a careless (not dishonest, just careless) mistake in the analysis. The note pointing this out was not derogatory in tone, nor was it intended to shame, but was doubtless embarrassing to the authors. Now that I am older and wiser, I wonder if I should have contacted the authors first, told them about the problem, and asked whether they wanted to retract. I think that they would not have, and that one of them would have attacked me if I had offered them that opportunity (indeed this person did send me a perfectly classic e-mail about how stupid I was to believe that anything he said could be incorrect, in which he also compared me to a female canid) but I’ll never know because I didn’t do it. At least I didn’t call up the NYT!
Now that I am much older, a little wiser, and a little kinder (and a lot more employed, and thus less vulnerable to jerks) I would send the authors my analysis of their math first and give them the opportunity to correct. And I hope that my colleagues would give me the same consideration if (when?) I make a stupid mistake.
Hey Anne,
Glad to hear I’m not the only one who started their publishing career like that! I did contact the author before writing in (Nature actually requires that you do this anyhow), but the reply I got seemed rather like a brush-off to me.
I see the whole process of ‘technical comments’ and such as a good thing on the whole though – it gives an incentive (a publication) to those who actually take the time to go through and check the ‘results’ reported in the literature. It’s not uncommon for there to be a vast gulf between the results a paper reports and the *actual* results you get from its described methods & data.
I do worry, though, that the effort of writing rebuttals & corrections is somewhat wasted: few people actually read them! The old incorrect version continues to be cited, including the incorrect parts!
( see Banobi, J. A., Branch, T. A., and Hilborn, R. 2011. Do rebuttals affect future science? Ecosphere http://dx.doi.org/10.1890/es10-00142.1 )
I agree with Ross that rebuttals are often not a good use of effort, although I’ve done one myself and had many directed at me. The vast majority never get published. The advice I got as a PhD student was “don’t write a rebuttal, write a new paper” and I think that usually serves best. Usually.
My thoughts are complicated. Science amongst scientists is a public endeavor. We work and present in ways that reveal our data to many others and open ourselves to scrutiny. The ultimate goal is to make this public. Paradoxically, we then insulate ourselves from criticism, while at the same time forcing ourselves to be subject to it.
Scientists are humans, yes; but they are also adults. They need to deal with the vagaries of people’s reactions, just as we have to with shitty reviews and people being all too fawning.
As part of what could be considered one of the craziest episodes in recent years I often wonder if everything could have been handled differently and if it had what the alternate outcome may have been. The public route was taken in that specific case in order to protect ourselves from a direct and very real threat, but it was probably the least fun year of my life and I still don’t completely recognize what the consequences have been for all involved. Public shaming affects everyone involved, the people who are being ‘corrected’, as well as those proposing the correction. It can be very damaging for careers and gives the science a black eye. Still, we do need to police ourselves, especially in cases of academic dishonesty and unethical behavior to keep those activities at a minimum, especially for repeat offenders. Thus, I think it is sometimes (and was in my case) a worthwhile trade-off to preserve integrity in our discipline.
I look after some open source scientific image analysis software, BoneJ. An explicit part of the development model is that we expect there to be bugs or variation of implementation, and that we hope when people find them, they tell us, and the other users, about it. Then a fix can be devised, tested and released to everyone. The software is there to help everybody, and it’s no help to just slag the developers off for bad coding. Thankfully BoneJ has a really smart and respectful community which so far behaves maturely. External validation only adds value to the project rather than detracts from the developers personally, as long as all involved can discuss it in a non ad hominem way. We just added a correction spotted by some scientists elsewhere, which will improve accuracy for some users – I could have been offended and refused to change the code, but actually they will publish their findings and BoneJ would have looked pretty bad to have not responded to the analysis. A grown-up discussion, thinking about what is best for the community and getting over one’s self is helpful in these situations.
An interesting contrast.
I think one key difference is that a bug report is so specific: “Here is a situation where the software should do X but does Y instead. Here are steps A, B and C that you can use to reproduce it. Here is a patch that fixes this error.” That’s unambiguously helpful and unquestionably improves the software.
Whereas criticisms of published papers are often much more vague: “inadequate statistical analysis”, “not enough data points” and suchlike. What can the author actually do in response to such criticism? It’s not always clear.
The other big difference is that in software we have an established culture of multiple releases of a given work, even down to conventions about major and minor releases, how to write change-logs, etc. There’s nothing analogous in scientific publication: a paper once published is supposed to stand forever in its published form. So any criticism is not an opportunity to improve the paper, just something that devalues it.
It does make me think that version-controlled papers may be a really valuable innovation when it comes (as it inevitably will). For many other reasons as well as this one.
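To make the analogy concrete, here is a minimal sketch (in Python; all the names and version strings are hypothetical, not any real publisher's system) of how a paper's record might borrow software's release conventions, with each correction becoming a new entry in a change-log rather than a demolition of the original:

```python
from dataclasses import dataclass, field

@dataclass
class Revision:
    version: str  # "major.minor", software-style release numbering
    note: str     # change-log entry describing this release

@dataclass
class Paper:
    title: str
    revisions: list = field(default_factory=list)

    def release(self, version, note):
        # A correction appends a new release; earlier releases remain
        # in the record as the history of the idea.
        self.revisions.append(Revision(version, note))

    def current_version(self):
        # Readers would be pointed at the latest release by default.
        return self.revisions[-1].version if self.revisions else None

paper = Paper("A computational analysis of limb and body dimensions in T. rex")
paper.release("1.0", "Initial publication")
paper.release("1.1", "Correction: growth-rate estimates withdrawn")
print(paper.current_version())  # prints: 1.1
```

The design point is that nothing is deleted: the 1.0 release stays visible as a historical milestone, while citations and readers default to 1.1, much as a software user installs the latest release but can still check out any older one.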
Thanks for the comments so far, folks! I am finding it very interesting to hear the perspectives and stories. I encourage others to come forward and tell their stories, but I also recognize that some stories are too touchy to post publicly. Naming names and truly humiliating oneself/others would be an ironic outcome, and not desirable here IMHO. In some of the more egregious cases of wrongdoing, I wonder if there were situations where public shaming was not used as the major punishment/incentive-for-change and what the outcome of that was; better or worse? I suppose it is hard to do a controlled experiment on this…
I’ve had some interesting chats on email and Facebook with some readers of this post and want to share a paraphrasing of those here, as they contribute value:
-Public shaming in science, at its extreme, is not a humanizing process (like simple, mild embarrassment is) but a dehumanizing one, for all involved and for science. We need to keep aware of that. It might be a last resort if at all, not a first one. Its power is too often forgotten.
-Women, minorities and early career researchers may face a bias in the correcting process of science because some will not trust their abilities as much – the caustic “They are not an old white guy like me so they cannot be as good as me” attitude manifesting itself. And they can be weaker in the power echelons of science, so they may face other disadvantages, including greater risks from public shaming.
-Time fixes science; some people may prefer to back down from confrontation over mistakes in science that are perceived as “low stakes” rather than take the risks involved in opening the floodgates of shame upon others.
-Why correct things via a formal university procedure when you can just do it with your coauthors? (1) You may not have a choice; some universities will force a formal process on you if they catch wind of errors, especially allegations of misconduct (Not every scientist is an angel); (2) At its best, a formal process is truly objective, transparent, and detailed, and more likely to catch problems that authors may miss (intentionally or not). And it shows that the university agrees with the authors. For much research, in legal terms the university owns the science, not the authors, so they *are* stakeholders.
-Keep the spectra of mistakes and misconduct separate. Misconduct may lead to retractions; among honest mistakes, only big ones should. Many scientists are quick to assume misconduct rather than mistake, for various reasons. Hanlon’s Razor applies here– “Never attribute to malice that which is adequately explained by stupidity”.
Should an honest mistake ever lead to retraction? I’m not inclined to think so. Do we want to retract Riggs (1904) because he classified Haplocanthosaurus as a brachiosaurid?
To my mind, retractions are for misconduct only.
I’d tend to agree, but an honest mistake that totally blows the paper apart and leaves it essentially worthless might be a case for that… but these things are seldom so clear. In practice, distinguishing a big mistake(s) from misconduct can be hard, because it requires proving intent.
To me, science is about gaining a greater understanding of the world we live in. It’s about using the resources / techniques / current knowledge to build on what we already know / don’t know.
Each new addition will increase our current knowledge. Thus, subsequent pieces of research will have a greater foundation to build on; allowing them to reach greater heights.
Additionally, with improvements in techniques and resources (not least in computing power), we will obviously be able to improve our understanding, delving deeper into the unknown.
Importantly, this process will ultimately lead to previous publications being found to be wrong. It’s not necessarily due to “bad science” (whatever that is), but because we have become able to delve deeper; it’s a process.
The key here, as you have said, is at which point is an error/mistake in a previous paper something to build on in subsequent papers, or something to go back and correct.
One advantage we have with everything being online is that I don’t think we really need to go back and correct. Why can’t we simply advance and add new papers? Then, on the website for the previous paper, a link is provided. This is currently done for corrections to papers. But I think it would be good that if any future paper changes our understanding of the topic at hand, it is linked to from the earlier paper, thus guiding the reader to the latest position in our understanding.
Potentially, this is a logistical nightmare. It might be something that is built into the submission process: “Does this paper alter understanding gained from an earlier paper?”. But there might be an alternative approach.
The bottom line is that the scientific process does not have a definitive finishing point. It’s a continuous process, with iterations. It’s something, IMHO, that should be driven by passion and a desire to understand, not to find a definitive answer. This should be reflected in how we react to “errors”/changes in our understanding.
Great summary! The tricky bit is the passion. It’s what fuels most of us, but it works for and against science. It takes some reflection to keep it from steering one into darker places. A passionate work of science or a passionately helpful, constructive, empathic review can be great. A passionately biased study or a passionately malevolent review are not, of course. Some of this I delved into in my semi-prelude to this blog post, here: https://whatsinjohnsfreezer.com/2014/02/07/science-humanity-and-spock/
i, like others, think public shaming is defo not the way to go. never. but what actually concerns me is another phenomenon that is linked to this, and to open science, data etc., which is public judgement.
on the one hand it is of course desirable that scientists become more accessible to the public. it makes people more interested in what we’re doing, gives us more money for research and, after all, it’s their money too, so it is their right to know what we do.
but just as the penal system is somewhat rigid and must, to some extent, be disconnected from public opinion, so should science be. otherwise we’re back to the middle ages, with people pointing at each other yelling “witch!” and burning them in public spaces.
which other purpose would public shaming have if not giving bananas to the monkeys? the public in general likes panem et circenses, and i fear some stupid cases will come up just to give them what they want – and some “fame” for those who accuse. what i fear is that science becomes some other kind of religion, with fanatics holding their flags without acknowledging that science, like all other fields, is made by humans after all: it might be influenced by the ideas of the time, it will have genuine mistakes, and it will have those bad beings seeking just a bit of fame.
so yeah, while on the one hand it is good that the public gets to know the processes which we have developed to self-correct our methods and punish those trying to sabotage them, on the other i fear all this “interference” will cause them to be modified just to please the public. it is no easy task to draw a line….
Thanks Gabi, well said. The “scientific theatre” mentioned above involves putting scientists’ antics in the public eye as a form of entertainment, to be sure. Everyone loves a juicy story, and the foibles of supposedly earnest scientists make for good irony. Regardless, scientists continue to rate highly in public polls of the most trusted professions, so I think we’re doing OK in some aspects, and we need to stay in that public eye to some extent in order to maintain that trust (no one trusts secrecy). Much of the corrective process of science never gets seen by the public because the media know it is, well, mostly boring, technical stuff. It’s the egregious cases that make the headlines– or the cases where reality has been distorted in favour of a good story. The latter bit is the one to watch out for, and can involve the public shaming I emphasized.
Nice article John.
I’d like to make a comment, particularly on the ‘tearing down’ of old papers. Of course, this is an idea that hopefully we’d never see in practice, but I think it’s so important to have the history of ideas, not just the current bleeding edge, enshrined in the published record so that, if you wanted, you could hop from one stepping stone of an idea to another and critically examine for yourself how the ‘science’ of X, Y or Z got to the ‘consensus’ it has today.
I think this is also a major flaw of biological education in the UK today, right up to postgraduate level. I’m fortunate enough to work with a lot of historians and philosophers of science and knowing the published scientific and political history of people and ideas is how you truly understand the science. I worry that today a lot of focus is on the ‘end result’ or the current theory which misses the core of why are we where we are today with whatever, biodynamics, island biogeography, genomics, origins of early life or what have you.
Yes, the historians can get a little too philosophically focused (which is where ‘scientists’ should kick them back into reality), and yes, I know that curricula in the sciences are already diverse and stretched without making space for critical thinking and historical modules.
Bit of an aside to your blog but not too unrelated I hope.
I agree wholeheartedly! The historical context of ideas comes pretty easily to an evolutionary biologist, and in my evolution (and other) teaching I try to work in a lot of history/ancestry of ideas, much to the chagrin of some students, but there’s no doubt it enriches the concepts being taught (and critical thinking!), like you say, and concepts are what we’re supposed to be emphasizing.
You may find this post by Neuroskeptic relevant to your concerns:
Postpublication “Cyberbullying” and the Professional Self
I am no disinterested party, as I am one of those accused of “chemical cyberbullying” in the above controversy. I won’t go into the details of that controversy here (see links in the above article or on my blog), but from this experience, and other experiences, my impression is that we are very, very far from the ideal that “science self-corrects”, and that it takes enormous amounts of effort to get some basic problems sorted if the authors of the original study are unwilling to correct the record. [kudos for your own attitude to correcting the scientific record]
We need more post publication peer review, i.e. discussion of published work, and we need to become relaxed about it. It needs to become the norm, something which is part and parcel of our activities as scientists. Nothing to do with “shaming” or demonstrating your own credentials.
I have yet to find convincing examples of people who use “public shaming” as a career progression tool. In the current climate, this is the last thing I would recommend to someone for fast career progression. Going to lots of conferences and making lots of friends (in addition to doing “high impact” work) would be a much better calculation.
Thanks for this, Raphael, it is impressive how you’ve been very open and forthright in that controversy, which to an outsider like me seems really messy and ambiguous. I remember reading that Neuroskeptic post and it really struck a nerve (sorry) with me then- thanks for reminding me of this; it is extremely relevant to my post!
While, on average, I think that scientific review (pre/post) is pretty objective-ish, I have strong doubts that human nature does/can stay out of it. So there may (always??) be a glimmer of selfishness behind any motives of authors, reviewers, editors, blog authors/commenters, etc. Part of that comes from the passion that drives us as human beings to keep doing what we enjoy; it can go in negative as well as positive directions, but it sustains us regardless and without it we’d wither.
As a prolific author and reviewer, and editor of 4 journals over the years (please don’t read that as me being arrogant; I’m just trying to describe where I’m coming from), I’ve seen every aspect of human nature in the process (pre- and post-publication review). And, as a moderately experienced mentor, I find that keeping that in check is hard to train people to do; you learn by mistakes. I think shaming and self-aggrandizement are always there to some extent; even when those involved have the best intentions and morals, they will be there in the eye of the perceiver, who may feel that way regardless. I’ve definitely seen people using the process, including shaming others, to further their personal goals. Separating motives is famously hard, though- “no one knows the heart of another man” (or woman).
But a lot of scientific discourse is dispassionate and truly constructive, at any stage pre/post review, so overall it can work, but like you say there’s a long way to go, and human nature works both for and against that way. I totally agree that post-pub review is important, but it’s the delivery that matters most to keep it away from public shaming- the tone can easily slip into humiliation rather than scientific benevolence. Empathy for all involved in the process seems so important, because the process can itself be dehumanizing, turning into false dichotomies of right/wrong when the truth may be very grey.
We are indeed complex animals and there may be all sorts of strange things and ambitions that drive us to do what we do. At the end of the day though, we cannot read people’s motives and to a large extent, for the purpose of this discussion, it does not matter.
Empathy and benevolence are noble feelings, but there is a serious danger (which is almost there, embryonic, in your conclusion “the truth may be very grey”) that they could be a barrier to clarity of thinking, a refusal to state claims clearly in order to avoid hurting the feelings of other scientists who may have strong beliefs in different hypotheses. As you allude to in your response, and as Neuroskeptic explains in his post, criticism of someone’s scientific methods or scientific findings (in which he/she may have invested considerable time, emotion, etc.) is bound to be a difficult experience whatever the tone of the criticism. Should we then refrain from making these critiques?
There is also the danger that this kind of argument (“be good with your fellow scientists”) may be made to silence controversy and discussion (or may have this effect even if made with a different intention). I am thinking for example of this ACS Nano editorial
In some ways, “dehumanizing”, in the sense of sticking to the facts and theories and not to the people who have reported these facts or argued for these theories, is the correct thing to do, no?
I agree that it is often better to critique flawed science than to remain silent, although sometimes ignoring bad science helps it go away (in palaeontology, for example, there are a lot of bad ideas in old and new books and other publications that no one debates because they are obsolete or just stupid)- with limited time and such a firehose of science spouting out in some fields, it may not be possible to address it all with reasoned, detailed critiques. But when critiques are truly needed, even then the way the critique is delivered and where it is delivered (from a private email to a small group of people, to a larger blog, to a global media explosion) leaves us a lot of choices. Even if we choose the narrowest forum and kindest tone, some people might still get offended or humiliated, but little can be done there. I think people sometimes leap straight to the global level and intemperate language because it is more exciting, releases our passions in a therapeutic way (catharsis), and makes a better story. I think there is a lot of agreement between us here, but there is also plenty of room for each scientist to find their own style; however, they should be ready to defend it and also wake up in the morning feeling like they are a good human being. The impact of a public unveiling of scientific errors can be more powerful than we think, so in some cases I still question its appropriateness, especially as a first resort.
Very nice post John. I think you know that my thoughts echo yours (http://dynamicecology.wordpress.com/2014/02/24/post-publication-review-signs-of-the-times/)
You’re right to wonder if the collective enterprise of science as a whole is best-served by a collective zeal to publicly shame individuals for any and all perceived transgressions against the whole. See for instance this piece on the toxic state of online feminist activism: http://www.thenation.com/article/178140/feminisms-toxic-twitter-wars?page=0,0
On the other hand, there’s lots of evidence that more traditional forms of self-correction in science proceed awfully slowly. For instance, think of how outdated or even retracted papers often get cited for years. This is immensely frustrating. And so it’s natural to ask, are there ways to speed up the self-correction process? Like you, I don’t have any easy answers.
Thanks Jeremy! Damn! I feel dumb (but not shamed! I have a high shame threshold!) for not citing your post! It is indeed so relevant and I now remember reading it and nodding vigorously along with it. It definitely helped me formulate my own reactions to the Myhrvold kerfuffle, which I was struggling with at the time (e.g. my more roundabout approach here- https://whatsinjohnsfreezer.com/2014/02/07/science-humanity-and-spock/). I don’t feel wise enough to comment on any specifics of the feminism issue, but it did cross my mind as an example of where shaming might go off the rails sometimes– or be appropriate sometimes, too!
A crux of the issue is the great power of the human mind as an incredible rationalizing, self-deceiving machine. Look at human history and the examples are astounding– and science is far from immune. That gives me a lot of pause in my daily life. I am never sure even of my own true motives, let alone others’, but I try to stop and think about them at key moments of decision, and I think that’s all anyone can do.
Like your post, mine was trying to explore what the alternatives to shaming are, and while I feel there are some, I’m not left any more content with the situation. If there were a truly optimal path, I think we’d have found it by now? But given the vagaries and variations of the human spirit, I suspect that the “optimal path” will always shift depending on the humans involved in any interaction, so no one size will fit all. We’re in for quite a ride, then!
An additional thought that just came to me: what about scientific books/chapters? Almost none can be corrected, and certainly not retracted, in an effective way. Yet every field I’ve worked in (palaeontology, biomechanics, anatomy, evolution, etc.– IU Press’s “Life of the Past” palaeo series comes quickly to mind) uses them as a home for some primary-ish literature.
That work can only be amended or critiqued by subsequent papers, except in rare cases. And some of the work in those “gray literature” volumes is abysmal. My attitude toward them has generally been to ignore them, seeing no good alternative, but there may be another way.
Yet another reason (access being a big one as well; not many open access books out there, outside of well-funded libraries!) for this style of scientific publication to go extinct, leaving books mainly for review and public dissemination of already vetted science?
[…] John Hutchinson has a lengthy, thoughtful piece on trends in scientific openness, self-correction, and post-publication review. Prompted in part by […]
[…] “We’re all in an increasingly complex new world of science.” Quote by John Hutchinson from a brilliant rumination on errors, corrections, & the future of science. Read of the week. […]
As a guy involved in this matter I would like to offer a comment – and thanks – to John Hutchinson and his colleague Peter Makovicky.
When I was working on my dinosaur growth rate paper I found something I could not understand with a result in the Hutchinson et al paper that is the topic of this post. I emailed John who put me in touch with Peter Makovicky, the co-author responsible for the portion of the analysis that I couldn’t follow.
After just a day or two of emailing back and forth with Peter, he concluded that there was a problem with their analysis. That resolved the point that I couldn’t reconcile – my analysis and theirs became compatible. Unfortunately that also meant that their paper had to be corrected.
I was very impressed with the professionalism that both Peter and John showed in how they handled this. They were open and forthright about the problem in their own work. There was no trace of being defensive or upset – even though this issue ultimately led to a lot of work, as John describes above.
It is never fun finding errors in your own work, or correcting them after they have been published, but it is just as much a part of science as a positive result. We all want to find the big new result or achieve some breakthrough, but the value of science depends on us also correcting the inevitable mistakes that come along the way.
John and Peter’s reaction was basically everything that you would hope a true scientist would do in this situation.
Thank you very much, Nathan, that is very kind of you- and I agree that Peter was excellent about this issue throughout the process. I’m glad it’s resolved, in essence, although there’s still a good study of Tyrannosaurus growth rates left to be done in the future once adequate data exist, which shouldn’t be long.
[…] and how the authors should react. Palaeontologist @JohnHutchinson posted a long and thoughtful consideration of this based on his experience with a 2011 paper on the growth rates of Tyrannosaurus, which led […]
Cases of selling null results as “significant”…
Misrepresenting citations and “inventing” new fields…
Disregarding baseline values to compare treatment effects…
These bad papers ended my career in science and now I work two part time jobs. This “humble brag” that you made your career on incorrect science doesn’t make me feel sorry for you. It just outs you as a parasite on the field.
You didn’t restock a wal-mart shelf while they reviewed your paper. You had health insurance for the years you put into training and degrees. Your student loans were paid.
Your wealthy, white, male, (and very) privledged and entitled. That’s all I took from this post. You don’t deserve to be rewarded for your mistakes.
Speaking as someone who is lucky enough to know John a little …
I can confirm that he is indeed white and male. I don’t know whether or not to consider him wealthy, that being rather a subjective judgement. I would also not want to make a call on “privileged”. But on “entitled” you could not be more wrong. You’d have to go a long way to find someone who combines world-leading expertise with genuine humility as effortlessly as John does.
I therefore award you 40-80% on the accuracy scale. Not too bad, but could do better.
P.S. It’s “You’re wealthy, white, male, (and very) privileged“, and the “and” properly belongs outside the parenthesis. No need to thank me; just doing my bit to help.
I’m sorry you had a rough time in science. It can be brutal out there, and that was a theme in the post. The point of the post was anything but to ask for sympathy for me, nor would I be so arrogant as to expect any kind of reward from anyone. It was to use my personal story, the only one I know firsthand, to get into a discussion of how public shaming of scientists is more and more becoming a part of what science involves. I questioned whether that should be the a priori approach rather than alternative options, such as having a private conversation first, or publishing a new paper rather than retracting one. The post could have entirely omitted any reference to me, but a personal story that relates to the theme makes it more concrete and relatable.
I don’t know where the idea that I made my career on “incorrect science” comes from; I made it clear in my post that I’ve tried to openly correct my mistakes in the past, and the latest case is an example of that, but one that also touches on the public shaming issue. I think any science is incorrect to some degree, and it’s not hard to find errors in anyone’s papers if you dig deep enough.
The question at heart seems to be how we participate in science’s in-built correcting process: in addition to being proactive about self-correction, with other scientists one can go on the offensive and say “if you can’t stand the heat, stay out of the kitchen”, or one can adopt alternative approaches that might even be more effective in the long run. And one might make friends, which can enrich life even more than science can sometimes.
One of the key issues as I see it, is the fine line between what actually constitutes an error, versus intent to deceive. Put differently, how many times have good scientists who made genuine errors had to deal with prolonged fallout from over-zealous accusers, versus how many times real fraudsters have gotten away with “we made a mistake in image preparation and this correction in no way affects our conclusions”? I can personally recount examples of both, so there’s a fuzzy line. The fuzziness is extended when we think about the reader – how to point out problems in a paper using language that communicates the gravity of the problem (and highlights the reasons why something is probably not a genuine error), while toeing the line regarding formal accusations of misconduct? If one makes allegations in too passive a voice, they are often not taken seriously or even ignored altogether. Conversely if one speculates too vividly on the underlying cause of a problem in the data, lawsuits and accusations of libel quickly follow.
Adding to the general confusion is the huge variation in application of standards across journals and funding bodies and institutions. In one situation, a journal might retract a paper in a week (as happened with the stem cell paper at PubPeer last year). Contrast this with examples of attempts to get clearly fraudulent papers retracted that have been dragging on (including involvement of COPE) for more than 2 years. Then there are examples where the federal Office of Research Integrity has made findings, but we’re still waiting over a year for retractions to occur. Still more examples where retractions have occurred and a university has gone on the record with an interim report of misconduct, but 3 years later the ORI has not yet released official findings.

The institutions themselves have massive variability in how they run their internal ethics programs – as an example, just pick 2-3 random universities and see how long it takes to find the contact details for the research integrity officer on the website. There are no standard time limits set for any investigations, and frequently hundreds of emails are required just to keep things moving along and to get any kind of progress reports. Then the punishments doled out by the ORI and other bodies are frankly ridiculous, with almost no examples whatsoever of universities or scientists having to pay back misappropriated funds.
Overall, the systems we have in place for dealing with this are just too fragmented, and the possibilities of actually getting something done (e.g. getting a bad paper retracted) or making a horrible mistake (e.g. accusing someone of misconduct and finding yourself on the receiving end of a lawsuit) are just too random. This level of confusion is both allowing misconduct to continue happening (because the perpetrators know they can get away with it), AND discouraging those with valuable contributions to make from actually getting involved in the conversation (out of fear). Even solid attempts at standardizing things (like COPE) are a joke. What’s really needed is a uniform set of ethical research standards, for journals, scientists, funding bodies, and academic institutions, all overseen by a global body that has real power (i.e. ability to shut down journals that don’t comply, the ability to black-list individual scientists so they can’t publish without additional scrutiny). In reality that’s never going to happen, so the crowd has to make its own system up, and that’s essentially what’s already happening with the revolution in open access publishing and post-pub peer review. The obstacle now is to get the dinosaurs (e.g. faculty tenure committees, wealthy publishers) to buy into this new system instead of the old way of assessing value.
Thank you for your thoughtful contribution, Paul- I am glad to see some people with a lot of experience behind these issues weighing in. I wholly agree that the discernment of intent is a fulcrum of the whole process and that protocols/standards vary enormously (a colleague at another institution recently said to me “I don’t think we even have a research misconduct policy”), which is not good. I think some of that confusion can also help abusers of the system to act on personal biases, and that makes a good bedfellow for fraud in terms of how ugly science can get, in its worst moments.
I didn’t get into libel in my post, but the topic was on my mind, particularly as libel laws in the UK are infamously draconian, which can either give an unfair victim of libel (which is one extreme end of what public shaming can turn out to be) a hefty weapon to defend themselves with, or have a terribly chilling effect on scientists aware of those laws and yet seeking to raise the issue of possible misconduct, like you say. It’s almost too scary for some scientists to even ponder, especially as few of us are legal eagles or flush with cash.
Fascinating post and though this must have been gruesome, congratulations on having “Done the right thing”. It is important.
Reading the above and as an interested party in the debate on nanoparticles raised by Raphael, I do think we have to think about the A-Z:
(A) A paper valid for its time.
(B) A perceived error that is put to the authors, who willingly examine the question and then come to a conclusion (as you have done).
all the way down to
(Z) Data that do not support the conclusions and never could have because they are invented (aka fraud), which the authors refuse to acknowledge.
At the front end of the alphabet, we have people “doing the right thing”, a category in its own right at Retraction Watch. However, as I noted in a post a while back, this category covers only a fraction of retractions; the others are further down the alphabet, including the notional (Z) above. Similarly, only 20% of authors deign to respond to comments on PubPeer. Why the lack of engagement? It is natural to conclude that the papers from non-responding authors have real problems. Indeed, in some PubPeer threads in areas I understand, the (very polite) arguments seem pretty watertight: the paper is indeed lower down the alphabet in the scheme above. We do need to know how low, and whether the paper is in category Z. Why? Because this impacts on process and training in the lab and the host institution. Getting better at these can only be a good thing. It also impacts on the distribution of public money. This too is rather important.
Thanks for your post!
I like the A-to-Z spectrum; there is a lot of grey area in there. I agree that the impacts of incorrect science are worse toward the end of the spectrum and reducing this should be a goal. Part of my post was at least implicitly wondering if the New York Times was the best place to handle this (i.e., effectively public shaming, before it is clear who is right or wrong; no due process), or in a more scholar-led environment. PubPeer might be an appropriate place; I’m not so familiar with it but will check it out. COPE definitely seems like a good thing; as a journal editor I’ve become aware of it.
COPE has, in the experience of many, failed. Whether this is failure, or simply that the community’s expectations are too high, is a moot point. In the end it is up to the community (aka the ‘peers’) to take action, not some body that vaguely represents them. Hence PubPeer.
I would agree that the NYT is not the place to start. Of course the broader media may pick up on a story, but by then there will be many, many resources online detailing the scientific questions and debate.
[…] to Raj’s question (some better than others), but even so, I think Raj has a point. As I and others have said before, post-publication review is here to stay (for high-profile papers), and in many […]
[…] you’ve followed this blog for a while, you may be familiar with a post in which I ruminated over my own responsibilities and conundrums we face in work-life balance, personal happiness, and our desires to protect […]
[…] toughest critics, within reason, to minimize errors or worse outcomes in our science. We promptly correct our published research if we find errors needing […]
[…] to explain what happened; although blame can be far from a simple issue. In cases of accusations of scientific error or misconduct that is vital. More positively, this section, thoughtfully considered, spreads credit […]