

For about 3 years now I’ve used the #WIJF hashtag (an acronym for What’s In John’s Freezer) to organize my social media efforts on this blog. Over that time I became aware that “wijf” in Dutch can be taken as a derogatory term for women. And indeed, these days I do see people tweeting derogatory things with the #wijf hashtag, along with other, tamer uses like mine. I’ve come to the decision, albeit gradually and with much internal debate, to stop using that hashtag so I can avoid association with the sexist Dutch word. This post is about why, and what’s next.

Stomach-Churning Rating: Debatable, but 0/10 by the standard of the usual gory things on this blog; no images.

I don’t speak Dutch, but 25 million or so people do. This is a blog about morphological science, and the Dutch have had (and continue to have) a disproportionately strong influence on that field. I’m not claiming to be perfect when it comes to feminist issues, but I listen and I try and I care. My undergraduate tutelage in science was almost exclusively driven by female scientists– I never thought about that before but it’s true; at least 5 different major faculty influences at the University of Wisconsin! I work at a university where ~85% of the students are female (common today in vet schools). Of the 16 postgraduate staff and students on my research team since 2004, 9 have been female, and a lot of my collaborators and friends are scientists or science aficionados who happen to be female. I have good reason to care, and social media has helped to raise my awareness of important matters within and outside of science that I do care a lot about.

So, while I tend to hate abandoning words (or hashtags), preferring to fight for alternative meanings (e.g. the word “design” in evolutionary biology), and I am a stubborn git, I’ve decided that the #WIJF hashtag and acronym are different, and it’s time to use something else. Admittedly, #WIJF hasn’t been that important to this blog as hashtag or acronym– mainly I’m the only one who uses it, and any “brand name recognition” or other benefits surely arise more from the full name of the blog. So abandoning #WIJF is an inconvenience but not devastating to my blog. I see this move as (1) taking control of a situation where the benefits of staying with the hashtag/acronym are minimal and the harms, while of debatable magnitude, outweigh those minimal benefits in my view, and (2) demonstrating that I don’t tolerate or want to be associated with sexism or other discrimination. And I hope that this move might inspire others to reflect similarly on their own behaviour. Morphology, like any science, is for everyone, and this blog is meant to be a friendly place.

But one thing that has held me back, trivial as it is in the grand scheme of things, is the question of what hashtag/acronym to use henceforth. I turn that over to you, Freezerinos. I have no good ideas and so I am crowdsourcing. I need something short (not #Whatsinjohnsfreezer, probably– too long), something associated with the title of the blog, but also something dissimilar to the naughty word “wijf” and thus inoffensive… ideally inoffensive in the ~7000 languages of the world (!?!?). That might not leave many options! What should be in John’s blog’s hashtag?


If you’ve been working in science for long enough, perhaps not very long at all, you’ve heard about (or witnessed) scientists in your field who get listed as co-authors on papers for political reasons alone. They may be an uninvolved but domineering professor or a fellow co-worker, a friend, a political ally, an overly protective museum curator, or just a jerk of any stripe. I read this article recently and felt it was symptomatic of the harm that bad supervisors (or other collaborators) do to science, including damage to the general reputation of professors and other mentors. There are cultural differences not only between countries (e.g. more authoritarian, hierarchical cultures probably tolerate behaviour like this more) but also within institutions, because of individual variation and local culture, tradition or other precedent. But this kind of honorary co-authorship turns my stomach—it is co-authorship bloat and a blight upon science. Honorary co-authorship should offend any reasonable scientist who actually works, at any level of the scientific hierarchy. So here’s my rant about it. Marshmallows and popcorn are welcomed if you want to watch my raving, but I hope this post stimulates discussion. A brief version of this did do that on my personal Facebook account, which motivated me to finish this public post.

Stomach-Churning Rating: 0/10 but it may provoke indigestion if you’ve been a victim of co-author bloat.

At its root, honorary co-authorship (HONCO) shows disdain for others’ efforts in research: “I get something for nothing, unlike others.” It persists because of deference to pressures from politics (I need to add this co-author or they’ll cause me trouble), other social dynamics (this person is my buddy; here’s a freebie for them), careerism (oneself/ally/student needs to be on this paper to boost their CV and move up in their career; or else), or even laziness (a minimal-publishable-unit mentality, e.g. any minor excuse for being a co-author is enough). All of these reasons for tolerating it, and apathy about the status quo, keep the fires of HONCO burning. My feeling from my past 20 years of experience in academia is that, as science gets increasingly complex and requires more collaborators and co-authors, the fire is raging to a point where it is visibly charring the integrity of science too often to just keep quiet and hope it doesn’t cause much damage.

There’s a flip side to HONCO, too– it’s not that, as some might take the article above to imply, we all need to boot senior authors off of papers. Senior authors, like other collaborators, have a reason for existing that encompasses — but is not limited to — boosting the careers of those they mentor. We scientists all want the satisfaction of doing science, even if the nature of our involvement in research evolves (and varies widely). Part of that satisfaction comes from publishing papers as the coup de grâce to each project, and that’s a privilege that anyone qualified should be able to earn. Indeed, if adding HONCOs to papers is fraud, then removing worthy contributors from papers can be seen as a similar kind of fraud (unless it is the result of mutually agreed I’ll-help-you-for-nothing generosity). The broader point is: authors should deserve to be authors, and non-authors should not.

On that latter issue, I think back to my grad school days and how my mentors Kevin Padian, Rodger Kram, Bob Full and others often gave me valuable input on my early papers (~1998-2002) but never took co-authorship on them (exception: mentor Steve Gatesy’s vital role in our 2000 “abductors, adductors” paper). And frankly I feel a little bad now about that. Some of those mentors might have deserved co-authorship, but even when asked they declined, and just appeared in the Acknowledgements. It was the culture in my department at Berkeley, like many other USA grad schools at the time and perhaps now, that PhD students often did not put their supervisors on their papers and thus published single-author work. I see that less often today, though it still varies among fields: in biomechanics, single-authorship is rarer globally; in palaeontology and morphology, single-authored work remains more common, but perhaps it is declining overall. That is my off-the-cuff impression from the past >10 years.

I was shocked to see fewer (or often no) single-authored papers by lab colleagues once I moved to the UK to take up my present post– the prevalence of supervisors as senior authors on papers was starkly evident. On reflection, I now think that many of those multi-authored papers deserved to be such. It was not solo work: it involved some significant steering, with key ideas originating from supervisors and thus constituting valid intellectual input. Yet I wondered then if it was a good thing or not, especially after hearing student complaints like waiting six months for comments from their supervisor on a manuscript. But this gets into a grey area that is best considered on a paper-by-paper basis, following clear criteria for authorship and contributions, and it involves difficulties inherent to some supervisor-supervisee relationships that I will not cover here. Much as supervisors need to manage their team, their team needs to manage them. ‘Nuff said.

Many institutions and journals have clear criteria for co-authorship, and publications have “author contributions” sections that are intended to make it clear who did what for a given paper – and thus whose responsibility any problems might be, too. HONCOs take credit without responsibility or merit, and that is blatant fraud. I say it’s time we stand up to this disease. The criteria and contributions aspects of papers are part of the immune system of science, there to help defend against academic misconduct. We need to work together to give that system a fighting chance.

There are huge grey areas in what criteria are enough for co-authorship. I have to wrestle with this for almost every paper I’m involved in– I am always thinking about whether I truly deserve to be listed on a paper, or whether others do. I’ve been training myself to think, and talk, about co-authorship criteria early in the research process– that’s essential for avoiding bad blood down the line when it’s time to write up the work, by which point it may be too late for others to earn co-authorship. This critical process is best handled explicitly and in writing, especially in larger collaborations. What will the topic of any future paper(s) be, and who will be involved as co-authors, or not? It’s a good agenda item for research meetings.

There are also grey areas in author contributions. How much editing of a paper is enough to justify co-authorship? Certainly not just spellchecking or adding comments saying “Great point!”, although both can be a bit helpful. Is funding a study a criterion? Sometimes– how much, and how directly or indirectly, did the funding help? Is providing data enough? Sometimes. In these days of open data, it seems that data provision, part of the very hull that science floats upon, is weakening as a justification for co-authorship. It is becoming increasingly common to cite others’ papers for data, provide little new data oneself, and churn out papers without those data-papers’ authors involved. And that’s a good thing, to a degree. Still, it’s nicer to invite published-data-providers on board a paper as collaborators, and they can often provide insight into the nature (and limitations or faults!) of the data. But adding co-authors can easily slide down the slippery slope of hooray-everyone’s-a-co-author (e.g. genetics papers with 1000+ co-authors, anyone?). I wrote up explicit co-authorship criteria here (Figshare login needed; 2nd pdf in the list) and here (Academia.edu login needed) if you’re curious how I handle it, but standards vary. Dr. William Pérez recently shared a good example of criteria with me, linked here.

In palaeontology and other specimen-based sciences, we get into some rough terrain — who collected the fossil (i.e. was on that field season and truly helped), identified it, prepared and curated it, published on it, or otherwise has “authority” over it, and which of them, if any, deserve co-authorship? I go to palaeontology conferences every year and listen over coffee/beers to colleagues complain about how their latest paper had such-and-such (and their students, pals, etc.) added onto it as HONCOs. Some museums or other institutions even have policies like this, requiring external users to add internal co-authors as a strong-arm tactic. An egregious past example: a CT-scanning facility I used once, and never again, even had the gall to call their mandatory joint-authorship policy for usage “non-collaborative access”… luckily we signed no such policy, and so we got our data, paid a reasonable fee for it, and had no HONCOs. Every time I hear about HONCOs, I wonder “How long can this kind of injustice last?” Yet there’s also the reality that finding and digging up a good field site or specimen(s), or analogous processes in science, takes a lot of time and effort, and you don’t want others prematurely jumping your claim, which can be intellectual property theft, a different kind of misconduct. And there is good cause for sensitivity about non-Western countries that might not have the resources and training of staff to earn co-authorship as easily; flexibility might be necessary to avoid imperialist pillaging of their science with minimal benefit to their home country.

Yet there’s hope for minimizing HONCO infections. A wise person once said (slightly altered) “I’d rather light a candle than curse the darkness.” Problems can have solutions, even though cultural change tends to be agonizingly slow. But it can be slower still, or retrograde, if met with apathy. What can we do about HONCOs? Can we beat the bloat? What have I done myself before and what would I do differently now? I’ll take an inward look here.

Tolerating HONCOs isn’t a solution. I looked back on my experiences with >70 co-authored papers and technical book chapters since 1998. Luckily there are few instances where I’d even need to contemplate whether a co-author was a HONCO. Most scientists I’ve worked with have clearly pulled their weight on papers or understood why they’re not co-authors on a given paper. More about that below. In those few instances of possible HONCOs, about five papers from several years ago, some colleagues provided research material/data but never commented on the manuscripts or other aspects of the work. I was disgruntled but tolerated it: it was a borderline grey area, I was a young academic who needed allies, and the data/specimens were important. Since then, I’ve curtailed collaborations with those people. To be fair, there were some papers where I didn’t do a ton (but did satisfy basic criteria for co-authorship, especially commenting on manuscripts) and got buried in Middle-Authorland, and that’s fine with me; it wasn’t HONCO hell I was in. There were a few papers where I played a minor role and it wasn’t clear what other co-authors were contributing, but I was comfortable giving them the benefit of the doubt.

One anti-HONCO solution came on a more recent paper that involved a person who I had heard was a vector of HONCO infection. I stated early on, in an email, that only one person from their group could be a co-author on the resulting paper; they could choose who it was, and that person would be expected to contribute something beyond basic data. They wrote back agreeing and (magnanimously) putting a junior student forward, who did help, although they never substantially commented on the manuscript, so I was a little disappointed. But in the grand scheme of things, this strategy worked in beating the HONCO bloat. I may have cost myself some political points that could stifle future collaborations with that senior person, but I feel satisfied that I did the right thing under the constraints, and damn the consequences. Containment of HONCO has its attendant risks, of course. HONCO-rejects might get honked off. Maybe one has to pick one’s battles and concede ground sometimes, but how much do the ethics of such concessions weigh?

Another solution I used recently involved my own input on a paper. I was asked to join a “meta-analysis” paper as a co-author, but the main work had already been done and the conclusions largely reached. I read the draft and saw places where I could help in a meaningful way, so with trepidation I agreed, and did help. But during the review process it became clear that (1) there was too much overlap between this paper and others by the same lead author, which made me uncomfortable; and (2) the sections I had contributed to didn’t really meld with the main thrust of the paper, and so were removed. As a consequence, I felt like a reluctant HONCO and asked to be removed from the paper as a co-author, even though I’d helped write sections of the main text that remained (though that help was, in my view, more stylistic than deeply intellectual). I ended up in the Acknowledgements and was relieved about it. I am comfortable removing myself from papers where I don’t get a sense of satisfaction that I did something meriting co-author status. But it’s easier for more senior researchers like me to do that, compared to the quandary that sink-or-swim early-career researchers may face.

More broadly in academia, a key matter at stake is the CVs of researchers, especially junior ones, which these days require more and more papers (even minimal publishable units) to be competitive for jobs, awards and funding. Adding HONCOs to papers does strengthen individuals’ CVs, but parasitically, by diluting the credit for genuine co-author contributions. And it’s just unethical, full stop. One solution: it’s up to senior people to lead from the front, showing that they don’t accept HONCOs themselves and encouraging more junior researchers to do the same when they can—or even questioning the contributions that potential new staff/students made to past papers, if their CV seems bloated (but such questions probe dangerous territory!). Junior people, however, still need to make a judgement call on how they’ll handle HONCOs, whether for themselves or others. There is the issue of reputation to think about; complicity in the HONCO pandemic at any career level might be looked upon unfavourably by others, and scientists can be as gossipy as any humans, so bad ethics can bite you back.

I try to revisit co-authorship and the criteria involved throughout a project, especially as we begin the writing-up stage, to reduce the risk of HONCOs or other maladies. An important aspect of collaboration is to ensure that people who might deserve co-authorship get an early chance to earn it, or else are told that they won’t be on board and why. Then they are not asked for further input unless it is needed, which might shift the balance and put them back on the co-author list. Critically, co-authorship is negotiable and should be a negotiation. One should not take it personally if not on a paper, but should treat others fairly and stay open-minded about co-authorship whenever possible. This has to be balanced against the risk of co-authorship bloat. Sure, so-and-so might add a little to a paper, but each co-author added complicates the project, probably slows it down, and diminishes the credit given to every other co-author. So a line must be drawn at some point. Maybe some co-authors and their contributions are best saved for a future paper, for example. This is a decision that the first, corresponding and senior author(s) should agree on, in consultation with the others. But I also feel that undergraduate students and technicians are often the first to get the heave-ho from co-author considerations, which I’ve been trying to avoid lately when I can, as they deserve as much as anyone to have their co-author criteria scrutinized.

The Acknowledgements section of a paper is there for a reason, and it’s nice to show up there when you’ve truly helped a paper out, whether as a quasi-collaborative colleague, friendly draft-commenter, editor, reviewer or in some other capacity. It is a far cry from being a co-author, but it also typically implies that those acknowledged are not to blame if something is wrong with the paper. I see Acknowledgements as “free space” that should be packed with thank-yous to everyone one can think of who clearly assisted in some way. No one normally lists Acknowledged status on their CV or gets other concrete benefits from it, but it is good social grace to use the section generously. HONCOs’ proper home, at best, is there in the Acknowledgements, safely quarantined.

The Author Contributions section of a paper is something to take very seriously these days. I used to fill it out without much thought, but I’ve now gotten into the habit of scrutinizing it (where feasible) for every paper I’m involved in. Did author X really contribute to data analysis or writing the paper? Did all authors truly check and approve the final manuscript? “No” answers there are worrying. It is good research practice nowadays to put careful detail into this section of every paper, and even to openly discuss it among all authors so everyone agrees. Editors and reviewers should also pay heed to it, and readers of papers might find it increasingly interesting to peruse. Why should we care about author contribution lists? Well, sure, it’s interesting to know who did what; that’s the main reason! It can also reveal what skills an individual has or lacks, or their true input on the project vs. what the co-author order implies.

But there’s a deeper value to Author Contributions lists: they are part of the academic immune system against HONCOs and other fraud. Anyone contributing to a particular part of a paper should be able to prove their contribution if challenged. For example, if a problem was suspected in a section of a paper, the authors listed as contributing to that section would be the first points of contact to check with about it. In a formal academic misconduct investigation, those contributing authors would need to walk through their contributions and defend (or correct) their work. It would be unpleasant to be asked how you contributed to such work if you didn’t do it, or to find out that someone listed you as contributing when you didn’t and wouldn’t have accepted it had you known. Attention to detail can pay off in any part of a research publication.

Ultimately, beating the blight of HONCO bloat will need teamwork from real co-authors, at every career level. Too often these academic dilemmas are broken down into “junior vs. senior” researcher false dichotomies. Yes, there’s a power structure and status quo that we need to be mindful of. Co-authorships, however, require collaboration and thus communication and co-operation.

It’s a long haul before we might see real progress; the fight against HONCOs must proceed paper by paper. There are worse problems that science faces, too, but my feeling is that HONCOs have gone far enough and it’s time to push back, and to earn the credit we claim as scientific authors. Honorary co-authorship is a dishonourable practice, very different from other “honorary” kudos like honorary professorships or awards. Complex and collaborative science can mean longer co-author lists, absolutely, but it doesn’t mean handing out freebies to chums, students needing a boost, or erstwhile allies. It means more care is needed in designing and writing up research. It also means that science is progressing; that is progress we should all feel proud of in the end.

Do you have abhorrent HONCO chronicles of your own (anonymized please; no lynch mobs here!) or from public record? Or ideas for handling HONCO hazards? Please share and discuss.


When does a science story “end”? Never, probably. Science keeps voyaging on eternally in search of truth, and few if any stories in science truly “end”. But as science communicators of any stripe, we routinely have to decide when a certain story has run its course; when the PR ship has sailed and the news cycle has ended. As scientists, we’re lucky if we even have to consider this, and we should be grateful if and when our science attracts media/science communication attention. But the point of today’s post, perhaps an obvious one but to my mind worth reflecting on, is that scientists are not slaves to the PR machine– as a flip side to the previous self/science-promotion post, at some point we may have to say “This story about our research is done (for now).”

I routinely reflect on this when the media covers my research; I always have. My recent experience with New Yorker and BBC coverage of our penguin gait research (with James Proffitt and Emily Sparkes as well as Dr. Julia Clarke) got me thinking about this issue a lot, and talking about it quite a bit with James. This morning, over coffee, this blog post was born from my thoughts on that experience.

Stomach-Churning Rating: 7/10 for some mushy penguin specimens; PR officers might also get queasy.

I was waiting for a call from BBC radio one night almost three weeks ago, to do a recorded interview about our penguin research-in-progress, when I woke up surrounded by paramedics and was whisked off to the hospital. I never did that interview or any further ones. I won’t go into what went wrong but it relates to this old story. I’m OK now anyway. But for me, the penguin story had mostly ended before it began. However, I’d already agreed with James that we’d try to avoid doing further media stories beyond the New Yorker one and the BBC one, which was due out the next day and for which James (fortuitously instead of me!) was doing a live appearance on BBC Breakfast (TV). I got a few emails and calls about this story while recuperating in my hospital bed, including the one below, and turned down interview invitations for obvious reasons, with no arguments from anyone– at first.


For Jerry, the story never should have started, apparently. We all have our opinions on what stories are worth covering. A “kind” email to receive in one’s hospital bed…

Then, after I recovered and got back to work, we kept getting a trickle of other interview/story invitations, and we declined them. Our PR office had suggested that we do a press release but we had already decided in advance not to, because we saw the story as just work-in-progress and I don’t like to do press releases about that kind of thing– except under extraordinary circumstances.

Finally, over a week after the BBC story aired, a major news agency wanted to film an interview with me about the story, which would get us (more) global coverage. They prefaced the invitation with the admission that they were latecomers to the story. Again I firmly said no; they could use existing footage but I could not do new interviews (these would inevitably take a half day or so of my time and energy). They wrote back saying they were going to go forward with the story anyway, and the journalist scolded me for not participating, saying that the story would have been so much better with a new film sequence of me in it. Maybe, but (1) I felt the story had run its course, (2) I’d had my hospitalization and a tragic death in the family, and (3) I was just returning, very jetlagged, from a short trip to the USA for other work. Enough already! I had other things to do. I didn’t follow up on what happened with that story. Maybe it didn’t even get published. I wasn’t left feeling very sympathetic.

Above: The BBC story

I kept thinking about being pressured and scolded by journalists, once in a while, for not joining in their news stories when their requests crossed my own threshold for how much media coverage is enough. I first hit that personal threshold 13 years ago when I published my first big paper, in Nature, on “Tyrannosaurus was not a fast runner.” After ~3 weeks of insane amounts of media coverage, I was exhausted and pulled the plug, refusing more interviews. It felt good to exert control over the process, and I learned a lot from wielding that control. I still use it routinely.

But… I am of course passionate about science communication, I feel it is a great thing for science to be in the public eye, and I actually love doing science communication stories about research-in-progress– too much science is shown as an endpoint, not a process. Indeed, that’s why I do this blog and other social media, most of which is science-in-progress and my thoughts about it. So I was and still am thrilled that we got such positive, broad, good quality media attention for our penguin work, but it was plenty.


More sphenisciform science in progress: Penguin bodies awaiting dissection for our latest work. Unfortunately, years of formalin, freezers and thawing cycles had rendered most of the soft tissues useless for our work. Photos here and below are of Natural History Museum (Tring) specimens from the ornithology collection; most collected in Antarctica ~50 yrs ago.

Probably to many seasoned science communicators and scientists, my post’s message is blindingly obvious. Of course scientists have rights — and responsibilities — in deciding how and when their research is covered. This is a negotiation process between their research team, their university, PR officers, journalists/media, funders and others involved– including the public. But less experienced scientists, and perhaps the public, might not realize how much control scientists do have over the amount of media attention they get. It’s easy to get caught up in a media frenzy surrounding one’s science (if you’re lucky enough to generate it at all) and feel the wind in one’s sails, thereby forgetting that you’re at the helm– you can decide when the journey is over (just be sure you communicate it diplomatically with the others involved!).

This penguin did not survive the preservation process well; for whatever reason it had turned to mush, fit only for skeletonization. Gag. Its journey was definitely over.


As scientists, we have to balance enormous pressures and priorities: not just science communication and PR, but also our current main research, teaching, admin, personal lives, health, and so on. So we have to make hard decisions about how to balance these things. We should all reflect on what our dynamically shifting thresholds are for how much attention is enough, what priority level a given story has in our lives, and when the timing is right for any media attention. And as collaborative teams, more and more the norm in science, we should be discussing this issue and agreeing on it before it’s too late for us to exert much control.


One of our penguin chicks from the Natural History Museum, in a better state of preservation than the adults. Photo by James Proffitt.


Penguin chick’s right leg musculature in side view, exposing some decent muscles that gave us some useful data. Photo by James Proffitt.

Much like an over-played hit song, it’s not pretty when a science story gets over-milked and becomes too familiar and tedious, perhaps drawing attention away from other science that deserves it. And we all will have our own opinions on where that threshold of “too much attention” lies. If we, as scientists, don’t think about those thresholds, we may end up rudderless or even wrecked on lonely islands of hype. I’ve seen scientists ostracized by their peers for over-hyping their work. It’s not fun. “Hey everybody, John is having a celery stick with peanut butter on it!” Celebrity culture doesn’t mean that everything scientists do deserves attention, or that any amount of attention is deserved and good.

A great thing about science is that, in principle, it is eternal– a good science story can live forever while other science is built upon it. Each chapter in that story needs an ending, but there’s always the next chapter waiting for us, and that’s what keeps science vital and riveting. As scientists, we’re all authors of that story, with a lot of power over its narrative. We can decide when to save parts of that narrative for later, when the time is right. With our penguin story, we’ve only just begun and I’m incredibly excited about where it goes next.

How about other scientists, journalists and other aficionados of science? What examples of scientists taking charge of how their research gets covered do you find particularly instructive?

Read Full Post »

How do I manage my team of 10+ researchers without losing my mind <ahem> or otherwise having things fall apart? I’m often asked this, as I was today (10 December; I ruminated before posting this as I worried it was too boring). Whether those undesirable things have truly not transpired is perhaps debatable, but I’m still here and so is my team and their funding, so I take that as a good sign overall. But I usually give a lame answer to that question of how I do it all, like “I have no secrets, I just do it.” Which is superficially true, but…

Today was that time of year at the RVC when I conduct appraisals of the performance and development of my research staff– a procedure I once found horridly awkward and overly bureaucratic. But now that it focuses more on being helpful, learning from past missteps and plotting future steps in an (ideally) realistic fashion, than on box-ticking or intimidation, I find the appraisals useful– at least for documenting progress and ensuring that teammates continue to develop their careers, not just crank out data and papers. By dissecting the year’s events, one comes to understand what happened, and what needs to happen in the next year.

The whole process crystallizes my own thoughts, by the end of a day of ~1-hour chats, on things like where there needs to be different coordination of team members in the coming year, or where I need to give more guidance, or where potential problems might arise. It especially helps us to sort out a timeline for the year… which inevitably still seems to go pear-shaped due to unexpected challenges, but we adapt and I think I am getting better myself at guessing how long research steps might take (pick an initial date that seems reasonable, move it back, then move it further back, then keep an eye on it).

Anyway, today the appraisals reminded me that I don’t have a good story for how I manage my team other than by doing these appraisals, which as an annual event are far from sufficient management but have become necessary. And so here I am with a post that goes through my approaches. Maybe you will find it useful or it will stimulate discussion. There are myriad styles of management. I am outlining here what facets of my style I can think of. There are parallels between this post and my earlier one on “success”, but I’ve tried to eliminate overlap.

Stomach-Churning Rating: 0/10 but no photos, long-read, bullet points AND top 10 list. A different kind of gore.

Successfully managing a large (for my field) research team leaves one with fewer choices than a smaller team does– in the latter case, you can sit almost anywhere on the spectrum of hands-off vs. hands-on management and things may still go fine (or not). With a large (and interdisciplinary) team, there is no way to be heavily hands-on, especially with so many external collaborations piled on top of it all. So a balance has to be struck somewhere. As a result, I am inevitably forced into a managerial role in which, over the years, I’ve become less directly in touch with the nitty-gritty details of the core methods we use. I’ve had to become comfortable with (1) emphasizing a big-picture view that keeps the concepts at the forefront, (2) taking the constraints (e.g. time, technology and methods, which I therefore do still have to keep tabs on) into account in planning, (3) cultivating a level of trust in each team member that they will do a good job (also see “loyalty” below), and (4) maintaining the right level of overall expertise within the group (including external collaborators) to get research done to our standard. Doing those things has meant learning these other things, which happen to form a top 10 list, in no particular order:

  1. Communicate regularly– I’m an obsessive, well-organized emailer, in particular. E-mail is how I manage most of my collaborations within and outside my team, and how I keep track of many of the details. (Indeed, collaborators who aren’t so consistent with email are difficult for me.) We do regular weekly team meetings in which we go around the table and review what we’re up to, and I do in-person chats or G+/Skype sessions fairly frequently to keep the ball rolling and everyone in sync. I now keep a notebook, or “memory cane” as I call it, to document meetings and to-do lists. Old school, but it works for me, whereas my mental notebook started not to at times.
  2. Treat each person individually- everyone responds best to different management styles, so within my range of capabilities I vary my approach from more to less hands-off, or gentler vs. firmer. If people can handle robust criticism, or even if they can’t but they need to hear it, I can modulate to deliver that, or try to avoid crushing them. While I have high expectations of myself and those I work with, I also know that I have to be flexible because everyone is different.
  3. Value loyalty AND autonomy– Loyalty and trust matter hugely to me as a manager/collaborator. I believe in paying people back (e.g. expending a lot of effort in helping them move their career forward) for their dedicated work on my team, while keeping in mind that I may need to make “sacrifices” (e.g. give them time off for side-projects I’m not involved in) to help them develop their career. I seek to avoid the extremes: fawningly helpless yes-men (rare, actually) or ~100% selfish what’s-in-it-for-me’s (not as rare, but uncommon). Any good outcome can benefit a research manager even if they’re not a part of it; and on a big team it’s about what benefits everyone, not just the first author or the senior author– a tricky balance to attain.
  4. Prioritize endlessly– for me this means trying to keep myself from being the rate-limiting step in research. And I try to say “no” to new priorities if they don’t seem right for me. Sometimes it means getting little things done first to clear my desk (and mind) for bigger tasks; sometimes it means focusing on big tasks to the exclusion of smaller ones. Often it depends on my whims and energy level, but I try to keep those from harming others’ research. I make prioritized to-do lists and revisit them regularly.
  5. Allow chaos and failure/imperfection– This is the hardest for me. My mind does not work like a stereotypical accountant’s– I like a bit of disorder, as my seemingly messy office attests. Oddly, within that disorder I find order, as my brain is still usually good at keeping things organized. I do like a certain level of involvement in research, and I get nervous when I feel that sliding down toward “uninvolved”– loss of control in research can be scary. Some degree of detachment is necessary, though– stepping aside and allowing time to pass and people to self-organize or come ask for help to avert disaster (or celebrate success)– because I cannot be everywhere at once and nothing can be perfect. And of course, I myself fail sometimes, but with alertness comes recognition and learning. Furthermore, too much control is micromanagement, which hurts morale, and “disorder” allows the flexibility that can bring serendipitous results (or disaster). And speaking of disaster, one has to be mentally prepared for it, able to take a deep breath and react in the right way when it comes. Which leads to…
  6. Think brutally clearly– Despite all the swirling chaos of a large research team and the many other responsibilities of an academic and father and all that, I have taught myself a skill that I point to as a vital one: I can stop what I’m doing and focus very intensely on a problem when I need to. If it’s within my expertise to solve, I usually can do it by clearing my head (past experience with kendo, yoga and karate helps me do this) and entering an intensely logical, calm, objective quasi-zen state. I set my emotions aside (especially if it is a stressful situation), figure out what’s possible, what’s impossible, and what needs to be done, and find what I think is the best course of action quite quickly, then act on it decisively (but without dogmatic inflexibility). In such moments, I find myself thinking “What is the right thing to do here?” and I almost instinctively know when I can see that right thing. At that moment I get a charge of adrenaline to act upon it, which helps me to move on quickly. From little but hard decisions to major crises, this ability serves me well in my whole life. I maintain a duality between that single-minded focus and juggling/anarchy, often able to switch quickly between those modes as I need to.
  7. Work hardest when I work best (e.g. good sleep and caffeination level, mornings)- and let myself slack off when I’m not in prime working condition. I shrug aside guilt if I am “slacking”– I can’t do everything and some things must fall by the wayside if I can’t realistically resolve them in whatever state of mind I’m in. The slacking helps me recharge and refresh– by playing a quick video game or checking social media or cranking up some classic Iron Maiden/modern Menzingers, I can return to my work with new gusto, or even inspiration, because…
  8. Spend a lot of time thinking while I “slack off”, in little bursts (e.g. while checking Twitter). I let my brain process things that are going on, let go of them when I’m not getting anywhere with them, and return to them later. This is harder than it sounds as I still stubbornly or anxiously get stuck on things if they are stressing me out or exciting me a lot. But I am progressively improving at this staccato-thinking skill.
  9. Points 7+8 relate to my view that there is no “work-life balance” for me—it is all my life, and there’s still a lot of time to enjoy the non-work parts, but it’s all a blend that lets me be who I am.
  10. Be human– try to avoid acting like a distant, emotionless robotic manager and cultivate more of a family-like team. Being labelled with the word “boss” can turn my stomach. “Mentor” and “collaborator” are more like what I aim for. Being open about my own flaws, failures, and life helps.

Long post, yeah! 1 hour on a train commute lets the thoughts flow. I hope that if you made it this far you found it interesting.

What do you do if you manage a team, what works for you or what stories do you have of research management? Celebrations and post-mortems are equally welcome.

Read Full Post »

I awoke on the floor in the aisle of my United Airlines flight to Los Angeles, with three unfamiliar men crouched around me, bearing serious expressions as they looked down on my prone body.

I was next to my seat. My daughter was crying inconsolably in her seat next to mine, and my wife was calling to me with an urgent tone from the next seat over.

Gradually, as my confusion faded and the men let go of me (I’d been cursing them out, in mangled words because I had bitten my tongue), I became aware that I was in intense pain, I could not move much, and my wife’s words became clearer:

I’d had a seizure. And so our relaxing family holiday, which had only just begun, ended. And so my waking nightmare began.

Stomach-Churning Rating: 5/10; lots of Anatomy Fail CT/x-ray images and gruesome descriptions, and a photo of some bruising.

I was helped back into my seat as I regained my senses, I noticed blood on me from my tongue, and I learned that we were 2 hours away from L.A. As I was acting more normal, and we were 5/6 of our journey along, there was no need to prematurely land the flight. I had fallen asleep while watching “22 Jump Street”, about 1.5 hrs in, and that’s when my seizure struck– much like the previous two seizures I’d had. Jonah Hill could be ruled out as a culprit, but going to sleep was an enabling factor. I got some over-the-counter painkillers and sat in a daze as time ticked by, we landed, and paramedics boarded the plane to whisk me off to the hospital with my family.

Two gruelling days and nights in a California hospital later, with my first night spent in a haze of clinical tests, begging for painkillers, yelling in pain every time I moved, and otherwise keeping my hospital roommate awake, the story became clearer: my seizure was so intense that I’d dislocated my right shoulder (unfortunately I’d not had much pain relief when the emergency room staff popped it back into my glenoid), probably dislocated my left shoulder too but then relocated it myself amidst my thrashing, and done this (cue Anatomy Fail images):

Left shoulder, with the offending greater tubercle/tuberosity of the humerus showing fracture(s).

Right shoulder x-ray, showing dislocation of the head of the humerus from the glenoid. Compare with above image- humerus has been shifted down, the shoulder joint is facing you. BUT no fractures, yay!

CT scan axial slice showing my spine (on left), then scapula with fractured coracoid process (“Bad”) and displaced, fractured greater tubercle of humerus on right side (“V bad”).

So, that explains most of the pain I was in.

What’s amazing is that the fractures most likely occurred purely via my own uncontrolled muscle contractions. All the karate and weight-training I’d been doing certainly had made me stronger in my rotator cuff muscles, which attach to the greater tubercle of the humerus. And with inhibition of my motoneurons turned off during my seizure, and both agonist and antagonist muscles near-maximally turned on, rapid motions of my shoulders by my spasming muscles would have dislocated my shoulders and then wrenched apart some of the bony attachments of those same muscles. I’m glad I don’t remember this happening.

I had also complained of pain in my neck, so they did a CT scan and x-ray there too:

X-ray: No broken neck. This is good. Just muscle strain, which soon faded.

The left shoulder injuries created a hematoma, or mass of blood beneath my skin, and soon that surfaced and began draining down my arm (via the lymphatic system under gravity’s pull), creating fascinating patterns:

Bruises migrating; no pain associated with these, just superficial drainage of old blood. This is tame, tame, tame compared to what my left ribcage looked like. I’ve spared you that.

But then more fundamentally there was the question of, why a seizure? With no clear warning? As I’ve explained before, I’d had a stroke ~12 yrs ago that caused a similar seizure but with no injuries to my postcranial body. So a series of MRI and CT scans ensued (the radiation I’ve had from the latter is good fodder for a superhero/villain origin tale? Marvel, I’ll await your call), and there was no clear damage or bleeding, and hence no stroke evident. Good news.

There are, however, at least two sizeable calcifications in my brain that are likely to be hardened scar tissue from my stroke. These may or may not have an identifiable effect on me or linkage with the seizure. Brain calcifications can happen for a variety of reasons, sometimes without clear ill effects.

Calcification in parietal lobe of my cerebrum, from axial CT scan slice. But no bleeding (zone of altered density/contrast).

That is the state of the evidence. I’ve since had what semblance of a L.A. family holiday I could manage, benefitting from a touching surge of support from my family, friends and colleagues that has kept me from sinking entirely into despair and has brought quite a few smiles.

The plane flight home was tense. We were in the same seats again and one of the flight attendants recognized us and came to chat, eager to learn what had happened after we left the plane a week ago. He was very nice and the doctors had given me an “OK to fly” letter. But it was an evening flight. I needed to sleep, yet it was clear to me that sleep was no longer the fortress of regenerative sanctity that I was used to it being. Sleep had taken on a certain menace, because it was a state in which I’d now had three seizures. Warily, I drifted off to sleep after having some hearty chuckles at the ending to “22 Jump Street”. And while it was not very restful slumber, it was the friendly kind of slumber that held no convulsive violence within its embrace. We returned home safely.

In a rush, I cancelled my attendance at the Society of Vertebrate Paleontology conference this week, turning over the symposium I’d convened to honour one of my scientific heroes, biomechanist R. McNeill Alexander (who also could not attend due to ill health), to my co-convenors Eric Snively and Andreas Christian (by accounts I heard, all went well). I missed out on a lot of fun and the joy of watching 2 of my PhD students present posters on preliminary results of their research. Thanks to social media and email, however, I’ve been able to catch a lot of the highlights and excitement from that conference in Berlin. That has helped distract me somewhat from other goings-on.

Meanwhile, I’ve been resting, doing a minimal amount of catching up with work, having a lot of meetings with doctors to arrange treatment, and pondering my situation– a lot.

I know this much: I’ve had two violent seizures in a month (the previous one was milder but still bad, and not a story I need to tell here), and so I’m now an epileptic, technically. When and if I’ll have another seizure is totally uncertain, but to boost the odds in my favour I’m on anti-convulsant drugs for a long time now.

In about half of seizure cases, it’s never clear what caused the seizures. What caused my 2002 stroke is somewhat clear, but the mechanism behind that remains a mystery, and my other health problems likewise have a lot of question marks regarding their genesis and mutually causative relationships, if any. The outcome of this new development in my medical history is likely to be: “maybe your brain calcifications and scar tissue helped stimulate your new seizures, but we can’t be sure. The treatment is the same regardless: stay on anti-convulsants for a while, try going off them later, and see if seizures manifest themselves again or not.” Brains are freaking complicated; when they go haywire it can be perplexing why.

As a scientist, I thrill at finding uncertainty in my research topics because that always means there is work left to be done. But in my own life outside of science, stubborn, independent, strong-willed control freak that I can certainly be at times, I am not such a fan of uncertainty. In both cases the goal is to minimize that uncertainty by gathering more information, but in our lives we often encounter unscalable walls of uncertainty that persist because of lack of knowledge regarding a problem that vexes us, especially a medical problem. We then can feel in a helpless state, adrift on the horizon of science, waiting for explorers to push that horizon further and with it advance our treatment or at least our insight into ourselves.

When the subject of that uncertainty is not some detached, objective, unthreatening, exciting research topic but rather ourselves and our own future constitution and mortality, it thus becomes deeply personal and disconcerting. I’m grateful that I don’t have brain cancer or some other clear and present threat to my immediate vitality. Things could be a lot worse; I am here writing this blog after all. I’ll never forget now being in the ambulance and thinking “this may be the end of it all; I might not last much longer”, and choking out a farewell to my wife just in case things took a bad turn. I’m grateful for the amazing things that modern medicine and imaging techniques can do– these have saved my life so many times over, I cannot fathom how to quantify it. And I’m grateful for the people that have helped me through this so far. Fiercely independent as I may be, I can’t face everything alone.

I am reminded of words I read recently by Baruch Spinoza, “The highest activity a human being can attain is learning for understanding, because to understand is to be free.” To further paraphrase him, we love truth because it is knowledge that enables us to stay alive- without it, we are flying blind and soon will crash. With the freedom it brings, we know the landscape of our own life and where the frontiers of uncertainty lie (“here be dragons”).

The past two weeks have been horrendous for me. I’d been feeling healthy and stronger than ever in many ways, and my life as of my birthday a month ago felt pretty damn good. But now everything has come crashing down in disaster, and I have been suffering from the realization, once again, of how vulnerable I am and how little I can control, and the darkness that rushes in as the odds begin to stack up against our future lives. I am acutely aware now of where the “dragons” are.

I am taking one important step forward, though, in wresting life back onto the rails again- this week I undergo surgery to put my left shoulder back together. While that’s scary, to be sliced open and have my rotator cuff and bones carpentered back where they should be, I know I’m in good hands with a top UK shoulder surgeon and methods that are tried-and-true. The risks are small, although the recovery time will be long. There won’t be any hefting of big frozen elephant feet in my research soon, not for me, and so my enjoyable anatomy studies are going to have to change their track for coming months while I regain my strength and rely on others’ help.

(do you know the movie reference? I have a new empathy for Ash.)

Then we’re on to the frightening task of tackling the spasmodic-gorilla-in-the-room with neurologists. We’ll see where that journey leads.

One thing is certain: I’m still me and there’s still a lot of fight left in me, because I have a lot left to fight for, and people and knowledge to aid me in that fight. I can shoulder the burden of uncertainty in my life because I have all that. Off I go…

20 November UPDATE:

I’ve had surgery to put my greater tuberosity back where it belongs. Thanks to a skilled surgeon’s team, some sutures and nickel-titanium staples, I am back closer to my normal morphology and can begin recovering my (currently negligible) shoulder joint’s range of motion via some physiotherapy. Surgery went very well and I was in hospital for only ~30 hours, but the 9 days of recovery since have been brutally hard due to problems switching medications around. Today I got my stitches out and a beautiful x-ray showing plentiful healing; yay!

This is a slightly oblique anterior (front) view of my left shoulder/chest. Fracture callus means healing is working well!
Four surgical staples (bright white thingies on upper RH side of image): forever now a part of my anatomy.

Read Full Post »

Let’s play find-the-spandrel!

We just passed the 35th anniversary of the publication of Gould and Lewontin’s classic, highly cited, highly controversial essay (diatribe?), “The spandrels of San Marco and the Panglossian paradigm: a critique of the adaptationist programme.” The 21st of September 1979 was the fateful date. Every PhD student in biology should read it (you can find pdfs here— this post assumes some familiarity with it!) and wrestle with it and either love it or hate it- THERE CAN BE NO MIDDLE GROUND! With some 5405 citations according to Google Scholar, it has generated some discussion, to put it lightly. Evolutionary physiologists and behaviourists who were working at the time it came out have told me stories of how it sent (and continues to send) shockwaves through the community. Shockwaves of “oh crap I should have known better” and “Hell yeah man” and “F@$£ you Steve,” more or less.

I am among those who love “The Spandrels Paper“. I love it despite its many flaws that people have pointed out to seemingly no end- the inaccurate architectural spandrel analogy, the Gouldian discursive (overly parenthetical [I’m a recovering victim of reading too much Gould as an undergrad]) writing style, the perhaps excessive usage of “Look at some classic non-scientific literature I can quote”, the straw men and so on. I won’t belabour those; again your favourite literature search engine can be your guide through that dense bibliography of critiques. I love it because it is so daringly iconoclastic, and because I think it is still an accurate criticism of what a LOT of scientists who do research overlapping with evolutionary biology (that is, much of biology itself) do.

The aspects of The Spandrels Paper that I still think about the most are:

(1) scientists seldom test hypotheses of adaptation; they are quick to label something that is useful to an animal as an adaptation and then move on after rhapsodizing about how cool of an adaptation it is; and

(2) thus alternatives to adaptation, which might be very exciting topics to study in their own right, get less attention or none.

For #2, at least, evo-devo has flourished by raising the flag of constraint (genetic/developmental/other factors that prevent evolution from going in a certain direction, or even accelerate it in less random directions). That’s good, and there are other examples (genetic drift, we’ve heard about that sometimes), but option #1 still tends to be the course researchers take. To some degree, I think, labelling something as an adaptation is used as hype in plenty of instances, to make it more exciting.

Truth be told, much as Gould and Lewontin admitted in their 1979 paper and later ones, natural selection surely forges lineages that have loads of adaptations (even in the strictest sense of the word), and a lot of useful traits of organisms are thus indeed adaptations by any stripe. But the tendency seems to be to assume that this presumptive commonality of adaptations means that we are justified to quickly label traits as adaptations.

Or maybe some researchers just don’t care about rigorous tests of adaptation as they’re keen to do other things. Standards vary. What I wanted to raise in this post is how I tend to think about adaptation:

I think adaptations are totally cool products of evolution that we should be joyous to imagine, document, test and discover. But that means they should be Special. Precious. A cause for celebration, to carefully document by scientific criteria that something is an adaptation in the strictest sense, and not a plesiomorphy/exaptation (i.e. an adaptation at a different level in the evolutionary hierarchy; or an old one put to new uses), spandrel/byproduct, or other alternatives to adaptation-for-current-biological-role.

But that special-ness means testing a hypothesis of adaptation is hard. As many authors waving the flag of The Modern Comparative Method (TMCM) have contended, sciencing truth-to-adaptationist-power by the rules of TMCM takes a lot of work! George Lauder’s 1996 commentary in the great Adaptation book (pdf of the chapter here) outlined a lengthy procedure for “The Argument from Design”; i.e., testing adaptation hypotheses. At its strictest implementation it could take a career (biomechanics experiments, field studies, fitness measurements, heritability studies, etc.) to test for one adaptation.

Who has time for all that?

The latter question seems maladaptive, placing cart and horse bass-ackwards. If one agrees that adaptations are Special, then one should be patient in testing them– within practical constraints, to some degree, and different fields will be forced to have different comfort levels of hypothesis testing (e.g. with fossils you can’t ever measure fitness or other components of adaptation directly; that does not mean we cannot indirectly test for adaptations– with the vast time spans available, one would expect palaeo could do a very good job of it, actually!).

I find that, in my spheres of research, biomechanists in particular tend to be quick to call the things they study adaptations, and plenty of palaeontologists are too. I feel that over-usage of the label “adaptation” cheapens the concept, making the discovery of one of the most revered and crucial concepts in all of evolutionary biology seem trite. Things that are so easy to discover don’t seem as precious. When everything is awesome, nothing is…

I’ve always hesitated, thanks in part to The Spandrels Paper’s indoctrination, to call features of animals adaptations, especially in my main research. I nominally do study major ?adaptations? such as terrestrial locomotion at giant body sizes, or the evolution of dinosaurian bipedalism. I searched through my ~80 serious scientific papers recently and found about 50 mentions of “adapt” in an adaptationist, evolutionary context. That’s not much considering how vital the concept is (or I think it is) to my research, but those are still some mentions that slipped through, most of them cautiously considered– and plenty more times I very deliberately avoided using the term. So I’m no model of best practice, and perhaps I’m too wedded to semantics and pedantry on this issue, but I still find it interesting to think about, and I’ve gradually been heading in the direction of aspect #2 (above, in bold) in my research, looking more and more for alternative hypotheses to adaptation that can be tested.

I like talking about The Spandrels Paper and I like some of the criticism of it- that’s healthy. It’s a fun paper to argue about and maybe we should move on, but I still come back to it and wonder how much of the resistance to its core points is truly scientific. I’m entering into teaching time, and I always teach my undergrads a few nuggets of The Spandrels Paper to get them thinking about what lies beyond adaptation in organismal design.

 What do other scientists think? What does adaptation mean (in terms of standards required to test it) to you? I’m curious how much personal/disciplinary standards vary. How much should they?

For the non-scientists, try this on for size: when our beloved Sir David Attenborough (or any science communicator) speaks in a nature documentary about how the otter is “perfectly adapted” to swim after prey underwater, do you buy into that or question it? Should you? (I get documentaries pushing me *all the time* to make statements like this, with a nudge and a wink when I resist) Aren’t scientists funny creatures anyway?

Read Full Post »

I’ll let the poll (prior post) run for a while but as it winds down I wanted to explain why I posted it:

In the past, I’ve often run into scientists who, when defending their published or other research, respond something like this:

“Yeah those data (or methods) might be wrong but the conclusions are right regardless, so don’t worry.”

And I’ve said things like that before. However, I’ve since realized that this is a dangerous attitude, and in many contexts it is wrong.

If the data are guesses, as in the example I gave, then we might worry about them and want to improve them. The “data are guesses” context that I set the prior post in comes from Garland’s 1983 paper on the maximal speeds of mammals– you can download a pdf here if this link works (or Google it). Basically, the analysis shows that, as mammals get bigger, they don’t just keep speeding up as a simple linear analysis might suggest. Rather, at a moderate size of around 50-100 kg body mass, they hit a plateau of maximal speed, and bigger mammals tend to move more slowly. However, all but a few of the data points in that paper are guesses, many coming from old literature. The African elephant data points, in particular, are excessively fast, and on a little blog-ish webpage from the early 2000s we chronicled the history of these data– it’s a fun read, I think. The most important, influential data plot from that paper by Garland is below, and I love it– this plot says a lot:

[Figure: Garland’s (1983) plot of maximal running speed vs. body mass in mammals]

I’ve worried about the accuracy of those data points for a long time, especially as analyses keep re-using them– e.g. this paper, this one, and this one, by different authors. I’ve talked to several people about this paper over the past 20 years or so. The general feeling has been in agreement with Scientist 1 in the poll, or the quote above– it’s hard to imagine how the main conclusions of the paper could truly be wrong, despite the unavoidable flaws in the data. I’d still agree with that: I love that Garland paper after many years and many reads. It is a paper strongly related to hypotheses that my own research sets out to test. I’ve also tried to fill in some real empirical data on maximal speeds for mammals (mainly elephants; others have been less attainable), to improve data that could be put into or compared with such an analysis. But it is very hard to get good data on even near-maximal speeds for most non-domesticated, non-trained species. So the situation seems to be tolerable. Not ideal, but tolerable. Since 1983, science has been moving slowly toward a better understanding of the real-life patterns that the Garland paper first inferred, and that is good.
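For the quantitatively curious: the shape of Garland’s analysis– speed rising with mass, peaking, then falling– comes from fitting a quadratic polynomial to log speed vs. log mass. Here’s a minimal sketch of that kind of fit using entirely made-up (mass, speed) points that merely echo the pattern, NOT Garland’s actual data:

```python
import numpy as np

# Hypothetical (body mass in kg, max speed in km/h) points, invented
# purely to illustrate the rise-peak-fall pattern described above.
mass = np.array([0.02, 0.1, 1, 10, 50, 100, 500, 1000, 3000])
speed = np.array([10, 20, 40, 60, 70, 68, 55, 45, 30])

# Fit log10(speed) = a + b*log10(mass) + c*log10(mass)^2;
# a negative c gives the downward-curving (peaked) relationship.
logm, logv = np.log10(mass), np.log10(speed)
c, b, a = np.polyfit(logm, logv, 2)  # coefficients, highest power first

# The fitted curve peaks where its slope is zero: log10(mass) = -b / (2c)
peak_mass = 10 ** (-b / (2 * c))
print(f"fitted speed optimum near {peak_mass:.0f} kg body mass")
```

With data shaped like these, the fitted optimum lands in the tens-of-kg range, echoing the moderate-size plateau the paper inferred– but of course a fit is only as good as the points going into it, which is the whole worry about guessed data.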

But…

My poll wasn’t really about that Garland paper. I could defend that paper- it makes the best of a tough situation, and it has stimulated a lot of research (197 citations according to Google; seems low actually, considering the influence I feel the paper has had).

I decided to do the poll because thinking about the Garland paper’s “(educated) guesses as data” led me to think of another context in which someone might say “Yeah those data might be wrong but the conclusions are right regardless, so don’t worry.” They might say it to defend their own work, such as to deflect concerns that the paper might be based on flawed data or methods that should be formally corrected. I’ve heard people say this a lot about their own work, and sometimes it might be defensible. But I think we should think harder about why we would say such things, and if we are justified in doing so.

We may not just be making the best of a tough situation in our own research. Yes, indeed, science is normally wrong to some degree. A more disconcerting situation is that our wrongs may be mistakes that others will proliferate in the future. Part of the reasoning for being strict stewards of our own data is this: It’s our responsibility as scientists to protect the integrity of the scientific record, particularly of our own published research because we may know that best. We’re not funded (by whatever source, unless we’re independently wealthy) just to further our own careers, although that’s important too, as we’re not robots. We’re funded to generate useful knowledge (including data) that others can use, for the benefit of the society/institution that funds us. All the more reason to share our critical data as we publish papers, but I won’t go off on that important tangent right now.

In the context described in the previous paragraph and the overly simplistic poll, I’d tend to favour data over conclusions, especially if forced to answer the question as phrased. The poll reveals that, like me, most (~58%) respondents would also tend to favour data over conclusions (yes, a biased audience, perhaps– social media users might tend to be more savvy about data issues in science today? Small sample size, sure, that too!). Whereas very few (~10%) would favour conclusions, in the context of the poll. The many excellent comments on the poll post reveal the trickier nuances behind the poll’s overly simplistic question, and why many (~32%) did not favour one answer over the other.

If you’ve followed this blog for a while, you may be familiar with a post in which I ruminated over my own responsibilities and the conundrums we face in work-life balance, personal happiness, and our desires to protect ourselves or judge/shame others. And if you’ve closely followed me on Twitter or Facebook, you may have noticed we corrected a paper recently and retracted another. So I’ve stuck to my guns lately, as I long have, to correct my team’s work when I’m aware of problems. But along the way I’ve learned a lot, too, about myself, science, collaboration, humanity, how to improve research practice or scrutiny, and the pain of errors vs. the satisfaction of doing the right thing. I’ve had some excellent advice from senior management at the RVC along the way, which I am thankful for.

I’ve been realizing I should minimize my own usage of the phrase “The science may be flawed but the conclusions are right.” That can be a more-or-less valid defence, as in the case of the classic Garland paper. But it can also be a mask (unintentional or not) that hides fear that past science might have real problems (or even just minor ones that nonetheless deserve fixing) which could distract one from the pressing issues of current science. Science doesn’t appreciate the “pay no attention to the person behind the curtain” defence, however. And we owe it to future science to tidy up past messes, ensuring the soundness of science’s data.

We’re used to moving forward in science, not backward. Indeed, the idea of moving backward, undoing one’s own efforts, can be terrifying to a scientist– especially an early career researcher, who may feel they have more at risk. But it is at the very core of science’s ethos to undo itself, to fix itself, and then to move on forward again.

I hope that this blog post inspires other scientists to think about their own research and how they balance keeping their research chugging along with looking backward and reassessing it as they proceed. It should become less common to say “Yeah those data might be wrong but the conclusions are right regardless, so don’t worry.” Or it might become more common to politely question such a response in others. As I wrote before, there often are no simple, one-size-fits-all answers for how to best do science. Yet that means we should be wary of letting our own simple answers slip out, lest they blind us or others.

Maybe this is all bloody obvious or tedious to blog readers but I found it interesting to think about, so I’m sharing it. I’d enjoy hearing your thoughts.

Coming soon: more Mystery Anatomy, and a Richard Owen post I’ve long intended to do.

