
This is a follow-up post to my earlier one and also weaves into my post on “success” (with a little overlap). I am sharing my thoughts on this topic of research management, because I try to always keep myself learning about doing and managing research, and this blog serves as a set of notes as I learn; so why not share them too? I tried editing the old post but it clearly was too much to add so I started a new post. It’s easy to just coast along and not reflect on what one is doing, caught up in the steady stream of science that needs to get done. Mistakes and mis-judgements can snowball if one doesn’t reflect. So here are my personal reflections, freshly thawed for your consideration, on how I approach doing research and growing older as I do it, adapting to life’s changes along the way.

Stomach-Churning Rating: 0/10, just words and ideas.

I realized that a theme in these rant-y posts on my blog is to Know Yourself, and, in the case of mentoring a team, Know Your Team. That knowledge is a reward from the struggles and challenges of seeking whatever one calls success. I critique some traits or practices here that I’ve seen in myself (and/or others), and perhaps managed to change. And I seek to change my environment by building a strong team (which I feel I have right now!) and by finding the best ways to work with them (which I am always learning about!). I also realized a word to describe a large part of what I seek and that is joy. The joy of discovery in the study of nature; the joy from the satisfaction of a job well done; the joy of seeing team members succeed in their careers and broader lives. I want to know that multifarious joy; the ripening of fulfilment.

We’re all busy in one way or another. Talking about being busy can just come across as (very) boring or self-absorbed or insecure. Talk about what you’re doing instead of how much you’re juggling. That’s more interesting. Avoid the Cult of Busy. I try to. It’s an easy complaint to default to in a conversation, so it takes some alertness… which keeps you busy. :-)  I remember Undergrad-Me sighing wistfully to my advisor Dianna Padilla “I’m SO busy!” and her looking at me like I was an idiot. In that moment I realized that I was far from the only (or most) busy person in that conversation. Whether or not she was truly thinking that I was naïve, my imaginary version of her reaction is right. It was a foolish, presumptuously arrogant thing for me to declare. There surely are more interesting things to talk about than implied comparisons of the magnitudes of each other’s busy-ness. And so I move on…

Don’t count hours spent on work. That just leads to guilt over too much/too little time spent vs. how much was accomplished. Count successes. A paper/grant submitted is indeed a success, and acceptance/funding of it is another. A handy rule in science is that everything takes so much more time than you think it will that even trying to predict how long it will take is often foolish, and maybe even time that could be better spent on doing something that progresses your work/life further.

Becoming older can slow you down and make you risk-averse, so you have to actively fight these tendencies. Ageing as a researcher needn’t always mandate becoming slower or less adventurous. But life will change, inevitably. One has to become more efficient at handling its demands as life goes on, and force oneself to try new things for the sake of the novelty, to think outside the box and avoid slipping into dogma or routine. We don’t want to be that stereotype of the doddering old professor, set in their ways, who stands in the way of change. The Old Guard is the villain of history. Lately I’ve been examining my own biases and challenging them, potentially re-defining myself as a scientist. I hope to report back on that topic.

The tone of life can darken as one becomes a senior researcher and “grows up”, accumulating grim experiences of reality. Some of my stories on this blog have illustrated that. In an attempt to distract me from that gloaming on the horizon, I try to do things at work that keep it FUN for me. This quest for fun applies well to my interactions with people, which dominate my work so much– I am seemingly always in meetings, less often in isolation at my desk. The nicer those meetings are, the happier I am. So I try to minimize exposure to people or interactions that are unpleasant, saving my energy for the battles that really matter. This can come across as dismissive or curt but in the end one has little choice sometimes. These days, nothing to me is more negatively emotive than sitting in an unproductive meeting and feeling my life slipping away as the clock ticks. I cherish my time. I don’t give it away wantonly to time-vampires and joy-vandals. They get kicked to the kerb– no room (or time) for them on this science-train. Choo choo!

Moreover, the No Asshole Rule is a great principle to try to follow at work. Don’t hire/support the hiring of people that you can’t stand socially, even if they are shit-hot researchers with a hugely promising career trajectory. Have a candidly private moment with someone who knows them well and get the inside scoop on what they’re like to work with. Try to get to know people you work with and collaborate more with people that you like to work with. Build a team of team-players (but not yes-men and yes-women; a good team challenges you to know them and yourself; so there must be some tension!). That can help you do better science because you enjoy doing it more, and you prioritize it more because of that, and you have more energy because of all that. Hence your life gets better as a result. I prefer that to a constant struggle in tense, competitive collaborations. One of the highest compliments I ever got was when someone described me to their friend as a “bon vivant”. I felt like they’d discovered who I was, and they’d helped me to discover it myself.

I wondered while writing this, would I hire 2003-Me, from when I was interviewing for my current job 12 years ago? I suppose so, but I’d give myself a stern scolding on day one at the job. “Chill the fuck out,” I’d say. “Focus on doing the good science and finding the other kinds of joy in life.” I like the more mellowed-out, introspective, focused, compassionate 2015-Me, and I think 2003-Me would agree with that assessment.

There is a false dichotomy in a common narrative about research mentoring that I am coming to recognize: a tension between the fortunes of early career researchers and senior research managers. The dichotomy holds that once one is senior enough, ambition wanes and success is complete and one’s job is to support early career researchers to gain success (as recompense for their efforts in pushing forward the research team’s day-to-day science), and to step back out of the limelight.

The reality, I think, is that all these things are linked: early career researchers succeed in part because their mentors are successful (i.e. the pedigree concept; good scientists arise in part from a good mentoring environment), and research-active mentors need to keep seeking funding to support their teams, which means they need to keep showing evidence of their own success. Hence it never ends. One could even argue that senior researchers need to keep authoring papers and getting grants and awards and other kinds of satisfaction and joy in science that maintain reputations, and thus their responsibility to themselves and their team to keep pushing their research forward may not decrease or even may intensify. Here, a “team” ethos rather than an “us vs. them” mentality seems more beneficial to all—we’re in this together. Science is hard. We are all ambitious and want to achieve things to feel happy about. I don’t think the “it never ends” perspective is gloomy, either—if the false dichotomy were true, once one hit that plateau of success as a senior researcher, ambition and joy and personal growth would die. Now that’s gloomy. Nor does the underlying pressure mandate that researchers can’t have a “life outside of work”. I’ve discussed that enough in other posts.

Trust can be a big issue in managing research. If people act like they don’t trust you, it may be a sign that they’ve been traumatized by violated trust before. Be sensitive to that; gently inquire? And get multiple sides of the story from others if you can… gingerly. But it also might be a warning sign that they don’t deserve trust themselves. Trust goes both ways. Value trust, perhaps above all else. It is so much more pleasant than the lack thereof. Reputation regarding trustworthiness is a currency that a research manager should keep careful track of in themselves and others. Trust is the watchdog of joy.

Say “No” more often to invitations to collaborate as your research team grows. “Success breeds success” they say, and you’ll get more invitations to collaborate because you are viewed as successful — and/or nice. But everyone has their limits. If you say “Yes” too much, you’ll get overloaded and your stock as a researcher will drop– you’ll get a reputation for being overcommitted and unreliable. Your “Yes” should be able to prove its value. I try to only say “Yes” to work that grabs me because it is great, do-able science with fun people that I enjoy collaborating with. This urge to say “No” must be balanced with the need to take risks and try new directions. “Yes” or “No” can be easy comfort zones to settle into. A “Yes” can be a long-term, noncommittal answer that avoids the conflict that a “No” might bring, even if the “No” is the more responsible answer. This is harder than it seems, but important.

An example: Saying “No” applies well to conference invitations/opportunities, too. I love going to scientific conferences, and it’s still easy enough to find funding to do it. Travel is a huge perk of academic research! But I try to stick to a rule of attending two major conferences/year. I used to aim for just one per year but I always broke that rule so I amended it. Two is sane. It is easy to go to four or more annual conferences, in most fields, but each one takes at least a week of your time; maybe even a month if you are preparing and presenting and de-jetlagging and catching up. Beware the trap of the wandering, unproductive, perennial conference-attendee if doing science is what brings you joy.

This reminds me of my post on “saying no to media over-coverage”– and the trap of the popularizer who claims to still be an active researcher, too. There is a zero-sum game at play; 35- or 50-hour work week notwithstanding. Maybe someday I’d want to go the route of the popularizer, but I’m enjoying doing science and discovering new things far too much. It is a matter of personal preference, of course, how much science communication one does vs. how much actual science.

The denouement of this post is about how research teams rise and fall. I’m now often thinking ahead to ~2016, when almost all of my research team of ~10 people is due to finish their contracts. If funding patterns don’t change — and I do have applications in the works but who knows if they will pan out — I may “just” have two or so people on my team in a year from now. I could push myself to apply like mad for grants, but I thought about it and decided that I’ll let the fates decide based on a few key grant submissions early in the year. There was too little time and too much potential stress at risk. If the funding gods smile upon me and I maintain a large-ish team, that’s great too, but I would also truly enjoy having a smaller, more focused team to work with. I said “No” to pushing myself to apply for All The Grants. I’ll always have diverse external collaborations (thanks to saying “Yes” enough), but I don’t define my own success as having a large research group (that would be a very precarious definition to live by!). I’m curious to see what fortune delivers.

Becoming comfortable with the uncertainty of science and life is something I’m finding interesting and enjoy talking about. It’s not all a good thing, to have that sense of comfort (“whatever happens, happens, and I’m OK with that”). I don’t want my ambition to dwindle, although it’s still far healthier than I am. There is no denying that it is a fortunate privilege to feel fine about possibly not drowning in grant funds. It just is what it is; a serenity that I welcome even if it is only temporary. There’s a lot of science left to be written about, and a smaller team should mean more time to do that writing.

Will I even be writing this blog a year from now? I hope so, but who knows. Blogs rise and fall, too. This one, like me, has seen its changes. And if I am not still writing it, it might resurface in the future anyway. What matters is that I still derive joy from blogging, and I only give in to my internal pressure to write something when the mood and inspiration seize me. I hope someone finds these words useful.


Maybe it’s uncool to talk about heroes in science these days, because everyone is poised on others’ shoulders, but “Neill” (Robert McNeill) Alexander is undeniably a hero to many researchers in biomechanics and other strands of biology. Our lab probably wouldn’t exist without his pervasive influence- he has personally inspired many researchers to dive into biomechanics, and he has raised the profile of this field and championed its importance and principles like no other individual. Often it feels like we’re just refining answers to questions he already answered. His influence extends not only to comparative biomechanics and not only around his UK home, but also –via his many, many books on biology, anatomy and related areas, in addition to his research, editorial work and public engagement with science– to much of the life sciences worldwide.

What does a kneecap (patella) do? Alexander and Dimery 1985, they knew. 30 years later, my team is still trying to figure that out!

Sure, one could (and with great humility I’m sure Alexander would) mention others like Galileo and Marey and Muybridge and Fenn and Gray and Manter who came before him and did have a profound impact on the field. Alexander can, regardless, easily be mentioned in the same breath as luminaries of muscle physiology such as AV Hill and even Andrew and Hugh Huxley. But I think many would agree that Alexander, despite coming later to the field, had a singular impact on this young field of comparative biomechanics. That impact began in the 1970s, when Dick Taylor and colleagues in comparative physiology were also exploding onto the scene with work at the Concord Field Station at Harvard University, and together biomechanics research there, in the UK, elsewhere in Europe and the world truly hit its stride, with momentum continuing today. I’m trying to think of some women who played a major role in the early history of biomechanics but, characteristically for the era, it was a woefully male-dominated field. That balance has shifted from the 1970s to today, and my generation would cite luminaries such as Mimi Koehl as key influences. There are many female or non-white-male biomechanics researchers today that are stars in the field, so there seems to have been progress in diversifying this discipline’s population.

Hence, honouring Alexander’s impact on science, today our college gave Neill an honorary doctorate of science (DSc). Last year, I also helped organize a symposium at the Society of Vertebrate Paleontology’s conference in Berlin that honoured his impact specifically on palaeontology, too- compare his book “The Dynamics of Dinosaurs and Other Extinct Giants” to current work and you’ll see what fuelled much of that ongoing work, and how far/not far we’ve come since ~1989. Even 10 years later, his “Principles of Animal Locomotion”, with Biewener’s “Animal Locomotion”, remains one of the best books about our field (locomotion-wise; Vogel’s Comparative Biomechanics more broadly), and his educational CD “How Animals Move”, if you can get it and make it work on your computer, is uniquely wonderful, with games and videos and tutorials that still would hold up well as compelling introductions to animal biomechanics. Indeed, I’ve counted at least 20 books penned by Alexander, including “Bones: The Unity of Form and Function” (under-appreciated, with gorgeous photos of skeletal morphology!).

1970s Alexander, with a sauropod leg.

And then there are the papers. I have no idea how many papers Neill has written –again and again I come across papers of his that I’ve never seen before. I tried to find out from the Leeds website how many papers he has, but it seems equally dumbfounded. I did manage to count 38 publications in Nature, starting in 1963 with “Frontal Foramina and Tripodes of the Characin Crenuchus,” and 6 in Science. So I think we can be safe in assuming that he has written everything that could be written in biomechanics, and we’re just playing catchup to his unique genius.

Seriously though, Alexander has some awesome publications stemming back over 50 years. I’m a big fan of his early work on land animals, such as with Calow in 1973 on “A mechanical analysis of a hind leg of a frog” and his paper “The mechanics of jumping by a dog” in 1974, which did groundbreaking integrations of quantitative anatomy and biomechanics. These papers kickstarted what today is the study of muscle architecture, which our lab (including my team) has published extensively on, for example. They also pioneered the integration of these anatomical data with simple theoretical models of locomotor mechanics, likewise enabling many researchers like me to ride on Alexander’s coattails. Indeed, while biomechanics often tends to veer into the abstract “assume a spherical horse”, away from anatomy and real organisms, Alexander managed to keep a focus on how anatomy and behaviour are related in whole animals, via biomechanics. As an anatomist as well as a biomechanist, I applaud that.

How do muscles work around joints? Alexander and Dimery 1985 figured out some of the key principles.

Alexander has researched areas as diverse as how fish swim, how dinosaurs ran, how elastic mechanisms make animal movement more efficient, how to model the form and function of animals (see his book “Optima for Animals” for optimization approaches he disseminated, typifying his elegant style of making complex maths seem simple and simple maths impressively powerful) and how animals walk and run, often as sole author. In these and other areas he has codified fundamental principles that help us understand how much in common many species have due to inescapable biomechanical constraints such as gravity, and how these principles can inspire robotic design or improvements in human/animal care such as prosthetics. Neill has also been a passionate science communicator, advising numerous documentaries on television.

~1990s Alexander, with model dinosaurs used to estimate mass and centre of mass.

Alexander’s “Dynamics of Dinosaurs” book, one of my favourites in my whole collection, is remarkably accessible in its communication of complex quantitative methods and data, which arguably has enhanced its impact on palaeontologists. Alexander’s other influences on palaeobiology include highly regarded reviews of jaw/feeding mechanics in fossil vertebrates (influencing the future application of finite element analysis to palaeontology), considerations of digestion and other aspects of metabolism, analysis of vertebral joint mechanics, and much more.  Additionally, he conducted pioneering analyses of allometric (size-related) scaling patterns in extant (and extinct; e.g. the moa) animals that continue to be cited today as valuable datasets with influential conclusions, by a wide array of studies including palaeontology—arguably, he helped compel palaeontologists to contribute more new data on extant animals via studies like these.

Neill Alexander did his MSc and PhD at Cambridge, followed by a DSc at the University of Wales and a Lecturer post at Bangor University, finally settling at the University of Leeds in 1969, where he remained until his retirement in 1999, although he maintains a Visiting Professorship there. I had the great pleasure of visiting him at his home in Leeds in 2014; a memory I will treasure forever, as I had the chance to chat 1-on-1 with him for some hours. He was Secretary of the Zoological Society of London through most of the 1990s, and President of the Society for Experimental Biology and the International Society of Vertebrate Morphologists, long championing the fertile association of biomechanics with zoology, evolutionary biology and anatomy. More recently, he was a main editor of Proceedings of the Royal Society B for six years.

Many people I’ve spoken to about Neill before have stories of how he asked a single simple question at their talk, poster or peer review stage of publication, and how much that excited them to have attracted his sincere interest in their research. They tend to also speak of how that question cut to the core of their research and gave them a facepalm moment where they thought “why didn’t I think of that?”, but how he also asked that question in a nice way that didn’t disembowel them. I think that those recalling such experiences with Neill would agree that he is a professorial Professor: a model of senior mentorship in terms of how he can advise colleagues in a supportive, constructive and warmly authoritative, scholarly way. For a fairly recent example of his uniquely introspective and concise style, see the little treasure “Hopes and Fears for Biomechanics”, a ~2005 lecture you can find here. I really like the “Fears” part. I share those fears- and maybe embody them at times…

My visit with R. McNeill Alexander in 2014.

Perhaps I have gushed enough, but I could go on! Professor R. McNeill Alexander, to summarise the prodigious extent of his research, is to biomechanics as Darwin is to biology as a whole. One could make a strong case for him being one of the most influential modern biologists. He is recognised for this by his status as a Fellow of the Royal Society (since 1987), and a CBE award, among many other accolades, accreditations and awards. And, if you’ve met him, you know that he is a gentle, humble, naturally curious and enthusiastic chap who instils a feeling of awe nonetheless, and still loves to talk about science and keeps abreast of developments in the field. And as the RVC is honouring Neill today, it is timely for me to honour him in this blog post. There can never be another giant in biomechanics like Alexander, and we should be thankful for the broad scientific shoulders upon which we are now, as a field, poised.

I hope others will chime in with comments below to share their own stories.




For about 3 years now I’ve used the #WIJF hashtag (an acronym for What’s In John’s Freezer) to organize my social media efforts on this blog. Over that time I became aware that “wijf” in Dutch can be taken as a derogatory term for women. And indeed, these days I do see people tweeting derogatory things with the #wijf hashtag, along with other, tamer uses like mine. I’ve come to the decision, albeit gradually and with much internal debate, to stop using that hashtag so I can avoid association with the sexist Dutch word. This post is about why, and what’s next.

Stomach-Churning Rating: Debatable, but 0/10 by the standard of the usual gory things on this blog; no images.

I don’t speak Dutch, but 25 million or so people do. This is a blog about morphological science, and the Dutch have had (and continue to have) a disproportionately strong influence on that field. I’m not claiming to be perfect when it comes to feminist issues, but I listen and I try and I care. My undergraduate tutelage in science was almost exclusively driven by female scientists– I never thought about that before but it’s true; at least 5 different major faculty influences at the University of Wisconsin! I work at a university where ~85% of the students are female (common today in vet schools). My research team has featured 9 out of 16 female postgraduate staff and students since 2004, and a lot of my collaborators and friends are scientists or science aficionados who happen to be female. I have good reason to care, and social media has helped to raise my awareness of important matters within and outside of science that I do care a lot about.

So, while I tend to hate to abandon words (or hashtags), preferring to fight for alternative meanings (e.g. the word “design” in evolutionary biology), and I am a stubborn git, the #WIJF hashtag and acronym are different, I’ve decided, and it’s time to use something else. Admittedly, #WIJF hasn’t been that important to this blog as hashtag or acronym– mainly just I use it, and any “brand name recognition” or other things surely arise more from the full name of the blog. So abandoning #WIJF is an inconvenience but not devastating to my blog. I see this move as (1) taking control of a situation where the benefits of staying with the hashtag/acronym are minimal and the harms, while of debatable magnitude, outweigh those minimal benefits in my view, and (2) demonstrating that I don’t tolerate or want to be associated with sexism or other discrimination. And I hope that this move might inspire others to reflect similarly on their own behaviour. Morphology, like any science, is for everyone, and this blog is meant to be a friendly place.

But a thing that has held me back, even though it is admittedly trivial in the grand scheme of things, is what hashtag/acronym to use henceforth? I turn that over to you, Freezerinos. I have no good ideas and so I am crowdsourcing. I need something short (not #Whatsinjohnsfreezer, probably– too long), something associated with the title of the blog, but also something dissimilar to the naughty word “wijf” and thus inoffensive… ideally inoffensive in the ~7000 languages of the world (!?!?). That might not leave many options! What should be in John’s blog’s hashtag?


If you’ve been working in science for long enough, perhaps not very long at all, you’ve heard about (or witnessed) scientists in your field who get listed as co-authors on papers for political reasons alone. They may be an uninvolved but domineering professor or a fellow co-worker, a friend, a political ally, an overly protective museum curator, or just a jerk of any stripe. I read this article recently and felt it was symptomatic of the harm that bad supervisors (or other collaborators) do to science, including damage to the general reputation of professors and other mentors. There are cultural differences not only between countries (e.g. more authoritarian, hierarchical cultures probably tolerate behaviour like this more) but also within institutions because of individual variation and local culture, tradition or other precedent. But this kind of honorary co-authorship turns my stomach—it is co-authorship bloat and a blight upon science. Honorary co-authorship should offend any reasonable scientist who actually works, at any level of the scientific hierarchy. So here’s my rant about it. Marshmallows and popcorn are welcomed if you want to watch my raving, but I hope this post stimulates discussion. A brief version of this post did just that on my personal Facebook account, which motivated me to finish this public post.

Stomach-Churning Rating: 0/10 but it may provoke indigestion if you’ve been a victim of co-author bloat.

At its root, honorary co-authorship (HONCO) shows disdain for others’ efforts in research. “I get something for nothing, unlike others.” It persists because of deference to pressures from politics (I need to add this co-author or they’ll cause me trouble), other social dynamics (this person is my buddy; here’s a freebie for them), careerism (oneself/ally/student needs to be on this paper to boost their CV and move up in their career; or else), or even laziness (a minimal publishable unit mentality- e.g. any minor excuse for being a co-author is enough). All of these reasons for tolerating it, and apathy about the status quo, keep the fires of HONCO burning. My feeling from my past 20 years of experience in academia is that, as science is getting increasingly complex and requiring more collaborators and co-authors, the fire is raging to a point where it is visibly charring the integrity of science too often to just keep quiet about it and hope it doesn’t cause much damage.

There’s a flip side to HONCO, too– it’s not that, as some might take the article above to imply, we all need to boot senior authors off of papers. Senior authors, like other collaborators, have a reason for existing that encompasses — but is not limited to — boosting the careers of those they mentor. We scientists all want the satisfaction of doing science, even if the nature of our involvement in research evolves (and varies widely). Part of that satisfaction comes from publishing papers as the coup de grace to each project, and it’s a privilege that should be open to being earned by anyone qualified. Indeed, if adding HONCOs to papers is fraud, then removing worthy contributors from papers can be seen as a similar kind of fraud (unless a result of mutually agreed I’ll-help-you-for-nothing generosity). The broader point is, authors should deserve to be authors, and non-authors should not deserve to be authors.

On that latter issue, I think back to my grad school days and how my mentors Kevin Padian, Rodger Kram, Bob Full and others often gave me valuable input on my early papers (~1998-2002) but never took co-authorship on them (exception: mentor Steve Gatesy’s vital role in our 2000 “abductors, adductors” paper). And frankly I feel a little bad now about that. Some of those mentors might have deserved co-authorship, but even when asked they declined, and just appeared in the Acknowledgements. It was the culture in my department at Berkeley, like many other USA grad schools at the time and perhaps now, that PhD students often did not put their supervisors on their papers and thus published single-author work. I see that less often today — but still varying among fields; e.g. in biomechanics, less single-authorship globally; in palaeontology and morphology, more single-authored work, but perhaps reducing overall. That is my off-the-cuff impression from the past >10 years.

I was shocked to see fewer (or often no) single-authored papers by lab colleagues once I moved to the UK to take up my present post– the prevalence of supervisors as senior authors on papers was starkly evident. On reflection, I now think that many of those multi-authored papers deserved to be such. It was not solo work and involved some significant steering, with key ideas originating from supervisors and thus constituting valid intellectual input. Yet I wondered then if it was a good thing or not, especially after hearing student complaints like waiting six months for comments from their supervisor on a manuscript. But this gets into a grey area that is best considered on a paper-by-paper basis, following clear criteria for authorship and contributions, and it involves difficulties inherent to some supervisor-supervisee relationships that I will not cover here. Much as supervisors need to manage their team, their team needs to manage them. ‘Nuff said.

Many institutions and journals have clear criteria for co-authorship, and publications have “author contributions” sections that are intended to make it clear who did what for a given paper – and thus whose responsibility any problems might be, too. HONCOs take credit without responsibility or merit, and are blatant fraud. I say it’s time we stand up to this disease. The criteria and contributions aspects of papers are part of the immune system of science that is there to help defend against academic misconduct. We need to work together to give that system a fighting chance.

There are huge grey areas in what criteria are enough for co-authorship. I have to wrestle with this for almost every paper I’m involved in– I am always thinking about whether I truly deserve to be listed on a paper, or whether others do. I’ve been training myself to think, and talk, about co-authorship criteria early in the process of research— that’s essential in avoiding bad blood later, when it’s time to write up the work and it may be too late for others to earn co-authorship. This is a critical process that is best handled explicitly and in writing, especially in larger collaborations. What will the topic of any future paper(s) be and who will be involved as co-authors, or not? It’s a good agenda item for research meetings.

There are also grey areas in author contributions. How much editing of a paper is enough for co-authorship justification? Certainly not just spellchecking or adding comments saying “Great point!”, although both can be a bit helpful. Is funding a study a criterion? Sometimes– how much and how directly/indirectly did the funding help? Is providing data enough? Sometimes. In these days of open data, it seems like the data-provision criterion, part of the very hull that science floats upon, is weakening as a justification for co-authorship. It is becoming increasingly common to cite others’ papers for data, provide little new data oneself, and churn out papers without those data-papers’ authors involved. And that’s a good thing, to a degree. It’s nicer to invite published-data-providers on board a paper as collaborators, and they can often provide insight into the nature (and limitations or faults!) of the data. But adding co-authors can easily slide down the slippery slope of hooray-everyone’s-a-co-author (e.g. genetics papers with 1000+ co-authors, anyone?). I wrote up explicit co-authorship criteria here (Figshare login needed; 2nd pdf in the list) and here (Academia.edu login needed) if you’re curious how I handle it, but standards vary. Dr. William Pérez recently shared a good example of criteria with me; linked here.

In palaeontology and other specimen-based sciences, we get into some rough terrain — who collected the fossil (i.e. was on that field season and truly helped), identified it, prepared and curated it, published on it, or otherwise has “authority” over it, and which of them if any deserve co-authorship? I go to palaeontology conferences every year and listen over coffee/beers to colleagues complain about how their latest paper had such-and-such (and their students, pals, etc.) added onto the paper as HONCOs. Some museums or other institutions even have policies like this, requiring external users to add internal co-authors as a strong-arm tactic. An egregious past example: a CT-scanning facility I used once, and never again, even had the gall to call their mandatory joint-authorship policy for usage “non-collaborative access”… luckily we signed no such policy, and so we got our data, paid a reasonable fee for it, and had no HONCOs. Every time I hear about HONCOs, I wonder “How long can this kind of injustice last?” Yet there’s also the reality that finding and digging up a good field site or specimen(s), or analogous processes in science, takes a lot of time and effort and you don’t want others prematurely jumping your claim, which can be intellectual property theft, a different kind of misconduct. And there is good cause for sensitivity about non-Western countries that might not have the resources and training of staff to earn co-authorship as easily; flexibility might be necessary to avoid imperialist pillaging of their science with minimal benefit to their home country.

Yet there’s hope for minimizing HONCO infections. A wise person once said (slightly altered) “I’d rather light a candle than curse the darkness.” Problems can have solutions, even though cultural change tends to be agonizingly slow. But it can be slower still, or retrograde, if met with apathy. What can we do about HONCOs? Can we beat the bloat? What have I done myself before and what would I do differently now? I’ll take an inward look here.

Tolerating HONCOs isn’t a solution. I looked back on my experiences with >70 co-authored papers and technical book chapters since 1998. Luckily there are few instances where I’d even need to contemplate if a co-author was a HONCO. Most scientists I’ve worked with have clearly pulled their weight on papers or understood why they’re not co-authors on a given paper. More about that below. In those few instances of possible HONCOs, about five papers from several years ago, some colleagues provided research material/data but never commented on the manuscripts or other aspects of the work. I was disgruntled but tolerated it. It was a borderline grey area and I was a young academic who needed allies, and the data/specimens were important. Since then, I’ve curtailed collaborations with those people. To be fair, there were some papers where I didn’t do a ton (but did satisfy basic criteria for co-authorship, especially commenting on manuscripts) and I got buried in Middle-Authorland, and that’s fine with me; it wasn’t HONCO hell I was in. There were a few papers where I played a minor role and it wasn’t clear what other co-authors were contributing, but I was comfortable giving them the benefit of the doubt.

One anti-HONCO solution was on a more recent paper that involved a person who I had heard was a vector of HONCO infection. I stated early on in an email that only one person from their group could be a co-author on the resulting paper; they could choose who it was, and that person would be expected to contribute something beyond basic data. They wrote back agreeing to it and (magnanimously) putting a junior student forward for it, who did help, although they never substantially commented on the manuscript so I was a little disappointed. But in the grand scheme of things, this strategy worked in beating the HONCO bloat. I may have cost myself some political points that might stifle future collaborations with that senior person, but I feel satisfied that I did the right thing under the constraints, and damn the consequences. Containment of HONCOs has its attendant risks, of course. HONCO-rejects might get honked off. Maybe one has to pick their battles and concede ground sometimes, but how much do the ethics of such concessions weigh?

Another solution I used recently involved my own input on a paper. I was asked to join a “meta-analysis” paper as a co-author but the main work had already been done for it, and conclusions largely reached. I read the draft and saw places where I could help in a meaningful way, so with trepidation I agreed to help and did. But during the review process it became clear that (1) there was too much overlap between this paper and others by the same lead author, which made me uncomfortable; and (2) sections that I had contributed to didn’t really meld well with the main thrust of the paper and so were removed. As a consequence, I felt like a reluctant HONCO and asked to be removed from the paper as a co-author, even though I’d helped write sections of the main text that remained in the paper (but this was more stylistic in my view than deeply intellectual). I ended up in the Acknowledgements, and was relieved about it. I am comfortable removing myself from papers in which I don’t get a sense of satisfaction that I did something meriting co-author status. But it’s easier for more senior researchers like me to do that, compared to the quandary that sink-or-swim early-career researchers may face.

More broadly in academia, a key matter at stake is the CVs of researchers, especially junior ones, which these days require more and more papers (even minimal publishable units) to be competitive for jobs, awards and funding. Adding HONCOs to papers does strengthen individuals’ CVs, but parasitically, by diluting the contributions of genuine co-authors. And it’s just unethical, full stop. One solution: It’s thus up to senior people to lead from the front, showing that they don’t accept HONCOs themselves and encouraging more junior researchers to do the same when they can—or even questioning the contributions that potential new staff/students made to past papers, if their CV seems bloated (but such questions probe dangerous territory!). Junior people, however, still need to make a judgement call on how they’ll handle HONCOs with themselves or others. There is the issue of reputation to think about; complicity in the HONCO pandemic at any career level might be looked upon unfavourably by others, and scientists can be as gossipy as any humans, so bad ethics can bite you back.

I try to revisit co-authorship and the criteria involved throughout a project, especially as we begin the writing-up stage, to reduce risks of HONCOs or other maladies. An important aspect of collaboration is to ensure that people that might deserve co-authorship get an early chance to earn it, or else are told that they won’t be on board and why. Then they are not asked for further input unless it is needed, which might shift the balance and put them back on the co-author list. Critically, co-authorship is negotiable and should be a negotiation. One should not take it personally if not on a paper, but should treat others fairly and stay open-minded about co-authorship whenever possible. This has to be balanced against the risk of co-authorship bloat. Sure, so-and-so might add a little to a paper, but each co-author added complicates the project, probably slows it down, and diminishes the credit given to each other co-author. So a line must be drawn at some point. Maybe some co-authors and their contributions are best saved for a future paper, for example. This is a decision that the first, corresponding and senior author(s) should agree on, in consultation with others. But I also feel that undergraduate students and technicians often are the first to get the heave-ho from co-author considerations, which I’ve been trying to avoid lately when I can, as they deserve as much as anyone to have their co-author criteria scrutinized.

The Acknowledgements section of a paper is there for a reason, and it’s nice to show up there when you’ve truly helped a paper out whether as quasi-collaborative colleague, friendly draft-commenter, editor, reviewer or in other capacities. It is a far cry from being a co-author but it also typically implies that those people acknowledged are not to blame if something is wrong with the paper. I see Acknowledgements as “free space” that should be packed with thank-you’s to everyone one can think of that clearly assisted in some way. No one lists Acknowledged status on their CVs or gets other concrete benefits from them normally, but it shows good social graces to use that space generously. HONCOs’ proper home, at best, is there in the Acknowledgements, safely quarantined.

The Author Contributions section of a paper is something to take very seriously these days. I used to fill it out without much thought, but I’ve now gotten in the habit of scrutinizing it (where feasible) with every paper I’m involved in. Did author X really contribute to data analysis or writing the paper? Did all authors truly check and approve the final manuscript? “No” answers there are worrying. It is good research practice nowadays to put careful detail into this section of every paper, and even to openly discuss it among all authors so everyone agrees. Editors and reviewers should also pay heed to it, and readers of papers might find it increasingly interesting to peruse that section. Why should we care about author contribution lists in papers? Well, sure, it’s interesting to know who did what, that’s the main reason! It can reveal what skills an individual has or lacks, or their true input on the project vs. what the co-author order implies.

But there’s a deeper value to Author Contributions lists that is part of the academic immune system against HONCOs and other fraud. Anyone contributing to a particular part of a paper should be able to prove their contribution if challenged. For example, if a problem was suspected in a section of a paper, any authors listed as contributing to that section would be the first points of contact to check with about that possible problem. In a formal academic misconduct investigation, those contributing authors would need to walk through their contributions and defend (or correct) their work. It would be unpleasant to be asked how one contributed to such work if one didn’t do it, or to find out that someone listed you as contributing when you didn’t, and wouldn’t have accepted it if you had known. Attention to detail can pay off in any part of a research publication.

Ultimately, beating the blight of HONCO bloat will need teamwork from real co-authors, at every career level. Too often these academic dilemmas are broken down into “junior vs. senior” researcher false dichotomies. Yes, there’s a power structure and status quo that we need to be mindful of. Co-authorships, however, require collaboration and thus communication and co-operation.

It’s a long haul before we might see real progress; the fight against HONCOs must proceed paper-by-paper. There are worse problems that science faces, too, but my feeling is that HONCOs have gone far enough and it’s time to push back, and to earn the credit we claim as scientific authors. Honorary co-authorship is a dishonourable practice that is very different from other “honorary” kudos like honorary professorships or awards. Complex and collaborative science can mean longer co-author lists, absolutely, but it doesn’t mean handing out freebies to chums, students needing a boost, or erstwhile allies. It means more care is needed in designing and writing up research. And it also means that science is progressing; a progress we should all feel proud of in the end.

Do you have abhorrent HONCO chronicles of your own (anonymized please; no lynch mobs here!) or from public record? Or ideas for handling HONCO hazards? Please share and discuss.


When does a science story “end”? Never, probably. Science keeps voyaging on eternally in search of truth, and few if any stories in science truly “end”. But as science communicators of any stripe, we routinely have to make decisions about when a certain story has run its course; when the PR ship has sailed and the news cycle has ended. As scientists, we’re lucky if we have to consider this and should be grateful if and when our science even attracts media/science communication attention. But the point of today’s post (perhaps an obvious one, but to my mind worth reflecting on) is that scientists are not slaves to the PR machine– as a flip side to the previous self/science-promotion post, at some point we may have to say “This story about our research is done (for now).”

I routinely reflect on this when the media covers my research; I always have. My recent experience with New Yorker and BBC coverage of our penguin gait research (with James Proffitt and Emily Sparkes as well as Dr. Julia Clarke) got me thinking about this issue a lot, and talking about it quite a bit with James. This morning, over coffee, this blog post was born from my thoughts on that experience.

Stomach-Churning Rating: 7/10 for some mushy penguin specimens; PR officers might also get queasy.

I was waiting for a call from BBC radio one night almost three weeks ago, to do a recorded interview about our penguin research-in-progress, when I woke up surrounded by paramedics and was whisked off to the hospital. I never did that interview or any further ones. I won’t go into what went wrong but it relates to this old story. I’m OK now anyway. But for me, the penguin story had mostly ended before it began. However, I’d already agreed with James that we’d try to avoid doing further media stories beyond the New Yorker one and the BBC one, which was due out the next day and for which James (fortuitously instead of me!) was doing a live appearance on BBC Breakfast (TV). I got a few emails and calls about this story while recuperating in my hospital bed, including the one below, and turned down interview invitations for obvious reasons, with no arguments from anyone– at first.


For Jerry, the story never should have started, apparently. We all have our opinions on what stories are worth covering. A “kind” email to receive in one’s hospital bed…

Then, after I recovered and got back to work, we kept getting a trickle of other interview/story invitations, and we declined them. Our PR office had suggested that we do a press release but we had already decided in advance not to, because we saw the story as just work-in-progress and I don’t like to do press releases about that kind of thing– except under extraordinary circumstances.

Finally, over a week after the BBC story aired, a major news agency wanted to film an interview with me about the story, which would get us (more) global coverage. They prefaced the invitation with the admission that they were latecomers to the story. Again I firmly said no; they could use existing footage but I could not do new interviews (these would inevitably take a half day or so of my time and energy). They wrote back saying they were going to go forward with the story anyway, and the journalist scolded me for not participating, saying that the story would have been so much better with a new film sequence of me in it. Maybe, but (1) I felt the story had run its course, (2) I’d had my hospitalization and a tragic death in the family, and (3) I was just returning, very jetlagged, from a short trip to the USA for other work. Enough already! I had other things to do. I didn’t follow up on what happened with that story. Maybe it didn’t even get published. I wasn’t left feeling very sympathetic.

Above: The BBC story

I kept thinking about being pressured and scolded by journalists, once in a while, for not joining in their news stories when their requests exceeded my own threshold for how much media coverage is enough. I first reached that personal threshold 13 years ago when I published my first big paper, in Nature, on “Tyrannosaurus was not a fast runner.” After ~3 weeks of insane amounts of media coverage, I was exhausted and pulled the plug, refusing more interviews. It felt good to exert control over the process, and I learned a lot from wielding that control. I still use it routinely.

But… I am of course passionate about science communication, I feel it is a great thing for science to be in the public eye, and I actually love doing science communication stories about research-in-progress– too much science is shown as an endpoint, not a process. Indeed, that’s why I do this blog and other social media, most of which is science-in-progress and my thoughts about it. So I was and still am thrilled that we got such positive, broad, good quality media attention for our penguin work, but it was plenty.


More sphenisciform science in progress: Penguin bodies awaiting dissection for our latest work. Unfortunately, years of formalin, freezers and thawing cycles had rendered most of the soft tissues useless for our work. Photos here and below are of Natural History Museum (Tring) specimens from the ornithology collection; most collected in Antarctica ~50 yrs ago.

Probably to many seasoned science communicators and scientists, my post’s message is blindingly obvious. Of course, scientists have rights — and responsibilities– in deciding how and when their research is covered. This is a negotiation process between their research team, their university, PR officers, journalists/media, funders and others involved– including the public. But less experienced scientists, and perhaps the public, might not realize how much control scientists do have over the amount of media attention they get. It’s easy to get caught up in a media frenzy surrounding one’s science (if you’re lucky enough to generate it at all) and feel the wind in one’s sails, thereby forgetting that you’re at the helm– you can decide when the journey is over (just be sure you communicate it diplomatically with others involved!).


This penguin did not survive the preservation process well; for whatever reason it had turned to mush, fit only for skeletonization. Gag. Its journey was definitely over.

As scientists, we have to balance enormous pressures and priorities: not just science communication and PR, but also our current main research, teaching, admin, personal lives, health, and so on. So we have to make hard decisions about how to balance these things. We should all reflect on what our dynamically shifting thresholds are for how much attention is enough, what priority level a given story has in our lives, and when the timing is right for any media attention. And as collaborative teams (more and more the norm in science) we should be discussing this issue and agreeing on it before it’s too late for us to exert much control.


One of our penguin chicks from the Natural History Museum, in a better state of preservation than the adults. Photo by James Proffitt.


Penguin chick’s right leg musculature in side view, exposing some decent muscles that gave us some useful data. Photo by James Proffitt.

Much like an over-played hit song, it’s not pretty when a science story gets over-milked and becomes too familiar and tedious, perhaps drawing attention away from other science that deserves attention. And we all will have our opinions on where that threshold of “too much attention” is. If we, as scientists, don’t think about those thresholds, we may end up rudderless or even wrecked on lonely islands of hype. I’ve seen scientists ostracized by their peers for over-hyping their work. It’s not fun. “Hey everybody, John is having a celery stick with peanut butter on it!” Celebrity culture doesn’t mean that everything scientists do deserves attention, or that any amount of attention is deserved and good.

A great thing about science is that, in principle, it is eternal– a good science story can live forever while other science is built upon it. Each chapter in that story needs an ending, but there’s always the next chapter waiting for us, and that’s what keeps science vital and riveting. As scientists, we’re all authors of that story, with a lot of power over its narrative. We can decide when to save parts of that narrative for later, when the time is right. With our penguin story, we’ve only just begun and I’m incredibly excited about where it goes next.

How about other scientists, journalists and other aficionados of science? What examples of scientists taking charge of how their research gets covered do you find particularly instructive?


How do I manage my team of 10+ researchers without losing my mind <ahem> or otherwise having things fall apart? I’m often asked this, as I was today (10 December; I ruminated before posting this as I worried it was too boring). Whether those undesirable things have truly not transpired is perhaps debatable, but I’m still here and so is my team and their funding, so I take that as a good sign overall. But I usually give a lame answer to that question of how I do it all, like “I have no secrets, I just do it.” Which is superficially true, but…

Today was that time of year at the RVC when I conduct appraisals of the performance and development of my research staff, a procedure I once found horridly awkward and overly bureaucratic. But now that it focuses more on being helpful (learning from past missteps and plotting future steps in an, ideally, realistic fashion) than on box-ticking or intimidation, I find the appraisals useful, at least for documenting progress and ensuring that teammates continue to develop their careers, not just crank out data and papers. By dissecting the year’s events, one comes to understand what happened, and what needs to happen in the next year.

The whole process crystallizes my own thoughts, by the end of a day of ~1 hour chats, on things like where there needs to be different coordination of team members in the coming year, or where I need to give more guidance, or where potential problems might arise. It especially helps us to sort out a timeline for the year… which inevitably still seems to go pear-shaped due to unexpected challenges, but we adapt and I think I am getting better myself at guessing how long research steps might take (pick an initial date that seems reasonable, move it back, then move it further back, then keep an eye on it).

Anyway, today the appraisals reminded me that I don’t have a good story for how I manage my team other than by doing these appraisals, which as an annual event are far from sufficient management but have become necessary. And so here I am with a post that goes through my approaches. Maybe you will find it useful or it will stimulate discussion. There are myriad styles of management. I am outlining here what facets of my style I can think of. There are parallels between this post and my earlier one on “success”, but I’ve tried to eliminate overlap.

Stomach-Churning Rating: 0/10 but no photos, long-read, bullet points AND top 10 list. A different kind of gore.

Successfully managing a large (for my field) research team leaves one with fewer choices than in a smaller team– in the latter case, you can be almost anywhere on the spectrum of hands-off vs. hands-on management and things may still go fine (or not). In the case of a large (and interdisciplinary) team, it’s impossible to be heavily hands-on, especially with so many external collaborations piled on top of it all. So a balance has to be struck somewhere. As a result, inevitably I am forced into a managerial role where, over the years, I’ve become less directly in touch with the core methods we use, in terms of many nitty-gritty details. I’ve had to adapt to being comfortable with (1) emphasizing a big picture view that keeps the concepts at the forefront, (2) taking the constraints (e.g. time, technology and methods, which I do still therefore have to keep tabs on) into account in planning, (3) cultivating a level of trust in each team member that they will do a good job (also see “loyalty” below), and (4) maintaining the right level of overall expertise within the group (including external collaborators) that enables us to get research done to our standard. To do these things, I’ve had to learn to do these other things, which happen to form a top 10 list but are in no order:

  1. Communicate regularly– I’m an obsessive, well-organized emailer, in particular. E-mail is how I manage most of my collaborations within and outside my team, and how I keep track of many of the details. (Indeed, collaborators who aren’t so consistent with email are difficult for me.) We do regular weekly team meetings in which we go around the table and review what we’re up to, and I do in-person chats or G+/Skype sessions fairly frequently to keep the ball rolling and everyone in synch. I now keep a notebook, or “memory cane” as I call it, to document meetings and to-do lists. Old school, but it works for me whereas my mental notebook started not to at times.
  2. Treat each person individually- everyone responds best to different management styles, so within my range of capabilities I vary my approach from more to less hands-off, or gentler vs. firmer. If people can handle robust criticism, or even if they can’t but they need to hear it, I can modulate to deliver that, or try to avoid crushing them. While I have high expectations of myself and those I work with, I also know that I have to be flexible because everyone is different.
  3. Value loyalty AND autonomy– Loyalty and trust matter hugely to me as a manager/collaborator. I believe in paying people back (e.g. expending a lot of effort in helping them move their career forward) for their dedicated work on my team, but also keeping in mind that I may need to make “sacrifices” (e.g. give them time off for side-projects I’m not involved in) to help them develop their career. I seek to avoid the extremes: fawningly helpless yes-men (rare, actually) or ~100% selfish what’s-in-it-for-me’s (not as rare but uncommon). Any good outcome can benefit a research manager even if they’re not a part of it, and on a big team it’s about what benefits everyone, not just the first or senior author, which is a tricky balance to attain.
  4. Prioritize endlessly– for me this means trying to keep myself from being the rate-limiting step in research. And I try to say “no” to new priorities if they don’t seem right for me. Sometimes it means getting little things done first to clear my desk (and mind) for bigger tasks; sometimes it means focusing on big tasks to the exclusion of smaller ones. Often it depends on my whims and energy level, but I try to keep those from harming others’ research. I make prioritized to-do lists and revisit them regularly.
  5. Allow chaos and failure/imperfection– This is the hardest for me. My mind does not work like a stereotypical accountant’s- I like a bit of disorder, as my seemingly messy office attests. Oddly, within that disorder I find order, as my brain is still usually good at keeping things organized. I do like a certain level of involvement in research, and I get nervous when I feel that sliding down toward “uninvolved”– loss of control in research can be scary. Some degree of detachment, stepping aside and allowing for time to pass and people to self-organize or come ask for help to avoid disaster (or celebrate success), is necessary, though, because I cannot be everywhere at once and nothing can be perfect. And of course, I myself fail sometimes, but with alertness comes recognition and learning. Furthermore, too much control is micromanagement, which hurts morale, and “disorder” allows the flexibility that can bring serendipitous results (or disaster). And speaking of disaster, one has to be mentally prepared for it, and able to take a deep breath and react in the right way when it comes. Which leads to…
  6. Think brutally clearly – Despite all the swirling chaos of a large research team and many other responsibilities of an academic and father and all that, I have taught myself a skill that I point to as a vital one. I can stop what I’m doing and focus very intensely on a problem when I need to. If it’s within my expertise to solve it, by clearing my head (past experience with kendo, yoga and karate has helped me to do this), I usually can do it if I enter this intensely logical, calm, objective quasi-zen-state. I set my emotions aside (especially if it is a stressful situation) and figure out what’s possible, what’s impossible, and what needs to be done, and find what I think is the best course of action quite quickly, then act on that decisively (but without dogmatic inflexibility). In such moments, I find myself thinking “What is the right thing to do here?” and I almost instinctively know when I can see that right thing. At that moment I get a charge of adrenaline to act upon it, which helps me to move on quickly. From little but hard decisions to major crises, this ability serves me very well in my whole life. I maintain a duality between that single-minded focus and juggling/anarchy, often able to quickly switch between those modes as I need to.
  7. Work hardest when I work best (e.g. good sleep and caffeination level, mornings)- and let myself slack off when I’m not in prime working condition. I shrug aside guilt if I am “slacking”– I can’t do everything and some things must fall by the wayside if I can’t realistically resolve them in whatever state of mind I’m in. The slacking helps me recharge and refresh– by playing a quick video game or checking social media or cranking up some classic Iron Maiden/modern Menzingers, I can return to my work with new gusto, or even inspiration, because…
  8. Spend a lot of time thinking while I “slack off”, in little bursts (e.g. while checking Twitter). I let my brain process things that are going on, let go of them when I’m not getting anywhere with them, and return to them later. This is harder than it sounds as I still stubbornly or anxiously get stuck on things if they are stressing me out or exciting me a lot. But I am progressively improving at this staccato-thinking skill.
  9. Points 7+8 relate to my view that there is no “work-life balance” for me—it is all my life, and there’s still a lot of time to enjoy the non-work parts, but it’s all a blend that lets me be who I am. I don’t draw lines in the sand. Those just tend to make one feel bad, one way or another.
  10. Be human– try to avoid acting like a distant, emotionless robotic manager and cultivate more of a family-like team. Being labelled with the word “boss” can turn my stomach. “Mentor” and “collaborator” are more like what I aim for. Being open about my own flaws, failures, and life helps.

Long post, yeah! 1 hour on a train commute lets the thoughts flow. I hope that if you made it this far you found it interesting.

What do you do if you manage a team, what works for you or what stories do you have of research management? Celebrations and post-mortems are equally welcome.



I awoke on the floor in the aisle of my United Airlines flight to Los Angeles, with three unfamiliar men crouched around me, bearing serious expressions as they looked down on my prone body.

I was next to my seat. My daughter was crying inconsolably in her seat next to mine, and my wife was calling to me with an urgent tone from the next seat over.

Gradually, as my confusion faded and the men let go of me (I’d been cursing them out, in mangled words because I had bitten my tongue), I became aware that I was in intense pain, I could not move much, and my wife’s words became clearer:

I’d had a seizure. And so our relaxing family holiday, which had only just begun, ended. And so my waking nightmare began.

Stomach-Churning Rating: 5/10; lots of Anatomy Fail CT/x-ray images and gruesome descriptions, and a photo of some bruising.

I was helped back into my seat as I regained my senses, noticed blood on me from my tongue, and learned that we were 2 hours away from L.A. Since I was acting more normal, and we were 5/6 of the way through our journey, there was no need to land the flight prematurely. I had fallen asleep while watching "22 Jump Street", about 1.5 hrs in, and that's when my seizure struck– much like the previous two seizures I'd had. Jonah Hill could be ruled out as a culprit, but going to sleep was an enabling factor. I got some over-the-counter painkillers and sat in a daze as time ticked by; we landed, and paramedics boarded the plane to whisk me off to the hospital with my family.

Two gruelling days and nights in a California hospital later, with my first night spent in a haze of clinical tests, begging for painkillers, yelling in pain every time I moved, and otherwise keeping my hospital roommate awake, the story became clearer: my seizure was so intense that I'd dislocated my right shoulder (unfortunately I'd not had much pain relief when the emergency room staff popped it back into my glenoid), probably dislocated my left shoulder too but then relocated it myself amidst my thrashing, and done this (cue Anatomy Fail images):

Left shoulder, with the offending greater tubercle/tuberosity of the humerus showing fracture(s).

Right shoulder x-ray, showing dislocation of the head of the humerus from the glenoid. Compare with above image– the humerus has been shifted down, and the shoulder joint is facing you. BUT no fractures, yay!

CT scan axial slice showing my spine (on left), then scapula with fractured coracoid process ("Bad") and displaced, fractured greater tubercle of humerus on right side ("V bad").

So, that explains most of the pain I was in.

What’s amazing is that the fractures most likely occurred purely via my own uncontrolled muscle contractions. All the karate and weight-training I’d been doing certainly had made me stronger in my rotator cuff muscles, which attach to the greater tubercle of the humerus. And with inhibition of my motoneurons turned off during my seizure, and both agonist and antagonist muscles near-maximally turned on, rapid motions of my shoulders by my spasming muscles would have dislocated my shoulders and then wrenched apart some of the bony attachments of those same muscles. I’m glad I don’t remember this happening.

I had also complained of pain in my neck, so they did a CT scan and x-ray there too:

X-ray: No broken neck. This is good. Just muscle strain, which soon faded.

The left shoulder injuries created a hematoma, or mass of blood beneath my skin, and soon that surfaced and began draining down my arm (via the lymphatic system under gravity’s pull), creating fascinating patterns:

Bruises migrating; no pain associated with these, just superficial drainage of old blood. This is tame, tame, tame compared to what my left ribcage looked like. I've spared you that.

But then, more fundamentally, there was the question: why a seizure? With no clear warning? As I've explained before, I'd had a stroke ~12 yrs ago that caused a similar seizure but with no injuries to my postcranial body. So a series of MRI and CT scans ensued (the radiation I've had from the latter is good fodder for a superhero/villain origin tale; Marvel, I'll await your call), and there was no clear damage or bleeding, and hence no stroke evident. Good news.

There are, however, at least two sizeable calcifications in my brain that are likely to be hardened scar tissue from my stroke. These may or may not have an identifiable effect on me or linkage with the seizure. Brain calcifications can happen for a variety of reasons, sometimes without clear ill effects.

Calcification in the parietal lobe of my cerebrum, from an axial CT scan slice. But no bleeding (which would appear as a zone of altered density/contrast).

That is the state of the evidence. I’ve since had what semblance of a L.A. family holiday I could manage, benefitting from a touching surge of support from my family, friends and colleagues that has kept me from sinking entirely into despair and has brought quite a few smiles.

The plane flight home was tense. We were in the same seats again and one of the flight attendants recognized us and came to chat, eager to learn what had happened after we left the plane a week ago. He was very nice and the doctors had given me an “OK to fly” letter. But it was an evening flight. I needed to sleep, yet it was clear to me that sleep was no longer the fortress of regenerative sanctity that I was used to it being. Sleep had taken on a certain menace, because it was a state in which I’d now had three seizures. Warily, I drifted off to sleep after having some hearty chuckles at the ending to “22 Jump Street”. And while it was not very restful slumber, it was the friendly kind of slumber that held no convulsive violence within its embrace. We returned home safely.

In a rush, I cancelled my attendance at the Society of Vertebrate Paleontology conference this week, turning over the symposium I'd convened to honour one of my scientific heroes, biomechanist R. McNeill Alexander (who also could not attend, due to ill health), to my co-convenors Eric Snively and Andreas Christian (by the accounts I heard, all went well). I missed out on a lot of fun and the joy of watching 2 of my PhD students present posters on preliminary results of their research. Thanks to social media and email, however, I've been able to catch a lot of the highlights and excitement from that conference in Berlin. That has helped distract me somewhat from other goings-on.

Meanwhile, I’ve been resting, doing a minimal amount of catching up with work, having a lot of meetings with doctors to arrange treatment, and pondering my situation– a lot.

I know this much: I’ve had two violent seizures in a month (the previous one was milder but still bad, and not a story I need to tell here), and so I’m now an epileptic, technically. When and if I’ll have another seizure is totally uncertain, but to boost the odds in my favour I’m on anti-convulsant drugs for a long time now.

In about half of seizure cases, it’s never clear what caused the seizures. What caused my 2002 stroke is somewhat clear, but the mechanism behind that remains a mystery, and my other health problems likewise have a lot of question marks regarding their genesis and mutually causative relationships, if any. The outcome of this new development in my medical history is likely to be: “maybe your brain calcifications and scar tissue helped stimulate your new seizures, but we can’t be sure. The treatment is the same regardless: stay on anti-convulsants for a while, try going off them later, and see if seizures manifest themselves again or not.” Brains are freaking complicated; when they go haywire it can be perplexing why.

As a scientist, I thrill at finding uncertainty in my research topics because that always means there is work left to be done. But in my own life outside of science, stubborn, independent, strong-willed control freak that I can certainly be at times, I am not such a fan of uncertainty. In both cases the goal is to minimize that uncertainty by gathering more information, but in our lives we often encounter unscalable walls of uncertainty that persist because of lack of knowledge regarding a problem that vexes us, especially a medical problem. We then can feel in a helpless state, adrift on the horizon of science, waiting for explorers to push that horizon further and with it advance our treatment or at least our insight into ourselves.

When the subject of that uncertainty is not some detached, objective, unthreatening, exciting research topic but rather ourselves and our own future constitution and mortality, it becomes deeply personal and disconcerting. I'm grateful that I don't have brain cancer or some other clear and present threat to my immediate vitality. Things could be a lot worse; I am here writing this blog, after all. I'll never forget being in the ambulance and thinking "this may be the end of it all; I might not last much longer", and choking out a farewell to my wife just in case things took a bad turn. I'm grateful for the amazing things that modern medicine and imaging techniques can do– these have saved my life so many times over, I cannot fathom how to quantify it. And I'm grateful for the people who have helped me through this so far. Fiercely independent as I may be, I can't face everything alone.

I am reminded of words I read recently by Baruch Spinoza, “The highest activity a human being can attain is learning for understanding, because to understand is to be free.” To further paraphrase him, we love truth because it is knowledge that enables us to stay alive- without it, we are flying blind and soon will crash. With the freedom it brings, we know the landscape of our own life and where the frontiers of uncertainty lie (“here be dragons”).


The past two weeks have been horrendous for me. I'd been feeling healthy and stronger than ever in many ways, and my life as of my birthday a month ago felt pretty damn good. But now everything has come crashing down in disaster, and I have been suffering from the realization, once again, of how vulnerable I am and how little I can control, and from the darkness that creeps in as the odds begin to stack up against our future lives. I am acutely aware now of where the "dragons" are.

I am taking one important step forward, though, in wresting life back onto the rails again- this week I undergo surgery to put my left shoulder back together. While that’s scary, to be sliced open and have my rotator cuff and bones carpentered back where they should be, I know I’m in good hands with a top UK shoulder surgeon and methods that are tried-and-true. The risks are small, although the recovery time will be long. There won’t be any hefting of big frozen elephant feet in my research soon, not for me, and so my enjoyable anatomy studies are going to have to change their track for coming months while I regain my strength and rely on others’ help.

(do you know the movie reference? I have a new empathy for Ash.)

Then we’re on to the frightening task of tackling the spasmodic-gorilla-in-the-room with neurologists. We’ll see where that journey leads.

One thing is certain: I’m still me and there’s still a lot of fight left in me, because I have a lot left to fight for, and people and knowledge to aid me in that fight. I can shoulder the burden of uncertainty in my life because I have all that. Off I go…

20 November UPDATE:

I’ve had surgery to put my greater tuberosity back where it belongs. Thanks to a skilled surgeon’s team, some sutures and nickel-titanium staples, I am back closer to my normal morphology and can begin recovering my (currently negligible) shoulder joint’s range of motion via some physiotherapy. Surgery went very well and I was in hospital for only ~30 hours, but the 9 days of recovery since have been brutally hard due to problems switching medications around. Today I got my stitches out and a beautiful x-ray showing plentiful healing; yay!

This is a slightly oblique anterior (front) view of my left shoulder/chest. Fracture callus means healing is working well! Four surgical staples (bright white thingies on upper RH side of image): forever now a part of my anatomy.



