In the past, I’ve often run into scientists who, when defending their published or other research, respond with something like this:
“Yeah those data (or methods) might be wrong but the conclusions are right regardless, so don’t worry.”
And I’ve said things like that before. However, I’ve since realized that this is a dangerous attitude, and in many contexts it is wrong.
If the data are guesses, as in the example I gave, then we might worry about them and want to improve them. The “data are guesses” context in which I set the prior post comes from Garland’s 1983 paper on the maximal speeds of mammals– you can download a pdf here if this link works (or Google it). Basically, the analysis shows that, as mammals get bigger, they don’t simply keep getting faster, as a naive linear analysis might suggest. Rather, at a moderate size of around 50-100 kg body mass or so, they hit a plateau of maximal speed, and bigger mammals tend to move more slowly. However, all but a few of the data points in that paper are guesses, many coming from old literature. The African elephant data points, in particular, are excessively fast, and on a little blog-ish webpage from the early 2000s we chronicled the history of these data– it’s a fun read, I think. The most important, influential data plot from that paper by Garland is below, and I love it– this plot says a lot:
I’ve worried about the accuracy of those data points for a long time, especially as analyses keep re-using them– e.g. this paper, this one, and this one, by different authors. I’ve talked to several people about this paper over the past 20 years or so. The general feeling has been in agreement with Scientist 1 in the poll, or the quote above– it’s hard to imagine how the main conclusions of the paper would truly be wrong, despite the unavoidable flaws in the data. I’d agree with that statement still: I love that Garland paper after many years and many reads. It is a paper that is strongly related to hypotheses that my own research sets out to test. I’ve also tried to fill in some real empirical data on maximal speeds for mammals (mainly elephants; others have been less attainable), to improve data that could be put into or compared with such an analysis. But it is very hard to get good data on even near-maximal speeds for most non-domesticated, non-trained species. So the situation seems to be tolerable. Not ideal, but tolerable. Since 1983, science seems to have been moving slowly toward a better understanding of the real-life patterns that the Garland paper first inferred, and that is good.
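To make that pattern concrete, here is a minimal sketch of the kind of curve involved: fitting a quadratic polynomial to log-transformed speed and body mass, so that predicted speed rises with size, peaks, and then declines. The numbers below are invented for illustration; this is not Garland’s data or code.

```python
# Minimal sketch (illustrative only; NOT Garland's 1983 data or code):
# fit a quadratic in log-log space to capture a "speed rises, plateaus,
# then declines with body mass" pattern, and find the mass at peak speed.
import numpy as np

# Hypothetical (mass in kg, max speed in km/h) values, invented for illustration.
mass = np.array([0.02, 0.5, 5.0, 20.0, 70.0, 300.0, 1000.0, 4000.0])
speed = np.array([8.0, 30.0, 55.0, 65.0, 70.0, 60.0, 45.0, 30.0])

log_m, log_v = np.log10(mass), np.log10(speed)
b2, b1, b0 = np.polyfit(log_m, log_v, deg=2)  # log_v ~ b2*log_m**2 + b1*log_m + b0

# Vertex of the fitted parabola: the body mass at which predicted speed peaks.
log_m_peak = -b1 / (2.0 * b2)
print(f"Predicted peak speed at roughly {10**log_m_peak:.0f} kg body mass")
```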
But…
My poll wasn’t really about that Garland paper. I could defend that paper- it makes the best of a tough situation, and it has stimulated a lot of research (197 citations according to Google; seems low actually, considering the influence I feel the paper has had).
I decided to do the poll because thinking about the Garland paper’s “(educated) guesses as data” led me to think of another context in which someone might say “Yeah those data might be wrong but the conclusions are right regardless, so don’t worry.” They might say it to defend their own work, such as to deflect concerns that the paper might be based on flawed data or methods that should be formally corrected. I’ve heard people say this a lot about their own work, and sometimes it might be defensible. But I think we should think harder about why we would say such things, and if we are justified in doing so.
We may not just be making the best of a tough situation in our own research. Yes, indeed, science is normally wrong to some degree. A more disconcerting possibility is that our wrongs may be mistakes that others will propagate in the future. Part of the reasoning for being strict stewards of our own data is this: It’s our responsibility as scientists to protect the integrity of the scientific record, particularly of our own published research, because we may know it best. We’re not funded (by whatever source, unless we’re independently wealthy) just to further our own careers, although that’s important too, as we’re not robots. We’re funded to generate useful knowledge (including data) that others can use, for the benefit of the society/institution that funds us. All the more reason to share our critical data as we publish papers, but I won’t go off on that important tangent right now.
In the context described in the preceding paragraph and the overly simplistic poll, I’d tend to favour data over conclusions, especially if forced to answer the question as phrased. The poll reveals that, like me, most (~58%) respondents also would tend to favour data over conclusions (yes, biased audience, perhaps- social media users might tend to be more savvy about data issues in science today? Small sample size, sure, that too!). Whereas very few (~10%) would favour conclusions, in the context of the poll. The many excellent comments on the poll post reveal the trickier nuances behind the poll’s overly simplistic question, and why many (~32%) did not favour one answer over the other.
If you’ve followed this blog for a while, you may be familiar with a post in which I ruminated over my own responsibilities and the conundrums we face in work-life balance, personal happiness, and our desires to protect ourselves or judge/shame others. And if you’ve closely followed me on Twitter or Facebook, you may have noticed we corrected a paper recently and retracted another. So I’ve stuck to my guns lately, as I long have, to correct my team’s work when I’m aware of problems. But along the way I’ve learned a lot, too, about myself, science, collaboration, humanity, how to improve research practice or scrutiny, and the pain of errors vs. the satisfaction of doing the right thing. I’ve had some excellent advice from senior management at the RVC along the way, which I am thankful for.
I’ve been realizing I should minimize my own usage of the phrase “The science may be flawed but the conclusions are right.” That can be a more-or-less valid defence, as in the case of the classic Garland paper. But it can also be a mask (unintentional or not) that hides fear that past science might have real problems (or even just minor ones that nonetheless deserve fixing) that could distract one away from the pressing issues of current science. Science doesn’t appreciate the “pay no attention to the person behind the curtain” defence, however. And we owe it to future science to tidy up past messes, ensuring the soundness of science’s data.
We’re used to moving forward in science, not backward. Indeed, the idea of moving backward, undoing one’s own efforts, can be terrifying to a scientist– especially an early career researcher, who may feel they have more at risk. But it is at the very core of science’s ethos to undo itself, to fix itself, and then to move on forward again.
I hope that this blog post inspires other scientists to think about their own research and how they balance the priorities of keeping their research chugging along but also looking backwards and reassessing it as they proceed. It should become less common to say “Yeah those data might be wrong but the conclusions are right regardless, so don’t worry.” Or it might become more common to politely question such a response in others. As I wrote before, there often are no simple, one-size-fits-all answers for how to best do science. Yet that means we should be wary of letting our own simple answers slip out, lest they blind us or others.
Maybe this is all bloody obvious or tedious to blog readers but I found it interesting to think about, so I’m sharing it. I’d enjoy hearing your thoughts.
Coming soon: more Mystery Anatomy, and a Richard Owen post I’ve long intended to do.
Imagine this: two scientists (colleagues, if you’re a scientist) are arguing as follows. Say it’s an argument about a classic paper in which much of the data subjected to detailed statistical analyses are quantitative guesses, not hard measurements. This could be in any field of science.
Scientist 1: “Conclusions are what matter most in science. If the data are guesses, but still roughly right, we shouldn’t worry much. The conclusions will still be sound regardless. That’s the high priority, because science advances by ideas gleaned from conclusions, inspiring other scientists.”
Scientist 2: “Data are what matter most in science. If the data are guesses, or flawed in some other way, this is a big problem and scientists must fix it. That’s the high priority, because science advances by data that lead to conclusions, or to more science.”
Who’s right? Have your say in this anonymous poll (please vote first before viewing results!):
[Wordpress is not showing the poll on all browsers so you may have to click the link]
And if you have more to say and don’t mind being non-anonymous, say more in the Comments- can you convince others of your answer? Or figure out what you think by ruminating in the comments?
I’m genuinely curious what people think. I have my own opinion, which has changed a lot over the past year. And I think it is a very important question scientists should think about, and discuss. I’m not just interested in scientists’ views though; anyone science-interested should join in.
Even nine years later, I still keep thinking back to a day, early in my career as an academic faculty member based in England, that traumatized me. Today I’m going to share my story of that day. I feel ready to share it.
Stomach-Churning Rating: hmm that’s a tough call, but I’ll say 1/10 because it’s just photos of live crocs and such.
This day was part of a research trip that lasted a couple of weeks, and it was in Florida, not England, and little of that trip went well at first. It transpired almost exactly 9 years ago today; around 20 August 2005. I took two 2nd/3rd year undergraduate students and our lab technician with me to Florida, meeting up with Dr. Kent Vliet, an experienced crocodile specialist, to study the biomechanics of crocodile locomotion, a subject I’ve been slowwwwwwly working on since my PhD days (see recent related blog post here). We were funded by an internal grant from my university that was supposed to be seed money to get data to lay groundwork for a future large UK research grant.
Cuban crocodile adult relaxing in a nearby enclosure. Pound-for-pound, a scary croc, but these acted like puppies with their trainers.
I’m interested in why only some crocodylian species, of some sizes and age classes, will do certain kinds of gaits, especially mammal-like gaits such as bounding and galloping. This strongly hints at some kind of size-related biomechanical mechanism that dissuades or prevents larger crocs from getting all jiggy with it. And at large size, with few potential predators to worry about and a largely aquatic ambush predator’s ecology, why would they need to? Crocodiles should undergo major biomechanical changes in tune with their ecological shifts as they grow up. I want to know how the anatomy of crocodiles relates to these changes, and what mechanism underlies their reduction of athletic abilities like bounding. That’s the scientific motivation for working with animals that can detach limbs from your body. (The crocodiles we worked with initially on this trip were small (about 1 meter long) and not very dangerous, but they still would have done some damage if they’d chosen to bite us, and I’ve worked with a few really nasty crocs before.)
Me putting motion capture markers onto an uncooperative young Siamese crocodile.
We worked at Gatorland (near Orlando) with some wonderfully trained crocodiles that would even sit in your lap or under your chair, and listen to vocal commands. The cuteness didn’t wear off, but our patience soon did. First, the force platform we’d borrowed (from mentor Rodger Kram’s lab; a ~$10,000 piece of useful gear) and its digital data acquisition system wouldn’t work to let us collect our data. That was very frustrating, and even a very helpful local LabView software representative couldn’t solve all our problems. But at least we were able to start trying to collect data after four painstaking days of debugging while curious crocodiles and busy animal handlers waited around for us to get our act together. The stress level of our group was already mounting, and we had limited time plus plenty of real-life bugs (the bitey, itchy kind; including fire ants) and relentless heat to motivate us to get the research done.
Adorable baby Cuban crocodile.
Then the wonderfully trained crocodiles, as crocodiles will sometimes do, decided that they did not feel like doing more than a slow belly crawl over our force platform, at best. This was not a big surprise and so we patiently tried coaxing them for a couple of sweltering August days. We were working in their caged paddock, which contained a sloping grassy area, a small wooden roofed area, and then at the bottom of the slope was the crocodiles’ pond, where they sat and chilled out when they weren’t being called upon to strut their stuff for science. We didn’t get anything very useful from them, and then the weather forecast started looking ugly.
Hybrid Siamese crocodile in its pond in our enclosure, waiting to be studied.
We’d been watching reports of a tropical storm developing off the southeastern coast of Florida, and crossing our fingers that it would miss us. But it didn’t.
When the storm hit, we hoped to weather its edge while we packed up; we had decided we’d done our best, our time had run out, and we should move to our next site, the Alligator Farm and Zoological Park in St Augustine, where I’d worked a lot before with other Crocodylia. But the storm caught us off guard, too soon and too violently.
To give some context to the situation, for the previous several days the local croc handlers had told us stories of how lightning routinely struck this area during storms, and was particularly prone to hitting the fences on the park perimeter, which we were close to. There was a blasted old tree nearby that vultures hung out in, and they related how that blasting had been done by lightning. One trainer had been hit twice by (luckily glancing) blows from lightning hitting the fences and such.
Ominous onlooker.
The storm came with pounding rain and a lot of lightning, much of it clearly striking nearby- with almost no delay between flashes and thunder, and visible sky-to-ground bolts. We debated taking our forceplate out of the ground near the crocodile pond, because sensitive electrical equipment and rain don’t go well together, but this would take precious time. The forceplate was covered with a tarp to keep the rain off. I decided that, in the interest of safety, we needed to all seek shelter and let the forceplate be.
I’ll never forget the memory of leaving that crocodile enclosure and seeing a terrible sight. The crocodile pond had swiftly flooded and engulfed our forceplate. This flooding also released all the (small) crocodiles which were now happily wandering their enclosure where we’d been sitting and working before.
Another subject awaits science.
At that point I figured there was no going back. Lightning + deepening floodwater + electrical equipment + crocodiles = not good, so I weighed my team’s safety against our loaned equipment’s, and favoured the former.
We sprinted for cars and keepers’ huts, and got split up in the rain and commotion. As the rain calmed down, I ventured out to find the rest of the team. It turned out that amidst the havoc, our intrepid lab technician had marshalled people to go fetch the forceplate out from the flooded paddock, storm notwithstanding. We quickly set to drying it out, and during some tense hours over the next day we did several rounds of testing its electronics to see if it would still work. Nope, it was dead. And we still had over a week of time left to do research, but without our most useful device. (A forceplate tells you how hard animals are pushing against the ground, and, with other data such as those from our motion analysis cameras, how their limbs and joints function to support them.)
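As a rough illustration of how those two data streams work together (a deliberately simplified, hypothetical 2D sketch, not our actual processing pipeline), the force measured by the plate can be combined with a joint position tracked by the cameras to estimate the turning effect, or moment, that the ground reaction force exerts about that joint:

```python
# Simplified, hypothetical 2D example (not our actual analysis code):
# combine a forceplate's ground reaction force (GRF) with motion-capture
# positions to estimate the moment that the GRF exerts about a joint.
import numpy as np

grf = np.array([15.0, 180.0])               # GRF in newtons (horizontal, vertical); made-up values
centre_of_pressure = np.array([0.32, 0.0])  # where the GRF acts on the plate, in metres
ankle_joint = np.array([0.30, 0.08])        # joint centre from marker data, in metres; made-up values

r = centre_of_pressure - ankle_joint        # moment arm from the joint to the point of force application
# 2D cross product gives the external moment about the joint (N*m); its sign gives the rotation direction.
moment_about_ankle = r[0] * grf[1] - r[1] * grf[0]
print(f"External GRF moment about the ankle: {moment_about_ankle:.1f} N*m")
```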
We went on to St Augustine and got some decent data using just our cameras, for a wide variety of crocodiles, so the trip wasn’t a total loss. I got trapped by remnants of the storm while in Washington, DC and had to sleep on chairs in Dulles Airport overnight, but I got home, totally wrecked and frazzled from the experience.
That poorly-timed storm was part of a series of powerful storms that would produce Hurricane Katrina several days later, after we’d all left Florida. So we had it relatively easy.
I’m still shaken by the experience- as a tall person who grew up in an area with a lot of dangerous storms, I was already uneasy about lightning, feeling like I had a target on my back. But running from the lightning in that storm, after all the warnings we’d had about its bad history in this area, and how shockingly close the lightning was, leaves me almost phobic about lightning strikes. I’m in awe of lightning and enjoy thunderstorms, which I’ve seen few of since I left Wisconsin in 1995, but I now hate getting caught out in them.
The ill-fated forceplate and experimental area.
Moreover, the damage to the forceplate (which we managed to pay to repair and return to my colleague) and the failure of the Gatorland experiments truly mortified me. I felt horrible and still feel ashamed. I don’t think I could have handled the situation much differently. It was just a shitty situation. That, and I wanted to show our undergrads a good time with research, yet what they ended up seeing was a debacle. I still have the emails I sent back to my research dean describing what happened, and they bring back the pain and stress when I re-read them. But then… there’s a special stupid part to this story.
I tried to lighten the mood one night shortly after the storm by taking the team out to dinner, having a few drinks and then getting up to sing karaoke in front of the restaurant. I sang one of my favourite J. Geils Band tunes– I have a nostalgic weakness for them– the song “Centerfold”. Not only did I sing it badly (my heart was not in it and my body was shattered) and try lamely to get the crowd involved (I think no one clapped or sang along), but in retrospect it was also a bad choice of song to be singing with two female undergrads there– I hadn’t thought about the song’s meanings when I chose to sing it; I just enjoyed it as a fun, goofy song that brought me back to innocent days of my youth in the early 1980s. But it is not an innocent song.
So ironically, today what I feel the most embarrassed about, thinking about that whole trip and the failed experiment, is that karaoke performance. It was incredibly graceless and ill-timed and I don’t think anyone enjoyed it. I needed to unwind; the stress was crushing me; but oh… it was so damn awkward. I think I wanted to show the team “I’m OK, I can still sing joyfully and have a good time even though we had a disastrous experiment and maybe nearly got electrified or bitten by submerged crocodiles or what-not, so you can relax too; we can move on and enjoy the rest of the trip” but in reality I proved to myself, at least, that I was not OK. And I’m still not OK about that experience. It still makes me cringe. It haunted me, and it took me many years to feel comfortable singing karaoke again.
It should have been a fun trip. I love working with crocodiles, but Florida is a treacherous place for field work (and many other things). I can’t say I grew stronger from this experience. There is no silver lining. It sucked, and I continually revisit it in my memory trying to find a lesson beyond “choose better times and better songs to sing karaoke with” or “stay away from floods, electricity and deadly beasts.”
So that wins, out of several good options, as the worst day(s) of my career that I can recall. I’ve had worse days in my life, but for uncomfortable science escapades this edges out some other contenders. Whenever I leave the lab to do research, I think of this experience and hope that I don’t see anything worse. It could have been much worse field work.
(Epilogue: the grants we’ve tried to fund for this crocodile gait project all got shot down, so it has lingered and we’ve done research on it gradually since, when we find time and students… And one of the students on this trip went on to do well in research and is finishing a PhD in the Structure & Motion Lab now, so we didn’t entirely scare them off science!)
I’ve been doing a series of career guidance sessions with my research team, and this past week we talked about how to structure a successful career path as a scientist. As part of that, I gave my thoughts on how to maximize the chances of that “success” (traditional definition: getting a decent permanent job as a researcher, and doing a good job at it) without knowingly being a jerk or insincere. This process led me to re-inspect my own career for insights — not that my behaviour has been perfect, but I do routinely reflect on the choices I make.
I asked myself, “What does success mean to me?” to see what my answer was today. That led to me writing up this story of my career path, as an example of the twists and turns that can happen in the life of a scientist. I originally intended to share this story just with my team, but then I decided to turn it into a full-on blog post, in my ongoing personal quest to open up and share my thoughts and experiences with others. For those who have read my advice to PhD students, there are some commonalities, but plenty of this is new.
Where my last post was partly about publicly exposing vulnerabilities in other scientists, this one is about privately finding one’s own vulnerabilities along with the strengths, and sharing them publicly. The story is about me, but the key points are more about how “success” can evolve in science (N=1 plus anecdotal observations of others).
Growing Up in Grad School
As an undergraduate student, I was clueless about my career until I applied to graduate school a second time. The first time I tried applying, I didn’t even know how to really go about it, or what I wanted to do beyond some sort of biology. Yet to my credit I was curious, creative, a swift learner with a great memory for science, and broadly educated in biology and other fields (thanks, parents and past teachers!). I read (and watched) “Jurassic Park”, read lots of Stephen Jay Gould, Darwin and palaeontology books, and just tried to actively learn all I could, reading compulsively. I even resolved to quit non-science reading for a few years, and stuck to that. I realized that a research career combining evolution and biomechanics, involving vertebrates and maybe fossils, was what interested me.
I got into grad school in 1995 and had a great project to study how dinosaurs moved, but I felt inadequate compared to my peers. So I dedicated myself even harder to reading and learning. I didn’t pass my first orals (qualifying exam; appraisal/defense) but that helped me to refocus even more resolutely on deep learning, especially to fill gaps in my knowledge of biomechanics methods that I’d later use. During this time I also learned website design and HTML code (mid-90s; early WWW!), working with several others on Berkeley’s UCMP website in my free time. I intensively networked with colleagues via email lists (the long-lived Dinosaur listproc) and at a lot of conferences, trying to figure out how science worked and how to go about my project. That was a powerful initial formative period.
It was a gruelling struggle and I’d had serious health problems (a narrow escape from cancer) around the same time, too. I frequently, throughout the 1990’s, doubted if I could make it in the field. I looked around me and could not see how I could become successful in what I wanted to do (marry biomechanics and evolutionary biology in stronger ways). I was so scared, so uncertain of my own work, that I didn’t know what to do—I had a project but had no clue how to really implement it. So two years passed in semi-paralysis, with little concrete science to show for it, and I gave a lot of *bad* internal seminars in Berkeley’s Friday biomechanics group. However, those bad seminars helped me to become a better speaker. I had a terrible fear of public speaking; on top of having little data, this experience was brutal for me. But I used it as practice, bent to the task of bettering myself.
A change in my career trajectory happened as my research slowly took root. I wrote some book chapters for a dinosaur encyclopedia in 1997, a simple paper describing a little dinosaur in 1998, then another paper on taxonomy published in 1999. [For those wanting to find out what any of these papers I mention are, they are on my Publications page, often with pdfs] These papers at least showed I could finish a research task; when I was younger I’d had some bad habits of not finishing work I started.
I visited a lot of museums and hung out with people there, socializing while learning about diverse fossils and their evolutionary anatomy, implementing what I’d learned from my own dissections and literature studies of living animals. This led to a poster (actually two big posters stacked atop each other; plotting the evolution of the reptilian pelvis and muscles) at a palaeontology meeting (SVP). This poster turned a few heads and I suppose convinced some that I knew something about bone and soft tissue anatomy.
Then in 1998, I did a 4-month visiting scholarship at Brown University with Steve Gatesy that had a big impact on my career: Steve helped me consolidate ideas about how anatomy related to function in dinosaurs, and how to interpret data from living animals (I did my first gait experiments, with guineafowl, which went sort of OK), and I loved Brown University’s EEB department environment. For once, I felt like a grown-up, as people started to listen to what I had to say. In retrospect, I was still just a kid in many other ways. I didn’t really achieve a lot of what Steve asked me to do; I was unfocused, but changing steadily.
In 1999, I gave a talk at SVP that was well received, based on that research with Gatesy, and then I gave it again at SICB. I had a few prominent scientists encouraging me to apply for faculty jobs (e.g., Beth Brainerd was very supportive)– this gave me a new charge of excitement and confidence. I finally began to feel like a real expert in my little area of science. That talk became our 2000 “Abductors, adductors…” paper in Paleobiology, which I still love for its integrative nature and broad, bold (but incompletely answered) questions. Yet when a respected professor at Berkeley told me before my University of Chicago faculty job interview “You act like a deer in the headlights too often,” I knew I had a long journey of self-improvement left. And a lot of that improvement just came with time– and plenty of mistakes.
Momentum continued to build for my career in 2000 as I took my anatomical work into more biomechanical directions and passed my orals. I gave an SVP Romer Prize (best student talk) presentation on my new T. rex biomechanical modelling work, and I won! I felt truly appreciated, not just as an expert but as an emerging young leader in my research area. I’ll never forget the standing ovation at the award announcement in Mexico City—seeing people I saw as famous and amazing get up and cheer for me was such a rush! Then I published two lengthy anatomical papers in Zool J Linn Soc in 2001, which still are my most cited works — even more than some of my subsequent Nature papers.
Evolution: Postdoc to Faculty
Also in 2001, I was awarded an NSF postdoc at Stanford to do exactly what I’d long wanted to do: build detailed biomechanical models of dinosaurs, using the anatomical work I’d done before. That was it: I saw evidence that I had “made it”. But it took about six years, until toward the end of my PhD, to truly feel this way most of the time, and in some ways this feeling led to youthful overconfidence and brashness that I later had to try to shed. I feel fortunate that the rest of my career went more smoothly. I doubt I could have endured another six years of struggling as I did during my PhD. But it wasn’t easy, either. During my postdoc I had to force my brain to think like a mechanical engineer’s, and that was a difficult mental struggle.
The year 2002 became a wild ride for me.
First, my T. rex “not a fast runner” paper got published in Nature, and I was thrown into the limelight of the news media for two weeks or so. Luckily I was ready for the onslaught — one of my mentors, Bob Full, warned me, “This will be huge. Prepare!” I handled it well and I learned a lot about science communication in the process.
Shortly after that publication, just before my wedding’s bachelor party, I developed terrible leg blood clots and had to cancel my party—but I recovered in time for the wedding, which was a fantastic event on a California clifftop. I enjoyed a good life and seemed healthy again. I kept working hard, I got my second paper accepted at Nature on bouncy-running elephants, and then…
Then I had a stroke, just before that Nature paper got published.
Everything came crashing to a halt and I had to think about what it all meant—these were gigantic life-and-death questions to face at age 31! Luckily, I recovered without much deficit at all, and I regained my momentum with renewed stubborn dedication and grit, although my recovery took many months and took its toll on my psyche. I’ve told this story before in this post about my brain.
I started seeing therapists to talk about my struggles, which was a mixed blessing: I became more aware of my personality flaws, but also more aware of how many of those flaws wouldn’t change. I’m still not sure if that was a good thing but it taught me a lot of humility, which I still revisit today. I also learned to find humour and wonder in the dark times, which colours even this blog.
In winter of 2003 I went to a biomechanics symposium in Calgary, invited by British colleague Alan Wilson. Later that spring, Alan encouraged me to apply for an RVC faculty job (“you’ll at least get an interview and a free trip to London”), which I said no to (vet school and England move didn’t seem right to me), but later changed my mind after thinking it over.
I got the RVC job offer the day before my actual job talk (luckily colleague David Polly warned me that things like this happened fast in the UK, unlike the months of negotiation in the USA!). I made the move in November 2003 and the rest was hard work, despite plenty of mistakes and lessons learned, that paid off a lot career-wise. If I hadn’t taken that job I’d have been unemployed, and I had postdoc fellowships and faculty job applications that got rejected in 2002-2003, so I was no stranger to rejection. It all could have gone so differently…
But it wasn’t a smooth odyssey either—there were family and financial struggles, and I was thousands of miles away while my mother succumbed to Alzheimer’s and my father swiftly fell victim to cancer, and I never was 100% healthy and strong after my troubles in 2002. Even in the late 2000’s, I felt inadequate and once confided to a colleague something like “I still feel like a postdoc here. I’m a faculty member and I don’t feel like I’ve succeeded.”
Since then, I’ve achieved some security that has at last washed that feeling away. That was a gradual process, but I think the key moment I realized that “I’ll be OK now” was in 2010 when I got the call, while on holiday in Wales (at the time touring Caernarfon Castle), informing me that my promotion to full Professor was being approved. It was an anticlimactic moment because that promotion process took 1 year, but it still felt great. It felt like success. I’ll never earn the “best scientist ever” award, so I am content. I don’t feel I have something big left to prove to myself in my career, so I can focus on other things now. It “only” took 15 or so years…
Ten Lessons Learned
When I look back on this experience and try to glean general lessons, my thoughts are:
1) Socializing matters so much for a scientific career. “Networking” needn’t be a smarmy or supercilious exercise, either; in fact, that kind of insincerity can backfire and really hurt one’s reputation. I made a lot of friends early on — some of my best friends today are scientist colleagues. Many of these have turned into collaborators. Making friends in science is a win-win situation. Interacting with fellow scientists is one of the things I have always enjoyed most about science. Never has it been clearer to me how important the human element of science is. Diplomacy is a skill I never expected to use much in science, but I learned it through a lot of experience, and now I treasure it.
2) Developing a thicker skin is essential, but being vulnerable helps, too. Acting impervious just makes you seem inhuman and isolates you. Struggling is natural and helped me endure the tough times that came along with the good times, often in sharp transition. Science is freaking hard as a career. Even with all the hard work, nothing is guaranteed. Whether you’re weathering peer review critiques, politics, or health or other “life problems”, you need strength, whether it comes from inside you or from those around you. Embrace that you won’t be perfect but strive to do your best despite that. Regret failures briefly (be real with yourself), learn from them and then move on.
3) Reading the literature can be extremely valuable. So many of my ideas came from obsessive reading in diverse fields, and tying together diverse ideas or finding overlooked/unsolved questions and new ways to investigate them. I can’t understand why some scientists intentionally don’t try to read the literature (and encourage their students to follow this practice!), even though it is inevitable to fall behind the literature; you will always miss relevant stuff. I think it can only help to try to keep up that scholarly habit, and it is our debt to past scientists as well as our expectation of future ones—otherwise why publish?
4) I wish I had learned even more skills when I was younger. It is so hard to find time and energy now to learn new approaches. This inevitably leads to a researcher becoming steadily less of a master of research methods and data and more of a manager of research. So I am thankful for having the wisdom accumulated via trial-and-error experiences to keep me relevant and useful to my awesome team. That sharing of wisdom and experience is becoming more and more enjoyable to me now.
5) Did I “succeed” via hard work or coincidence? Well, both—and more! I wouldn’t have gotten here without the hard work, but I look back and I see a lot of chance events that seemed innocent at the time, but some turned out to be deeply formative. Some decisions I made look good in retrospect, but they could have turned out badly, and I made some bad decisions, too; those are easy to overlook given that the net result has been progress. Nothing came easily, overall. And I had a lot of help from mentors, too; Kevin Padian and Scott Delp in particular. Even today, I would not say that my career is easy, by any stretch. I still can find it very draining, but it’s so fun, too!
6) Take care of yourself. I’ve learned the hard way that the saying “At least you have your health” is profoundly wise. I try to find plenty of time now to stop, breathe and observe my life, reflecting on the adventures I’ve had so far. The feelings evoked by this are rich and complex.
7) If I could go back, I’d change a lot of decisions I made. We all would. But I’m glad I’ve lived the life I’ve lived so far. At last, after almost 20 years of a career in science, I feel mostly comfortable in my own skin, more able to act rather than be frozen in the headlights of adversity. I know who I am and what I cannot be, and things I need to work on about myself. In some ways I feel more free than I’ve felt since childhood, because the success (as I’ve defined it in my life) has given me that freedom to try new things and take new risks, and I feel fortunate for that. I think I finally understand the phrase “academic freedom” and why it (and tenure) are so valuable in science today, because I have a good amount of academic freedom. I still try to fight my own limits and push myself to improve my world—the freedom I have allows this.
8) When I revisit the question of “what does success mean to me?” today I find that the answer is to be able to laugh, half-darkly, at myself—at my faults, my strengths, and the profound and the idiotic experiences of my life. I’ve found ways to both take my life seriously and to laugh at myself adrift in it. To see these crisply and then to embrace the whole as “this is me, I can deal with that” brings me a fresh and satisfying feeling.
9) Share your struggles — and successes — with those you trust. It helps. But even just a few years ago, the thought of sharing my career’s story online would have scared me.
10) As scientists we hope for success in our careers to give us some immortality of sorts. What immortality we win is but echoes of our real lives and selves. So I seek to inject some laughter into those echoes while revelling in the amazing moments that make up almost every day. I think it’s funny that I became a scientist and it worked out OK, and I’m grateful to the many who helped; no scientist succeeds on their own.
A major aspect of a traditional career in science is to test the hypothesis that you can succeed in a career as a scientist, which is a voyage of self-discovery, uncovering personal vulnerabilities and strengths. I feel that I am transitioning into whatever the next part of my science career will be; in part, to play a psychopomp role for others taking that voyage.
That’s my story so far. Thanks for sticking with it until the end. Please share your thoughts below.
This post is solely my opinion; not reflecting any views of my coauthors, my university, etc, and was written in my free time at home. I am just putting my current thoughts in writing, with the hope of stimulating some discussion. My post is based on some ruminations I’ve had over recent years, in which I’ve seen a lot of change happening in how science’s self-correcting process works, and the levels of openness in science, which are trends that seem likely to only get more intense.
That’s what this post ponders- where are we headed and what does it mean for scientists and science? Please stay to the end. It’s a long read, but I hope it is worth it. I raise some points at the end that I feel strongly about, and many people (not just scientists) might also agree with or be stimulated to think about more.
I’ve always tried to be proactive about correcting my (“my” including coauthors where relevant) papers, whether it was a publisher error I spotted or my/our own; I’ve done at least 5 such published corrections. Some of my later papers have “corrected” (by modifying and improving the methods and data) my older ones, to the degree that the older ones are almost obsolete. A key example is my 2002 Nature paper on “Tyrannosaurus rex was not a fast runner“- a well-cited paper that I am still proud of. I’ve published (with coauthors aplenty) about 10 papers since then that explore various strongly related themes, the accuracy of assumptions and estimates involved, and new ways to approach the 2002 paper’s main question. The message of that paper remains largely the same after all those studies, but the data have changed to the extent that it would no longer be viable to use them. Not that this paper was wrong; it’s just we found better ways to do the science in the 12 years since we wrote it.
I think that is the way that most of science works; we add new increments to old ones, and sooner or later the old ones become more historical milestones for the evolution of ideas than methods and data that we rely on anymore. And I think that is just fine. I cannot imagine it being any other way.
If you paid close attention over the past five months, you may have noticed a kerfuffle (to put it mildly) raised by former Microsoft guru/patent aficionado/chef/paleontologist Nathan Myhrvold over published estimates of dinosaur growth rates since the early 2000’s. The paper coincided with some emails to authors of the papers in question, and some press attention, especially in the New York Times and the Economist. I’m not going to dwell on the details of what was right or wrong about this process, especially the scientific nuances behind the argument of Myhrvold vs. the papers in question. What happened happened. And similar things are likely to happen again to others, if the current climate in science is any clue. More about that later.
But one outcome of this kerfuffle was that my coauthors and I went through (very willingly; indeed, by my own instigation) some formal procedures at our universities for examining allegations of flaws in publications. And now, as a result of those procedures, we issued a correction to this paper:
Hutchinson, J.R., Bates, K.T., Molnar, J., Allen, V., Makovicky, P.J. 2011. A computational analysis of limb and body dimensions in Tyrannosaurus rex with implications for locomotion, ontogeny, and growth. PLoS One 6(10): e26037. doi: 10.1371/journal.pone.0026037 (see explanatory webpage at: http://www.rvc.ac.uk/SML/Projects/3DTrexGrowth.cfm)
The paper correction is here: http://www.plosone.org/article/info%3Adoi/10.1371/journal.pone.0097055. Our investigations found that the growth rate estimates for Tyrannosaurus were not good enough to base any firm conclusions on, so we retracted all aspects of growth rates from that paper. The majority of the paper, about estimating body mass and segment dimensions (masses, centres of mass, inertia) and muscle sizes as well as their changes through growth and implications for locomotor ontogeny, still stands; it was not in question.
For those (most of you!) who have never gone through such a formal university procedure checking a paper, my description of it is that it is a big freakin’ deal! Outside experts may be called in to check the allegations and the paper, you have to share all your data with them and go through the paper in great detail, retracing your steps, and this takes weeks or months. Those experts may need to get paid for their time. It is embarrassing even if you didn’t make any errors yourself and even if you come out squeaky clean. And it takes a huge amount of your time and energy! My experience started on 16 December, reached a peak right around Xmas eve (yep…), and finally we submitted our correction to PLoS and got editorial approval on 20 March. So it involved three months of part-time but gruelling dissection of the science, and long discussions of how to best correct the problems. Many cooks! I have to admit that personally I found the process very stressful and draining.
Next time you wonder why science can be so slow at self-correction, this is the reason. The formal processes and busy people involved mean it MUST be slow– by the increasingly speedy standards of modern e-science, anyway. Much as doing science can be slow and cautious, re-checking it will be. Should be?
My message from that experience is to get out in front of problems like this, as an author. Don’t wait for someone else to point them out. If you find mistakes, correct them ASAP. Especially if they (1) involve inaccurate data in the paper (in text, figures, tables, whatever), (2) would lead others to be unable to reproduce your work in any way, even if they had all your original methods and data, or (3) alter your conclusions. It is far less excruciating to do it this way than to have someone else force you to do it, which will almost inevitably involve more formality, deeper probing, exhaustion and embarrassment. And there is really no excuse that you don’t have time to do it. Especially if a formal process starts. I can’t even talk about another situation I’ve observed, which is ongoing after ~3 years and is MUCH worse, but I’ve learned more strongly than ever that you must demonstrate you are serious and proactive about correcting your work.
I’ve watched other scientists from diverse fields experience similar things– I’m far from alone. Skim Retraction Watch and you’ll get the picture. What I observe both excites me and frightens me. I have a few thoughts.
1) The drive to correct past science is a very good development and it’s what science is meant to be about. This is the most important thing!
2) The digital era, especially trends for open access and open data for papers, makes corrections much easier to discover and do. That is essentially good, and important, and it is changing everything about how we do science. Just watch… “we live in interesting times” encapsulates the many layers of feelings one should react with if you are an active researcher. I would not dare to guess what science will be like in 20 years, presumably when I’ll be near my retirement and looking back on it all!
3) The challenge comes in once humans get involved. We could all agree on the same lofty principles of science and digital data but even then, as complex human beings, we will have a wide spectrum of views on how to handle cases in general, or specific cases.
This leads to a corollary question– what are scientists? And that question is at the heart of almost everything controversial about scientific peer review, publishing and post-publication review/correction today, in my opinion. To answer this, we need to answer at least two sub-questions:
1–Are we mere cogs in something greater, meant to hunker down and work for the greater glory of the machine of science?
(Should scientists be another kind of public servant? Ascetic monks?)
2–Are we people meant to enjoy and live our own lives, making our own choices and value judgements even if they end up being not truly optimal for the greater glory of science?
(Why do we endure ~5-10 years of training, increasingly poor job prospects/security, dwindling research funds, mounting burdens of expectations [e.g., administrative work, extra teaching loads, all leading to reduced freedoms] and exponentially growing bureaucracies? How does our experience as scientists give meaning to our own lives, as recompense?)
The answer is, to some degree, yes to both of the main questions above, but how we reconcile these two answers is where the real action is. And this brew is made all the spicier by the addition of another global trend in academia: the corporatization of universities (“the business model”) and the concomitant, increasing concern of universities about public image/PR and marketing values. I will not go any further with that; I am just putting it out there; it exists.
The answer any person gives will determine how they handle a specific situation in science. You’ve reminded your colleague about possible errors in their work and they haven’t corrected it. Do you tell their university/boss or do you blog and tweet about it, to raise pressure and awareness and force their hand? Or do you continue the conversation and try to resolve it privately at any cost? Is your motive truly the greater glory of science, or are you a competitive (or worse yet, vindictive or bitter) person trying to climb up in the world by dragging others down? How should mentors counsel early career researchers to handle situations like this? Does/should any scientist truly act alone in such a regard? There may be no easy, or even mutually exclusive, answers to these questions.
We’re all in an increasingly complex new world of science. Change is coming, and what that change will be like or when, no one truly knows. But ponder this:
Open data, open science, open review and post-publication review, in regards to correcting/retracting past publications: how far down the rabbit hole do we go?
The dinosaur growth rates paper kerfuffle concerned numerous papers that date back to earlier days of science, when traditions and expectations differed from today’s. Do we judge all past work by today’s standards, and enforce corrections on past work regardless of the standards of its time? If we answer some degree of “yes” to this, we’re in trouble. We approach a reductio ad absurdum: we might logic ourselves into a corner where that great machine of science is directed to churn up great scientific works of their time. Should Darwin’s or Einstein’s errors be corrected or retracted by a formal process like those we use today? Who would do such an insane thing? No one (I hope), but my point is this: there is a risk that is carried in the vigorous winds of the rush to make science look, or act, perfect, that we dispose of the neonate in conjunction with the abstergent solution.
There is always another way. Science’s incremental, self-correcting process can be carried out quite effectively by publishing new papers that correct and improve on old ones, rather than dismantling the older papers themselves. I’m not arguing for getting rid of retractions and corrections. But, where simple corrections don’t suffice, and where there is no evidence of misconduct or other terrible aspects of humanity’s role in science, perhaps publishing a new paper is a better way than demolishing the old. Perhaps it should be the preferred or default approach. I hope that this is the direction that the Myhrvold kerfuffle leans more toward, because the issues at stake are so many, so academic in nature, and so complex (little black/white and right/wrong) that openly addressing them in substantial papers by many researchers seems the best way forward. That’s all I’ll say about that.
I still feel we did the right thing with our T. rex growth paper’s correction. There is plenty of scope for researchers to re-investigate the growth question in later papers. But I can imagine situations in which we hastily tear down our or others’ hard work in order to show how serious we are about science’s great machine, brandishing lofty ideals with zeal– and leaving unfairly maligned scientists as casualties in our wake. I am reminded of outbursts over extreme implementations of security procedures at airports in the USA, which were labelled “security theatre” for their extreme cost, showiness and inconvenience, with negligible evidence of security improvements.
The last thing we want in science is an analogous monstrosity that we might call “scientific theatre.” We need corrective procedures for and by scientists, that serve both science and scientists best. Everyone needs to be a part of this, and we can all probably do better, but how we do it… that is an interesting adventure we are on. I am not wise enough to say how it should happen, beyond what I’ve written here. But…
A symptom of scientific theatre might be a tendency to rely on public shaming of scientists as punishment for their wrongs, or as encouragement for them to come clean. I know why it’s done. Maybe it’s the easy way out; point at someone, yell at them in a passionate tone backed up with those lofty ideals, and the mob mentality will back you up, and they will be duly shamed. You can probably think of good examples. If you’re on social media you probably see a lot of it. There are naughty scientists out there, much as there are naughty humans of any career, and their exploits make a good story for us to gawk at, and often after a good dose of shaming they seem to go away.
But Jon Ronson‘s ponderings of the phenomenon of public shaming got me thinking (e.g., from this WTF podcast episode; go to about 1 hr 9 min): does public shaming belong in science? As Ronson said, targets of severe public shaming have described it as “the worst pain ever”, and sometimes “there’s no recourse” for them. Is this the best way to live together in this world? Is it really worth it, for scientists to do to others or to risk having done to them? What actually are its costs? We all do it in our lives sometimes, but it deserves introspection. I think there are lessons from the dinosaur growth rates kerfuffle to be learned about public shaming, and this is emblematic of problems that science needs to work out for how it does its own policing. I think this is a very, very important issue for us all to consider, in the global-audience age of the internet as well as in context of the intense pressures on scientists today. I have no easy answers. I am as lost as anyone.
What do you think?
EDIT: I am reminded by comments below that 2 other blog posts helped inspire/coagulate my thoughts via the alchemy of my brain, so here they are:
This post was just published yesterday in a shorter, edited form in The Conversation UK, with the addition of some of my latest thoughts and the application of the editor’s keen scalpel. Check that out, but check this out too if you really like the topic and want the raw original version! I’ve changed some images, just for fun. The text here is about 2/3 longer.
Recently, the anatomy of animals has been coming up a lot, at least implicitly, in science news stories and internet blogs. Anatomy, if you look for it, is everywhere in organismal and evolutionary biology. The study of anatomy has undergone a renaissance lately, in a dynamic phase energized by new technologies that enable new discoveries and spark renewed interest. It is the zombie science, risen from what some had assumed was its eternal grave!
Stomach-Churning Rating: 4/10; there’s a dead elephant but no gore.
My own team has re-discovered how elephants have a false “sixth toe” that has been a mystery since it was first mentioned in 1710, and we’ve illuminated how that odd bit of bone evolved in the elephant lineage. This “sixth toe” is a modified sesamoid kind of bone; a small, tendon-anchoring lever. Typical mammals just have a little nubbin of sesamoid bone around their ankles and wrists that is easily overlooked by anatomists, but that evolution sometimes co-opts as raw material to turn into false fingers or toes. In several groups of mammals, these sesamoids lost their role as a tendon’s lever and gained a new function, more like that of a finger, by becoming drastically enlarged and elongated during evolution. Giant pandas use similar structures to grasp bamboo, and moles use them to dig. We’ve shown that elephants evolved these giant toe-like structures as they became larger and more terrestrial, starting to stand up on tip-toe, supported by “high-heels” made of fat. Those fatty heels benefit from a stiff, toe-like structure that helps control and support them, while the fatty pads spread out elephants’ ponderous weight.
Crocodile lung anatomy and air flow, by Emma Schachner.
I’ve also helped colleagues at the University of Utah (Drs. Emma Schachner and Colleen Farmer) reveal, to much astonishment, that crocodiles have remarkably “bird-like” lungs in which air flows in a one-way loop rather than tidally back and forth as in mammalian lungs. They originally discovered this by questioning what the real anatomy of crocodile lungs was like- was it just a simple sac-like structure, perhaps more like the fractal pattern in mammalian lungs, and how did it work? This question bears directly on how birds evolved their remarkable system of lungs and air sacs that in many ways move air around more effectively than mammalian lungs do. Crocodile lungs indicate that “avian” hallmarks of lung form and function, including one-way air flow, were already present in the distant ancestors of dinosaurs; these traits were thus inherited by birds and crocodiles. Those same colleagues have gone on to show that this feature also exists in monitor lizards, raising the question (almost unthinkable 10-20 years ago) of whether those bird-like lungs are actually a very ancient and common feature for land animals.
Speaking of monitor lizards, anatomy has revealed how they (and some other lizards) all have venom glands that make their bites even nastier, and these organs probably were inherited by snakes. For decades, scientists had thought that some monitor lizards, especially the huge Komodo dragons, drooled bacteria-laden saliva that killed their victims with septic shock. Detailed anatomical and molecular investigations showed instead that modified salivary glands produced highly effective venom, and in many species of lizards, not just the big Komodos. So the victims of numerous toothy lizard species die not only from vicious wounds, but also from worsened bleeding and other circulatory problems promoted by the venomous saliva. And furthermore, this would mean that venom did not evolve separately in the two known venomous lizards (Gila monster and beaded lizard) and snakes, but was inherited from their common ancestor and became more enhanced in those more venomous species—an inference that general lizard anatomy supports, but which came as a big surprise when revealed by Bryan Fry and colleagues in 2005.
There’s so much more. Anatomy has recently uncovered how lunge-feeding whales have a special sense organ in their chin that helps them detect how expansive their gape is, aiding them in engulfing vast amounts of food. Scientists have discovered tiny gears in the legs of young planthoppers that help them make astounding and precise leaps. Who knew that crocodilians have tiny sense organs in the outer skin of their jaws (and other parts of their bodies) that help them detect vibrations in the water, probably aiding in communication and feeding? Science knows, thanks to anatomy.
Just two decades or so ago, when I was starting my PhD studies at the University of California, Berkeley, there was talk of the death of anatomy as a research subject, both among scientists and the general public. What happened? Why did anatomy “die”, and what has resuscitated it?
TH Huxley, anatomist extraordinaire, caricatured in a lecture about “bones and stones, and such-like things” (source)
Anatomy’s Legacy
In the 16th through 19th centuries, the field of gross anatomy as applied to humans or other organisms was one of the premier sciences. The doctor-anatomist Jean François Fernel, who invented the word “physiology”, wrote in 1542 (in translation) that “Anatomy is to physiology as geography is to history; it describes the theatre of events.” This theatrical analogy justified the study of anatomy for many early scientists, some of whom also sought to understand it to bring them closer to understanding the nature of God. Anatomy gained further impetus from the realisation that organisms share a common evolutionary history, and thus that their anatomy does too; that realisation even catapulted scientists like Thomas Henry Huxley (“Darwin’s bulldog”) to celebrity status. Thus comparative anatomy became a central focus of evolutionary biology.
But then something happened to anatomical research that can be hard to put a finger on. Gradually, anatomy became a field that was scoffed at as outmoded, irrelevant, or just “solved”, with nothing important left to discover. As a graduate student in the 1990s, I remember encountering this attitude. This eclipse of anatomy accelerated with the ascent of genetics, reaching its nadir in the 1950s-1970s as techniques for studying molecular and cellular biology (especially DNA) flourished.
One could argue that molecular and cellular biology are anatomy to some degree, especially for single-celled organisms and viruses. Yet today anatomy at the whole organ, organism or lineage level revels in a renaissance that deserves inspection and reflection on its own terms.
Anatomy’s Rise
Surely, we now know the anatomy of humans and some other species quite well, but even in these species scientists continue to learn new things and rediscover old aspects of anatomy that lay forgotten in classic studies. For example, last year Belgian scientists re-discovered the anterolateral ligament of the human knee, overlooked since 1879. They described it, and its importance for how our knees function, in novel detail, and much media attention followed from the realisation that there are still things we don’t understand about our own bodies.
A huge part of this resurgence of anatomical science is technology, especially imaging techniques- we are no longer simply limited to the dissecting knife and light microscope as tools, but armed with digital technology such as 3-D computer graphics, computed tomography (series of x-rays) and other imaging modalities. Do you have a spare particle accelerator? Well then you can do amazing synchrotron imaging studies of micro-anatomy, even in fairly large specimens. Last year, my co-worker Stephanie Pierce and colleagues (including myself) used this synchrotron approach to substantially rewrite our understanding of how the backbone evolved in early land animals (tetrapods). We found that the four individual bones that made up the vertebrae of Devonian tetrapods (such as the iconic Ichthyostega) had been misunderstood by the previous 100+ years of anatomical research. Parts that were thought to lie at the front of the vertebra actually lay at the rear, and vice versa. We also discovered that, hidden inside the ribcage of one gorgeous specimen of Ichthyostega, there was the first evidence of a sternum, or breastbone; a structure that would have been important for supporting the chest of the first land vertebrates when they ventured out of water.
Recently, anatomists have become very excited by the realization that a standard tissue staining solution, “Lugol’s” (iodine with potassium iodide), can be used to reveal soft tissue details in CT scans. Before this, CT scans were mainly used in anatomical research to study bone morphology, because the density contrast within calcified tissues, and between them and soft tissues, gives clearer images. To study soft tissue anatomy, you typically needed an MRI scanner, which is less commonly accessible, often slower and more expensive, and sometimes of lower resolution than a CT scanner. But now we can turn our CT scanners into soft tissue scanners by soaking our specimens in this contrast solution, allowing highly detailed studies of muscles and bones, completely intact and in 3D. Colleagues at Bristol just published a gorgeous study of the head of a common buzzard, sharing 3D pdf files of the gross anatomy of this raptorial bird and promoting a new way to study and illustrate anatomy via digital dissections- you can view their beautiful results here. Or below (by Stephan Lautenschlager et al.)!
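For a rough sense of what happens once such a stained specimen comes off the scanner, here is a minimal sketch (my own toy example, not the Bristol team’s actual workflow; the file name and approach are hypothetical): with the CT volume loaded as an array of intensities, even simple multi-level thresholding can crudely separate air, iodine-stained soft tissue, and bone.

```python
# Toy sketch only: crude intensity-based separation of a contrast-stained
# CT volume into soft tissue and bone. Real segmentation workflows are far
# more involved; the file name here is hypothetical.
import numpy as np
from skimage.filters import threshold_multiotsu  # scikit-image

volume = np.load("stained_specimen_ct.npy")  # hypothetical 3D array of CT intensities

# Two thresholds -> three classes: air/background, stained soft tissue, dense bone.
t_low, t_high = threshold_multiotsu(volume, classes=3)

soft_tissue = (volume > t_low) & (volume <= t_high)
bone = volume > t_high

print(f"soft tissue voxels: {soft_tissue.sum()}, bone voxels: {bone.sum()}")
```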
These examples show how anatomy has been transformed as a field because we now can peer inside the bodies of organisms in unprecedented detail, sharing and preserving those data in high-resolution digital formats. We can do this without the concern that a unique new species from the Brazilian rainforest or an exciting fossil discovery from the Cambrian period would be destroyed if we probed aspects of its anatomy that are not visible from the outside– an outside-only perspective in which science had often remained trapped for centuries. These tools became rapidly more diverse and accessible from the 1990s onward, so as a young scientist I got to see some of the “before” and “after” influences on anatomical research—these have been very exciting times!
When I started my PhD in 1995, it was an amazing luxury to first get a digital camera to use to take photographs for research, and then a small laser scanner for making 3D digital models of fossils, with intermittent access to a CT scanner in 2001 and now full-time access to one since 2003. These stepwise improvements in technology have totally transformed the way I study anatomy. In the 1990s, you dissected a specimen and it was reduced to little scraps; at best you might have some decent two-dimensional photographs of the dissection and some beetle-cleaned bones as a museum specimen. Now, we CT or MRI scan specimens as routine practice, preserving many mega- or gigabytes of data on their internal and external three-dimensional anatomy in lush detail, before a scalpel ever touches skin. Computational power, too, has grown to the point where incredibly detailed 3D digital models produced from imaging real specimens can be manipulated with ease, so science can better address what anatomy means for animal physiology, behaviour, biomechanics and evolution. We’re at the point now where anatomical research seems no longer impeded by technology– the kinds of questions we can ask are more limited by access to good anatomical data (such as rare specimens) than by the ways we acquire and use those data.
My experience mirrors my colleagues’. Larry Witmer at Ohio University in the USA, past president of the International Society of Vertebrate Morphology, has gone from dissecting bird heads in the 1990s to becoming a master of digital head anatomy, having collected 3D digital scans of hundreds of specimens, fossil and otherwise. His team has used these data to great success, for example revealing how dinosaurs’ fleshy nostrils were located in the front of their snouts (not high up on the skull, as some anatomists had speculated based on external bony anatomy alone). They have also contributed new, gorgeous data on the 3D anatomy of living animals such as opossums, ostriches, iguanas and us, freely available on their “Visible Interactive Animal” anatomy website. Witmer comments on the changes of anatomical techniques and practice: “For extinct animals like dinosaurs, these approaches are finally putting the exploration of the evolution of function and behavior on a sound scientific footing.”
I write an anatomy-based blog called “What’s in John’s Freezer?” (haha, so meta!), in which I recount the studies of animal form and function that my research team and others conduct, often using valuable specimens stored in our lab’s many freezers. I started this blog almost two years ago because I noticed a keen interest in, even a hunger for, stories about anatomy amongst the general public, and yet few blogs were explicitly about anatomy for its own sake. This interest became very clear to me when I was a consultant for the BAFTA award-winning documentary series “Inside Nature’s Giants” in 2009, and I was noticing more documentaries and other programmes presenting anatomy in explicit detail that would have been considered too risky 10 years earlier. So not only is anatomy a vigorous, rigorous science today, but people want to hear about it. Just in recent weeks, the UK has had “Dissected” as two 1-hour documentaries and “Secrets of Bones” as six back-to-back 30-minute episodes, all very explicitly about anatomy, and on PRIME TIME television! And PBS in the USA has had “Your Inner Fish,” chock full of anatomy. I. Love. This.
Before the scalpel: the elephant from Inside Nature’s Giants
There are many ways to hear about anatomy on the internet these days, reinforcing the notion that it enjoys strong public engagement. Anatomical illustrators play a vital role now, much as they did at the dawn of the anatomical sciences– conveying anatomy clearly requires good artistic sensibilities, so it is foolish to undervalue these skills. The internet age has made disseminating such imagery routine and high-resolution, but we can all be better about giving due credit (and payment) to artists who create the images that make our work so much more accessible. Social media groups on the internet have sprung up to celebrate new discoveries- watch the Facebook or Twitter feeds of “I F@*%$ing Love Science” or “The Featured Creature,” to name but two popular venues, and you’ll see a lot of fascinating comparative animal anatomy there, even if the word “anatomy” isn’t necessarily used. I’d be remiss not to cite Emily Graslie’s popular, unflinchingly fun social media-based explorations of gooey animal anatomy in “The Brain Scoop”. I’d like to celebrate that these three highly successful venues for (at least partly) anatomical outreach are all run by women—anatomical science can (and should!) defy the hackneyed stereotype that only boys like messy stuff like dissections. There are many more such examples. Anatomy is for everyone! It is easy to relate to, because we all live in fleshy anatomical bodies that rouse our curiosity from an early age, and everywhere in nature there are surprising parallels with — as well as bizarre differences from — our anatomical body-plans.
Anatomy’s Relevance
What good is anatomical knowledge? A great example comes from gecko toes, but I could pick many others. Millions of fine filaments, modified toe scales called setae, use weak intermolecular forces called van der Waals interactions to help geckos cling to seemingly un-clingable surfaces like smooth glass. Gecko setae have been studied so thoroughly that we can now recreate their anatomy in sufficient detail to make revolutionary super-adhesives, such as the product “Geckskin”, 16 square inches of which can currently suspend 700 pounds aloft. This is perhaps the most famous example from recent applications of anatomy, but Robert Full’s Poly-Pedal laboratory at Berkeley, among many other research groups excelling at bio-inspired innovation in robotics and other fields of engineering and design, regularly spins off new ideas from the principle that “diversity enables discovery”, as applied to the sundry forms and functions found in organisms. By studying the humble cockroach, they have created new ways of building legged robots that can scour earthquake wreckage for survivors or explore faraway planets. By asking “how does a lizard use its big tail during leaping?” they have discovered principles that they then use to construct robots that can jump over or between obstacles. Much of this research relates to how anatomical traits determine the behaviours that a whole, living, dynamic organism is capable of performing.
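As a quick back-of-the-envelope check of those Geckskin figures (my own arithmetic, not anything from the product’s developers), 700 pounds over 16 square inches works out to roughly 0.3 megapascals of adhesive stress:

```python
# Back-of-the-envelope check of the Geckskin figures quoted above:
# 700 pounds suspended by 16 square inches of adhesive.
LBF_TO_N = 4.44822        # newtons per pound-force
IN2_TO_M2 = 0.0254 ** 2   # square metres per square inch

force_n = 700 * LBF_TO_N  # ~3100 N
area_m2 = 16 * IN2_TO_M2  # ~0.0103 m^2

stress_pa = force_n / area_m2
print(f"Adhesive stress ~ {stress_pa / 1e6:.2f} MPa (~{stress_pa / 6894.76:.0f} psi)")
# ~0.30 MPa, i.e. roughly 44 psi
```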
When I was a graduate student, anatomists and molecular biologists butted heads more often than was healthy for either side, competing for importance (and funding!); today the scene is changing. With the rise of “evo devo”, evolutionary developmental biology, and the ubiquity of genomic data as well as epigenetic perspectives, scientists want to explain “the phenotype”—what the genome helps to produce via seemingly endless developmental and genetic mechanisms. Phenotypes often are simply anatomy, and so anatomists now have new relevance, often collaborating with those skilled in molecular techniques or other methods such as computational biology. One example of a hot topic in this field is, “how do turtles build their shells and how did that shell evolve?” To resolve this still controversial issue, we need to know what a shell is made of, what features in fossils could have been precursors to a modern shell, how turtles are related to other living and extinct animals, how a living turtle makes its shell, and how the molecular signals involved are composed and used in animals that have or lack shells. The first three questions require a lot of anatomical data, and the others involve their fair share, too.
Questions like these draw scientists from disparate disciplines closer together, and thanks to that proximity we’re inching closer to an answer to this longstanding question in evolutionary biology and anatomy, illustrated in the video above. As a consequence, the lines between anatomists and molecular/cellular biologists increasingly are becoming blurred, and that synthesis of people, techniques and perspectives seems to be a healthy (and inevitable?) trend for science. But there’s still a long way to go in finding a happy marriage between anatomists and the molecular/cellular biologists whose work eclipsed theirs in past decades. Old controversies like “should we use molecules or morphology to figure out how animals are related to each other?” are slowly dying out, as the answer increasingly turns out to be “Yes. Both.” (especially when fossils can be included!) Such dwindling controversies contribute to the healing of disciplinary rifts and the unruffling of parochial feathers.
Yet many anatomists would point to lingering obstacles that give them concern for the future; funding is but one of them (few would argue that gross anatomical research is as well funded as genetics, for example). There are clear mismatches between the hefty importance, vitality, popularity and rigour of anatomical science and its perception or its role in academia.
Romanes 1892, reproducing Haeckel’s classic, early evo-devo work (probably partly faked, but still hugely influential) (source)
Anatomy’s Future
One worry is the trend that anatomy as a scientific discipline is clearly flourishing in research while it dwindles in teaching. Fewer and fewer universities seem to be teaching the basics of comparative anatomy that were a mainstay of biology programmes a century ago. Yet anatomy is everywhere now in biology, and in the public eye. It inspires us with its beauty and wonder—when you marvel at the glory of beholding a newly discovered species, you are captivated by its phenotypic pulchritude. Anatomy is still the theatre in which function and physiology are enacted, and the physical encapsulation of the phenotype that evolution moulds through interactions with the environment. But there is cause for concern that biology students are not learning much about that theatre, or that medical schools increasingly seem to eschew hands-on anatomical dissection in favour of digital learning. Would you want a doctor to treat you if they mainly knew human anatomy from a CGI version on an LCD screen in medical school, and hence were less aware of all the complexity and variation that a real body can house?
Anatomy has an identity problem, too, stemming from decades of (Western?) cultural attitudes (e.g. the “dead science” meme) and from its own success—by being so integral to so many aspects of biology, anatomy seems to have integrated itself toward academic oblivion, feeding the perception of its own obsolescence. I myself struggled with what label to adopt as an early career researcher- I was afraid that calling myself an “anatomist” would make me seem quaint or unambitious in the eyes of faculty job interview panels, and I know that many of my peers felt the same. I resolved that inner crisis years ago and came to love identifying myself at least partly as an anatomist, though I settled on the label “evolutionary biomechanist” as the best term for my speciality. In order to reconstruct evolution or how animals work (biomechanics), we first often need to describe key aspects of anatomy, and we still discover new, awesome things about anatomy in the process. I still openly cheer on anatomy as a discipline because its importance is so fundamental to what I do, and I am far from alone in that attitude. Other colleagues who do anatomical research use other labels for themselves like “biomechanist”, “physiologist,” or “palaeontologist”, because those words better capture the wide range of research and teaching that they do, but, I suspect, also because some of them still fear the perceived stigma of the word “anatomy” among judgemental scientists, or even the public. At the same time, many of us get hired at medical, veterinary or biology schools/departments because we can teach anatomy-based courses, so there is still hope.
Few would now agree with Honoré de Balzac’s 19th century opinion that “No man should marry until he has studied anatomy and dissected at least one woman”, but we should hearken back to what classical scientists knew well: it is to the benefit of science, humanity and the world to treasure the anatomy that is all around us. We inherit that treasure through teaching; to shirk that duty is to abandon the trove. With millions of species around today and countless more in the past, there should always be a wealth of anatomy for everyone to learn from, teach about, and rejoice in.
X-ray technology has revolutionized anatomical studies; what’s next? Ponder that as this ostrich wing x-ray waves goodbye.
I am sure someone, in the vast literature on science communication out there, has written about this much better than I can, but I want to share my perspective on an issue I think about a lot: the tension between being a human, full of biases and faults and emotions, and doing science, which at its core seems inimical to these human attributes.
Stomach-Churning Rating: 1/10; nothing but banal meme pics ahead…
This is not a rant; it is an introspective discourse, and I hope that you join in at the end in the Comments with your own reflections. But it fits into my blog’s category of rant-like perambulations, which tend to share an ancestral trait of being about something broader than freezer-based anatomical research. As such, it is far from a well-thought-out product. It is very much a thought-in-progress; ideal for a blog post.
(Dr./Mr.) Spock of the Star Trek series is often portrayed as an enviably ideal scientific mind, especially for his Vulcan trait of being mostly logical– except for occasional outbreaks of humanity that serve as nice plot devices and character quirks. Yet I have to wonder, what kind of scientist would he really be, in modern terms? It wasn’t Spock-fanboying that got me to write this post (I am no Trekkie), but he does serve as a useful straw man benchmark for some of my main points.
“Emotions are alien to me – I am a scientist.” (Spock, “The Paradise Syndrome”)
The first ingredient of the tension I refer to above is a core theme in science communication: revealing that scientists are human beings (gasp!) with all the same attributes as other people, and that these human traits may make the story more personable or (perhaps in the best stories) reveal something wonderful, or troubling, about how science works.
The second ingredient is simply the scientific process and its components, such as logic, objectivity, parsimony, repeatability, openness and working for the greater good of science and/or humankind.
There is a maxim in critical thinking that quite a few scientists hold: One’s beliefs (small “B”– i.e. that which we provisionally accept as reality) should be no stronger than the evidence that supports them. A corollary is that one should be swift, or at least able, to change one’s beliefs if the evidence shifts in favour of a better (e.g. more parsimonious or more comprehensive) explanation.
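For the quantitatively inclined, one loose way to picture that maxim (my own toy formalization, nothing the critical-thinking literature requires) is Bayes’ rule: the strength of a belief after seeing evidence should track how strongly that evidence discriminates between rival hypotheses.

```python
# Toy illustration of the maxim: belief strength tracking evidence strength.
# A prior belief is updated by Bayes' rule; weak evidence barely moves it,
# strong evidence moves it a lot. All numbers are made up for illustration.
def update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior probability of a hypothesis after one observation."""
    numerator = p_evidence_if_true * prior
    return numerator / (numerator + p_evidence_if_false * (1 - prior))

belief = 0.5                              # start out agnostic
for _ in range(3):                        # three pieces of weak evidence
    belief = update(belief, 0.55, 0.45)
print(f"after weak evidence:   {belief:.2f}")   # ~0.65 -- barely budged

belief = 0.5
for _ in range(3):                        # three pieces of strong evidence
    belief = update(belief, 0.90, 0.10)
print(f"after strong evidence: {belief:.3f}")   # ~0.999 -- now a shift is justified
```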
It is a pretty damn good maxim, overall. But in watching scientists, or imagining “what-if?” scenarios, you may find that some scientists’ reactions when their beliefs/opinions/ideas are challenged — especially conclusions that their own research has reached — can occasionally violate this principle. That violation is almost always caused by some concoction of human traits working against the maxim and its corollary.
For example (and this is how I got thinking about this issue this week; I started writing the post on 5 December, then paused while awaiting further inspiration/getting normal work done/fucking around), what if Richard Dawkins was confronted with strong evidence that The Selfish Gene’s main precepts were wrong? This is a mere heuristic example, although I was thinking about it because David Dobbs wrote a piece that seemed to be claiming that the balance of scientific evidence was shifting against selfish genes (and he later shifted/clarified his views as part of a very interesting and often confusing discussion, especially with Jerry Coyne– here). It doesn’t matter if it’s Dawkins (or Dobbs) or some other famous scientist and their best or most famous idea. But would they quickly follow the aforementioned maxim and shift their beliefs, discarding all their prior hard work and acclaim? (a later, palaeontological, event in December caused me to reflect on a possibly better example, but it’s so controversial, messy and drenched in human-ness that I won’t discuss it here… sorry. If you really want a palaeo-example, insert Alan Feduccia and “birds aren’t dinosaurs” here as an old one.)
I’d say they’d be reluctant to quickly discard their prior work, and so might I, and to a degree that’s a good, proper thing. A second maxim comes into play here, but it is a tricky one: “Extraordinary claims require extraordinary evidence.” For a big scientific idea to be discarded, one would want extraordinary scientific evidence to the contrary. And additionally, one might not want to shift one’s views too quickly to accommodate that new evidence, as a hasty rush to a new paradigm/hypothesis could be very risky if the “extraordinary” evidence later turned out itself to be bunk, or just misinterpreted. Here, basic scientific practice might hold up well.
But, but… that “extraordinary evidence” could be very hard to interpret– this is the tricky bit. What is “extraordinary?” Often in science, evidence isn’t as stark and crisp as p<0.05 (a statistical threshold of significance). Much evidence requires a judgement call– a human judgement call — at some step in its scrutiny, often as a provisional crutch pending more evidence. Therein lies a predicament for any scientist changing any views they cherish. How good are the methods used to accumulate contrary evidence? Does that evidence and its favoured conclusion pass the “straight-face test” of plausibility?
All this weighing of diverse evidence can lead to subjectivity… but that’s not such a bad thing perhaps. It’s a very human thing. And it weighs heavily in how we perceive the strength of scientific methods and evidence. Much as we strive as scientists to minimize subjectivity, it is there in many areas of scientific inquiry, because we are there doing the science, and because subjectivity can be a practical tool. Sometimes subjectivity is needed to move on past a quagmire of complex science. For example, in my own work, reconstructing the soft tissue anatomy of extinct dinosaurs and other critters is needed, despite some varying degrees of subjectivity, to test hypotheses about their behaviour or physiology. I’ve written at length about that subjectivity in my own research and it’s something I think about constantly. It bugs me, but it is there to stay for some time.
One might look at this kind of situation and say “Aha! The problem is humans! We’re too subjective and illogical and other things that spit in the face of science! What we need is a Dr. Spock. Or better yet, turn the science over to computers or robots. Let amoral, strictly logical machines do our science for us.” And to a degree, that is true; computers help enormously and it is often good to use them as research tools. Evolutionary biology has profited enormously from turning over the critical task of making phylogenetic trees largely to computers (after the very human and often subjective task of character analysis to codify the data put into a computer– but I’d best not go off on this precipitous tangent now, much as I find it interesting!). This has shrugged off (some of) the chains of the too-subjective, too-authority-driven Linnaean/evolutionary taxonomy.
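To give a flavour of what “turning tree-building over to computers” means in practice, here is a deliberately crude sketch with made-up presence/absence characters. Real phylogenetics relies on parsimony, likelihood or Bayesian software rather than the simple distance clustering below, so treat this purely as a toy illustration of data-in, tree-out.

```python
# Toy sketch of computer-assisted tree-building: hypothetical 0/1 anatomical
# characters for four taxa, clustered by a simple distance method. Real
# phylogenetic analyses use parsimony/likelihood/Bayesian tools instead.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, to_tree

taxa = ["croc", "bird", "lizard", "mammal"]
characters = np.array([      # made-up presence/absence scores
    [1, 1, 0, 1, 0, 1],      # croc
    [1, 1, 1, 1, 0, 1],      # bird
    [1, 0, 0, 1, 0, 0],      # lizard
    [0, 0, 0, 0, 1, 0],      # mammal
])

distances = pdist(characters, metric="hamming")       # fraction of characters that differ
tree = to_tree(linkage(distances, method="average"))  # UPGMA-style clustering

def newick(node):
    """Write the cluster tree as a Newick-like string."""
    if node.is_leaf():
        return taxa[node.id]
    return f"({newick(node.left)},{newick(node.right)})"

print(newick(tree) + ";")  # e.g. (((croc,bird),lizard),mammal); bracket order may vary
```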
But I opine that Spock would be a miserable scientist, and much as it is inevitable that computers and robots will increasingly come to dominate key procedures in science, it is vital that humans remain in the driver’s seat. Yes, stupid, biased, selfish, egocentric, socially awkward, meatbag humans. Gotta love ’em. But we love science partly because we love our fellow meatbags, and we love the passion that a good scientist shares with a good appreciator of science– this is the lifeblood of science communication itself. Science is one of the loftier things that humans do– it jostles our deeper emotions of awe and wonder, fear and anxiety. Without human scientists doing science, making human mistakes that make fantastic stories about science and humanity, and without those scientists promoting science as a fundamentally human endeavour, much of that joy and wonder would be leached out of science along with the uncomfortable bits.
Spock represents the boring – but necessary – face of science. Sure, Spock as a half-human could still have watered-down, plot-convenient levels of the same emotions that fuel human scientists, and he had to have them to be an enjoyable character (as did his later analogue, Data; to me, emotion chip or not, Data still had some emotions).
But I wouldn’t want to have Spock running my academic department, chairing a funding body, or working in my lab.
Spock might be a good lab technician (or not), but could he lead a research team, inspiring and mentoring them to new heights of achievement? Science is great because we humans get to do it. We get to discover stuff that makes us feel like superheroes, and we get to share the joy of those discoveries with others, to celebrate another achievement of humanity in comprehending the universe.
And science is great because it involves this tension between the recklessly irrational human side of our nature and our capacity to be ruthlessly logical. I hear a lot of scientists complaining about aspects of being a scientist that are really aspects of being human. Yes, academic job hiring, and departmental politics, and grant funding councils, and the peer review/publishing system, and early career development, and so many other (all?) aspects of being a scientist have fundamental flaws that can make them very aggravating and leave people despondent (or worse). And there are ways that we can improve these flaws and make the system work better. We need to discuss those ways; we need to subject science itself to peer review.
But science, like any human endeavour, might never be fair. As long as humans do science, science will be full of imbalance and error. I am not trying to excuse our naughty species for those faults! We need to remain vigilant for them both in ourselves and in others! However, I embrace them, like I might an embarrassingly inept relative, as part of a greater whole; a sloppy symptom of our meatbaggy excellence. To rid ourselves of the bad elements of human-driven science, to some degree, would require us to hand over science to some other agency. In the process, we’d be robbing ourselves of a big, steamy, stinky, glorious, effervescent, staggeringly beautiful chunk of our humanity.
Spock isn’t coming to take over science anytime soon, and I celebrate that. To err is human, and to do science is to err, from time to time. But science, messy self-correcting process that it is, will untangle that thicket of biases and cockups over time. If we inspect it closely it will always be full of things we don’t like, and weeding those undesirables out is the job of every scientist of any stripe. Self-reflection and doubt are important weed-plucking tools in our arsenal for this task, because every scientist should keep their own garden tidy while they scrutinize others’. This is a task that I, as a scientist, try to take seriously but I admit my own failures (e.g. being an overly harsh, competitive, demanding reviewer in my younger years… I have mellowed).
So here’s to human-driven science. Live long and publish!
Up next: FREEZERMAS!!! A week-long extravaganza of all that this blog is really about, centred around Darwin’s birthday. Starts Sunday!
At this writing (17 October, 2013), I am headed home after a 10-day trip to China as part of an RVC delegation participating in a London Universities International Partnership (LUIP) event (celebrating London innovations, especially those developed with Chinese input) within a broader UK/London-China trade mission. I am still processing what has been an astonishing, exhausting, exhilarating, chaotic, lavish, smog-ridden, and inspiring visit. As a simple scientist, I’ve found myself in the midst of major global politics, business and science policy, with little time to assimilate what has happened but still learning plenty about how the bigger world, way beyond my lab, operates. I thought I’d share that experience, by way of pictures illustrating key – or just unusual or interesting – events and places from my journey. It was surreal, in so many ways…
Stomach-Churning Rating: 0/10 except for a couple of odd statues. No squat-toilets; I will spare you those.
Odd decoration above entrance to the art gallery building that housed the LUIP event.
Several months ago the RVC selected me to help RVC Access director Nina Davies and colleagues set up an exhibit, as part of the LUIP event, featuring the work that my team has done, and is still doing, with Chinese collaborators at the IVPP in Beijing (exemplified by this past post). Dinosaurs and 3D computer modelling were thought to be a good potential draw for the public (ya think?) as opposed to more controversial subjects such as avian flu, in which the RVC also has research strengths and Chinese collaborations. I saw it as a great chance to go spend time at the IVPP’s spectacular fossil collection and develop ongoing collaborations with scientists there like Drs. Zhou Zhonghe and Xu Xing. Subsequently, I learned that it was a small enough event that I’d probably be meeting Boris Johnson (Mayor of London) there as well, possibly even presenting our research to him.
Hallway lined with art galleries, one of which is the Yang Gallery, which the event was held in.
The preparations for the exhibit were full of surprises, as you might expect a long-distance interaction between UK and Chinese people to be, especially if you’ve spent time in China and know some of the broad-brush cultural differences (e.g. “Yes” can mean no, and “maybe” usually means no). There were many cooks involved! Artists, policymakers, scientists, universities… and then the Mayor’s office got thrown into the action, and it snowballed, with the UK Higher Education and Science minister, the Rt Hon David Willetts MP, coming to the LUIP event, and the Chancellor of the Exchequer, George Osborne, then scheduling a related trip to China at the same time. Meanwhile, I just supplied some images (courtesy of Luis Rey) and a video (by Vivian Allen and Julia Molnar) from our past paper to illustrate what we’re doing with Chinese collaborators.
There wasn’t time to prepare a fancy exhibit with lots of bells and whistles, but I was pleasantly surprised by what the LUIP organizers cooked up from what we provided, as photos below show. The addition of four great casts of fossils on loan from the IVPP was crucial and made us stand out from all the other exhibits in a big way! The event was held in the trendy 798 Art District in eastern Beijing, an old industrial complex converted into a surprisingly bohemian, touristy quarter that still sports its rusting industrial infrastructure, now bedecked with modern art! That really worked for me as a setting. This was my third visit to Beijing/China but my first time in this gritty area of the city, which I recommend spending an afternoon in sometime if you visit– the streets are lined with cafes and art galleries.
Boris bike and nice design of exhibits (placed on/around the giant letters LONDON). The back wall sports a Communist slogan, partly painted over, exhorting the workers to give their full effort for the glory of Chairman Mao or something (seriously). The building was once a weapons factory, I was told.
All the work we put into this event was a big deal to me, but as the event developed, and the schedule for my 10-day visit shifted almost daily as various political factions shuffled the LUIP and UK trade mission plans, I became aware of the vastly broader issues at play, and humbled by their scope. Sure, studying the 3D changes of dinosaur body shape across >225 million years is truly awesome work, but the socio-political issues around the LUIP event boggled and baffled me. Issues like “How do we get more Chinese students to come study at London universities?”, “How do Chinese parents feel about their students studying to become veterinarians?” and “What are the key obstacles limiting UK-Chinese collaborations and how can they be resolved?” gradually eclipsed the technical, scientific issues in my mind, and I started to feel lost. I learned a lot from this eye-opening experience.
The rest of this post is mostly a photo blog illustrating the goings-on, but I turn to some psychological/philosophical matters toward the end.
The London innovation event lighting gets tested out– and looks sweet.
Boris arrives, and proceeds to tour the exhibits rather than give his speech as planned. But it worked out OK in the end; he had two exhibit tours and a speech in the middle.
Minister Willetts arrives and prepares to speak about UK higher education for Chinese students.
I give Minister Willetts a tour of our fabulous fossil casts.
Left to right = back in time through avian evolution, represented by Yixianornis, Pengornis, Jeholornis and Microraptor casts, courtesy of the IVPP.
Arguably one of the most important fossil finds (ever?), the “four-winged” dinosaur Microraptor.
Added benefit of thaw in UK-Chinese relations: Microraptors for everyone!!! Well, for me anyway. And a cast, not a real one. But still pretty damn cool, and now it’s in my office for comparative research and teaching. See?
Darwin greets Microraptor in my office.
Like I said at the start, I don’t have a profound insight from this trip, not yet if ever. But it has obviously made a strong impression on me. It has reinforced some thoughts about Big Life Stuff. With the jetlag, the big geopolitical issues, the foreign country, the opulence, and my research thrown into that heady brew (ahem, along with some Tsingtao beer), I became lost. And I liked it, even though I was totally clueless at times, just looking around wide-eyed at the events unfolding and hearing about the political manoeuvring behind the scenes (e.g. how would big figures like Boris and Willetts share the limelight? And the news media was playing up the question of whether Boris’s or Osborne’s contingents were “winning” in some sense of some struggle, even though ostensibly they are on the same Tory team).
But we’re all clueless; we’re all lost. In some ways that’s a good thing. We have work to do; broad landscapes to explore whether evolutionary or socioeconomic or whatnot. There are big questions left, and no easy answers sometimes. That’s a bad thing, too; if we were less lost in major issues like climate change or habitat destruction or gross imbalance in wealth/power, the world would be a better place.
Quite apropos! Rockin’ artwork found in the 798 art district surrounding the Yang Gallery.
I find it helpful at times to ground myself in the knowledge that I am lost just like everyone else. There are different ways we can get lost, such as pondering how dinosaur anatomy and physiology transformed over the Mesozoic era, or throwing ourselves into weighty issues of business and diplomacy in the real world. To pretend we’re not lost is to risk becoming foolhardy and exemplifying the Dunning-Kruger effect.
It might be helpful for others to remind themselves of this sense of being lost, and that we all feel it or at least should at times. Students may sometimes look to their professors and think they have some monopoly on wisdom, but they’re lost too, and surely in some ways more lost than any of their students.
Smaller scale dino art in a local shop.
Boris got a bit lost, too, when he came to my exhibit – examining the dinosaur-bird fossils, he pondered out loud “There’s some bone that birds and reptiles both have that shows they’re related… the, umm, the ischium?” Not understanding what he meant by this (all tetrapods have an ischium), I redirected him, along with a reassuring comment that he’d done his homework. I did this a bit clumsily as the multitude of news cameras and lights and boom-mikes hovered around us in eager anticipation of Something Interesting Happening, and as his minders began to urge him to move onward through the LUIP exhibit. I noted the wrist of a dinosaur like Microraptor and how it already had the unusual wing-folding mechanism that modern birds now use during flapping flight or to keep their feathers off the ground when standing. He seemed to sort of like that, then shook my hand and said something like “very impressive, well done” and moved on to the next exhibit. (Willetts fared a bit better and stayed longer, but science is his business)
Random artwork from the Yang Gallery and around the 798 Art District follows… I liked the style. My kind of funky art. The statue above combines childlike toy aspects with sinister jingoistic imagery. And the next one, well… see for yourself.
In that brief, frantic conversation, we were both lost, and I think none the less of Mayor Johnson for it. He’d come off the plane, rushed to his hotel and then to the LUIP event, gave an impassioned speech about London and China, and then was whisked around between a dozen or so exhibits, pursued all the while by a throng of media and minders and gawkers- was he expected to know all the sundry details of maniraptoran evolution at that point? No. But we had some fun and smiled for the cameras and then it was all over as we spun off, reeling into our different orbits. I wouldn’t be surprised if, from time to time, a politician like Boris pinches himself and thinks privately, “Wow, these issues I am embroiled in are so convoluted. I am totally, utterly lost.” I think that’s a healthy thing, and I enjoyed repeated doses of that feeling during my trip. In science, we often deal with a sense of awe or wonder—that is the sunny side of being lost. The other side, which can coexist sometimes in duality with awe/wonder, is the more fearful/anxious side, like when you’re stuck in a foreign city far from your hotel; surrounded by alien, fantastic scenery; and night is falling but no taxis are around to take you back, and the locals are starting to watch you to see if you’ll do something stupid (this was me, briefly, after doing some evening mall-shopping in Shanghai). How we react to that duality is, in some way, our choice. I point to a scientist studying evolution and a creationist freaking out about the subject as a good example of two polar opposites in how an awesome topic in science can evoke very different reactions within that duality. A seasoned traveller who likes to throw themselves into a city and experience blissful, unpredictable immersion, and a worried tourist who can’t stray far from their tour group provide analogous examples. But I digress; this post is in danger of becoming lost… Enjoy some cool statues as the denouement. Get lost in the comments—what makes you have that sense of awe, or being lost, and how do you deal with it?
Hola from Barcelona, where 500ish of us are telling each other about the latest research in the field of morphology (like anatomy, but broader, deeper, more explanatory; but if you prefer to think of it as anatomy that’s OK by me)!
#ICVM and #ICVM2013 (favoured) are the hashtags, and http://icvm2013.com/ is the website, and there’s Facebook and all that too! You can read the full programme and abstracts here. It’s the best damn conference in the universe and I am not remotely biased. It happens every 3 years somewhere in the world and is always chock full of 5 days of glorious new information on animal form and function and much more, with just too many interesting talks to ever be able to take it all in.
I am speaking a few times and want to share a talk that is about sharing the glory of morphology in public.
Morphology research, that is; please put your clothing back on!
It’s a text-heavier talk than my rules-of-conference-talks normally would allow, but I’m going for it, as that makes it better for sharing because my dulcet tones will not accompany the version I am sharing online. Someday in the future, at a conference venue that is better set up for reliably live-broadcasting a talk (this is NO FAULT of the excellent organizing committee of ICVM/ISVM!), I would just do it live, but not today, not here.
The point of the talk should be obvious from the first slide (as in my last post). But I’ll preface it by saying that another subtext, which might not come through so strongly in the slides as opposed to my spoken words, is that we need to tell people that we’re doing morphology/anatomy research! We should not be shy of that label because deans or geneticists or conventional wisdom or what/whomever might say (very, very wrongly!) that it is a dead or obsolete science.
While natural history, evolution, palaeontology and other fields allied to morphology do pretty well in the public eye, I don’t see people often reminded that what they are being told about in science communication is a NEW DISCOVERY IN ORGANISMAL MORPHOLOGY and that we are still discovering such new things about morphology all the freaking time! (e.g. my team’s research on elephant false sixth toes, or Nick Pyenson‘s team’s research on whale chin sense organs to name just 2 such studies, both published on the same day in Science!)
Indeed, many of those discoveries – new fossils or exotic living things with cool features, developmental mechanisms that produce complex structures, or insights into how organisms are able to do amazing things – are implicitly morphological discoveries, but the fanfare too often goes to natural history, palaeontology, evo-devo or some other area rather than explicitly to morphology.
In contrast, I too often hear people poo-pooing anatomical research as yesterday’s science.
Vesalius’s classic skeleton (from Wikipedia), which is great but to me also conjures misleading connotations of anatomy as a defunct discipline that old dead dudes did.
We need to sell ourselves better not only in that regard – the renaissance of discoveries and insights in our field – but also in the sense that this renaissance is driven by TOTALLY AWESOME TECHNOLOGICAL AND METHODOLOGICAL ADVANCES, especially computerized tools. We’re just as fancy in terms of techy stuff as any other biologists, but we don’t shout it from the rooftops as much as other disciplines do.
We’re not just primitive scientists armed only with scalpels and maybe a ruler now and then, although that simple approach still has its sublime merits. We’re building finite element models, running dynamic computer simulations, taking high-resolution CT or synchrotron scans, manipulating embryos, digging up fossils, sequencing genes– you name it, morphologists may be doing it! (For similar views see Marvalee Wake’s recent review of herpetology & morphology; I’m by far not the first person to make the arguments I’m making in my talk, but I am putting a personal spin on them)
And of course, as the talk is being delivered by me, you might rightly expect that I’ll say that we need to do more of this kind of cheerleading where we have maximal visibility and interaction, which includes online via social media, etc. I’ll discuss one other venue which has featured prominently here on this blog, too: documentaries. Oh I’m not done with that hobby horse, no sirree, not by a long shot!!
Anyway I should get back to preparing my talk but here is the link to the slideshow (props to Anne Osterrieder for the inspiration to put my slides up here):
Please discuss anything related to this topic in the Comments– I’d love to hear what you think!
I am happy to clarify what my shorthand notes in the slide text mean if needed. There are links in the talk to other sites, which you can click and explore.
This is a rant, but stick with me and this rant might have a silver lining toward the end, or at least a voice of reason within the roiling cloud of bitter blog-scowling. And there are pictures of cats.
My little tiger, Karmella.
Like probably almost anyone in the 21st century who does research in a field of biology, I grew up watching nature documentaries on TV, and that influenced me to become a scientist. Doubtless it remains a powerful influence on other people, despite the massive de-science-ification of certain cable channels ostensibly, or at least potentially, dedicated to communicating science and nature (Animal Planet and History Channel, we’re looking at you).
But now I’ve seen behind the curtain. There’s still magic to behold there (e.g. working with early episodes of Inside Nature’s Giants), to be sure. However, some of my experiences have led me to become increasingly discontented with the relationship between TV documentaries and scientists.
Black leopard with glowing motion capture markers and eyes; eerie image from our past studies.
Here’s a common flow of events, and how they sometimes veer into frustration or worse:
Once a month or so, especially concentrated around this time (May-June-ish), I get a call or email from a documentary producer or researcher who is fishing for expert advice as they build a proposal for a documentary. I’m always very happy to talk with them and direct them to the best researchers to speak to, or papers to read, or to aspects of my own work that fit in with their idea for a documentary. Sometimes their idea is a bad one and I’m not afraid to tell them that and try to steer them toward a better idea; on occasion that seems to work, but more often they have their plan already and are reluctant to deviate from it.
About 3/4 of the time, I either never again hear from these nascent documentaries or else hear back maybe one more time (even to meet for coffee or give them a tour of our campus)– presumably, the proposal fails at that stage as it doesn’t excite executives. I’ve easily grown to accept this status quo after some initial disappointments. Much as in science, some ideas just don’t pass muster in “peer review”, and documentary makers are operating under more of a market economy than science tends to be. Sifting is inevitable, and the time I spend helping people at this stage is quite minimal, plus it’s fun to see the sausage being made in its earliest stages. All fair so far…?
Alexis and technician setting up gear for one of our past studies of how cats move.
The frustration naturally ramps up the more one invests in helping documentaries through their gestation period. I’m sure it’s very frustrating and stressful for TV makers, too, to spend days or months on a project and then have the rug pulled out from under them by those on high. Hopefully they are getting paid for their time; all I can speak to is my own experience, which is that all this early input I regularly provide is pro bono.
I used to mention that my time is not cheap, and I had a policy (adopted after a few disappointments and some lost time) of charging around £100/hour, even at the early consulting stage. That fee went straight into my research funds to help send grad students to conferences or buy small consumables; it was definitely worth my effort and felt very fair. Since the 2008 economic downturn, I’ve rapidly abandoned that policy, because it seems clear to me that documentary makers of late tend to be working on more austere budgets. I’m sympathetic to that, and the payoff for a documentary that gets made with my input is often quite substantial in terms of personal satisfaction, PR/science communication, happy university/grant funders, etc. On rare occasions, I still do get paid for my time (albeit essentially never by the BBC); Inside Nature’s Giants was generous in that regard, for example.
How the leopard got glowy spots: motion capture markers from our past studies.
But at some point a line needs to be drawn, where the helpful relationship between scientists and documentary makers veers from mutualism into parasitism, or just careless disregard. I’ve been featured in roughly eight different TV documentaries since 2004, but there were almost as many (six or so) other documentary spots that went beyond the proposal stage into actual filming (easily 8+ hours of time) and never aired; either being cancelled entirely or having my scenes cut. All too frequently, I don’t hear about this cutting/cancellation until very late, and only after I inquire: “Any news about the air date for your programme?”
Several times I’ve heard nothing at all from a documentary after filming, only to watch the programme and reach the end credits to find no sign of me or my team’s research (in one embarrassing case that really soured my attitude, the RVC had told the whole college to watch the show to see me in action, and upon watching we found out I was cut. Ouch!). At that point I really do wonder, is it all worth it? Hours or days invested in calls, emails, paperwork, travel, arranging and replicating an experiment, repeating filmed scenes and lines, working to TV producers’ scripts and demanding timetables. All that is totally worth it if the show gets made. But if the odds are ~60/40 or so that I get cut, I think I have cause to do more than shrug. The people I’ve worked with on documentaries can be wonderfully kind and full of thanks and other approbations, and they often impress me with their enthusiasm for the programme and their very hard, tenacious work making it all happen. It is jarring, then, to find out “Oh, you’ve been cut from the show, I’m so very sorry, the executives made that decision and it was a bitter pill for us to swallow, believe me– take care and I hope we can work together again.”
Above: Performance art illustrating what it’s like to have your science filmed for a documentary, then cut; graciously acted out by a cat (R.I.P.).
My aggravation has resurfaced after filming with BBC Horizon’s new documentary on “The Secret Life of the Cat,” airing right now. Alan Wilson’s team, from our lab, is featured prominently there, so that is fantastic for the Structure & Motion Lab (also check out his purrfectly timed Nature paper on cheetah agility vs speed, from the same week!). It’s hopefully going to be a nifty show; I’ve seen some of the behind-the-scenes stuff develop. (EDIT: I’ve seen it now and it was pretty good in terms of imagery and showing off Alan’s team’s technology, but the science was pretty weakly portrayed– even laypeople I’ve spoken to said “Cats avoid each other… duh!” and the evolutionary storytelling didn’t convince me as much as I’d like; it came across as arm-waving, which is a shame if the two featured cat researchers actually have built a scientifically reasonable case for it. One could not tell if the “changes” in one village’s cats, evidenced by one week’s observation, were happening within a cat’s lifetime or were truly evolutionary and recent. I don’t think I’ll watch the 2nd segment.)
I was filmed for a segment which probably would have been in the 2nd part of the show airing on Friday night, but I found out last week that it got cut with a week left before airing. I will be watching the show anyway, of course. I’m not that bitter. The segment featuring my team’s research was about how cats of different sizes do not do what other land mammals do, which is to straighten their legs as size increases across evolutionary spans. Straighter legs help support body weight more effectively, but I explained in the filming segment that in cats, the lack of a change of posture with size may have other benefits despite the cost in weight support: it can make them more stealthy, more agile/maneuverable (segue to the cheetah paper cited above!), or even better able to negotiate rough terrain. Hence a domestic cat is in a biomechanical sense in many ways much more like a tiger than it should be for a “typical mammal”– an athlete, specialized for the hunt. And smaller cats are relatively much more athletic than bigger ones because they don’t suffer from the reduced ability to support body weight that bigger cats do. This may be, for example, why cheetahs are not very large compared with tigers or lions; they are at a “happy medium” size for agility and speed. But this all got cut, I am told.
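For anyone who likes numbers, here is a toy calculation of why that posture business matters (my own illustrative sketch, not the actual analysis from our study, and all the numbers are hypothetical): the muscle moment needed at a limb joint scales roughly with the ground reaction force times that force’s moment arm about the joint, so a crouched limb (big moment arm) is far more costly to hold up than a straight one, and the cost balloons with body mass.

```python
# Toy illustration (not our study's analysis) of why limb posture matters
# for supporting body weight: the joint moment a muscle must counter is
# roughly ground-reaction force x the force's moment arm about the joint.
G = 9.81  # gravitational acceleration, m/s^2

def joint_moment(body_mass_kg, moment_arm_m, limbs_on_ground=2):
    """Crude static estimate of the joint moment (N*m) per supporting limb."""
    ground_reaction_force = body_mass_kg * G / limbs_on_ground
    return ground_reaction_force * moment_arm_m

# Hypothetical numbers: a 4 kg house cat vs a 200 kg tiger.
print(f"cat, crouched (2 cm moment arm):    {joint_moment(4, 0.02):.1f} N*m")
print(f"tiger, crouched (8 cm moment arm):  {joint_moment(200, 0.08):.1f} N*m")
print(f"tiger, upright (2 cm moment arm):   {joint_moment(200, 0.02):.1f} N*m")
# The big cat pays a steep price for staying crouched; straightening the limb
# (smaller moment arm) slashes the required muscle moment.
```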
Random cat that sidled up to us during some research into cat movements; so meta!
For my would-be part in the show, we recreated experiments that I did with then-postdoc Alexis Wiktorowicz Conroy and others (a paper yet to be published, but hopefully coming very soon) that showed how cats large and small use such similar mechanisms in terms of postures as well as forces and moments (rotational forces). In these recreations, I got an RVC clinician to bring her cat Rocket (or Ricochet? IIRC) over to be filmed walking over forceplates with high-speed video recording it. The cat didn’t do much for us – it probably found our huge lab a bit overwhelming – but it did give us at least one good video and force trace for the programme. Next we did the same thing with two tigers at Colchester Zoo, and got some excellent footage, including a tiger launching itself out of its indoor enclosure to come outside, while rapidly making a turn past the camera. The latter tiger “ate” (well, ripped to shreds, literally) the rubber mat that covered my pressure pad, too, which was mostly funny — and the film crew has reimbursed me for that as well as for the drive to/from the zoo. The filming experience was good; the people were nice; but the end result was a bummer.
Advantage of visiting Colchester Zoo for research: going behind the scenes and meeting a baby aardvark (that’s not a cat).
My segment, as far as I could tell, had cool footage and added a nice extra (if intellectual) context to the “secret life of cats” theme, so it’s a shame that it got cut. I heard that famed Toxoplasma-and-cat-behaviour researcher Prof. Joanne Webster‘s segment also got cut, so at least I’m in good company. I don’t have those cool videos of slo-mo cats and tigers with me now but will put them up early next week on my Youtube channel; stay tuned. They won’t ever show up on a documentary anyway; typically when footage gets cut it just vanishes into TV-land’s bowels.
So I’m not happy. Not at all. Bitter? Yeah, a bit. Spoiled brat scientist? I’d say that would be an overly cynical perspective on it. I do recognize that I am lucky that the research I do has a strong public appeal sometimes; many scientists will never be in a documentary or get much PR of any kind. But I think anyone has a right to examine their situation in life and ask, applying basic logic, whether it is fair treatment under the circumstances. Hence I have become disillusioned and angry about the relationship of documentary makers and scientists. Not just me, but us scientists in general. We’re unpaid actors playing sizeable roles, and bringing major expertise. We give documentaries some sci-cred, too, simply by appearing onscreen with “Professor Snugglebunny from Smoochbridge University” in the caption. Supposedly, and often truly, we get good PR for it, when our segments don’t get cut or are not edited to obliterate the context or due credit. But it’s those latter instances that raise the question of fairness. If the segment gets cut, we simply have wasted our time. And to a busy scientist like me, that is like a jab with a hot poker.
Serenity now!
[Aside: I’m waiting to hear what has happened to another documentary I was filmed for, and again spent ~2 days on, Channel 5’s “Nature Shock: Giraffe Feast” which should be airing soon… no word yet if I’ve made the final cut but the show’s airing has been delayed; hopefully not a bad sign. I am crossing my fingers… it seemed like a great show with a cool idea, and my segment raised some fun anatomical and biomechanical issues about giraffes.]
I know I’m not alone. I’m going to end my rant and see what feedback it draws.
But don’t get me wrong— it’s not all sour grapes, not by any means. I’ve still had eight-ish pretty good TV documentary experiences (cough, Dino Gangs, cough!), and some genuinely great ones; indeed, Inside Nature’s Giants was one of the best experiences of my career to date. And I’m sure many other scientists have had positive experiences. In answer to my provocative “Why bother?” in the headline, there are plenty of good reasons to bother working with documentaries if you are a scientist whose research they want to feature… but only if you have some assurance that it will be worth your while, perhaps? How much of a gamble should we be bothering with? That brings me to my main point, a general query–
But what about the bad? And is it all worth it, in your views, given the risks of wasting time? Do we deserve some scientists’ bill-of-media-rights or something; a documentary-actor-scientists’ guild (90% joking here)? What should our rights be and should we push harder for them? Or do we just sit back and take the good with the bad, biting our lips? (I’m obviously not the type…)
I’d like to hear from not only the seasoned veterans who’ve experienced various ups and downs, but also from anyone who has views, anecdotes, whatever. I’m not aware of anyone collecting horror stories of documentary mishaps and mistreatments experienced by scientists, but that could start here. Please do share; even if you just got a call wondering if you’d want to help a documentary and then never heard back. Who knows where it would lead, but I think it’s helpful to bring these issues to the fore and discuss them openly.