
Archive for the ‘RANT’ Category

Vulnerability, Strength and Success

I’ve been doing a series of career guidance sessions with my research team, and this past week we talked about how to structure a successful career path as a scientist. As part of that, I gave my thoughts on how to maximize chances of that “success” (traditional definition; getting a decent permanent job as a researcher, and doing a good job at it); without knowingly being a jerk or insincere. This process led me to re-inspect my own career for insights — not that I’ve been on perfect behaviour, but I do routinely reflect on choices I make.

I asked myself, “What does success mean to me?” to see what my answer was today. That led to me writing up this story of my career path, as an example of the twists and turns that can happen in the life of a scientist. I originally intended to share this story just with my team, but then I decided to turn it into a full-on blog post, in my ongoing personal quest to open up and share my thoughts and experiences with others. For those who have read my advice to PhD students, there are some commonalities, but plenty of this is new.

Where my last post was partly about publicly exposing vulnerabilities in other scientists, this one is about privately finding one’s own vulnerabilities along with the strengths, and sharing them publicly. The story is about me, but the key points are more about how “success” can evolve in science (N=1 plus anecdotal observations of others).

 

Growing Up in Grad School

As an undergraduate student, I was clueless about my career until I applied to graduate school a second time. The first time I tried applying, I didn’t even know how to really go about it, or what I wanted to do beyond some sort of biology. Yet to my credit I was curious, creative, a swift learner with a great memory for science, and broadly educated in biology and other fields (thanks, parents and past teachers!). I read and watched “Jurassic Park”, read lots of Stephen Jay Gould, Darwin and other palaeontology books, and just tried to actively learn all I could, reading compulsively. I even resolved to quit non-science reading for a few years, and stuck to that. I realized that a research career combining evolution and biomechanics, involving vertebrates and maybe fossils, was what interested me.

I got into grad school in 1995 and had a great project to study how dinosaurs moved, but I felt inadequate compared to my peers. So I dedicated myself even harder to reading and learning. I didn’t pass my first orals (qualifying exam; appraisal/defense) but that helped me to refocus even more resolutely on deep learning, especially to fill gaps in my knowledge of biomechanics methods that I’d later use. During this time I also learned website design and HTML code (mid-90s; early WWW!), working with several others on Berkeley’s UCMP website in my free time. I intensively networked with colleagues via email lists (the long-lived Dinosaur listproc) and at a lot of conferences, trying to figure out how science worked and how to go about my project. That was a powerful initial formative period.

It was a gruelling struggle, and I’d had serious health problems (a narrow escape from cancer) around the same time, too. Throughout the 1990s, I frequently doubted whether I could make it in the field. I looked around me and could not see how I could become successful in what I wanted to do (marry biomechanics and evolutionary biology in stronger ways). I was so scared, so uncertain of my own work, that I didn’t know what to do—I had a project but had no clue how to really implement it. So two years passed in semi-paralysis, with little concrete science to show for it, and I gave a lot of *bad* internal seminars in Berkeley’s Friday biomechanics group. However, those bad seminars helped me to become a better speaker. I had a terrible fear of public speaking; on top of having little data, this experience was brutal for me. But I used it as practice, bent to the task of bettering myself.

A change in my career trajectory happened as my research slowly took root. I wrote some book chapters for a dinosaur encyclopedia in 1997, a simple paper describing a little dinosaur in 1998, then another paper on taxonomy published in 1999. [The papers I mention here are listed on my Publications page, often with PDFs.] These papers at least showed I could finish a research task; when I was younger I’d had some bad habits of not finishing work I started.

I visited a lot of museums and hung out with people there, socializing while learning about diverse fossils and their evolutionary anatomy, implementing what I’d learned from my own dissections and literature studies of living animals. This led to a poster (actually two big posters stacked atop each other; plotting the evolution of the reptilian pelvis and muscles) at a palaeontology meeting (SVP). This poster turned a few heads and I suppose convinced some that I knew something about bone and soft tissue anatomy.

Then in 1998, I did a 4-month visiting scholarship at Brown University with Steve Gatesy that had a big impact on my career: Steve helped me consolidate ideas about how anatomy related to function in dinosaurs, and how to interpret data from living animals (I did my first gait experiments, with guineafowl, which went sort of OK), and I loved Brown University’s EEB department environment. For once, I felt like a grown-up, as people started to listen to what I had to say. In retrospect, I was still just a kid in many other ways. I didn’t really achieve a lot of what Steve asked me to do; I was unfocused, but changing steadily.

In 1999, I gave a talk at SVP that was well received, based on that research with Gatesy, and then I gave it again at SICB. I had a few prominent scientists encouraging me to apply for faculty jobs (e.g., Beth Brainerd was very supportive)– this gave me a new charge of excitement and confidence. I finally began to feel like a real expert in my little area of science. That talk became our 2000 “Abductors, adductors…” paper in Paleobiology, which I still love for its integrative nature and broad, bold (but incompletely answered) questions. Yet when a respected professor at Berkeley told me before my University of Chicago faculty job interview “You act like a deer in the headlights too often,” I knew I had a long journey of self-improvement left. And a lot of that improvement just came with time– and plenty of mistakes.

Momentum continued to build for my career in 2000 as I took my anatomical work into more biomechanical directions and passed my orals. I gave an SVP Romer Prize (best student talk) presentation on my new T. rex biomechanical modelling work, and I won! I felt truly appreciated, not just as an expert but as an emerging young leader in my research area. I’ll never forget the standing ovation at the award announcement in Mexico City—seeing people I saw as famous and amazing get up and cheer for me was such a rush! Then I published two lengthy anatomical papers in Zool J Linn Soc in 2001, which still are my most cited works — even more than some of my subsequent Nature papers.

 

Evolution: Postdoc to Faculty

Also in 2001, I was awarded an NSF postdoc at Stanford to do exactly what I’d long wanted to do: build detailed biomechanical models of dinosaurs, using the anatomical work I’d done before. That was it: I saw evidence that I had “made it”. But it had taken about six years, until toward the end of my PhD, to truly feel this way most of the time, and in some ways this feeling led to youthful overconfidence and brashness that I later had to try to shed. I feel fortunate that the rest of my career went more smoothly. I doubt I could have endured another six years of struggling as I did during my PhD. But it wasn’t easy, either. During my postdoc I had to force my brain to think like a mechanical engineer’s, and that was a difficult mental struggle.

The year 2002 became a wild ride for me.

First, my T. rex “not a fast runner” paper got published in Nature, and I was thrown into the limelight of the news media for two weeks or so. Luckily I was ready for the onslaught — one of my mentors, Bob Full, warned me, “This will be huge. Prepare!” I handled it well and I learned a lot about science communication in the process.

Shortly after that publication, just before my wedding’s bachelor party, I developed terrible leg blood clots and had to cancel my party—but I recovered in time for the wedding, which was a fantastic event on a California clifftop. I enjoyed a good life and seemed healthy again. I kept working hard, I got my second paper accepted at Nature on bouncy-running elephants, and then…

Then I had a stroke, just before that Nature paper got published.

Everything came crashing to a halt and I had to think about what it all meant—these were gigantic life-and-death questions to face at age 31! Luckily, I recovered without much deficit at all, and I regained my momentum with renewed stubborn dedication and grit, although my recovery took many months and took its toll on my psyche. I’ve told this story before in this post about my brain.

I started seeing therapists to talk about my struggles, which was a mixed blessing: I became more aware of my personality flaws, but also more aware of how many of those flaws wouldn’t change. I’m still not sure if that was a good thing but it taught me a lot of humility, which I still revisit today. I also learned to find humour and wonder in the dark times, which colours even this blog.

In the winter of 2003 I went to a biomechanics symposium in Calgary, invited by British colleague Alan Wilson. Later that spring, Alan encouraged me to apply for an RVC faculty job (“you’ll at least get an interview and a free trip to London”), which I initially declined (a vet school and a move to England didn’t seem right to me), but I changed my mind after thinking it over.

I got the RVC job offer the day before my actual job talk (luckily colleague David Polly warned me that things like this happened fast in the UK, unlike the months of negotiation in the USA!). I made the move in November 2003, and the rest was hard work that paid off a lot career-wise, despite plenty of mistakes and lessons learned along the way. If I hadn’t taken that job I’d have been unemployed, and I had postdoc fellowship and faculty job applications rejected in 2002-2003, so I was no stranger to rejection. It all could have gone so differently…

But it wasn’t a smooth odyssey either—there were family and financial struggles, and I was thousands of miles away while my mother succumbed to Alzheimer’s and my father swiftly fell victim to cancer, and I never was 100% healthy and strong after my troubles in 2002. Even in the late 2000s, I felt inadequate and once confided to a colleague something like “I still feel like a postdoc here. I’m a faculty member and I don’t feel like I’ve succeeded.”

Since then, I’ve achieved some security that has at last washed that feeling away. That was a gradual process, but I think the key moment I realized “I’ll be OK now” was in 2010 when I got the call, while on holiday in Wales (touring Caernarfon Castle at the time), informing me that my promotion to full Professor was being approved. It was an anticlimactic moment because the promotion process had taken a year, but it still felt great. It felt like success. I’ll never earn the “best scientist ever” award, and I am content with that. I don’t feel I have something big left to prove to myself in my career, so I can focus on other things now. It “only” took 15 or so years…

 

Ten Lessons Learned

When I look back on this experience and try to glean general lessons, my thoughts are:

1)     Socializing matters so much for a scientific career. “Networking” needn’t be a smarmy or supercilious exercise, either; in fact, that kind of insincerity can backfire and really hurt one’s reputation. I made a lot of friends early on — some of my best friends today are scientist colleagues. Many of these friendships have turned into collaborations. Making friends in science is a win-win situation. Interacting with fellow scientists is one of the things I have always enjoyed most about science. Never has it been clearer to me how important the human element of science is. Diplomacy is a skill I never expected to use much in science, but I learned it through a lot of experience, and now I treasure it.

2)     Developing a thicker skin is essential, but being vulnerable helps, too. Acting impervious just makes you seem inhuman and isolates you. Struggling is natural and helped me endure the tough times that came along with the good times, often in sharp transition. Science is freaking hard as a career. Even with all the hard work, nothing is guaranteed. Whether you’re weathering peer review critiques, politics, or health or other “life problems”, you need strength, whether it comes from inside you or from those around you. Embrace that you won’t be perfect but strive to do your best despite that. Regret failures briefly (be real with yourself), learn from them and then move on.

3)     Reading the literature can be extremely valuable. So many of my ideas came from obsessive reading in diverse fields, from tying together diverse ideas, and from finding overlooked/unsolved questions and new ways to investigate them. I can’t understand why some scientists intentionally avoid reading the literature (and encourage their students to follow this practice!). Yes, falling behind the literature is inevitable; you will always miss relevant stuff. But I think it can only help to try to keep up that scholarly habit, and it is our debt to past scientists as well as our expectation of future ones—otherwise why publish?

4)     I wish I had learned even more skills when I was younger. It is so hard to find time and energy now to learn new approaches. This inevitably leads to a researcher becoming steadily less a master of research methods and data and more a manager of research. So I am thankful for the wisdom accumulated via trial-and-error experiences, which keeps me relevant and useful to my awesome team. That sharing of wisdom and experience is becoming more and more enjoyable to me now.

5)     Did I “succeed” via hard work or coincidence? Well, both—and more! I wouldn’t have gotten here without the hard work, but I look back and I see a lot of chance events that seemed innocent at the time, but some turned out to be deeply formative. Some decisions I made look good in retrospect, but they could have turned out badly, and I made some bad decisions, too; those are easy to overlook given that the net result has been progress. Nothing came easily, overall. And I had a lot of help from mentors, too; Kevin Padian and Scott Delp in particular. Even today, I would not say that my career is easy, by any stretch. I still can find it very draining, but it’s so fun, too!

6)     Take care of yourself. I’ve learned the hard way that the saying “At least you have your health” is profoundly wise. I try to find plenty of time now to stop, breathe and observe my life, reflecting on the adventures I’ve had so far. The feelings evoked by this are rich and complex.

7)     If I could go back, I’d change a lot of decisions I made. We all would. But I’m glad I’ve lived the life I’ve lived so far. At last, after almost 20 years of a career in science, I feel mostly comfortable in my own skin, more able to act rather than be frozen in the headlights of adversity. I know who I am and what I cannot be, and things I need to work on about myself. In some ways I feel more free than I’ve felt since childhood, because the success (as I’ve defined it in my life) has given me that freedom to try new things and take new risks, and I feel fortunate for that. I think I finally understand the phrase “academic freedom” and why it (and tenure) are so valuable in science today, because I have a good amount of academic freedom. I still try to fight my own limits and push myself to improve my world—the freedom I have allows this.

8)     When I revisit the question of “what does success mean to me?” today I find that the answer is to be able to laugh, half-darkly, at myself—at my faults, my strengths, and the profound and the idiotic experiences of my life. I’ve found ways to both take my life seriously and to laugh at myself adrift in it. To see these crisply and then to embrace the whole as “this is me, I can deal with that” brings me a fresh and satisfying feeling.

9)     Share your struggles —  and successes — with those you trust. It helps. But even just a few years ago, the thought of sharing my career’s story online would have scared me.

10)     As scientists we hope for success in our careers to give us some immortality of sorts. What immortality we win is but echoes of our real lives and selves. So I seek to inject some laughter into those echoes while revelling in the amazing moments that make up almost every day. I think it’s funny that I became a scientist and it worked out OK, and I’m grateful to the many that helped; no scientist succeeds on their own.

A major aspect of a traditional career in science is testing the hypothesis that you can succeed as a scientist, which is a voyage of self-discovery, uncovering personal vulnerabilities and strengths. I feel that I am transitioning into whatever the next part of my science career will be; in part, to play a psychopomp role for others taking that voyage.

That’s my story so far. Thanks for sticking with it until the end. Please share your thoughts below.

Read Full Post »

This post is solely my opinion, not reflecting any views of my coauthors, my university, etc., and was written in my free time at home. I am just putting my current thoughts in writing, with the hope of stimulating some discussion. My post is based on some ruminations I’ve had over recent years, in which I’ve seen a lot of change happening in how science’s self-correcting process works and in the levels of openness in science, trends that seem likely to only get more intense.

That’s what this post ponders: where are we headed, and what does it mean for scientists and science? Please stay to the end. It’s a long read, but I hope it is worth it. I raise some points at the end that I feel strongly about, and that many people (not just scientists) might also agree with or be stimulated to think about further.

I’ve always tried to be proactive about correcting my (“my” including coauthors where relevant) papers, whether it was a publisher error I spotted or my/our own; I’ve done at least 5 such published corrections. Some of my later papers have “corrected” (by modifying and improving the methods and data) my older ones, to the degree that the older ones are almost obsolete. A key example is my 2002 Nature paper on “Tyrannosaurus rex was not a fast runner“- a well-cited paper that I am still proud of. I’ve published (with coauthors aplenty) about 10 papers since then that explore various strongly related themes, the accuracy of assumptions and estimates involved, and new ways to approach the 2002 paper’s main question. The message of that paper remains largely the same after all those studies, but the data have changed to the extent that it would no longer be viable to use them. Not that this paper was wrong; it’s just we found better ways to do the science in the 12 years since we wrote it.

I think that is the way that most of science works; we add new increments to old ones, and sooner or later the old ones become more historical milestones for the evolution of ideas than methods and data that we rely on anymore. And I think that is just fine. I cannot imagine it being any other way.

If you paid close attention over the past five months, you may have noticed a kerfuffle (to put it mildly) raised by former Microsoft guru/patent aficionado/chef/paleontologist Nathan Myhrvold over published estimates of dinosaur growth rates dating back to the early 2000s. His paper coincided with some emails to authors of the papers in question, and some press attention, especially in the New York Times and the Economist. I’m not going to dwell on the details of what was right or wrong about this process, especially the scientific nuances behind the argument of Myhrvold vs. the papers in question. What happened happened. And similar things are likely to happen again to others, if the current climate in science is any clue. More about that later.

But one outcome of this kerfuffle was that my coauthors and I went through (very willingly; indeed, by my own instigation) some formal procedures at our universities for examining allegations of flaws in publications. And now, as a result of those procedures, we issued a correction to this paper:

Hutchinson, J.R., Bates, K.T., Molnar, J., Allen, V., Makovicky, P.J. 2011. A computational analysis of limb and body dimensions in Tyrannosaurus rex with implications for locomotion, ontogeny, and growth. PLoS One 6(10): e26037. doi: 10.1371/journal.pone.0026037  (see explanatory webpage at: http://www.rvc.ac.uk/SML/Projects/3DTrexGrowth.cfm)

The paper correction is here: http://www.plosone.org/article/info%3Adoi/10.1371/journal.pone.0097055. Our investigations found that the growth rate estimates for Tyrannosaurus were not good enough to base any firm conclusions on, so we retracted all aspects of growth rates from that paper. The majority of the paper, about estimating body mass and segment dimensions (masses, centres of mass, inertia) and muscle sizes as well as their changes through growth and implications for locomotor ontogeny, still stands; it was not in question.

For those (most of you!) who have never gone through such a formal university procedure checking a paper, my description of it is that it is a big freakin’ deal! Outside experts may be called in to check the allegations and paper, you have to share all your data with them and go through the paper in great detail, retracing your steps, and this takes weeks or months. Those experts may need to get paid for their time. It is embarrassing even if you didn’t make any errors yourself and even if you come out squeaky clean. And it takes a huge amount of your time and energy! My experience started on 16 December, reached a peak right around Xmas eve (yep…), and finally we submitted our correction to PLoS and got editorial approval on 20 March. So it involved three months of part-time but gruelling dissection of the science, and long discussions of how to best correct the problems. Many cooks! I have to admit that personally I found the process very stressful and draining.

Next time you wonder why science can be so slow at self-correction, this is the reason. The formal processes and busy people involved mean it MUST be slow– by the increasingly speedy standards of modern e-science, anyway. Much as doing science can be slow and cautious, re-checking it will be slow too. Should it be?

My message from that experience is to get out in front of problems like this, as an author. Don’t wait for someone else to point it out. If you find mistakes, correct them ASAP. Especially if they (1) involve inaccurate data in the paper (in text, figures, tables, whatever), (2) would lead others to be unable to reproduce your work in any way, even if they had all your original methods and data, or (3) alter your conclusions. It is far less excruciating to do it this way than to have someone else force you to do it, which will almost inevitably involve more formality, deeper probing, exhaustion and embarrassment. And there is really no excuse that you don’t have time to do it. Especially if a formal process starts. I can’t even talk about another situation I’ve observed, which is ongoing after ~3 years and is MUCH worse, but I’ve learned more strongly than ever that you must demonstrate you are serious and proactive about correcting your work.

I’ve watched other scientists from diverse fields experience similar things– I’m far from alone. Skim Retraction Watch and you’ll get the picture. What I observe both excites me and frightens me. I have a few thoughts.

1) The drive to correct past science is a very good development and it’s what science is meant to be about. This is the most important thing!

2) The digital era, especially the trends toward open access and open data for papers, makes corrections much easier to discover and carry out. That is essentially good, and important, and it is changing everything about how we do science. Just watch… “we live in interesting times” captures the many layers of feeling an active researcher should have about it. I would not dare to guess what science will be like in 20 years, presumably when I’ll be near my retirement and looking back on it all!

3) The challenge comes in once humans get involved. We could all agree on the same lofty principles of science and digital data but even then, as complex human beings, we will have a wide spectrum of views on how to handle cases in general, or specific cases.

This leads to a corollary question– what are scientists? And that question is at the heart of almost everything controversial about scientific peer review, publishing and post-publication review/correction today, in my opinion. To answer this, we need to answer at least two sub-questions:

1–Are we mere cogs in something greater, meant to hunker down and work for the greater glory of the machine of science?

(Should scientists be another kind of public servant? Ascetic monks?)

2–Are we people meant to enjoy and live our own lives, making our own choices and value judgements even if they end up being not truly optimal for the greater glory of science?

(Why do we endure ~5-10 years of training, increasingly poor job prospects/security, dwindling research funds, mounting burdens of expectations [e.g., administrative work, extra teaching loads, all leading to reduced freedoms] and exponentially growing bureaucracies? How does our experience as scientists give meaning to our own lives, as recompense?)

The answer is, to some degree, yes to both of the main questions above, but how we reconcile these two answers is where the real action is. And this brew is made all the spicier by the addition of another global trend in academia: the corporatization of universities (“the business model”) and the concomitant, increasing concern of universities about public image/PR and marketing values. I will not go any further with that; I am just putting it out there; it exists.

The answer any person gives will determine how they handle a specific situation in science. You’ve reminded your colleague about possible errors in their work and they haven’t corrected it. Do you tell their university/boss or do you blog and tweet about it, to raise pressure and awareness and force their hand? Or do you continue the conversation and try to resolve it privately at any cost? Is your motive truly the greater glory of science, or are you a competitive (or worse yet, vindictive or bitter) person trying to climb up in the world by dragging others down? How should mentors counsel early career researchers to handle situations like this? Does/should any scientist truly act alone in such a regard? There may be no easy, or even mutually exclusive, answers to these questions.

We’re all in an increasingly complex new world of science. Change is coming, and what that change will be like or when, no one truly knows. But ponder this:

Open data, open science, open review and post-publication review, in regards to correcting/retracting past publications: how far down the rabbit hole do we go?

The dinosaur growth rates paper kerfuffle concerned numerous papers that date back to earlier days of science, when traditions and expectations differed from today’s. Do we judge all past work by today’s standards, and enforce corrections on past work regardless of the standards of its time? If we answer some degree of “yes” to this, we’re in trouble. We approach a reductio ad absurdum: we might logic ourselves into a corner where that great machine of science is directed to churn up great scientific works of their time. Should Darwin’s or Einstein’s errors be corrected or retracted by a formal process like those we use today? Who would do such an insane thing? No one (I hope), but my point is this: there is a risk that is carried in the vigorous winds of the rush to make science look, or act, perfect, that we dispose of the neonate in conjunction with the abstergent solution.


There is always another way. Science’s incremental, self-correcting process can be carried out quite effectively by publishing new papers that correct and improve on old ones, rather than dismantling the older papers themselves. I’m not arguing for getting rid of retractions and corrections. But, where simple corrections don’t suffice, and where there is no evidence of misconduct or other terrible aspects of humanity’s role in science, perhaps publishing a new paper is a better way than demolishing the old. Perhaps it should be the preferred or default approach. I hope that this is the direction that the Myhrvold kerfuffle leans more toward, because the issues at stake are so many, so academic in nature, and so complex (little black/white and right/wrong) that openly addressing them in substantial papers by many researchers seems the best way forward. That’s all I’ll say about that.

I still feel we did the right thing with our T. rex growth paper’s correction. There is plenty of scope for researchers to re-investigate the growth question in later papers.  But I can imagine situations in which we hastily tear down our or others’ hard work in order to show how serious we are about science’s great machine, brandishing lofty ideals with zeal– and leaving unfairly maligned scientists as casualties in our wake. I am reminded of outbursts over extreme implementations of security procedures at airports in the USA, which were labelled “security theatre” for their extreme cost, showiness and inconvenience, with negligible evidence of security improvements.

The last thing we want in science is an analogous monstrosity that we might call “scientific theatre.” We need corrective procedures for and by scientists, that serve both science and scientists best. Everyone needs to be a part of this, and we can all probably do better, but how we do it… that is an interesting adventure we are on. I am not wise enough to say how it should happen, beyond what I’ve written here. But…

A symptom of scientific theatre might be a tendency to rely on public shaming of scientists as punishment for their wrongs, or as encouragement for them to come clean. I know why it’s done. Maybe it’s the easy way out; point at someone, yell at them in a passionate tone backed up with those lofty ideals, and the mob mentality will back you up, and they will be duly shamed. You can probably think of good examples. If you’re on social media you probably see a lot of it. There are naughty scientists out there, much as there are naughty humans of any career, and their exploits make a good story for us to gawk at, and often after a good dose of shaming they seem to go away.

But Jon Ronson‘s ponderings of the phenomenon of public shaming got me thinking (e.g., from this WTF podcast episode; go to about 1 hr 9 min): does public shaming belong in science? As Ronson said, targets of severe public shaming have described it as “the worst pain ever”, and sometimes “there’s no recourse” for them. Is this the best way to live together in this world? Is it really worth it, for scientists to do to others or to risk having done to them? What actually are its costs? We all do it in our lives sometimes, but it deserves introspection. I think there are lessons from the dinosaur growth rates kerfuffle to be learned about public shaming, and this is emblematic of problems that science needs to work out for how it does its own policing. I think this is a very, very important issue for us all to consider, in the global-audience age of the internet as well as in context of the intense pressures on scientists today. I have no easy answers. I am as lost as anyone.

What do you think?

 

EDIT: I am reminded by comments below that 2 other blog posts helped inspire/coagulate my thoughts via the alchemy of my brain, so here they are:

http://dynamicecology.wordpress.com/2014/02/24/post-publication-review-signs-of-the-times/ Which considers the early days of the Myhrvold kerfuffle.

http://blogs.discovermagazine.com/neuroskeptic/2014/01/27/post-publication-cyber-bullying/ Which considers how professional and personal selves may get wounded in scientific exchanges.

Read Full Post »

This post was just published yesterday in a shorter, edited form in The Conversation UK, with the addition of some of my latest thoughts and the application of the editor’s keen scalpel. Check that out, but check this out too if you really like the topic and want the raw original version! I’ve changed some images, just for fun. The text here is about 2/3 longer.

Recently, the anatomy of animals has been coming up a lot, at least implicitly, in science news stories and internet blogs. Anatomy, if you look for it, is everywhere in organismal and evolutionary biology. The study of anatomy has undergone a renaissance lately, in a dynamic phase energized by new technologies that enable new discoveries and spark renewed interest. It is the zombie science, risen from what some had assumed was its eternal grave!

Stomach-Churning Rating: 4/10; there’s a dead elephant but no gore.

My own team has re-discovered how elephants have a false “sixth toe” that has been a mystery since it was first mentioned in 1710, and we’ve illuminated how that odd bit of bone evolved in the elephant lineage. This “sixth toe” is a modified sesamoid bone: a small, tendon-anchoring lever. Typical mammals just have a little nubbin of sesamoid bone around their ankles and wrists that is easily overlooked by anatomists, but that evolution sometimes co-opts as raw material to turn into false fingers or toes. In several groups of mammals, these sesamoids lost their role as a tendon’s lever and gained a new function, more like that of a finger, by becoming drastically enlarged and elongated during evolution. Giant pandas use similar structures to grasp bamboo, and moles use them to dig. We’ve shown that elephants evolved these giant toe-like structures as they became larger and more terrestrial, starting to stand up on tip-toe, supported by “high-heels” made of fat. Those fatty heels benefit from a stiff, toe-like structure that helps control and support them, while the fatty pads spread out elephants’ ponderous weight.

Crocodile lung anatomy and air flow, by Emma Schachner.

I’ve also helped colleagues at the University of Utah (Drs. Emma Schachner and Colleen Farmer) reveal, to much astonishment, that crocodiles have remarkably “bird-like” lungs in which air flows in a one-way loop rather than tidally back and forth as in mammalian lungs. They originally discovered this by questioning what the real anatomy of crocodile lungs was like- was it just a simple sac-like structure, perhaps more like the fractal pattern in mammalian lungs, and how did it work? This question bears directly on how birds evolved their remarkable system of lungs and air sacs that in many ways move air around more effectively than mammalian lungs do. Crocodile lungs indicate that “avian” hallmarks of lung form and function, including one-way air flow, were already present in the distant ancestors of dinosaurs; these traits were thus inherited by birds and crocodiles. Those same colleagues have gone on to show that this feature also exists in monitor lizards, raising the question (almost unthinkable 10-20 years ago) of whether those bird-like lungs are actually a very ancient and common feature for land animals.

Speaking of monitor lizards, anatomy has revealed how they (and some other lizards) all have venom glands that make their bites even nastier, and these organs probably were inherited by snakes. For decades, scientists had thought that some monitor lizards, especially the huge Komodo dragons, drooled bacteria-laden saliva that killed their victims with septic shock. Detailed anatomical and molecular investigations showed instead that modified salivary glands produced highly effective venom, and in many species of lizards, not just the big Komodos. So the victims of numerous toothy lizard species die not only from vicious wounds, but also from worsened bleeding and other circulatory problems promoted by the venomous saliva. And furthermore, this would mean that venom did not evolve separately in the two known venomous lizards (Gila monster and beaded lizard) and snakes, but was inherited from their common ancestor and became more enhanced in those more venomous species—an inference that general lizard anatomy supports, but which came as a big surprise when revealed by Bryan Fry and colleagues in 2005.

There’s so much more. Anatomy has recently uncovered how lunge-feeding whales have a special sense organ in their chin that helps them detect how expansive their gape is, aiding them to engulf vast amounts of food. Scientists have discovered tiny gears in the legs of leafhoppers that help them make astounding and precise leaps. Who knew that crocodilians have tiny sense organs in the outer skin of their jaws (and other parts of their bodies) that help them detect vibrations in the water, probably aiding in communication and feeding? Science knows, thanks to anatomy.

Just two decades or so ago, when I was starting my PhD studies at the University of California in Berkeley, there was talk about the death of anatomy as a research subject, both among scientists and the general public. What happened? Why did anatomy “die” and what has resuscitated it?

 

TH Huxley, anatomist extraordinaire, caricatured in a lecture about “bones and stones, and such-like things” (source)

Anatomy’s Legacy

In the 16th through 19th centuries, the field of gross anatomy, as applied to humans or other organisms, was one of the premier sciences. Doctor-anatomist Jean Francois Fernel, who invented the word “physiology”, wrote in 1542 that (in translation) “Anatomy is to physiology as geography is to history; it describes the theatre of events.” This theatric analogy justified the study of anatomy for many early scientists, some of whom also sought to understand it to bring them closer to understanding the nature of God. Anatomy gained further impetus from the realisation that organisms had a common evolutionary history, and thus their anatomy did too, even catapulting scientists like Thomas Henry Huxley (“Darwin’s bulldog”) into celebrity status. Thus comparative anatomy became a central focus of evolutionary biology.

But then something happened to anatomical research that can be hard to put a finger on. Gradually, anatomy became a field that was scoffed at as outmoded, irrelevant, or just “solved”, with nothing important left to discover. As a graduate student in the 1990s, I remember encountering this attitude. This apparent eclipse of anatomy accelerated with the ascent of genetics, with anatomy reaching its nadir in the 1950s-1970s as techniques to study molecular and cellular biology (especially DNA) flourished.

One could argue that molecular and cellular biology are anatomy to some degree, especially for single-celled organisms and viruses. Yet today anatomy at the whole organ, organism or lineage level revels in a renaissance that deserves inspection and reflection on its own terms.

 

Anatomy’s Rise

Surely, we now know the anatomy of humans and some other species quite well, but even with these species scientists continue to learn new things and rediscover old aspects of anatomy that lay forgotten in classic studies. For example, last year Belgian scientists re-discovered the anterolateral ligament of the human knee, overlooked since 1879. They described it, and its importance for how our knees function, in novel detail, and a lot of media attention was drawn to this realisation that there are some things we still don’t understand about our own bodies.

A huge part of this resurgence of anatomical science is technology, especially imaging techniques- we are no longer simply limited to the dissecting knife and light microscope as tools, but armed with digital technology such as 3-D computer graphics, computed tomography (series of x-rays) and other imaging modalities. Do you have a spare particle accelerator? Well then you can do amazing synchrotron imaging studies of micro-anatomy, even in fairly large specimens. Last year, my co-worker Stephanie Pierce and colleagues (including myself) used this synchrotron approach to substantially rewrite our understanding of how the backbone evolved in early land animals (tetrapods). We found that the four individual bones that made up the vertebrae of Devonian tetrapods (such as the iconic Ichthyostega) had been misunderstood by the previous 100+ years of anatomical research. Parts that were thought to lie at the front of the vertebra actually lay at the rear, and vice versa. We also discovered that, hidden inside the ribcage of one gorgeous specimen of Ichthyostega, there was the first evidence of a sternum, or breastbone; a structure that would have been important for supporting the chest of the first land vertebrates when they ventured out of water.

Recently, anatomists have become very excited by the realization that a standard tissue staining solution, “Lugol’s” or iodine-potassium iodide, can be used to reveal soft tissue details in CT scans. Prior to this recognition, CT scans were mainly used in anatomical research to study bone morphology, because the density contrast within calcified tissues, and between them and soft tissues, gives clearer images. To study soft tissue anatomy, you typically needed an MRI scanner, which is less commonly accessible, often slower and more expensive, and sometimes lower in resolution than a CT scanner. But now we can turn our CT scanners into soft tissue scanners by soaking our specimens in this contrast solution, allowing highly detailed studies of muscles and bones, completely intact and in 3D. Colleagues at Bristol just published a gorgeous study of the head of a common buzzard, sharing 3D pdf files of the gross anatomy of this raptorial bird and promoting a new way to study and illustrate anatomy via digital dissections- you can view their beautiful results here. Or below (by Stephan Lautenschlager et al.)!

Digital dissection of a buzzard head, by Stephan Lautenschlager et al.

These examples show how anatomy has been transformed as a field because we now can peer inside the bodies of organisms in unprecedented detail, sharing and preserving those data in high-resolution digital formats. We can do this without the concern that a unique new species from Brazilian rainforests or an exciting fossil discovery from the Cambrian period would be destroyed if we probed questions about its anatomy that are not answerable from the outside– a constraint within which science had often remained trapped for centuries. These tools became rapidly more diverse and accessible from the 1990s onward, so as a young scientist I got to see some of the “before” and “after” influences on anatomical research—these have been very exciting times!

When I started my PhD in 1995, it was an amazing luxury to get first a digital camera to take photographs for research, and then a small laser scanner for making 3D digital models of fossils, with intermittent access to a CT scanner in 2001 and full-time access to one since 2003. These stepwise improvements in technology have totally transformed the way I study anatomy. In the 1990s, you dissected a specimen and it was reduced to little scraps; at best you might have some decent two-dimensional photographs of the dissection and some beetle-cleaned bones as a museum specimen. Now, we CT or MRI scan specimens as routine practice, preserving many mega- or gigabytes of data on their internal and external, three-dimensional anatomy in lush detail, before a scalpel ever touches skin. Computational power, too, has grown to the point where incredibly detailed 3D digital models produced from imaging real specimens can be manipulated with ease, so science can better address what anatomy means for animal physiology, behaviour, biomechanics and evolution. We’re at the point now where anatomical research seems no longer impeded by technology– the kinds of questions we can ask are more limited by access to good anatomical data (such as rare specimens) than by the ways we acquire and use those data.

My experience mirrors my colleagues’. Larry Witmer at Ohio University in the USA, past president of the International Society for Vertebrate Morphologists, has gone from dissecting bird heads in the 1990s to becoming a master of digital head anatomy, having collected 3D digital scans of hundreds of specimens, fossil and otherwise. His team has used these data to great success, for example revealing how dinosaurs’ fleshy nostrils were located in the front of their snouts (not high up on the skull, as some anatomists had speculated based on external bony anatomy alone). They have also contributed new, gorgeous data on the 3D anatomy of living animals such as opossums, ostriches, iguanas and us, freely available on their “Visible Interactive Animal” anatomy website. Witmer comments on the changes in anatomical techniques and practice: “For extinct animals like dinosaurs, these approaches are finally putting the exploration of the evolution of function and behavior on a sound scientific footing.”

I write an anatomy-based blog called “What’s in John’s Freezer?” (haha, so meta!), in which I recount the studies of animal form and function that my research team and others conduct, often using valuable specimens stored in our lab’s many freezers. I started this blog almost two years ago because I noticed a keen interest in, even a hunger for, stories about anatomy amongst the general public, and yet few blogs were explicitly about anatomy for its own sake. This interest became very clear to me when I was a consultant for the BAFTA award-winning documentary series “Inside Nature’s Giants” in 2009, and I was noticing more documentaries and other programmes presenting anatomy in explicit detail that would have been considered too risky 10 years earlier. So not only is anatomy a vigorous, rigorous science today, but people want to hear about it. Just in recent weeks, the UK has had “Dissected” as two 1-hour documentaries and “Secrets of Bones” as six back-to-back 30-minute episodes, all very explicitly about anatomy, and on PRIME TIME television! And PBS in the USA has had “Your Inner Fish,” chock full of anatomy. I. Love. This.

Before the scalpel: the elephant from Inside Nature’s Giants

There are many ways to hear about anatomy on the internet these days, reinforcing the notion that it enjoys strong public engagement. Anatomical illustrators play a vital role now much as they did in the dawn of anatomical sciences– conveying anatomy clearly requires good artistic sensibilities, so it is foolish to undervalue these skills. The internet age has made disseminating such imagery routine and high-resolution, but we can all be better about giving due credit (and payment) to artists who create the images that make our work so much more accessible. Social media groups on the internet have sprung up to celebrate new discoveries- watch the Facebook or Twitter feeds of “I F@*%$ing Love Science” or “The Featured Creature,” to name but two popular venues, and you’ll see a lot of fascinating comparative animal anatomy there, even if the word “anatomy” isn’t necessarily used. I’d be remiss not to cite Emily Graslie’s popular, unflinchingly fun social media-based explorations of gooey animal anatomy in “The Brain Scoop”. I’d like to celebrate that these three highly successful disseminators of (at least partly) anatomical outreach are all run by women—anatomical science can (and should!) defy the hackneyed stereotype that only boys like messy stuff like dissections. There are many more such examples. Anatomy is for everyone! It is easy to relate to, because we all live in fleshy anatomical bodies that rouse our curiosity from an early age, and everywhere in nature there are surprising parallels with — as well as bizarre differences from — our anatomical body-plans.

 

Anatomy’s Relevance

What good is anatomical knowledge? A great example comes from gecko toes, but I could pick many others. Millions of fine filaments, modified toe scales called setae, use intermolecular forces called van der Waals interactions to help geckos cling to seemingly un-clingable surfaces like smooth glass. Gecko setae have been studied so thoroughly that we can now replicate their anatomy in sufficient detail to make revolutionary super-adhesives, such as the product “Geckskin”, 16 square inches of which can currently suspend 700 pounds aloft. This is perhaps the most famous example from recent applications of anatomy, but Robert Full’s Poly-Pedal laboratory at Berkeley, among many other research groups excelling at bio-inspired innovation in robotics and other fields of engineering and design, regularly spins off new ideas from the principle that “diversity enables discovery”, as applied to the sundry forms and functions found in organisms. By studying the humble cockroach, they have created new ways of building legged robots that can scour earthquake wreckage for survivors or explore faraway planets. By asking “how does a lizard use its big tail during leaping?” they have discovered principles that they then use to construct robots that can jump over or between obstacles. Much of this research relates to how anatomical traits determine the behaviours that a whole, living, dynamic organism is capable of performing.

Whereas when I was a graduate student, anatomists and molecular biologists butted heads more often than was healthy for either of them, competing for importance (and funding!), today the scene is changing. With the rise of “evo devo”, evolutionary developmental biology, and the ubiquity of genomic data as well as epigenetic perspectives, scientists want to explain “the phenotype”—what the genome helps to produce via seemingly endless developmental and genetic mechanisms. Phenotypes often are simply anatomy, and so anatomists now have new relevance, often collaborating with those skilled in molecular techniques or other methods such as computational biology. One example of a hot topic in this field is, “how do turtles build their shells and how did that shell evolve?” To resolve this still controversial issue, we need to know what a shell is made of, what features in fossils could have been precursors to a modern shell, how turtles are related to other living and extinct animals, how a living turtle makes its shell, and how the molecular signals involved are composed and used in animals that have or lack shells. The first three questions require a lot of anatomical data, and the others involve their fair share, too.

Questions like these draw scientists from disparate disciplines closer together, and thanks to that proximity we’re inching closer to an answer to this longstanding question in evolutionary biology and anatomy. As a consequence, the lines between anatomists and molecular/cellular biologists increasingly are becoming blurred, and that synthesis of people, techniques and perspectives seems to be a healthy (and inevitable?) trend for science. But there’s still a long way to go in finding a happy marriage between anatomists and the molecular/cellular biologists whose work eclipsed theirs in past decades. Old controversies like “should we use molecules or morphology to figure out how animals are related to each other?” are slowly dying out, as it becomes evident that the answer is “Yes. Both.” (especially when fossils can be included!). Such dwindling controversies contribute to the healing of disciplinary rifts and the unruffling of parochial feathers.

Yet many anatomists would point to lingering obstacles that give them concern for their future; funding is but one of them (few would argue that gross anatomical research is as well off in provision of funding as genetics is, for example). There are clear mismatches between the hefty importance, vitality, popularity and rigour of anatomical science and its perception or its role in academia.

Romane 1892, covering Haeckel’s classic, early evo-devo work (probably partly faked, but still hugely influential) (source)

 

Anatomy’s Future

One worrying trend is that anatomy as a scientific discipline is clearly flourishing in research while it dwindles in teaching. Fewer and fewer universities seem to be teaching the basics of comparative anatomy that were a mainstay of biology programmes a century ago. Yet anatomy is everywhere now in biology, and in the public eye. It inspires us with its beauty and wonder—when you marvel at the glory of beholding a newly discovered species, you are captivated by its phenotypic pulchritude. Anatomy is still the theatre in which function and physiology are enacted, and the physical encapsulation of the phenotype that evolution moulds through interactions with the environment. But there is cause for concern that biology students are not learning much about that theatre, or that medical schools increasingly seem to eschew hands-on anatomical dissection in favour of digital learning. Would you want a doctor to treat you if they mainly knew human anatomy from a CGI version on an LCD screen in medical school, and hence were less aware of all the complexity and variation that a real body can house?

Anatomy has an identity problem, too, stemming from decades of (Western?) cultural attitudes (e.g. the “dead science” meme) and from its own success—by being so integral to so many aspects of biology, anatomy seems to have integrated itself toward academic oblivion, feeding the perception of its own obsolescence.  I myself struggled with what label to apply to myself as an early career researcher- I was afraid that calling myself an “anatomist” would render me quaint or unambitious in the eyes of faculty job interview panels, and I know that many of my peers felt the same. I resolved that inner crisis years ago and came to love identifying myself at least partly as an anatomist. I settled on the label “evolutionary biomechanist” as the best term for my speciality. In order to reconstruct evolution or how animals work (biomechanics), we first often need to describe key aspects of anatomy, and we still discover new, awesome things about anatomy in the process. I still openly cheer on anatomy as a discipline because its importance is so fundamental to what I do, and I am far from alone in that attitude. Other colleagues that do anatomical research use other labels for themselves like “biomechanist”, “physiologist,” or “palaeontologist”, because those words better capture the wide range of research and teaching that they do, but I bet also because some of them likely still fear the perceived stigma of the word “anatomy” among judgemental scientists, or even the public. At the same time, many of us get hired at medical, veterinary or biology schools/departments because we can teach anatomy-based courses, so there is still hope.

Few would now agree with Honoré de Balzac’s 19th century opinion that “No man should marry until he has studied anatomy and dissected at least one woman”, but we should hearken back to what classical scientists knew well: it is to the benefit of science, humanity and the world to treasure the anatomy that is all around us. We inherit that treasure through teaching; to abdicate this duty is to abandon this trove. With millions of species around today and countless more in the past, there should always be a wealth of anatomy for everyone to learn from, teach about, and rejoice in.

X-ray technology has revolutionized anatomical studies; what’s next? Ponder that as this ostrich wing x-ray waves goodbye.

Like this post? You might also find my Slideshare talk on the popularity of anatomy interesting- see my old post here for info!

Read Full Post »

I am sure someone, in the vast literature on science communication out there, has written about this much better than I can, but I want to share my perspective on an issue I think about a lot: the tension between being a human, full of biases and faults and emotions, and doing science, which at its core seems inimical to these human attributes.

Stomach-Churning Rating: 1/10; nothing but banal meme pics ahead…

This is not a rant; it is an introspective discourse, and I hope that you join in at the end in the Comments with your own reflections. But it fits into my blog’s category of rant-like perambulations, which tend to share an ancestral trait of being about something broader than freezer-based anatomical research. As such, it is far from a well-thought-out product. It is very much a thought-in-progress; ideal for a blog post.

(Dr./Mr.) Spock of the Star Trek series is often portrayed as an enviably ideal scientific mind, especially for his Vulcan trait of being mostly logical– except for occasional outbreaks of humanity that serve as nice plot devices and character quirks. Yet I have to wonder, what kind of scientist would he really be, in modern terms? It wasn’t Spock-fanboying that got me to write this post (I am no Trekkie), but he does serve as a useful straw-man benchmark for some of my main points.

“Emotions are alien to me – I am a scientist.” (Spock, “The Paradise Syndrome”)

The first ingredient of the tension I refer to above is a core theme in science communication: revealing that scientists are human beings (gasp!) with all the same attributes as other people, and that these human traits may make the story more personable or (perhaps in the best stories) reveal something wonderful, or troubling, about how science works.

The second ingredient is simply the scientific process and its components, such as logic, objectivity, parsimony, repeatability, openness and working for the greater good of science and/or humankind.

There is a maxim in critical thinking that quite a few scientists hold: One’s beliefs (small “b”– i.e. that which we provisionally accept as reality) should be no stronger than the evidence that supports them. A corollary is that one should be swift, or at least able, to change one’s beliefs if the evidence shifts in favour of a better (e.g. more parsimonious/comprehensive) one.

It is a pretty damn good maxim, overall. But in observing scientists, or in imagining “what-if?” scenarios, you may find that some scientists’ reactions to their beliefs/opinions/ideas — especially regarding conclusions that their research has reached — can occasionally violate this principle. That violation would almost always be caused by some concoction of their human traits working against the maxim and its corollary.
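A brief aside for the quantitatively inclined: one standard way to make that maxim concrete is Bayesian updating, in which a “belief” is just a probability that moves with the evidence. Here is a minimal sketch in Python; the prior and the two likelihood values are invented purely for illustration, not drawn from anything above.

def update_belief(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior probability of a hypothesis after one piece of evidence (Bayes' rule)."""
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1.0 - prior) * p_evidence_if_false
    return numerator / denominator

belief = 0.9  # a scientist's strong prior confidence in their pet hypothesis

# Suppose each new contrary study is, crudely, twice as likely under the rival hypothesis.
for study in range(1, 6):
    belief = update_belief(belief, p_evidence_if_true=0.3, p_evidence_if_false=0.6)
    print(f"after contrary study {study}: belief = {belief:.2f}")

# One contrary study barely dents a strong prior (belief only drops to ~0.82), but by the
# fourth study it falls below 0.5: the corollary in action, i.e. time to change one's mind.

The exact numbers do not matter; the point is only that the strength of the belief stays chained, step by step, to the strength of the evidence.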


For example (and this is how I got thinking about this issue this week; I started writing the post on 5 December, then paused while awaiting further inspiration/getting normal work done/fucking around), what if Richard Dawkins was confronted with strong evidence that The Selfish Gene’s main precepts were wrong? This is a mere heuristic example, although I was thinking about it because David Dobbs wrote a piece that seemed to be claiming that the balance of scientific evidence was shifting against selfish genes (and he later shifted/clarified his views as part of a very interesting and often confusing discussion, especially with Jerry Coyne– here). It doesn’t matter if it’s Dawkins (or Dobbs) or some other famous scientist and their best or most famous idea. But would they quickly follow the aforementioned maxim and shift their beliefs, discarding all their prior hard work and acclaim? (a later, palaeontological, event in December caused me to reflect on a possibly better example, but it’s so controversial, messy and drenched in human-ness that I won’t discuss it here… sorry. If you really want a palaeo-example, insert Alan Feduccia and “birds aren’t dinosaurs” here as an old one.)

I’d say they’d be reluctant to quickly discard their prior work, and so might I, and to a degree that’s a good, proper thing. A second maxim comes into play here, but it is a tricky one: “Extraordinary claims require extraordinary evidence.” For a big scientific idea to be discarded, one would want extraordinary scientific evidence to the contrary. And additionally, one might not want to quickly shift their views to accommodate that new evidence, as a hasty rush to a new paradigm/hypothesis could be very risky if the “extraordinary” evidence later turned out itself to be bunk, or just misinterpreted. Here, basic scientific practice might hold up well.


But, but… that “extraordinary evidence” could be very hard to interpret– this is the tricky bit. What is “extraordinary?” Often in science, evidence isn’t as stark and crisp as p<0.05 (a statistical threshold of significance). Much evidence requires a judgement call– a human judgement call — at some step in its scrutiny, often as a provisional crutch pending more evidence. Therein lies a predicament for any scientist changing any views they cherish. How good are the methods used to accumulate contrary evidence? Does that evidence and its favoured conclusion pass the “straight-face test” of plausibility?

All this weighing of diverse evidence can lead to subjectivity… but that’s not such a bad thing perhaps. It’s a very human thing. And it weighs heavily in how we perceive the strength of scientific methods and evidence. Much as we strive as scientists to minimize subjectivity, it is there in many areas of scientific inquiry, because we are there doing the science, and because subjectivity can be a practical tool. Sometimes subjectivity is needed to move on past a quagmire of complex science. For example, in my own work, reconstructing the soft tissue anatomy of extinct dinosaurs and other critters is needed, despite some varying degrees of subjectivity, to test hypotheses about their behaviour or physiology. I’ve written at length about that subjectivity in my own research and it’s something I think about constantly. It bugs me, but it is there to stay for some time.

One might look at this kind of situation and say “Aha! The problem is humans! We’re too subjective and illogical and other things that spit in the face of science! What we need is a Dr. Spock. Or better yet, turn the science over to computers or robots. Let amoral, strictly logical machines do our science for us.” And to a degree, that is true; computers help enormously and it is often good to use them as research tools. Evolutionary biology has profited enormously from turning over the critical task of making phylogenetic trees largely to computers (after the very human and often subjective task of character analysis to codify the data put into a computer– but I’d best not go off on this precipitous tangent now, much as I find it interesting!). This has shrugged off (some of) the chains of the too-subjective, too-authority-driven Linnaean/evolutionary taxonomy.

But I opine that Spock would be a miserable scientist, and much as it is inevitable that computers and robots will increasingly come to dominate key procedures in science, it is vital that humans remain in the driver’s seat. Yes, stupid, biased, selfish, egocentric, socially awkward, meatbag humans. Gotta love ’em. But we love science partly because we love our fellow meatbags, and we love the passion that a good scientist shares with a good appreciator of science– this is the lifeblood of science communication itself. Science is one of the loftier things that humans do– it jostles our deeper emotions of awe and wonder, fear and anxiety. Without human scientists doing science, making human mistakes that make fantastic stories about science and humanity, and without those scientists promoting science as a fundamentally human endeavour, much of that joy and wonder would be leached out of science along with the uncomfortable bits.


Spock represents the boring -but necessary- face of science. Sure, Spock as a half-human could still have watered-down, plot-convenient levels of the same emotions that fuel human scientists, and he had to have them to be an enjoyable character (as did his later analogue, Data; to me, emotion chip or not, Data still had some emotions).

But I wouldn’t want to have Spock running my academic department, chairing a funding body, or working in my lab.

Spock might be a good lab technician (or not), but could he lead a research team, inspiring and mentoring them to new heights of achievement? Science is great because we humans get to do it. We get to discover stuff that makes us feel like superheroes, and we get to share the joy of those discoveries with others, to celebrate another achievement of humanity in comprehending the universe.

And science is great because it involves this tension between the recklessly irrational human side of our nature and our capacity to be ruthlessly logical. I hear a lot of scientists complaining about aspects of being a scientist that are really more about being human. Yes, academic job hiring, and departmental politics, and grant funding councils, and the peer review/publishing system, and early career development, and so many other (all?) aspects of being a scientist have fundamental flaws that can make them very aggravating and leave people despondent (or worse). And there are ways that we can address these flaws and make the system work better. We need to discuss those ways; we need to subject science itself to peer review.

But science, like any human endeavour, might never be fair. As long as humans do science, science will be full of imbalance and error. I am not trying to excuse our naughty species for those faults! We need to remain vigilant for them both in ourselves and in others! However, I embrace them, like I might an embarrassingly inept relative, as part of a greater whole; a sloppy symptom of our meatbaggy excellence. To rid ourselves of the bad elements of human-driven science, to some degree, would require us to hand over science to some other agency. In the process, we’d be robbing ourselves of a big, steamy, stinky, glorious, effervescent, staggeringly beautiful chunk of our humanity.

Spock isn’t coming to take over science anytime soon, and I celebrate that. To err is human, and to do science is to err, from time to time. But science, messy self-correcting process that it is, will untangle that thicket of biases and cockups over time. If we inspect it closely it will always be full of things we don’t like, and weeding those undesirables out is the job of every scientist of any stripe. Self-reflection and doubt are important weed-plucking tools in our arsenal for this task, because every scientist should keep their own garden tidy while they scrutinize others’. This is a task that I, as a scientist, try to take seriously but I admit my own failures (e.g. being an overly harsh, competitive, demanding reviewer in my younger years… I have mellowed).


So here’s to human-driven science. Live long and publish!

Up next: FREEZERMAS!!! A week-long extravaganza of all that this blog is really about, centred around Darwin’s birthday. Starts Sunday!

Read Full Post »

Yesterday I encountered the question that, as a scientist who has studied a certain chunky Cretaceous carnivore a lot, most deflates me and makes me want to go study cancer therapeutic methods or energy sources that are alternatives to fossil fuels (but I’d be useless at either). I will explain why this is at the end of the post.

The question stems from a new discovery, reported in Proceedings of the National Academy of Sciences (PNAS) and thus expected to be one of the more important or exciting studies this year (no, I’m not going to get into the issue here of whether these “high impact” journals include the best scientific research or the most superficial or hyped “tabloid” science; they publish both, and not in mutual exclusivity). It’s a broken Tyrannosaurus rex tooth embedded in a duckbill dinosaur’s tail bone, which healed after the injury, showing that the animal survived the attack.

If you’re with me so far, you might be making the logical leap that this fossil find is then linked to the hotbed of furious controversy that still leaves palaeontology in crisis almost 100 years after Lambe suggested it for the tyrannosaur Gorgosaurus. If the hadrosaur survived an attack from a T. rex, then T. rex was a habitual predator and OMG JACK HORNER AND OTHERS BEFORE HIM WERE WRONG!

And you’d be right.

My encounter with the question stemmed from an email from a science journalist (Matt Kaplan) that, as is normal practice, shared a copy of the unpublished paper and asked for comments from me to potentially use in an article he was writing for the science journal Nature’s news site. Here, then, was my off-the-cuff response:


“Ooh. I do have a pretty strong opinion on this. Not sure if you’d want to use it but here goes. I may regret it, but this hits my hot buttons for One of the Worst Questions in All of Palaeobiology!

The T. rex “predator vs. scavenger” so-called controversy has sadly distracted the public from vastly more important, real controversies in palaeontology since it was most strongly voiced by Dr Jack Horner in the 1990s. I find this very unfortunate. It is not like scientists sit around scratching their heads in befuddlement over the question, or debate it endlessly in scientific meetings. Virtually any palaeontologist who knows about the biology of extant meat-eaters and the fossil evidence of Late Cretaceous dinosaurs accepts that T. rex was both a predator and scavenger; it was a carnivore like virtually any other kind that has ever been known to exist.

While the discovery is nice evidence, it is not particularly exciting in a scientific sense and is only one isolated element from species that lived for hundreds of thousands of years, which to me changes nothing and allows no generalizations about the biology of any species, only the statement that at one point in time a Tyrannosaurus bit a hadrosaur that survived the encounter. There is no real substance to the controversy that T. rex was “either” a predator or scavenger. It is just something that scientists drum up now and then to get media attention. I hope that soon we can move on to more pressing questions about the biology of extinct animals, but the media needs to recognize that this is just hype and they are being played in a rather foolish way; likewise scientists that still feel this is an exciting question need to move on. Maybe this specimen will allow that. But somehow my cynical side leads me to suspect that this “controversy” will just persist because people want it to, regardless of logic or evidence. (bold font added; see below)

Great galloping lizards, I am so tired of this nonsense. Maybe there is educational value in showing how science deals with provocative half-baked ideas about celebrity species, but scientists in the community need to speak up and say what the real science is about. It’s not about this “controversy”. Modern palaeontology is so much better than this.

Sorry for the rant. Maybe it’s too extreme but I’m just fed up with this non-issue! I suspect a huge proportion of our field feels similarly, however.”


(I later redacted a bit of it where I got a little too excited and used the word “curmudgeon”; a mistake, as that could be seen as ad hominem rather than a term of endearment, and this issue is about the science and not the people, per se. That bit is redacted here, too. I’ve also redacted a sentence in which I offered an opinion on whether the paper should have been published in PNAS; that is mostly irrelevant here. I was not a reviewer, and authors/reviewers/editors have to make that decision. This would be a massive tangent away from what this blog post is intended to be about! I know some of the authors and don’t want to offend them, but this is about the science and how it is represented to the world, not about these particular authors or even this paper itself.)

Importantly, Kaplan’s story did include my skeptical quote at the end. I am curious to see how many other news stories covering this paper go that far.


Would a T. rex prey on, or just scavenge — or have a great time racing — a giant chicken? (art by Luis Rey)

I will stop right here and acknowledge that I’ve published a lot on a somewhat related topic: how fast a T. rex could run or if it could run at all. To me, that’s a great scientific question that has consequences not only for the predator/scavenger false dichotomy, but also for general theories of locomotor biomechanics (can an animal the size of a large elephant run as well as or better than said elephant? What are the thresholds of size and maximal running/jumping/other athletic abilities and how do they vary in different evolutionary lineages? And so on.). I’ll defend the validity of that question to the bitter end, even if it’s a question I’ve grown a little (but only a little) tired of and generally feel is about as well settled as these things can be in palaeontology (see my review here). I’ll also defend that it has been a real controversy (I have plenty of old emails, formal rebuttals submitted by colleagues, and other discourse as evidence of this) since I tackled it starting in 2002 and sort of finishing by 2011. I am sensitive about the issue of hyping my research up– this is something I’ve been careful about. I set a reasonable bar of how much is too much, check myself continuously with reflective thought, and I do not feel I have ever really crossed that bar, away from science-promotion into darker realms. This is partly why I’ve stopped addressing this issue in my current work. I feel like the science we’ve done on this is enough for now, and to keep beating the same drum would be excessive, unless we discovered a surprising new way to address the questions better, or a very different and more compelling answer to them.

“T. rex: scavenger or predator?” was controversial back in 1994 when Horner published “The Complete T. rex”, where he laid out his arguments. Brian Switek covered this quite well in his post on it, so I will not review that history. There was a big Museum of the Rockies exhibit about it that toured the USA, and other media attention surrounding it, so Horner’s name became attached to the idea as a result. Others such as Lambe and Colinvaux had addressed it before, but their ideas never seemed to gain as much currency as Horner’s did. But this post is not about that.

What this post is about is a consideration of why this is still an issue that the media report on (and scientists publish on; the two are synergistic of course), when most scientists aware of past debates are in good agreement that a T. rex was like most other carnivores and was opportunistic as a switch-hitting scavenger-predator, not a remarkably stupid animal that would turn down a proper meal, whether dead or alive. Indeed, the Nature news piece has a juicy quote from Horner that implies (although I do not know if it was edited or if important context is missing) that he has been in favour of the opportunistic predator-scavenger conclusion for some time. Thus, as Switek’s article notes, even the strongest advocates of the obligate scavenger hypothesis(?) have changed their minds; indeed, that 2011 blog post intimates that this had already happened at least 2 years ago.

For many years, nothing has been published in the main peer-reviewed literature that favours that extreme “obligate scavenger” hypothesis. If I am wrong and there is a scientific debate, where are the recent papers (say within the past 5 years) that are strong, respectable arguments in favour of it? I contend that it is a dead issue. And if it is just about the middle ground (i.e. what percentage of its time a T. rex spent hunting vs. scavenging), we have no clue and may never know, and it’s not a very interesting question.

But who then is feeding off of this moribund equine; this defunct tyranno-parrot?

In thinking about my reply to the journalist over the past 2 days, I am reminded again of my general feeling that this is no longer a question of scientific evidence; the important bit in bold font above. Maybe we just like this “hypothesis” or the “controversy”, or maybe we’re lazy and don’t want to have to hunt for real debates in science.

But who are “the people?” I do not feel that The Public should be blamed; they are the people that The Scientists and The Media ostensibly are seeking to inform about what the state of modern knowledge and uncertainty is in science. So when I get asked about the controversy after a public lecture, I always try to go into detail about it. I don’t sigh and say “go Google it”. Nor do I do this to a journalist. Indeed, I’ve generally headed this issue off at the pass and added a blurb to press releases/webpages explaining my T. rex research to explain how it relates to the non-controversy; example here.

I have to begin turning my finger of accusation away from scientists and toward some of the media, because they must play a huge role in the shenanigans. Yes, scientists should know better than to play this up as a valid, heated, modern controversy. That is true. Yet I have a feeling that the balance of blame should also fall heavily on the side of media (general and science news) that continue to report on this issue uncritically as a real controversy. Thus the general public thinks it still is, and scientists/journals keep issuing papers/press releases implying that it is, leading to more reporting on this “controversy”, and the beast refuses to die. Switek’s article is a good counter-example of balanced coverage with clear application of critical thinking.

This is only trivially different from other non-controversies in palaeontology, such as whether birds evolved from a subgroup of theropod dinosaurs and hence are dinosaurs by virtue of descent (consensus = yes). So it is reflective of a broader problem of not calling a spade a spade.

And it’s embarrassing, to a scientist, as my quote above expressed, to see dead controversies trotted out again and again, feeding the public perception that they are not dead.

That’s what leaves me frustrated. When do the shenanigans end?

I am reminded of a quote from a Seinfeld episode:

“Breaking up is like knocking over a Coke machine. You can’t do it in one push. You gotta rock it back and forth a few times, and then it goes over.”– Jerry, from the episode “The Voice”.

But this predator/scavenger relationship-from-hell leaves me, as a specialist working in this general area, feeling like I am trapped under that Coke machine. Help!

That’s why I started off this long post talking about feeling deflated, or disappointed, when asked this question. I do feel that way. I have to admit, I sometimes even feel that way when a sweet young kid asks me that question. Deep inside, I wish they wondered about something else. I wish that science had reached them with a deeper, more contemporary question. But when a journalist asks me how I feel about a new paper that revisits the “controversy”, I feel embarrassed for palaeontology. Can’t we get past this? It makes us look so petty, mired in trivial questions for decades. But we’re not like that. This is a dynamic, exciting, modern field, but every news story about non-issues in palaeontology just perpetuates bad elements of palaeontology’s image.

To the scientists— why don’t we put our foot down more and say enough is enough, this is a dead issue? We have a role not only in peer review, but also in communicating our views about published work to the media when asked (AND when not asked, as in this blog post). But if you call them on it, do they listen? Which brings me to…

To the media (science/general journalists etc; I know this is a huge category and please don’t think I am blaming 100% of journalists or assuming they are all the same; they are not!)– if scientists tell you that a “controversy” is not such, at what point do you accept their judgement and kill the story, or at least use that quote? Does that ever happen? In what way are you at the mercy of senior editors/others in such issues? What power do you have? Is a shift in the balance of editorial power needed, or even achievable, in your case or in good exemplar cases? I’d really like to hear your experiences/thoughts. I am sure there is a lot I am not understanding, and I know many journalists are in a tough situation.

To the public— You’re often being misinformed; you are the losers in this issue. How do you feel about all of it? (While this post focuses on a very tiny issue, the T. rex scavenger/predator unending drama, it is also about a broader issue of how the media perpetuates controversies in science after they have already gone extinct.)

What did this post have to do with freezers? Nothing. I’m just (H)ornery. Although I was once filmed for a planned Discovery Channel film about scientists who find a frozen tyrannosaur in polar regions and have to decide what to do with it before it slips into a chasm and is lost forever. Probably better that this never aired; it was cancelled. Segue to this post.


The Berkeley cast of the Wankel (MOR555) specimen of T. rex. Will we ever see the end of the predator/scavenger non-issue?

Read Full Post »

This is a rant, but stick with me and this rant might have a silver lining toward the end, or at least a voice of reason within the roiling cloud of bitter blog-scowling. And there are pictures of cats.


My little tiger, Karmella.

Like probably almost anyone in the 21st century that does research in a field of biology, I grew up watching nature documentaries on TV, and that influenced me to become a scientist. Doubtless it remains a powerful influence on other people, despite the massive de-science-ification of certain cable channels ostensibly, or at least potentially, dedicated to communicating science and nature (Animal Planet and History Channel, we’re looking at you).

But now I’ve seen behind the curtain. There’s still magic to behold there (e.g. working with early episodes of Inside Nature’s Giants), to be sure. However, some of my experiences have led me to become increasingly discontented with the relationship between TV documentaries and scientists.


Black leopard with glowing motion capture markers and eyes; eerie image from our past studies.

Here’s a common flow of events, and how they sometimes veer into frustration or worse:

Once a month or so, especially concentrated around this time (May-June-ish), I get a call or email from a documentary producer or researcher who is fishing for expert advice as they build a proposal for a documentary. I’m always very happy to talk with them and direct them to the best researchers to speak to, or papers to read, or to aspects of my own work that fit in with their idea for a documentary. Sometimes their idea is a bad one and I’m not afraid to tell them that and try to steer them toward a better idea; on occasion that seems to work, but more often they have their plan already and are reluctant to deviate from it.

About 3/4 of the time, I either never again hear from these nascent documentaries or else hear back maybe one more time (even to meet for coffee or give them a tour of our campus)– presumably, the proposal fails at that stage as it doesn’t excite executives. I’ve easily grown to accept this status quo after some initial disappointments. Much like in science, some ideas just don’t pass muster with “peer review”, and documentary makers are operating under more of a market economy than science tends to be. Sifting is inevitable, and the time I spend helping people at this stage is quite minimal, plus it’s fun to see the sausage being made in its earliest stages. All fair so far…?


Alexis and technician setting up gear for one of our past studies of how cats move.

The frustration naturally ramps up the more one invests in helping documentaries through their gestation period. I’m sure it’s very frustrating and stressful for TV makers, too, to spend days or months on a project and then have the rug pulled out from under them by those on high. Hopefully they are getting paid for their time; all I can speak to is my own experience, which is that all this early input I regularly provide is pro bono.

I used to mention that my time is not cheap, and I had a policy (after a few disappointments and lost time) that I should get paid around £100/hour for my time, even at the early consulting stage. That fee went straight into my research funds to help send grad students to conferences or buy small consumables; it was definitely worth my effort and felt very fair. Since the 2008 economic downturn, I’ve rapidly abandoned that policy, because it seems clear to me that documentary makers of late tend to be working on more austere budgets. I’m sympathetic to that, and the payoff for a documentary that gets made with my input is often quite substantial in terms of personal satisfaction, PR/science communication, happy university/grant funders, etc. On rare occasions, I still do get paid for my time (albeit essentially never by the BBC); Inside Nature’s Giants was generous in that regard, for example.


How the leopard got glowy spots: motion capture markers from our past studies.

But at some point a line needs to be drawn, where the helpful relationship between scientists and documentary makers veers from mutualism into parasitism, or just careless disregard. I’ve been featured in roughly eight different TV documentaries since 2004, but there were almost as many (six or so) other documentary spots that went beyond the proposal stage into actual filming (easily 8+ hours of time) and never aired; either being cancelled entirely or having my scenes cut. All too frequently, I don’t hear about this cutting/cancellation until very late and after my inquiries like “Any news about the air date for your programme?”

Several times I’ve heard nothing at all from a documentary after filming, only to watch the programme and reach the end credits to find no sign of me or my team’s research (in one embarrassing case that really soured my attitude, the RVC had announced to the whole college that they should watch the show to see me in action, and upon watching we found out I was cut. Ouch!). At that point I really do wonder, is it all worth it? Hours or days invested in calls, emails, paperwork, travel, arranging and replicating an experiment, repeating filmed scenes and lines, working to TV producers’ scripts and demanding timetables. All that is totally worth it if the show gets made. But if the odds are ~60/40 or so of airing versus being cut, I think I have cause to do more than shrug. The people I’ve worked with on documentaries can be wonderfully kind and full of thanks and other approbations, and they often impress me with their enthusiasm for the programme and their very hard, tenacious work making it all happen. It is jarring, then, to find out “Oh, you’ve been cut from the show, I’m so very sorry, the executives made that decision and it was a bitter pill for us to swallow, believe me– take care and I hope we can work together again.”

Above: Performance art illustrating what it’s like to have your science filmed for a documentary, then cut; graciously acted out by a cat (R.I.P.).

My aggravation has resurfaced after filming with BBC Horizon’s new documentary on “The Secret Life of the Cat,” airing right now. Alan Wilson’s team, from our lab, is featured prominently there, so that is fantastic for the Structure & Motion Lab (also check out his purrfectly timed Nature paper on cheetah agility vs speed, also from this week!). It’s hopefully going to be a nifty show; I’ve seen some of the behind-the-scenes stuff develop. (EDIT: I’ve seen it now and it was pretty good in terms of imagery and showing off Alan’s team’s technology, but the science was pretty weakly portrayed– even laypeople I’ve spoken to said “Cats avoid each other… duh!” and the evolutionary storytelling didn’t convince me as much as I’d like; it came across as arm-waving, which is a shame if the two featured cat researchers actually have built a scientifically reasonable case for it. One could not tell if the “changes” in 1 village’s cats evidenced by 1 week’s observation were happening within a cat’s lifetime or were truly evolutionary and recent. I don’t think I’ll watch the 2nd segment.)

I was filmed for a segment which probably would have been in the 2nd part of the show airing on Friday night, but I found out last week that it got cut with a week left before airing. I will be watching the show anyway, of course. I’m not that bitter. The segment featuring my team’s research was about how cats of different sizes do not do what other land mammals do, which is to straighten their legs as size increases across evolutionary spans. This helps support their body weight more effectively, but I explained in the filming segment that in cats, the lack of a change of posture with size may have other benefits despite the cost in weight support: it can make them more stealthy, more agile/maneuverable (segue to the cheetah paper cited above!), or even better able to negotiate rough terrain. Hence a domestic cat is in a biomechanical sense in many ways much more like a tiger than it should be for a “typical mammal”– an athlete, specialized for the hunt. And smaller cats are relatively much more athletic than bigger ones because they don’t suffer from the reduced ability to support body weight that bigger cats do. This may be, for example, why cheetahs are not very large compared with tigers or lions; they are at a “happy medium” size for agility and speed. But this all got cut, I am told.
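For readers who like the numbers behind that trade-off: the textbook way to express it is effective mechanical advantage (EMA), the ratio of a muscle’s moment arm to the ground reaction force’s moment arm about a limb joint; straighter limbs mean higher EMA and less muscle force needed to support each newton of body weight. The little Python sketch below is only illustrative: the masses and EMA values are invented for the example, not results from our study.

def required_muscle_force(body_mass_kg, ema, limb_share=0.4, g=9.81):
    """Rough muscle force needed at a limb joint to resist the ground reaction force (GRF).
    ema        : effective mechanical advantage; higher = straighter, more upright limb
    limb_share : crude fraction of body weight borne by that limb at mid-stance
    """
    grf = body_mass_kg * g * limb_share   # ground reaction force on that limb, in newtons
    return grf / ema                      # muscle force = GRF * (GRF moment arm / muscle moment arm)

# Hypothetical housecat vs. tiger, with invented EMA values.
crouched_cat   = required_muscle_force(4.0,   ema=0.4)  # small cat, crouched limbs
crouched_tiger = required_muscle_force(200.0, ema=0.4)  # big cat keeping the same crouched posture
upright_tiger  = required_muscle_force(200.0, ema=1.0)  # the "typical mammal" fix: straighten the limbs

print(f"crouched housecat: {crouched_cat:7.0f} N of muscle force")
print(f"crouched tiger:    {crouched_tiger:7.0f} N")
print(f"upright tiger:     {upright_tiger:7.0f} N")

# Keeping a cat-like crouch at tiger size demands about 2.5x the muscle force of an upright
# posture at the same body mass: that is the "cost in weight support" of staying built for
# stealth and agility rather than for economical weight-bearing.

Again, those numbers are invented; the real comparison rests on measured postures, forces and moments of the sort described in the next paragraph.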


Random cat that sidled up to us during some research into cat movements; so meta!

For my would-be part in the show, we recreated experiments that I did with then-postdoc Alexis Wiktorowicz Conroy and others (a paper yet to be published, but hopefully coming very soon) that showed how cats large and small use very similar mechanisms in terms of postures as well as forces and moments (rotational forces). In these recreations, I got an RVC clinician to bring her cat Rocket (?IIRC) Ricochet over to be filmed walking over forceplates with high-speed video recording it. The cat didn’t do much for us (it probably found our huge lab a bit overwhelming), but it did give us at least one good video and force trace for the programme. Next we did the same thing with two tigers at Colchester Zoo, and got some excellent footage, including a tiger launching itself out of its indoor enclosure to come outside, while rapidly making a turn past the camera. The latter tiger “ate” (well, ripped to shreds, literally) the rubber mat that covered my pressure pad, too, which was mostly funny — and the film crew has reimbursed me for that as well as for the drive to/from the zoo. The filming experience was good and the people were nice, but the end result was a bummer.


Advantage of visiting Colchester Zoo for research:  going behind the scenes and meeting a baby aardvark (that’s not a cat).

My segment, as far as I could tell, had cool footage and added a nice extra (if intellectual) context to the “secret life of cats” theme, so it’s a shame that it got cut. I heard that famed Toxoplasma-and-cat-behaviour researcher Prof. Joanne Webster‘s segment also got cut, so at least I’m in good company. I don’t have those cool videos of slo-mo cats and tigers with me now but will put them up early next week on my Youtube channel; stay tuned. They won’t ever show up on a documentary anyway; typically when footage gets cut it just vanishes into TV-land’s bowels.

So I’m not happy. Not at all. Bitter? Yeah, a bit. Spoiled brat scientist? I’d say that would be an overly cynical perspective on it. I do recognize that I am lucky that the research I do has a strong public appeal sometimes; many scientists will never be in a documentary or get much PR of any kind. But I think anyone has a right to examine their situation in life and ask, applying basic logic, whether it is fair treatment under the circumstances. Hence I have become disillusioned and angry about the relationship of documentary makers and scientists. Not just me, but us scientists in general. We’re unpaid actors with major expertise, playing sizeable roles. We give documentaries some sci-cred, too, simply by appearing onscreen with “Professor Snugglebunny from Smoochbridge University” in the caption. Supposedly, and often truly, we get good PR for it, when our segments don’t get cut or are not edited to obliterate the context or due credit. But it’s those latter instances that raise the question of fairness. If the segment gets cut, we simply have wasted our time. And to a busy scientist like me, that is like being jabbed with a hot poker.


Serenity now!

[Aside: I’m waiting to hear what has happened to another documentary I was filmed for, and again spent ~2 days on, Channel 5’s “Nature Shock: Giraffe Feast” which should be airing soon… no word yet if I’ve made the final cut but the show’s airing has been delayed; hopefully not a bad sign. I am crossing my fingers… it seemed like a great show with a cool idea, and my segment raised some fun anatomical and biomechanical issues about giraffes.]

I know I’m not alone. I’m going to end my rant and see what feedback it draws.

But don’t get me wrong— it’s not all sour grapes, not by any means. I’ve still had eight-ish pretty good TV documentary experiences (cough, Dino Gangs, cough!). Some of those have been great; indeed, Inside Nature’s Giants was one of the best experiences of my career to date. And I’m sure many other scientists have had positive experiences. In answer to my provocative “Why bother?” in the headline, there are plenty of good reasons to bother working with documentaries if you are a scientist whose research they want to feature… but only if you have some assurance that it will be worth your while, perhaps? How much of a gamble should we be bothering with? That brings me to my main point, a general query–

But what about the bad? And is it all worth it, in your views, given the risks of wasting time? Do we deserve some scientists’ bill-of-media-rights or something; a documentary-actor-scientists’ guild (90% joking here)? What should our rights be and should we push harder for them? Or do we just sit back and take the good with the bad, biting our lips? (I’m obviously not the type…)

I’d like to hear from not only the seasoned veterans who’ve experienced various ups and downs, but also from anyone that has views, anecdotes; whatever. I’m not aware of anyone collecting horror stories of documentary mishaps and mistreatments experienced by scientists, but that could start here. Please do share; even if you just got a call wondering if you’d want to help a documentary and then never heard back. Who knows where it would lead, but I think it’s helpful to bring these issues to the fore and discuss them openly.

Read Full Post »
