
Archive for the ‘RANT’ Category

This is a follow-up post to my earlier one and also weaves into my post on “success” (with a little overlap). I am sharing my thoughts on this topic of research management, because I try to always keep myself learning about doing and managing research, and this blog serves as a set of notes as I learn; so why not share them too? I tried editing the old post but it clearly was too much to add so I started a new post. It’s easy to just coast along and not reflect on what one is doing, caught up in the steady stream of science that needs to get done. Mistakes and mis-judgements can snowball if one doesn’t reflect. So here are my personal reflections, freshly thawed for your consideration, on how I approach doing research and growing older as I do it, adapting to life’s changes along the way.

Stomach-Churning Rating: 0/10, just words and ideas.

I realized that a theme in these rant-y posts on my blog is to Know Yourself, and, in the case of mentoring a team, Know Your Team. That knowledge is a reward from the struggles and challenges of seeking whatever one calls success. I critique some traits or practices here that I’ve seen in myself (and/or others), and perhaps managed to change. And I seek to change my environment by building a strong team (which I feel I have right now!) and by finding the best ways to work with them (which I am always learning about!). I also realized a word to describe a large part of what I seek and that is joy. The joy of discovery in the study of nature; the joy from the satisfaction of a job well done; the joy of seeing team members succeed in their careers and broader lives. I want to know that multifarious joy; the ripening of fulfilment.

We’re all busy in one way or another. Talking about being busy can just come across as (very) boring or self-absorbed or insecure. Talk about what you’re doing instead of how much you’re juggling. That’s more interesting. Avoid the Cult of Busy. I try to. It’s an easy complaint to default to in a conversation, so it takes some alertness… which keeps you busy. 🙂  I remember Undergrad-Me sighing wistfully to my advisor Dianna Padilla “I’m SO busy!” and her looking at me like I was an idiot. In that moment I realized that I was far from the only (or most) busy person in that conversation. Whether or not she was truly thinking that I was naïve, my imaginary version of her reaction is right. It was a foolish, presumptuously arrogant thing for me to declare. There surely are more interesting things to talk about than implied comparisons of the magnitudes of each other’s busy-ness. And so I move on…

Don’t count hours spent on work. That just leads to guilt over too much/too little time spent vs. how much was accomplished. Count successes. A paper/grant submitted is indeed a success, and acceptance/funding of it is another. A handy rule in science is that everything takes so much more time than you think it will that even trying to predict how long it will take is often foolish, and maybe even time better spent on something that progresses your work/life further.

Becoming older can slow you down and make you risk-averse, so you have to actively fight these tendencies. Ageing as a researcher needn’t always mandate becoming slower or less adventurous. But life will change, inevitably. One has to become more efficient at handling its demands as life goes on, and force oneself to try new things for the sake of the novelty, to think outside the box and avoid slipping into dogma or routine. We don’t want to be that stereotype of the doddering old professor, set in their ways, who stands in the way of change. The Old Guard is the villain of history. Lately I’ve been examining my own biases and challenging them, potentially re-defining myself as a scientist. I hope to report back on that topic.

The tone of life can darken as one becomes a senior researcher and “grows up”, accumulating grim experiences of reality. Some of my stories on this blog have illustrated that. In an attempt to distract me from that gloaming on the horizon, I try to do things at work that keep it FUN for me. This quest for fun applies well to my interactions with people, which dominate my work so much– I am seemingly always in meetings, less often in isolation at my desk. The nicer those meetings are, the happier I am. So I try to minimize exposure to people or interactions that are unpleasant, saving my energy for the battles that really matter. This can come across as dismissive or curt but in the end one has little choice sometimes. These days, nothing to me is more negatively emotive than sitting in an unproductive meeting and feeling my life slipping away as the clock ticks. I cherish my time. I don’t give it away wantonly to time-vampires and joy-vandals. They get kicked to the kerb– no room (or time) for them on this science-train. Choo choo!

Moreover, the No Asshole Rule is a great principle to try to follow at work. Don’t hire/support the hiring of people that you can’t stand socially, even if they are shit-hot researchers with a hugely promising career trajectory. Have a candid, private moment with someone who knows them well and get the inside scoop on what they’re like to work with. Try to get to know people you work with and collaborate more with people that you like to work with. Build a team of team-players (but not yes-men and yes-women; a good team challenges you to know them and yourself, so there must be some tension!). That can help you do better science because you enjoy doing it more, you prioritize it more because of that, and you have more energy because of all that; hence your life gets better. I prefer that to a constant struggle in tense, competitive collaborations. One of the highest compliments I ever got was when someone described me to their friend as a “bon vivant”. I felt like they’d discovered who I was, and they’d helped me to discover it myself.

I wondered while writing this, would I hire 2003-Me, from when I was interviewing for my current job 12 years ago? I suppose so, but I’d give myself a stern scolding on day one at the job. “Chill the fuck out,” I’d say. “Focus on doing the good science and finding the other kinds of joy in life.” I like the more mellowed-out, introspective, focused, compassionate 2015-Me, and I think 2003-Me would agree with that assessment.

There is a false dichotomy in a common narrative about research mentoring that I am coming to recognize: a tension between the fortunes of early career researchers and senior research managers. The dichotomy holds that once one is senior enough, ambition wanes and success is complete and one’s job is to support early career researchers to gain success (as recompense for their efforts in pushing forward the research team’s day-to-day science), and to step back out of the limelight.

The reality, I think, is that all these things are linked: early career researchers succeed in part because their mentors are successful (i.e. the pedigree concept; good scientists arise in part from a good mentoring environment), and research-active mentors need to keep seeking funding to support their teams, which means they need to keep showing evidence of their own success. Hence it never ends. One could even argue that senior researchers need to keep authoring papers and getting grants, awards and other kinds of satisfaction and joy in science that maintain reputations, and thus their responsibility to themselves and their team to keep pushing their research forward may not decrease and may even intensify. Here, a “team” ethos rather than an “us vs. them” mentality seems more beneficial to all—we’re in this together. Science is hard. We are all ambitious and want to achieve things to feel happy about. I don’t think the “it never ends” perspective is gloomy, either—if the false dichotomy were true, once one hit that plateau of success as a senior researcher, ambition and joy and personal growth would die. Now that’s gloomy. Nor does the underlying pressure mandate that researchers can’t have a “life outside of work”. I’ve discussed that enough in other posts.

Trust can be a big issue in managing research. If people act like they don’t trust you, it may be a sign that they’ve been traumatized by violated trust before. Be sensitive to that; gently inquire? And get multiple sides of the story from others if you can… gingerly. But it also might be a warning sign that they don’t deserve trust themselves. Trust goes both ways. Value trust, perhaps above all else. It is so much more pleasant than the lack thereof. Reputation regarding trustworthiness is a currency that a research manager should keep careful track of in themselves and others. Trust is the watchdog of joy.

Say “No” more often to invitations to collaborate as your research team grows. “Success breeds success,” they say, and you’ll get more invitations to collaborate because you are viewed as successful — and/or nice. But everyone has their limits. If you say “Yes” too much, you’ll get overloaded and your stock as a researcher will drop– you’ll get a reputation for being overcommitted and unreliable. Your “Yes” should be able to prove its value. I try to only say “Yes” to work that grabs me because it is great, do-able science with fun people that I enjoy collaborating with. This urge to say “No” must be balanced with the need to take risks and try new directions. “Yes” or “No” can be easy comfort zones to settle into. A “Yes” can be a long-term, noncommittal answer that avoids the conflict that a “No” might bring, even if the “No” is the more responsible answer. This is harder than it seems, but important.

An example: Saying “No” applies well to conference invitations/opportunities, too. I love going to scientific conferences, and it’s still easy enough to find funding to do it. Travel is a huge perk of academic research! But I try to stick to a rule of attending two major conferences/year. I used to aim for just one per year but I always broke that rule so I amended it. Two is sane. It is easy to go to four or more annual conferences, in most fields, but each one takes at least a week of your time; maybe even a month if you are preparing and presenting and de-jetlagging and catching up. Beware the trap of the wandering, unproductive, perennial conference-attendee if doing science is what brings you joy.

This reminds me of my post on “saying no to media over-coverage”– and the trap of the popularizer who claims to still be an active researcher, too. There is a zero-sum game at play; 35 or 50 hour work week notwithstanding. Maybe someday I’d want to go the route of the popularizer, but I’m enjoying doing science and discovering new things far too much. It is a matter of personal preference, of course, how much science communication one does vs. how much actual science.

The denouement of this post is about how research teams rise and fall. I’m now often thinking ahead to ~2016, when almost all of my research team of ~10 people is due to finish their contracts. If funding patterns don’t change — and I do have applications in the works but who knows if they will pan out — I may “just” have two or so people on my team a year from now. I could push myself to apply like mad for grants, but I thought about it and decided that I’ll let the fates decide based on a few key grant submissions early in the year. There was too little time and too much potential stress at stake. If the funding gods smile upon me and I maintain a large-ish team, that’s great too, but I would also truly enjoy having a smaller, more focused team to work with. I said “No” to pushing myself to apply for All The Grants. I’ll always have diverse external collaborations (thanks to saying “Yes” enough), but I don’t define my own success as having a large research group (that would be a very precarious definition to live by!). I’m curious to see what fortune delivers.

Becoming comfortable with the uncertainty of science and life is something I’m finding interesting and enjoy talking about. It’s not all a good thing, to have that sense of comfort (“whatever happens, happens, and I’m OK with that”). I don’t want my ambition to dwindle, although it’s still far healthier than I am. There is no denying that it is a fortunate privilege to feel fine about possibly not drowning in grant funds. It just is what it is; a serenity that I welcome even if it is only temporary. There’s a lot of science left to be written about, and a smaller team should mean more time to do that writing.

Will I even be writing this blog a year from now? I hope so, but who knows. Blogs rise and fall, too. This one, like me, has seen its changes. And if I am not still writing it, it might resurface in the future anyway. What matters is that I still derive joy from blogging, and I only give in to my internal pressure to write something when the mood and inspiration seize me. I hope someone finds these words useful.


For about 3 years now I’ve used the #WIJF (i.e. acronym for What’s In John’s Freezer) hashtag to organize my social media efforts on this blog. Over that time I became aware that “wijf” in Dutch can be taken as a derogatory term for women. And indeed, these days I do see people tweeting derogatory things with the #wijf hashtag, along with other, tamer uses like mine. I’ve come to the decision, albeit gradually and with much internal debate, to stop using that hashtag so I can avoid association with the sexist Dutch word. This post is about why, and what’s next.

Stomach-Churning Rating: Debatable, but 0/10 by the standard of the usual gory things on this blog; no images.

I don’t speak Dutch, but 25 million or so people do. This is a blog about morphological science, and the Dutch have had (and continue to have) a disproportionately strong influence on that field. I’m not claiming to be perfect when it comes to feminist issues, but I listen and I try and I care. My undergraduate tutelage in science was almost exclusively driven by female scientists– I never thought about that before but it’s true; at least 5 different major faculty influences at the University of Wisconsin! I work at a university where ~85% of the students are female (common today in vet schools). My research team’s postgraduate staff and students have included 9 women out of 16 since 2004, and a lot of my collaborators and friends are scientists or science aficionados who happen to be female. I have good reason to care, and social media has helped to raise my awareness of important matters within and outside of science that I do care a lot about.

So, while I tend to hate to abandon words (or hashtags), preferring to fight for alternative meanings (e.g. the word “design” in evolutionary biology), and I am a stubborn git, the #WIJF hashtag and acronym are different, I’ve decided, and it’s time to use something else. Admittedly, #WIJF hasn’t been that important to this blog as hashtag or acronym– mainly it’s just me who uses it, and any “brand name recognition” or other benefits surely arise more from the full name of the blog. So abandoning #WIJF is an inconvenience but not devastating to my blog. I see this move as (1) taking control of a situation where the benefits of staying with the hashtag/acronym are minimal and the harms, while of debatable magnitude, outweigh those minimal benefits in my view, and (2) demonstrating that I don’t tolerate or want to be associated with sexism or other discrimination. And I hope that this move might inspire others to reflect similarly on their own behaviour. Morphology, like any science, is for everyone, and this blog is meant to be a friendly place.

But a thing that has held me back, even though it is admittedly trivial in the grand scheme of things, is what hashtag/acronym to use henceforth? I turn that over to you, Freezerinos. I have no good ideas and so I am crowdsourcing. I need something short (not #Whatsinjohnsfreezer, probably– too long), something associated with the title of the blog, but also something dissimilar to the naughty word “wijf” and thus inoffensive… ideally inoffensive in the ~7000 languages of the world (!?!?). That might not leave many options! What should be in John’s blog’s hashtag?


If you’ve been working in science for long enough, perhaps not very long at all, you’ve heard about (or witnessed) scientists in your field who get listed as co-authors on papers for political reasons alone. They may be an uninvolved but domineering professor or a fellow co-worker, a friend, a political ally, an overly protective museum curator, or just a jerk of any stripe. I read this article recently and felt it was symptomatic of the harm that bad supervisors (or other collaborators) do to science, including damage to the general reputation of professors and other mentors. There are cultural differences not only between countries (e.g. more authoritarian, hierarchical cultures probably tolerate behaviour like this more) but also within institutions because of individual variation and local culture, tradition or other precedent. But this kind of honorary co-authorship turns my stomach—it is co-authorship bloat and a blight upon science. Honorary co-authorship should offend any reasonable scientist who actually works, at any level of the scientific hierarchy. So here’s my rant about it. Marshmallows and popcorn are welcomed if you want to watch my raving, but I hope this post stimulates discussion. A brief version of this post did stimulate discussion on my personal Facebook account, which motivated me to finish this public post.

Stomach-Churning Rating: 0/10 but it may provoke indigestion if you’ve been a victim of co-author bloat.

At its root, honorary co-authorship (HONCO) shows disdain for others’ efforts in research. “I get something for nothing, unlike others.” It persists because of deference to pressures from politics (I need to add this co-author or they’ll cause me trouble), other social dynamics (this person is my buddy; here’s a freebie for them), careerism (oneself/ally/student needs to be on this paper to boost their CV and move up in their career; or else), or even laziness (a minimal publishable unit mentality- e.g. any minor excuse for being a co-author is enough). All of these reasons for tolerating it, and apathy about the status quo, keep the fires of HONCO burning. My feeling from my past 20 years of experience in academia is that, as science is getting increasingly complex and requiring more collaborators and co-authors, the fire is raging to a point where it is visibly charring the integrity of science too often to just keep quiet about it and hope it doesn’t cause much damage.

There’s a flip side to HONCO, too– it’s not that, as some might take the article above to imply, we all need to boot senior authors off of papers. Senior authors, like other collaborators, have a reason for existing that encompasses — but is not limited to — boosting the careers of those they mentor. We scientists all want the satisfaction of doing science, even if the nature of our involvement in research evolves (and varies widely). Part of that satisfaction comes from publishing papers as the coup de grace to each project, and it’s a privilege that should be open to being earned by anyone qualified. Indeed, if adding HONCOs to papers is fraud, then removing worthy contributors from papers can be seen as a similar kind of fraud (unless a result of mutually agreed I’ll-help-you-for-nothing generosity). The broader point is, authors should deserve to be authors, and non-authors should not deserve to be authors.

On that latter issue, I think back to my grad school days and how my mentors Kevin Padian, Rodger Kram, Bob Full and others often gave me valuable input on my early papers (~1998-2002) but never earned co-authorship on them (exception: mentor Steve Gatesy’s vital role in our 2000 “abductors, adductors” paper). And frankly I feel a little bad now about that. Some of those mentors might have deserved co-authorship, but even when asked they declined, and just appeared in the Acknowledgements. It was the culture in my department at Berkeley, like many other USA grad schools at the time and perhaps now, that PhD students often did not put their supervisors on their papers and thus published single-author work. I see that less often today — but still varying among fields; e.g. in biomechanics, less single-authorship globally; in palaeontology and morphology, more single-authored work, but perhaps reducing overall. That is my off-the-cuff impression from the past >10 years.

I was shocked to see fewer (or often no) single-authored papers by lab colleagues once I moved to the UK to take up my present post– the prevalence of supervisors as senior authors on papers was starkly evident. On reflection, I now think that many of those multi-authored papers deserved to be such. It was not solo work and involved some significant steering, with key ideas originating from supervisors and thus constituting valid intellectual input. Yet I wondered then if it was a good thing or not, especially after hearing student complaints like waiting six months for comments from their supervisor on a manuscript. But this gets into a grey area that is best considered on a paper-by-paper basis, following clear criteria for authorship and contributions, and it involves difficulties inherent to some supervisor-supervisee relationships that I will not cover here. Much as supervisors need to manage their team, their team needs to manage them. ‘Nuff said.

Many institutions and journals have clear criteria for co-authorship, and publications have “author contributions” sections that are intended to make it clear who did what for a given paper – and thus whose responsibility any problems might be, too. HONCOs take credit without responsibility or merit, and are blatant fraud. I say it’s time we stand up to this disease. The criteria and contributions aspects of papers are part of the immune system of science that is there to help defend against academic misconduct. We need to work together to give that system a fighting chance.

There are huge grey areas in what criteria are enough for co-authorship. I have to wrestle with this for almost every paper I’m involved in– I am always thinking about whether I truly deserve to be listed on a paper, or whether others do. I’ve been training myself to think, and talk, about co-authorship criteria early in the process of research— that’s essential in avoiding bad blood later on down the line when it’s time to write up the work, when it’s possibly too late for others to earn co-authorship. This is a critical process that is best handled explicitly and in writing, especially in larger collaborations. What will the topic of any future paper(s) be and who will be involved as co-authors, or not? It’s a good agenda item for research meetings.

There are also grey areas in author contributions. How much editing of a paper is enough for co-authorship justification? Certainly not just spellchecking or adding comments saying “Great point!”, although both can be a bit helpful. Is funding a study a criterion? Sometimes– how much and how directly/indirectly did the funding help? Is providing data enough? Sometimes. In these days of open data, it seems like the data-provision criterion, part of the very hull that science floats upon, is weakening as a justification for co-authorship. It is becoming increasingly common to cite others’ papers for data, provide little new data oneself, and churn out papers without those data-papers’ authors involved. And that’s a good thing, to a degree. It’s nicer to invite published-data-providers on board a paper as collaborators, and they can often provide insight into the nature (and limitations or faults!) of the data. But adding co-authors can easily slide down the slippery slope of hooray-everyone’s-a-co-author (e.g. genetics papers with 1000+ co-authors, anyone?). I wrote up explicit co-authorship criteria here (Figshare login needed; 2nd pdf in the list) and here (Academia.edu login needed) if you’re curious how I handle it, but standards vary. Dr. William Pérez recently shared a good example of criteria with me; linked here.

In palaeontology and other specimen-based sciences, we get into some rough terrain — who collected the fossil (i.e. was on that field season and truly helped), identified it, prepared and curated it, published on it, or otherwise has “authority” over it, and which of them, if any, deserve co-authorship? I go to palaeontology conferences every year and listen over coffee/beers to colleagues complain about how their latest paper had such-and-such (and their students, pals, etc.) added onto the paper as HONCOs. Some museums or other institutions even have policies like this, requiring external users to add internal co-authors as a strong-arm tactic. An egregious past example: a CT-scanning facility I used once, and never again, even had the gall to call their mandatory joint-authorship policy for usage “non-collaborative access”… luckily we signed no such policy, and so we got our data, paid a reasonable fee for it, and had no HONCOs. Every time I hear about HONCOs, I wonder “How long can this kind of injustice last?” Yet there’s also the reality that finding and digging up a good field site or specimen(s), or analogous processes in science, takes a lot of time and effort, and you don’t want others prematurely jumping your claim, which can be intellectual property theft, a different kind of misconduct. And there is good cause for sensitivity about non-Western countries that might not have the resources and training of staff to earn co-authorship as easily; flexibility might be necessary to avoid imperialist pillaging of their science with minimal benefit to their home country.

Yet there’s hope for minimizing HONCO infections. A wise person once said (slightly altered) “I’d rather light a candle than curse the darkness.” Problems can have solutions, even though cultural change tends to be agonizingly slow. But it can be slower still, or retrograde, if met with apathy. What can we do about HONCOs? Can we beat the bloat? What have I done myself before and what would I do differently now? I’ll take an inward look here.

Tolerating HONCOs isn’t a solution. I looked back on my experiences with >70 co-authored papers and technical book chapters since 1998. Luckily there are few instances where I’d even need to contemplate if a co-author was a HONCO. Most scientists I’ve worked with have clearly pulled their weight on papers or understood why they’re not co-authors on a given paper. More about that below. In those few instances of possible HONCOs, about five papers from several years ago, some colleagues provided research material/data but never commented on the manuscripts or other aspects of the work. I was disgruntled but tolerated it. It was a borderline grey area and I was a young academic who needed allies, and the data/specimens were important. Since then, I’ve curtailed collaborations with those people. To be fair, there were some papers where I didn’t do a ton (but did satisfy basic criteria for co-authorship, especially commenting on manuscripts) and I got buried in Middle-Authorland, and that’s fine with me; it wasn’t HONCO hell I was in. There were a few papers where I played a minor role and it wasn’t clear what other co-authors were contributing, but I was comfortable giving them the benefit of the doubt.

One anti-HONCO solution was on a more recent paper that involved a person who I had heard was a vector of HONCO infection. I stated early on in an email that only one person from their group could be a co-author on the resulting paper, and they could choose who it was and that person would be expected to contribute something beyond basic data. They wrote back agreeing to it and (magnanimously) putting a junior student forward for it, who did help, although they never substantially commented on the manuscript so I was a little disappointed. But in the grand scheme of things, this strategy worked in beating the HONCO bloat. I may have cost myself some political points that may stifle future collaborations with that senior person, but I feel satisfied that I did the right thing under the constraints, and damn the consequences. Containment of HONCO has its attendant risks of course. HONCO-rejects might get honked off. Maybe one has to pick their battles and concede ground sometimes, but how much do the ethics of such concessions weigh?

Another solution I used recently involved my own input on a paper. I was asked to join a “meta-analysis” paper as a co-author but the main work had already been done for it, and conclusions largely reached. I read the draft and saw places where I could help in a meaningful way, so with trepidation I agreed to help and did. But during the review process it became clear that (1) there was too much overlap between this paper and others by the same lead author, which made me uncomfortable; and (2) sections that I had contributed to didn’t really meld well with the main thrust of the paper and so were removed. As a consequence, I felt like a reluctant HONCO and asked to be removed from the paper as a co-author, even though I’d helped write sections of the main text that remained in the paper (but this was more stylistic in my view than deeply intellectual). I ended up in the Acknowledgements and relieved about it. I am comfortable removing myself from papers in which I don’t get a sense of satisfaction that I did something meriting co-author status. But it’s easier for more senior researchers like me to do that, compared to the quandary that sink-or-swim early-career researchers may face.

More broadly in academia, a key matter at stake is the CVs of researchers, especially junior ones, which these days require more and more papers (even minimal publishable units) to be competitive for jobs, awards and funding. Adding HONCOs to papers does strengthen individuals’ CVs, but in a parasitic way from the dilution of co-author contributions. And it’s just unethical, full stop. One solution: It’s thus up to senior people to lead from the front, showing that they don’t accept HONCOs themselves and encouraging more junior researchers to do the same when they can—or even questioning the contributions that potential new staff/students made to past papers, if their CV seems bloated (but such questions probe dangerous territory!). Junior people, however, still need to make a judgement call on how they’ll handle HONCOs with themselves or others. There is the issue of reputation to think about; complicity in the HONCO pandemic at any career level might be looked upon unfavourably by others, and scientists can be as gossipy as any humans, so bad ethics can bite you back.

I try to revisit co-authorship and the criteria involved throughout a project, especially as we begin the writing-up stage, to reduce risks of HONCOs or other maladies. An important aspect of collaboration is to ensure that people that might deserve co-authorship get an early chance to earn it, or else are told that they won’t be on board and why. Then they are not asked for further input unless it is needed, which might shift the balance and put them back on the co-author list. Critically, co-authorship is negotiable and should be a negotiation. One should not take it personally if not on a paper, but should treat others fairly and stay open-minded about co-authorship whenever possible. This has to be balanced against the risk of co-authorship bloat. Sure, so-and-so might add a little to a paper, but each co-author added complicates the project, probably slows it down, and diminishes the credit given to each other co-author. So a line must be drawn at some point. Maybe some co-authors and their contributions are best saved for a future paper, for example. This is a decision that the first, corresponding and senior author(s) should agree on, in consultation with others. But I also feel that undergraduate students and technicians often are the first to get the heave-ho from co-author considerations, which I’ve been trying to avoid lately when I can, as they deserve as much as anyone to have their co-author criteria scrutinized.

The Acknowledgements section of a paper is there for a reason, and it’s nice to show up there when you’ve truly helped a paper out, whether as quasi-collaborative colleague, friendly draft-commenter, editor, reviewer or in other capacities. It is a far cry from being a co-author, but it also typically implies that those people acknowledged are not to blame if something is wrong with the paper. I see Acknowledgements as “free space” that should be packed with thank-you’s to everyone one can think of who clearly assisted in some way. No one normally lists Acknowledged status on their CVs or gets other concrete benefits from it, but it shows good social graces to use that section generously. HONCOs’ proper home, at best, is there in the Acknowledgements, safely quarantined.

The Author Contributions section of a paper is something to take very seriously these days. I used to fill it out without much thought, but I’ve now gotten in the habit of scrutinizing it (where feasible) with every paper I’m involved in. Did author X really contribute to data analysis or writing the paper? Did all authors truly check and approve the final manuscript? “No” answers there are worrying. It is good research practice nowadays to put careful detail into this section of every paper, and even to openly discuss it among all authors so everyone agrees. Editors and reviewers should also pay heed to it, and readers of papers might find it increasingly interesting to peruse that section. Why should we care about author contribution lists in papers? Well, sure, it’s interesting to know who did what, that’s the main reason! It can reveal what skills an individual has or lacks, or their true input on the project vs. what the co-author order implies.

But there’s a deeper value to Author Contributions lists that is part of the academic immune system against HONCOs and other fraud. Anyone contributing to a particular part of a paper should be able to prove their contribution if challenged. For example, if a problem was suspected in a section of a paper, any authors listed as contributing to that section would be the first points of contact to check with about that possible problem. In a formal academic misconduct investigation, those contributing authors would need to walk through their contributions and defend (or correct) their work. It would be unpleasant to be asked how one contributed to such work if one didn’t do it, or to find out that someone listed you as contributing when you didn’t, and wouldn’t have accepted it if you had known. Attention to detail can pay off in any part of a research publication.

Ultimately, beating the blight of HONCO bloat will need teamwork from real co-authors, at every career level. Too often these academic dilemmas are broken down into “junior vs. senior” researcher false dichotomies. Yes, there’s a power structure and status quo that we need to be mindful of. Co-authorships, however, require collaboration and thus communication and co-operation.

It’s a long haul before we might see real progress; the fight against HONCOs must proceed paper-by-paper. There are worse problems that science faces, too, but my feeling is that HONCOs have gone far enough and it’s time to push back, and to earn the credit we claim as scientific authors. Honorary co-authorship is a dishonourable practice that is very different from other “honorary” kudos like honorary professorships or awards. Complex and collaborative science can mean longer co-author lists, absolutely, but it doesn’t mean handing out freebies to chums, students needing a boost, or erstwhile allies. It means more care is needed in designing and writing up research. And it also means that science is progressing; a progress we should all feel proud of in the end.

Do you have abhorrent HONCO chronicles of your own (anonymized please; no lynch mobs here!) or from public record? Or ideas for handling HONCO hazards? Please share and discuss.


When does a science story “end”? Never, probably. Science keeps voyaging on eternally in search of truth, and few if any stories in science truly “end”. But as science communicators of any stripe, we routinely have to make decisions about when a certain story has run its course; when the PR ship has sailed and the news cycle has ended. As scientists, we’re lucky if we have to consider this and should be grateful if and when our science even attracts media/science communication attention. But the point of today’s post – perhaps an obvious one, but to my mind worth reflecting on – is that scientists are not slaves to the PR machine. As a flip side to the previous self/science-promotion post, at some point we may have to say “This story about our research is done (for now).”

I routinely reflect on this when the media covers my research; I always have. My recent experience with New Yorker and BBC coverage of our penguin gait research (with James Proffitt and Emily Sparkes as well as Dr. Julia Clarke) got me thinking about this issue a lot, and talking about it quite a bit with James. This morning, over coffee, this blog post was born from my thoughts on that experience.

Stomach-Churning Rating: 7/10 for some mushy penguin specimens; PR officers might also get queasy.

I was waiting for a call from BBC radio one night almost three weeks ago, to do a recorded interview about our penguin research-in-progress, when I woke up surrounded by paramedics and was whisked off to the hospital. I never did that interview or any further ones. I won’t go into what went wrong but it relates to this old story. I’m OK now anyway. But for me, the penguin story had mostly ended before it began. However, I’d already agreed with James that we’d try to avoid doing further media stories beyond the New Yorker one and the BBC one, which was due out the next day and for which James (fortuitously instead of me!) was doing a live appearance on BBC Breakfast (TV). I got a few emails and calls about this story while recuperating in my hospital bed, including the one below, and turned down interview invitations for obvious reasons, with no arguments from anyone– at first.


For Jerry, the story never should have started, apparently. We all have our opinions on what stories are worth covering. A “kind” email to receive in one’s hospital bed…

Then, after I recovered and got back to work, we kept getting a trickle of other interview/story invitations, and we declined them. Our PR office had suggested that we do a press release but we had already decided in advance not to, because we saw the story as just work-in-progress and I don’t like to do press releases about that kind of thing– except under extraordinary circumstances.

Finally, over a week after the BBC story aired, a major news agency wanted to film an interview with me about the story, which would get us (more) global coverage. They prefaced the invitation with the admission that they were latecomers to the story. Again I firmly said no; they could use existing footage but I could not do new interviews (these would inevitably take a half day or so of my time and energy). They wrote back saying they were going to go forward with the story anyway, and the journalist scolded me for not participating, saying that the story would have been so much better with a new film sequence of me in it. Maybe, but (1) I felt the story had run its course, (2) I’d had my hospitalization and a tragic death in the family, and (3) I was just returning, very jetlagged, from a short trip to the USA for other work. Enough already! I had other things to do. I didn’t follow up on what happened with that story. Maybe it didn’t even get published. I wasn’t left feeling very sympathetic.

Above: The BBC story

I kept thinking about being pressured and scolded by journalists, once in a while, for not joining in their news stories when their requests ran up against my own threshold for how much media coverage is enough. I first hit that personal threshold 13 years ago when I published my first big paper, in Nature, on “Tyrannosaurus was not a fast runner.” After ~3 weeks of insane amounts of media coverage, I was exhausted and pulled the plug, refusing more interviews. It felt good to exert control over the process, and I learned a lot from wielding that control. I still use it routinely.

But… I am of course passionate about science communication, I feel it is a great thing for science to be in the public eye, and I actually love doing science communication stories about research-in-progress– too much science is shown as an endpoint, not a process. Indeed, that’s why I do this blog and other social media, most of which is science-in-progress and my thoughts about it. So I was and still am thrilled that we got such positive, broad, good quality media attention for our penguin work, but it was plenty.


More sphenisciform science in progress: Penguin bodies awaiting dissection for our latest work. Unfortunately, years of formalin, freezers and thawing cycles had rendered most of the soft tissues useless for our work. Photos here and below are of Natural History Museum (Tring) specimens from the ornithology collection; most collected in Antarctica ~50 yrs ago.

Probably to many seasoned science communicators and scientists, my post’s message is blindingly obvious. Of course scientists have rights — and responsibilities — in deciding how and when their research is covered. This is a negotiation process between their research team, their university, PR officers, journalists/media, funders and others involved– including the public. But less experienced scientists, and perhaps the public, might not realize how much control scientists do have over the amount of media attention they get. It’s easy to get caught up in a media frenzy surrounding one’s science (if you’re lucky enough to generate it at all) and feel the wind in one’s sails, thereby forgetting that you’re at the helm– you can decide when the journey is over (just be sure you communicate it diplomatically with others involved!).


This penguin did not survive the preservation process well; for whatever reason it had turned to mush, fit only for skeletonization. Gag. Its journey was definitely over.

As scientists, we have to balance enormous pressures and priorities: not just science communication and PR, but also our current main research, teaching, admin, personal lives, health, and so on. So we have to make hard decisions about how to balance these things. We should all reflect on what our dynamically shifting thresholds are for how much attention is enough, what priority level a given story has in our lives, and when the timing is right for any media attention. And as collaborative teams, more and more the norm in science, we should be discussing this issue and agreeing on it before it’s too late for us to exert much control.


One of our penguin chicks from the Natural History Museum, in a better state of preservation than the adults. Photo by James Proffitt.


Penguin chick’s right leg musculature in side view, exposing some decent muscles that gave us some useful data. Photo by James Proffitt.

Much like an over-played hit song, it’s not pretty when a science story gets over-milked and becomes too familiar and tedious, perhaps drawing attention away from other science that deserves attention. And we all will have our opinions on where that threshold of “too much attention” is. If we, as scientists, don’t think about those thresholds, we may end up rudderless or even wrecked on lonely islands of hype. I’ve seen scientists ostracized by their peers for over-hyping their work. It’s not fun. “Hey everybody, John is having a celery stick with peanut butter on it!” Celebrity culture doesn’t mean that everything scientists do deserves attention, or that any amount of attention is deserved and good.

A great thing about science is that, in principle, it is eternal– a good science story can live forever while other science is built upon it. Each chapter in that story needs an ending, but there’s always the next chapter waiting for us, and that’s what keeps science vital and riveting. As scientists, we’re all authors of that story, with a lot of power over its narrative. We can decide when to save parts of that narrative for later, when the time is right. With our penguin story, we’ve only just begun and I’m incredibly excited about where it goes next.

How about other scientists, journalists and aficionados of science? What examples of scientists taking charge of how their research gets covered do you find particularly instructive?


I am reposting a blog post that I co-authored with Anne Osterrieder in 2012. I’ve always liked this post and been proud that we did it. A colleague brought it up to me yesterday, and I was sad to hear that the blog had been killed by hackers, with the original post lost, but Anne and I reconstructed it and I’ve decided to put it up on my blog, as I still feel strongly about its main points and Anne concurred.

Stomach-Churning Rating: 1/10; just words and ideas.


This blog is about freezer-promotion.

Here we present two views on public engagement (PE) or public relations (PR) and the thorny issue of “self promotion” in scientific research, from two scientists who might on the surface seem to be as different as scientists can be in regards to PE/PR. Yet we hope to convey the common ground that lies between these “extremes” and use it to explore, and spark discussion in, what self-promotion is and when it is a good vs. bad thing for scientists. Similar points came up in another blog post at around the same time, linked here.

Professor John R. Hutchinson (here, simply John will do!) does research on dinosaurs and elephants and other “celebrity species” (well, some of them anyway; some others aren’t so sexy but he doesn’t care). Thus getting PE/PR is often all too easy. It is often said that “dinosaur” (or fossil) is among the “holy trinity” of media story subjects; space and health being two others. That status lubricates the gears of a science PE/PR machine. Sometimes, even, the problem is keeping a lid on the “sexy” research until it is “thoroughly cooked” and ready for PE/PR, rather than releasing it prematurely. A flip side to this issue is that this easy success with PE/PR means that almost everyone is doing it, albeit with varying aplomb. So it takes some extra effort to achieve relative excellence at PE/PR in John’s line of research, but he’s not complaining. In contrast, many (indeed, most!) scientists might not have it so easy getting PE/PR and hence need to actively engage in it to draw audiences in. However, when they are successful at PE/PR it might be easier for them to then stand out from the crowd.

Dr. Anne Osterrieder (again, let’s stick with Anne for short) is a Research and Science Communication Fellow, doing research on plant cells – hello? Hello?! Are you still there? Nine out of ten people will react to this revelation with the question: ‘Why do you work on plants? Plants are boring, they don’t really do anything, do they?’ Most plant scientists agree that the apathy or even contempt displayed towards our poor plants stems from a lack of proper engagement, starting with the way plants are taught in schools. As such, plant scientists need to make a conscious effort to engage the public with current plant research and highly topical issues such as food security or plant pathology. Cells have a higher ‘fascination potential’, as the huge success of BBC’s ‘The Hidden Life of the Cell’ showed. Communicating current cell biology, however, becomes more challenging the deeper we go.


With those introductions done, let’s see what our two scientists think about self-promotion and PE/PR:


You might have spotted John and collaborator James Proffitt on the BBC or in the New Yorker lately, engaging in penguin-promotion.

John:

While self-promotion among scientific researchers could be a slippery slope that leads to a spiral of egomaniacal aggrandizements and delusions of grandeur, how justifiable is this seemingly common perception? In extreme instances, namely the stereotyped – but perhaps relatively rare– “media whore” or “press hound” committing the faux pas of science-by-press-release, perhaps it is. But more commonly among scientists it may just be healthy behaviour. Almost every scientist probably does research because it brings them profound joy and satisfaction, indulging their curiosity. Is it selfish to share that positive, personal message? By turning the issue around like this, one might instead wonder, what’s the problem? Put it all out there, fly your science banner high! Screw the cynics.

But as in much of life, there probably is a happy medium of moderation: a middle ground, because both selfish and generous reasons might underlie “self promotion”. Such reasons can and probably do coexist not only in perfectly non-pathological yet highly PE/PR-committed researchers, but perhaps even in most scientists. The problem is, self-promotion has taken on bad connotations to some, or even many, scientists. It is frequently couched as “shameless self-promotion” when a person promotes their science, as if to apologize for the promotion and commit it in one fell swoop. Why apologize? Just do it?! If you’re having fun with it, someone else probably will too, and that’s reason enough.

And a second issue is what kind of self-promotion is being performed– is it about the individual and their self-perceived, self-appointed glory? Or is it about the science, even in a detached third person view? Or is it not even self-promotion, but team-promotion, if we consider that so many scientists these days are vital parts of a team, not lone wolves? Such a distinction of self “vs.” science is too artificial a dichotomy because scientists, as human beings, tend to feel personally enmeshed in their research. Without it, they would lack the drive to do it, even though every good supervisor is “supposed” to warn us to stay objective as researchers. And the subtext behind that “stay objective” is to stay impersonal; i.e. detached, inhuman, drained of character, passive voice and all that. Boring! But there is still some merit in considering both (and other?) sides of the matter, because it is not unreasonable to predict that the first kind of promotion (selfish; aggrandizing) is more dangerous than the second (generous; celebratory), because it is the ego taking the stage rather than the science. At the same time, we need both sides: the human, fallible, witty, emotive ego and the dry, objective, methodical, taciturn science. Without the former, warts and all, science could be too frigid to be fun.

Many researchers probably find it healthy to reflect on how much self-promotion is too much, whatever the reasons (and to some degree the reasons may not matter!). But it is not just the promoters who deserve introspection about their own practice. Those perceiving others’ “self-promotion”, especially in a negative light, could benefit from scrutiny of their own perceptions. What makes them presume that the motivation behind self-promotion is a malignant one, or not? And is the reasoning behind their judgement as sound as the reasoning they’d apply to the other scientific judgements they make on a daily basis– what behaviour are they reading into, and how?

Alternatively, why worry about it? Isn’t a good scientist one who celebrates good science, whether yours, your team’s, or someone else’s? Again, this comes back to how much self-promotion is too much, but from an external perspective. Researchers are likely to judge others’ promotional activities by their own standards, not those of the promoter. They may be making value judgements with no objective basis, or (with colleagues who are not well known to the individual, all too common on the internet) no empirical evidence to go by except a brief press release, blog post, tweet or news article. Indeed, a case could be made that there is no objective basis to such a value judgement, by definition. Semantics and slippery slopes toward postmodernism aside, perhaps there is even no point to judging others’ self-promotion– and why does one wish to judge? An inward look at our own motivations for judging others can be salutary.

A major point here is: it is easy to conflate or confuse selfish promotion and unselfish sharing-the-joy-of-science, and to a degree it does not matter. This is because inevitably it is what is presented that matters: the content, not so much the intent, in addition to the feedback one gets from engaging the public with research. That content-with-feedback is what almost everyone outside of academia says we should be doing—who are we to argue? Maybe we should try harder to put self-esteem and other internal issues aside, and enjoy good science promotion for what it is, not what we might fear it could be. Whether a scientist is a lone wolf or team wolf, there’s no big bad wolf’s huffing and puffing to fear from good self-promotion of science. Let’s focus on building a strong house of science, brick by brick; one that lasts, and one that people hear of and care about.

Anne’s great Vacuole Song: plant organelle-promotion!

Anne:

Whenever I write something about science communication, I feel like I am treading on an extra-slippery slope. Science communication, outreach, public engagement, PR and promotion all can have very different meanings depending on who you talk to. When I was a full-time researcher, I’d never even have thought that they could mean different things. To me they all were synonyms of ‘Hey, let’s tell the world how amazing our research and science is!’ Since I became involved in science communication, I have realised that promoting our research isn’t necessarily the same as engaging non-expert audiences. While promotion certainly has its place and benefits (for example, institutions highlighting their groups’ research achievements in external newsletters and online), real engagement is not so much broadcasting as two-way communication. I would like to point to an excellent article by Steve Cross, Head of Public Engagement at University College London, in a recent issue of the British Science Association magazine ‘People & Science’. Steve writes: ‘I don’t tell members of the public that “science is fun” or that “science has the answers”. I don’t even treat science as one great big unified thing. Instead I help researchers to share what they do. The message is less “We’re great!” and more “Here’s what we’re doing. What do you think?”’

Participating in this dialogue-centred way of public engagement means, however, that our specific research project will invariably be the centre of attention. Most likely we ourselves will be as well, since science isn’t (yet) carried out by autonomously working nano-robots. I would be very surprised if our audience saw such activities as self-promotion. I predict that they’d rather appreciate researchers ‘stepping out of the tower’ into the public and interacting with non-experts. Would our peers see it as self-promotion? Probably not. What if we promoted our activities beforehand on Twitter and other online or offline channels? What if we wrote a summary of the event and reflections on it afterwards? What if we posted links to our content at different times during the day to make sure that different audiences saw it? What if we had several projects running in parallel and did this for all of them? The problem becomes apparent now, and I am certain that at this point some peers would drop cynical remarks about ‘self-promotion’ or ‘attention whores’.

So, self-promotion is frowned upon. But if you think about it, our whole current academic system is based on self-promotion. When we submit a manuscript, we need to state in the cover letter why our research is novel and interesting. Even though scientific conferences are supposed to be about disseminating scientific results and initiating collaborations, they also serve the purpose of self-promotion. I don't recall many talks with mainly negative, confusing or boring results (except maybe when a well-established principal investigator was talking about their newest project and asking for feedback). Most early-career scientists would rather not submit an abstract if they haven't got good data, and wait until they can show nice results. Fact is, conferences are a big job interview for PhD students and post-docs. What about grants? Each proposal has dedicated sections for promoting yourself, your research group and your institute to increase your chances of getting funded. Early-career researchers have to learn quickly how to write these bits, as otherwise they will be at a disadvantage compared to those who can sell themselves well. I believe that there is a certain double standard around the issue of self-promotion in academia. On the one hand researchers accept it as a necessity to climb the career ladder. On the other hand they might sneer at peers who put all of their Nature and Science references on slides in their talk. 'What a complete showoff!'

If I follow someone on Twitter whose work I admire, say science writer Ed Yong or blogger Prof. Athene Donald, or who does cool research I am interested in, I want to read everything they publish. I appreciate them linking to their articles and papers, repeatedly, since I am bound to miss them otherwise. I loved seeing John's BBC clip of rhino foot pressure experiments because I wanted to learn more about his research – and I loved seeing him talk about it in 'real life' rather than only reading his words! But if someone at my professional level, who I am competing with for fellowships or grants, was constantly posting links to their achievements, I would probably be less tolerant of them. I'd roll my eyes and think "show-off"! But I honestly admit that this would be based on less-than-noble notions: envy, feeling threatened, and insecurity about whether my own achievements are sufficient to succeed.

When I talked about Twitter and enhancing your online profile at our departmental Away Day, someone said: "Our generation has been brought up as being humble, as not showing off, as not shouting out our achievements. So where is the border between self-promotion and being a complete d***?" I don't think that this is a generational thing, as many senior academics have no difficulties promoting themselves. At the time I bounced the question back to the audience and asked: 'What do the younger ones think?' There was silence, and then one PhD student said: 'I think it's OK. You have to do it – who else would do it otherwise?' I suspect that being willing and able to sell yourself might be a personality rather than an age thing, and that the line between 'selling yourself' and 'showing off' lies in the eye of the beholder. Whatever you think, times have changed and academic positions are getting scarce. Maybe we need another motto next to 'publish or perish': 'self-promote or perish'? Having a decent publication record won't guarantee a research job anymore, as the competition is fierce. 'Getting your name out there', enhancing your profile, building a network and being engaged, however, will make you stand out from the crowd – as long as your self-promotion activities build upon solid achievements and not on hot air. If it's the latter, you might well deserve the eye-rolling.

Self-promotion is often frowned upon in academic circles. Generally it seems to be all right to promote 'science' or a whole field. Numerous times I have seen blogging scientists state – defend themselves! – that in many years of writing they never blogged about their own paper. But why not? If we follow the two-way model of public engagement described above, it would be perfectly fine to write a non-expert summary about one's latest publication and say: 'This is what I just published, and the story behind it. What do you think?' Similarly, the benefit of open access papers embedded in a social media site structure is that it allows discussions with non-experts. This will work significantly more quickly and efficiently if the authors alert and direct potential audiences to their paper through as many communication channels as possible – an act that, again, can be seen as self-promotion. Is our academic culture, with its subtle or open contempt for self-promotion, maybe inadvertently hindering effective engagement?

What do you think? Chime in on the poll below.

If the poll does not show up above in your browser, click the link here to go directly to it (new window):

http://poll.fm/57siz

Conclusion:

Some context, first. As we finished this post together, Anne and John reflected on what got us working on it, back in August 2012:

Anne: "You wrote that you had these thoughts on self-promotion after you returned from the [British] Science Festival. Was there a specific incident that raised these thoughts, or just general thinking?"

John: "I often think about what I tweet and the amount of it, and whether 'me-tweeting' is such a bad thing as some on Twitter say it is. I was me-tweeting a bunch of responses to my BSF talk and thought I should, much as I do when people post stories about my research papers etc. But this BSF event in particular, which was heavy on public engagement, got me thinking on the train ride home about why some people would (cynically, in my view) see that as PR and shameful self-promotion."

While the two views we presented above come from different backgrounds and perspectives, our thoughts reveal many elements common to both. Perhaps these commonalities apply to most scientists, but… there is a hulking science-gorilla in the room: cultural similarities and differences. We cannot neglect the HUGE issue of the Western scientific culture that John and Anne and others have in common! In other cultures, self-promotion might be seen very differently; indeed in the UK it seems to be sneered at more than in the USA, as Brits tend to be less comfortable tooting their own horn (easy, now!). Some other cultures might have no problem with it at all. Others might find it abominable. However, how culture factors into perceptions of self-promotion and PE/PR is a huge kettle of fish that we're not quite ready to tackle, so we will turn that over for discussion in the comments here! How does your culture, whether very local (department?) or very broad (country/ethnicity), factor into this?

Or, if you prefer, please contribute your thoughts on how you handle or perceive the self-promotion vs. science-promotion (false) dichotomy as a scientist, science communicator and/or layperson. How do you determine what is a tolerable level of promotion?

P.T. Barnum said: "Without promotion something terrible happens… Nothing!"

 

Read Full Post »

In the Name of Morphology

Stomach-Churning Rating: 8/10 don’t look at the gooooaaaaaaaaaaaat!!!! Too late.

Goat morphology is cool! (from work with local artist)

Morphology in biology, to me, is about the science of the relationship of anatomical form to function (including biomechanics), evolution, development and other areas of organismal biology. It thus encompasses the more descriptive, form-focused area of anatomy. But in common parlance I use the two terms interchangeably, because many scientists and the general public do know what anatomy is but get confused by the word “morphology”. Not wishing to wage a semantic skirmish or get into what linguistic or other morphology is, I shall move on. But as the title betrays, this post is about morphology and how we should be proud of it as scientists who study it. This is a companion post to my earlier post on Anatomy, which was aimed at a more general audience than at my colleagues. Yet general audience, stick around. You might find this interesting.

I'm a morphologist at heart. What interests me most about organisms is how their form is not only beautiful and amazing in itself but tells us profound things about other aspects of biology, as I stated in the first sentence above. I tend to call myself an evolutionary biomechanist, but morphology is in there too, at the heart of what I do, and biomechanical evolutionary morphologist — while more accurate — just does not roll off the lingual apparatus. I'll dodge that semantic minefield of branding issues now. I'll instead move on to my more important point: many (but not all) morphologists go through a phase in their career in which they have strong feelings of being looked down on by other biologists/scientists as doing outmoded or inferior science. I explained in my Anatomy post that this "inferiority" is less true today than ever; the field is in a dynamic renaissance; so if you want some talking points, go there. Regardless, these feelings of being almost stigmatized can exacerbate Imposter Syndrome, especially early in a scientific career.

Lizard morphology is cool! And museums exist to house morphological specimens like these.

I can think of one such case of bad feelings in my not-too-distant memory: at a conference dinner, one colleague sitting to my right said to my colleague to my left, "What do you think about anatomy? Should students even do any research on it?" and went on with a bit of a diatribe about the why-bother-ness of anatomy relative to other areas such as biomechanics. They both knew of my interests in this area, I'm quite sure, so it was as if I was not there, sitting in between them. I was so appalled I was stunned into silence, but seething, and the colleague to my left didn't defend the field either, even though they did a fair amount of research in it. It took a long time for me to cool down, and I still feel a bit offended and shocked that my colleague would say something so awkward and obliquely confrontational. Similar situations occurred during my PhD work at Berkeley, where biomechanics was having a heyday and anatomy was just beginning to rise from the ashes. It's odd to me when biomechanists devalue morphology, because so much of mechanics depends on and relates to it, but to each their own. Many biological fields harbour reductionist factions that think they can divorce organisms from other aspects of their biology without losing something, so I'm not surprised, but maybe I am falling into my own trap of condescension here…

Anyway, I had those feelings of being on the receiving end of collegial condescension for a long time myself, and maybe that’s part of why I settled on calling my speciality something other than morphology. Shame on me, and double shame for getting back to that branding issue. But maybe not– maybe it IS important to talk about branding. I’ve been thinking a lot about my career and morphology in recent years, and keep returning to the thought that I need to embrace morphology in an even tighter love-hug. This blog has long been intended as a step in that direction (my Pinterest “Mucho Morphology” page is another step), but I could do more. Speaking of morphologists generally, perhaps we all could. Morphology still has some PR issues, most of us would probably agree, despite its arguable renaissance.

Fetal whale morphology is cool! (at Queen Mary UofL)

Thus the point of this post is simple: let's try using the words morphology or anatomy more often in our scientific communications. Put those words out there and say them with pride. Let's keep name-dropping morphology everywhere we can, within reason, and defending its value if challenged. To do this, we'll need to know how we individually feel about morphology, and ensure we're well informed enough to defend it. So think about those things, too, if you join this cause. By waging a PR battle against the forces of anti-morphology condescension, be they waxing or waning, we can get others to give our field its due credit. Fly that flayed banner of morphology high.

See a cool picture of an animal and want to post it on social media? Emphasize that it doesn't just look cool but has amazing anatomy. Publish a cool new paper showing how a novel adaptation evolved? Remind readers of the morphological (or at least phenotypic) basis of that adaptation and how it interacts with the environment. Summarizing your research interests and discipline to a colleague or on a website/CV? Put morphology in there. Stand up straight when you do, too. Morphology, morphology, morphology. Learn to love that word and it will serve us all well. Branding and PR are only part of the struggle that needs to happen, but much as they may be to our distaste, they can help. Doing great morphology-based science is the most important thing, but as social human beings we cannot ignore the PR issue.

Cat shoulder morphology is cool! (RVC teaching collection)

This was a shortish post for me but it's something I feel strongly about. My feelings have been magnified by taking on the role of Chair-Elect of the Division of Vertebrate Morphology at SICB, assisting the awesome current Chair Dr. Callum Ross and wise past-Chair Alice Gibb in addition to the rest of the committee and division, and as an Executive Committee member of the International Society of Vertebrate Morphology. I now have some extra responsibility to do something. Complaining about the state of affairs doesn't help much– doing something can. If you're a vertebrate morphologist, you should join these professional societies/divisions, attend their superb meetings and join their increasing presence on social media like Facebook (and soon Twitter?). Speak up and join in, please; these societies exist to help you and morphology!

Did you notice I didn’t use the title of the post as a lead-in to altered lyrics from a certain hit U2 song? Well I did. Maybe you’ll appreciate me resisting the temptation here. My Xmas song about our three new morphology papers didn’t exactly evoke angelic choruses.

What do you think, morphologists and non-morphologists? I am sure there are analogous situations in other fields. I'm curious how other morphologists or other fields deal with, or have struggled with, this kind of image problem before, especially in situations where the science itself is vigorous and rigorous but the perception may be otherwise.

Read Full Post »

How do I manage my team of 10+ researchers without losing my mind <ahem> or otherwise having things fall apart? I’m often asked this, as I was today (10 December; I ruminated before posting this as I worried it was too boring). Whether those undesirable things have truly not transpired is perhaps debatable, but I’m still here and so is my team and their funding, so I take that as a good sign overall. But I usually give a lame answer to that question of how I do it all, like “I have no secrets, I just do it.” Which is superficially true, but…

Today was that time of year at the RVC when I conduct appraisals of the performance and development of my research staff, a procedure I once found horridly awkward and overly bureaucratic. But now that it focuses more on being helpful, by learning from past missteps and plotting future steps in an (ideally) realistic fashion, than on box-ticking or intimidation, I find the appraisals useful, at least for documenting progress and ensuring that teammates continue to develop their careers, not just crank out data and papers. By dissecting the year's events, one comes to understand what happened, and what needs to happen in the next year.

The whole process crystallizes my own thoughts, by the end of a day of ~1 hour chats, on things like where there needs to be different coordination of team members in the coming year, or where I need to give more guidance, or where potential problems might arise. It especially helps us to sort out a timeline for the year… which inevitably still seems to go pear-shaped due to unexpected challenges, but we adapt, and I think I am getting better myself at guessing how long research steps might take (pick an initial date that seems reasonable, move it back, then move it further back, then keep an eye on it).

Anyway, today the appraisals reminded me that I don’t have a good story for how I manage my team other than by doing these appraisals, which as an annual event are far from sufficient management but have become necessary. And so here I am with a post that goes through my approaches. Maybe you will find it useful or it will stimulate discussion. There are myriad styles of management. I am outlining here what facets of my style I can think of. There are parallels between this post and my earlier one on “success”, but I’ve tried to eliminate overlap.

Stomach-Churning Rating: 0/10 but no photos, long-read, bullet points AND top 10 list. A different kind of gore.

Successfully managing a large (for my field) research team leaves one with fewer choices than in a smaller team– in the latter case, you can be almost anywhere on the spectrum of hands-off vs. hands-on management and things may still go fine (or not). In the case of a large (and interdisciplinary) team, there is no way to be heavily hands-on, especially with so many external collaborations piled on top of it all. So a balance has to be struck somewhere. As a result, inevitably I am forced into a managerial role where, over the years, I've become less directly in touch with the nitty-gritty details of the core methods we use. I've had to adapt to being comfortable with (1) emphasizing a big-picture view that keeps the concepts at the forefront, (2) taking the constraints (e.g. time, technology and methods, which I do still therefore have to keep tabs on) into account in planning, (3) cultivating a level of trust in each team member that they will do a good job (also see "loyalty" below), and (4) maintaining the right level of overall expertise within the group (including external collaborators) to get research done to our standard. To do these things, I've had to learn to do these other things, which happen to form a top 10 list but are in no particular order:

  1. Communicate regularly– I'm an obsessive, well-organized emailer, in particular. E-mail is how I manage most of my collaborations within and outside my team, and how I keep track of many of the details. (Indeed, collaborators who aren't so consistent with email are difficult for me.) We do regular weekly team meetings in which we go around the table and review what we're up to, and I do in-person chats or G+/Skype sessions fairly frequently to keep the ball rolling and everyone in sync. I now keep a notebook, or "memory cane" as I call it, to document meetings and to-do lists. Old school, but it works for me, whereas my mental notebook started not to at times.
  2. Treat each person individually- everyone responds best to different management styles, so within my range of capabilities I vary my approach from more to less hands-off, or gentler vs. firmer. If people can handle robust criticism, or even if they can’t but they need to hear it, I can modulate to deliver that, or try to avoid crushing them. While I have high expectations of myself and those I work with, I also know that I have to be flexible because everyone is different.
  3. Value loyalty AND autonomy– Loyalty and trust matter hugely to me as a manager/collaborator. I believe in paying people back (e.g. expending a lot of effort in helping them move their career forward) for their dedicated work on my team, but also keeping in mind that I may need to make "sacrifices" (e.g. give them time off for side-projects I'm not involved in) to help them develop their career. I seek to avoid the extremes: fawningly helpless yes-men (rare, actually) or ~100% selfish what's-in-it-for-me's (not as rare, but uncommon). Any good outcome can benefit a research manager even if they're not a part of it; on a big team, though, it's about more than what benefits the first author or the senior author– it's about what benefits everyone, which is a tricky balance to attain.
  4. Prioritize endlessly– for me this means trying to keep myself from being the rate-limiting step in research. And I try to say “no” to new priorities if they don’t seem right for me. Sometimes it means getting little things done first to clear my desk (and mind) for bigger tasks; sometimes it means focusing on big tasks to the exclusion of smaller ones. Often it depends on my whims and energy level, but I try to keep those from harming others’ research. I make prioritized to-do lists and revisit them regularly.
  5. Allow chaos and failure/imperfection– This is the hardest for me. My mind does not work like a stereotypical accountant's- I like a bit of disorder, as my seemingly messy office attests. Oddly, within that disorder I find order, as my brain is still usually good at keeping things organized. I do like a certain level of involvement in research, and I get nervous when I feel that sliding down toward "uninvolved"– loss of control in research can be scary. Some degree of detachment, stepping aside and allowing time to pass and people to self-organize or come ask for help to avoid disaster (or celebrate success), is necessary, though, because I cannot be everywhere at once and nothing can be perfect. And of course, I myself fail sometimes, but with alertness comes recognition and learning. Furthermore, too much control is micromanagement, which hurts morale, whereas "disorder" allows the flexibility that can bring serendipitous results (or disaster). And speaking of disaster, one has to be mentally prepared for it, and able to take a deep breath and react in the right way when it comes. Which leads to…
  6. Think brutally clearly – Despite all the swirling chaos of a large research team and the many other responsibilities of an academic and father and all that, I have taught myself a skill that I point to as a vital one: I can stop what I'm doing and focus very intensely on a problem when I need to. If it's within my expertise to solve, I usually can, by clearing my head (past experience with kendo, yoga and karate has helped me do this) and entering an intensely logical, calm, objective quasi-zen-state. I set my emotions aside (especially if it is a stressful situation), figure out what's possible, what's impossible, and what needs to be done, and find what I think is the best course of action quite quickly, then act on that decisively (but without dogmatic inflexibility). In such moments, I find myself thinking "What is the right thing to do here?" and I almost instinctively know when I can see that right thing. At that moment I get a charge of adrenaline to act upon it, which helps me to move on quickly. From little but hard decisions to major crises, this ability serves me very well in my whole life. I maintain a duality between that single-minded focus and juggling/anarchy, often able to switch quickly between those modes as I need to.
  7. Work hardest when I work best (e.g. good sleep and caffeination level, mornings)- and let myself slack off when I’m not in prime working condition. I shrug aside guilt if I am “slacking”– I can’t do everything and some things must fall by the wayside if I can’t realistically resolve them in whatever state of mind I’m in. The slacking helps me recharge and refresh– by playing a quick video game or checking social media or cranking up some classic Iron Maiden/modern Menzingers, I can return to my work with new gusto, or even inspiration, because…
  8. Spend a lot of time thinking while I “slack off”, in little bursts (e.g. while checking Twitter). I let my brain process things that are going on, let go of them when I’m not getting anywhere with them, and return to them later. This is harder than it sounds as I still stubbornly or anxiously get stuck on things if they are stressing me out or exciting me a lot. But I am progressively improving at this staccato-thinking skill.
  9. Points 7+8 relate to my view that there is no “work-life balance” for me—it is all my life, and there’s still a lot of time to enjoy the non-work parts, but it’s all a blend that lets me be who I am. I don’t draw lines in the sand. Those just tend to make one feel bad, one way or another.
  10. Be human– try to avoid acting like a distant, emotionless robotic manager and cultivate more of a family-like team. Being labelled with the word “boss” can turn my stomach. “Mentor” and “collaborator” are more like what I aim for. Being open about my own flaws, failures, and life helps.

Long post, yeah! 1 hour on a train commute lets the thoughts flow. I hope that if you made it this far you found it interesting.

What do you do if you manage a team, what works for you or what stories do you have of research management? Celebrations and post-mortems are equally welcome.

 

Read Full Post »

Let's play find-the-spandrel!

We just passed the 35th anniversary of the publication of Gould and Lewontin’s classic, highly cited, highly controversial essay (diatribe?), “The spandrels of San Marco and the Panglossian paradigm: a critique of the adaptationist programme.” The 21st of September 1979 was the fateful date. Every PhD student in biology should read it (you can find pdfs here— this post assumes some familiarity with it!) and wrestle with it and either love it or hate it- THERE CAN BE NO MIDDLE GROUND! With some 5405 citations according to Google Scholar, it has generated some discussion, to put it lightly. Evolutionary physiologists and behaviourists who were working at the time it came out have told me stories of how it sent (and continues to send) shockwaves through the community. Shockwaves of “oh crap I should have known better” and “Hell yeah man” and “F@$£ you Steve,” more or less.

I am among those who love “The Spandrels Paper“. I love it despite its many flaws that people have pointed out to seemingly no end- the inaccurate architectural spandrel analogy, the Gouldian discursive (overly parenthetical [I’m a recovering victim of reading too much Gould as an undergrad]) writing style, the perhaps excessive usage of “Look at some classic non-scientific literature I can quote”, the straw men and so on. I won’t belabour those; again your favourite literature search engine can be your guide through that dense bibliography of critiques. I love it because it is so daringly iconoclastic, and because I think it is still an accurate criticism of what a LOT of scientists who do research overlapping with evolutionary biology (that is, much of biology itself) do.

The aspects of The Spandrels Paper that I still think about the most are:

(1) scientists seldom test hypotheses of adaptation; they are quick to label something that is useful to an animal as an adaptation and then move on after rhapsodizing about how cool of an adaptation it is; and

(2) thus alternatives to adaptation, which might be very exciting topics to study in their own right, get less attention or none.

Regarding #2, it is true that evo-devo has flourished by raising the flag of constraint (genetic/developmental/other factors that prevent evolution from going in a certain direction, or even accelerate it in less random directions). That's good, and there are other examples (genetic drift, we've heard about that sometimes), but #1 still often tends to be the course researchers take. To some degree, labelling something as an adaptation is used as hype, to make it more exciting, I think, in plenty of instances.

Truth be told, much as Gould and Lewontin admitted in their 1979 paper and later ones, natural selection surely forges lineages that have loads of adaptations (even in the strictest sense of the word), and a lot of useful traits of organisms are thus indeed adaptations by any definition. But the tendency seems to be to assume that this presumptive commonality of adaptations means we are justified in quickly labelling traits as adaptations.

Or maybe some researchers just don’t care about rigorous tests of adaptation as they’re keen to do other things. Standards vary. What I wanted to raise in this post is how I tend to think about adaptation:

I think adaptations are totally cool products of evolution that we should be joyous to imagine, document, test and discover. But that means they should be Special. Precious. A cause for celebration, to carefully document by scientific criteria that something is an adaptation in the strictest sense, and not a plesiomorphy/exaptation (i.e. an adaptation at a different level in the evolutionary hierarchy; or an old one put to new uses), spandrel/byproduct, or other alternatives to adaptation-for-current-biological-role.

But that special-ness means testing a hypothesis of adaptation is hard. As many authors waving the flag of The Modern Comparative Method (TMCM) have contended, sciencing truth-to-adaptationist-power by the rules of TMCM takes a lot of work! George Lauder’s 1996 commentary in the great Adaptation book (pdf of the chapter here) outlined a lengthy procedure of  “The Argument from Design“; i.e., testing adaptation hypotheses. At its strictest implementation it could take a career (biomechanics experiments, field studies, fitness measurements, heritability studies, etc.) to test for one adaptation.

Who has time for all that?

The latter question seems maladaptive, placing cart and horse bass-ackwards. If one agrees that adaptations are Special, then one should be patient in testing them– within practical constraints, to some degree, and different fields will be forced to have different comfort levels of hypothesis testing (e.g. with fossils you can't ever measure fitness or other components of adaptation directly; that does not mean we cannot indirectly test for adaptations– with the vast time spans available, one would expect palaeo could do a very good job of it, actually!).

I find that, in my spheres of research, biomechanists in particular tend to be quick to call the things they study adaptations, and plenty of palaeontologists do too. I feel that over-use of the label "adaptation" cheapens the concept, making the discovery of one of the most revered and crucial phenomena in all of evolutionary biology seem trite. Things that are so easy to discover don't seem as precious. When everything is awesome, nothing is…

Thanks in part to The Spandrels Paper's indoctrination, I've always hesitated to call features of animals adaptations, especially in my main research. I nominally do study major ?adaptations? such as terrestrial locomotion at giant body sizes, or the evolution of dinosaurian bipedalism. I recently searched through my ~80 serious scientific papers and found about 50 mentions of "adapt" in an adaptationist, evolutionary context. That's not much considering how vital the concept is (or I think it is) to my research, but it's still some mentions that slipped through, most of them cautiously considered– but plenty more times I very deliberately avoided using the term. So I'm no model of best practice, and perhaps I'm too wedded to semantics and pedantry on this issue, but I still find it interesting to think about, and I've gradually been heading in the direction of aspect #2 (above in bold) in my research, looking more and more for alternative hypotheses to adaptation that can be tested.

I like talking about The Spandrels Paper and I like some of the criticism of it- that’s healthy. It’s a fun paper to argue about and maybe we should move on, but I still come back to it and wonder how much of the resistance to its core points is truly scientific. I’m entering into teaching time, and I always teach my undergrads a few nuggets of The Spandrels Paper to get them thinking about what lies beyond adaptation in organismal design.

 What do other scientists think? What does adaptation mean (in terms of standards required to test it) to you? I’m curious how much personal/disciplinary standards vary. How much should they?

For the non-scientists, try this on for size: when our beloved Sir David Attenborough (or any science communicator) speaks in a nature documentary about how the otter is “perfectly adapted” to swim after prey underwater, do you buy into that or question it? Should you? (I get documentaries pushing me *all the time* to make statements like this, with a nudge and a wink when I resist) Aren’t scientists funny creatures anyway?

Read Full Post »

I’ll let the poll (prior post) run for a while but as it winds down I wanted to explain why I posted it:

In the past, I’ve often run into scientists who, when defending their published or other research, respond something like this:

“Yeah those data (or methods) might be wrong but the conclusions are right regardless, so don’t worry.”

And I’ve said things like that before. However, I’ve since realized that this is a dangerous attitude, and in many contexts it is wrong.

If the data are guesses, as in the example I gave, then we might worry about them and want to improve them. The "data are guesses" context that I set the prior post in comes from Garland's 1983 paper on the maximal speeds of mammals– you can download a pdf here if this link works (or Google it). Basically the analysis shows that, as mammals get bigger, they don't just keep speeding up, as a simple linear analysis might suggest. Rather, at a moderate size of around 50-100 kg body mass or so, they hit a plateau of maximal speed, and then bigger mammals tend to move more slowly. However, all but a few of the data points in that paper are guesses, many coming from old literature. The elephant data points, especially those for African elephants, are excessively fast, and on a little blog-ish webpage from the early 2000s we chronicled the history of these data– it's a fun read, I think. The most important, influential data plot from that paper by Garland is below, and I love it– this plot says a lot:

[Plot from Garland (1983): maximal running speed vs. body mass in mammals]

I’ve worried about the accuracy of those data points for a long time, especially as analyses keep re-using them– e.g. this paper, this one, and this one, by different authors. I’ve talked to several people about this paper over the past 20 years or so. The general feeling has been in agreement with Scientist 1 in the poll, or the quote above– it’s hard to imagine how the main conclusions of the paper would truly be wrong, despite the unavoidable flaws in the data. I’d agree with that statement still: I love that Garland paper after many years and many reads. It is a paper that is strongly related to hypotheses that my own research seeks out to test. I’ve also tried to fill in some real empirical data on maximal speeds for mammals (mainly elephants; others have been less attainable), to improve data that could be put into or compared with such an analysis. But it is very hard to get good data on even near-maximal speeds for most non-domesticated, non-trained species. So the situation seems to be tolerable. Not ideal, but tolerable. Since 1983, science seems to be moving slowly toward better understanding of the real-life patterns that the Garland paper first inferred, and that is good.
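As an aside for the quantitatively curious: the kind of pattern described above (speed rising with size, peaking at an intermediate body mass, then falling) is often probed by fitting a quadratic of log maximal speed on log body mass and asking whether the curvature is negative and, if so, where the fitted curve peaks. Here is a minimal sketch of that idea in Python; it is not Garland's actual code or data, and every number in it is a hypothetical placeholder:

```python
import numpy as np

# Hypothetical (body mass in kg, maximal speed in km/h) pairs -- NOT real
# measurements, just placeholders to show the shape of the analysis.
mass = np.array([0.05, 0.5, 5.0, 20.0, 60.0, 120.0, 500.0, 2500.0, 5000.0])
speed = np.array([10.0, 25.0, 45.0, 60.0, 70.0, 68.0, 55.0, 40.0, 35.0])

log_m = np.log10(mass)
log_v = np.log10(speed)

# Quadratic regression: log10(speed) = a + b*log10(mass) + c*log10(mass)^2.
# np.polyfit returns coefficients from highest degree to lowest: [c, b, a].
c, b, a = np.polyfit(log_m, log_v, deg=2)

if c < 0:
    # Downward-opening parabola: speed peaks at an intermediate body mass,
    # at the vertex log10(mass) = -b / (2c).
    peak_mass = 10 ** (-b / (2 * c))
    print(f"Fitted 'optimal' body mass for maximal speed: ~{peak_mass:.0f} kg")
else:
    print("No intermediate optimum in this fit; speed changes monotonically with mass.")
```

If the quadratic term is negative, the mass at the peak falls straight out of the vertex of the parabola; but with real data, of course, that estimate is only as good as the speed data themselves, which is the whole point of this post.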

But…

My poll wasn’t really about that Garland paper. I could defend that paper- it makes the best of a tough situation, and it has stimulated a lot of research (197 citations according to Google; seems low actually, considering the influence I feel the paper has had).

I decided to do the poll because thinking about the Garland paper’s “(educated) guesses as data” led me to think of another context in which someone might say “Yeah those data might be wrong but the conclusions are right regardless, so don’t worry.” They might say it to defend their own work, such as to deflect concerns that the paper might be based on flawed data or methods that should be formally corrected. I’ve heard people say this a lot about their own work, and sometimes it might be defensible. But I think we should think harder about why we would say such things, and if we are justified in doing so.

We may not just be making the best of a tough situation in our own research. Yes, indeed, science is normally wrong to some degree. A more disconcerting situation is that our wrongs may be mistakes that others will proliferate in the future. Part of the reasoning for being strict stewards of our own data is this: It’s our responsibility as scientists to protect the integrity of the scientific record, particularly of our own published research because we may know that best. We’re not funded (by whatever source, unless we’re independently wealthy) just to further our own careers, although that’s important too, as we’re not robots. We’re funded to generate useful knowledge (including data) that others can use, for the benefit of the society/institution that funds us. All the more reason to share our critical data as we publish papers, but I won’t go off on that important tangent right now.

In the context described in the previous paragraph and the overly simplistic poll, I'd tend to favour data over conclusions, especially if forced to answer the question as phrased. The poll reveals that, like me, most (~58%) respondents also would tend to favour data over conclusions (yes, a biased audience, perhaps– social media users might tend to be more savvy about data issues in science today? Small sample size, sure, that too!). Whereas very few (~10%) would favour conclusions, in the context of the poll. The many excellent comments on the poll post reveal the trickier nuances behind the poll's overly simplistic question, and why many (~32%) did not favour one answer over the other.

If you’ve followed this blog for a while, you may be familiar with a post in which I ruminated over my own responsibilities and conundrums we face in work-life balance, personal happiness, and our desires to protect ourselves or judge/shame others. And if you’ve closely followed me on Twitter or Facebook, you may have noticed we corrected a paper recently and retracted another. So I’ve stuck by my guns lately, as I long have, to correct my team’s work when I’m aware of problems. But along the way I’ve learned a lot, too, about myself, science, collaboration, humanity, how to improve research practice or scrutiny, and the pain of errors vs. the satisfaction of doing the right thing. I’ve had some excellent advice from senior management at the RVC along the way, which I am thankful for.

I’ve been realizing I should minimize my own usage of the phrase “The science may be flawed but the conclusions are right.” That can be a more-or-less valid defence, as in the case of the classic Garland paper. But it can also be a mask (unintentional or not) that hides fear that past science might have real problems (or even just minor ones that nonetheless deserve fixing) that could distract one away from the pressing issues of current science. Science doesn’t appreciate the “pay no attention to the person behind the curtain” defence, however. And we owe it to future science to tidy up past messes, ensuring the soundness of science’s data.

We’re used to moving forward in science, not backward. Indeed, the idea of moving backward, undoing one’s own efforts, can be terrifying to a scientist– especially an early career researcher, who may feel they have more at risk. But it is at the very core of science’s ethos to undo itself, to fix itself, and then to move on forward again.

I hope that this blog post inspires other scientists to think about their own research and how they balance the priorities of keeping their research chugging along while also looking backwards and reassessing it as they proceed. It should become less common to say "Yeah those data might be wrong but the conclusions are right regardless, so don't worry." Or it might become more common to politely question such a response in others. As I wrote before, there often are no simple, one-size-fits-all answers for how best to do science. Yet that means we should be wary of letting our own simple answers slip out, lest they blind us or others.

Maybe this is all bloody obvious or tedious to blog readers but I found it interesting to think about, so I’m sharing it. I’d enjoy hearing your thoughts.

Coming soon: more Mystery Anatomy, and a Richard Owen post I’ve long intended to do.

Read Full Post »

A short post that guest-tweeting at the Biotweeps account on Twitter got me thinking about– featuring a poll.

Imagine this: two scientists (colleagues, if you're a scientist) are arguing as follows. Say it's an argument about a classic paper in which much of the data subjected to detailed statistical analyses are quantitative guesses, not hard measurements. This could be in any field of science.

Scientist 1: “Conclusions are what matter most in science. If the data are guesses, but still roughly right, we shouldn’t worry much. The conclusions will still be sound regardless. That’s the high priority, because science advances by ideas gleaned from conclusions, inspiring other scientists.”

Scientist 2: “Data are what matter most in science. If the data are guesses, or flawed in some other way, this is a big problem and scientists must fix it. That’s the high priority, because science advances by data that lead to conclusions, or to more science.”

Who’s right? Have your say in this anonymous poll (please vote first before viewing results!):

link: http://poll.fm/4xf5e

[Wordpress is not showing the poll on all browsers so you may have to click the link]

And if you have more to say and don’t mind being non-anonymous, say more in the Comments- can you convince others of your answer? Or figure out what you think by ruminating in the comments?

I’m genuinely curious what people think. I have my own opinion, which has changed a lot over the past year. And I think it is a very important question scientists should think about, and discuss. I’m not just interested in scientists’ views though; anyone science-interested should join in.

Read Full Post »
