I think a lot about where my ideas come from as a researcher and what a “new” idea really is, in addition to the “value” (in any sense) of scientific ideas. As a senior researcher, I find more and more that such evaluations of the merits of ideas are a huge part of my job. And I hear my colleagues talking about similar things all the time. Variously, the reflections and discussions boil down to something like these (falling somewhere in the multi-dimensional space bounded by the two extremes below):
- “Study by so-and-so claims that it shows something novel but it’s not; such-and-such said/showed that in year XXXX”, or
- “I came up with the idea for the paper/grant and that is the most important thing”.
The above extremes, and perhaps all points in between, will always be debatable. No across-the-board, seemingly profound statement can encompass all possibilities– neither the ironically trite “There’s nothing new under the sun” nor the vast oversimplification “Ideas are easy to come by; data are hard.”
What is a new idea, and what is one worth? Well, yes, that varies in science. I think it’s helpful to dissect these issues separately– first the origin and evolution of ideas, and then their currency in science. And so here I will do that. These are not new ideas– even for me; I’ve been sitting on this post since 18 October 2015, waiting for the ideas to coalesce enough to post this!
Stomach-Churning Rating: 1/10; ruminations, some of which may be blindingly obvious. No images; just a long read.
It’s safe to say, and I know a lot has been written about this in the history and philosophy of science, that almost all “new” ideas in science are incremental. They tend to be little steps forward; not Kuhn-ian revolutions that blindside the community. Fans of Darwin and other science heroes are constantly reminded that even the geniuses’ ideas emerged mostly from the tangled skein of scientific society; coalescing from particles suspended in the scientific group-think. That doesn’t devalue science, as science is still making big strides– by (increasingly?) small, frequent steps across the scholastic landscape (see below).
It’s easy to take a shortcut and say, for example, that evolution was Charles Darwin’s big idea (or give him the lion’s share of credit) when of course that is a huge oversimplification and highly misleading– historical evidence shows beyond question that evolutionary ideas had been bounced around for decades (or centuries) and that Darwin had come across plenty of them, his grandfather Erasmus’s Zoonomia being an obvious one out of many influences. I was recently teaching my main undergrad class about this very topic and it got me thinking more about how, on the more standard scale of us non-genius scientists, ideas always have many common ancestors and undergo lateral transfer of heritable material (to abuse the evolutionary metaphor). Saltationism/macromutation of ideas is rare, hence precious when it truly does happen. But hybridization (multidisciplinary syntheses; integrative science; all the rage these days– usually for good reasons) is a powerful force, probably today more than ever in science, able to generate and tackle big ideas.
It’s just as easy to default to the breathless “Wow, everything is new!” shortcut. The 24-hour news cycle takes a regular tongue-lashing from scientists and other science communicators for taking this shortcut too often. We might more reflexively forgive that cycle in the breath after cursing it, because memories and attention spans are short, hectic lives are only a bit longer, and thus in the latest science news story the headline or ~500-word article can’t regale us with the entire, nuanced history of a subject *and* explain within those tight constraints what incremental advance has been made, with due credit to all antecedents. Would we prefer less science news coverage overall, to save that breathlessness for the rare occasion when it is truly deserved? Or just more boring, toned-down, long-winded coverage (cough, this post, cough?) that attracts less interest in science overall? I’d be wary of such arguments.
Scientific journal articles, too, are becoming more complex because of the increasingly specialized, technical nature of many fields that have benefited from prior scientific advances. Online journal formats are helping to loosen the noose of word limits on those articles. But good mentors (and reviewers, and editors) remind young (and all other) scientists that overly long papers will raise the risk of fewer people reading them or spotting key phrases buried in them. “Moderation in all things.” Usually. “Exceptions to all things”, too, I admit– sometimes long papers are great!
Furthermore, much as journalists can’t cover, or be familiar with, the whole history of a field, so it is becoming harder even for specialists to follow scientific progress within a specialized field. Open access to literature and online papers or emailed pdfs are helping, with even many very old classic papers becoming digitized. Yet while you’re reading through some of the old literature you’d missed, and doing teaching and research and admin and other tasks that life as a scientist demands of you, new papers are popping up. You see some of them, and others get missed: there are too many papers getting published to follow them all; there are so many journals (many of the online ones being very generalist, so a paper on a given topic could appear almost anywhere); and even the best literature-searching tools don’t find everything. Patience, to a degree, in tolerating missed references is thus important, although it can help to point them out diplomatically.
I find it exasperating trying to keep up with the fields I work in. Seriously, I frequently look at my folders of papers “to read” and I think “*@$*! I’ll never read all that now!” Ten years ago it was different. I felt like I could, and I think I mostly did, keep up with my interests. Furthermore, I care about reading others’ research. I love reading science and I feel proud to keep up with a topic, knowing that I’m doing my scholastic duty. I want to learn what others have learned, both in the far past and far-flung countries and in the recent cutting-edge studies. I have gotten where I am from doing that– the literature routinely inspires me to take new directions in research and many of my best papers/grants/projects have come directly from that inspiration. I worry that I am missing opportunities for new ideas by not reading all of the old ones. But no one can do everything.
Aha! I have reached one of my points! The literature is there to show us the way; show us where the knowns and unknowns are in science. The peaks of knowledge where science has climbed to new heights of understanding! The valleys of ignorance where a bit of research effort or luck might get you far in making “new” discoveries! Or you can slog it on the slopes and try to conquer the peaks on that scholastic landscape (Sewall Wright fans, take note); show that your disciplinary Mt. Everest is taller than anyone thought it was. We all have our favoured routes as researchers. The point is to discover something “new” to science. It is all new, if it is worth doing, as a scientific researcher. And maybe 99.99% of that newness ascends from base-camps on older, lower landscapes.
But there is new (tiny steps), and then there is NEW (quantum leaps), and we must be wary of that’s-not-even-new-at-all (previously charted territory, or even plagiarism). The aspect of “new”-ness here that interests me is the subjective judgement we make in assessing that originality. As an example from my own research in vertebrate palaeontology, I’ve published around 12 papers that orbit the topic of whether a big theropod dinosaur such as Tyrannosaurus rex could run quickly, if at all. This all began with my 2002 paper in Nature, which was a “new” application in palaeontology of methods that were already well over 30 years old (inverse dynamics analysis of musculoskeletal mechanics), and owed a lot to simpler approaches by R. McNeill Alexander and others, but probably was published (and gained me some notoriety/infamy) because it answered a tough question in a clever, basic and reproducible way.
My (and coauthors’) papers in 2004, 2005, 2007 and onwards fleshed out this topic more and showed some of the nuances overlooked in that 2002 study. They were all “new”, even though that question “Was T. rex a fast runner?” was gradually beaten to death by them, to the point where even I am tired of it now, although I can still see areas where I’m not satisfied with my own answers. I guess the 2002 paper was NEW in its own moderate way and the later papers, even though some of them were much fancier (e.g. using 3D imaging and cutting-edge computer modelling; not just simple equations and sketches), were incrementally new in terms of the answers they gave, even if methodologically NEW-ish. We could debate the finer details of the “new-ness” there until the heat death of the universe, but I doubt it would be of more than very niche (read: tediously nerdy and semantic/subjective) interest. Debating whether something is new or not quickly gets boring. It’s a dull criticism to level at a new study, because most studies (at least in my field) are conducted and published for a good reason and probably are new in some way; the ways they are not new are far less interesting. It’s maybe even harder to accurately delineate the “new-ness” of a study than it is to berate it for its old-ness; the latter is the knee-jerk retort too often on social media, perhaps, and easily fuelled by scientific self-esteem issues.
Returning to point 1 above, then, sure. That study in year XXXX by so-and-so probably does have some relationship to the latest studies in a related field. And it behooves us as scholars to be aware of those homologies and homoplasies that are the history of any scientific discipline’s intellectual evolution. But giving the authors or the news media a tongue-lashing for talking about (incrementally) new research probably is more often wasted breath than otherwise; boiling down to debate over which hairs have been split and by whom and when. There are plenty of cases of excessive spin and hype, my personal punching-bag being the humdrum T. rex “scavenger” nonsense, but I usually find it more rewarding to look for the value in scientific ideas and data than excoriate the excesses of how they are presented to the public.
What’s more interesting, to me, is how weaving together old research to allow new ascents up scholastic landscapes moves science forward, sometimes in surprising ways. Old research provides data and ideas that are ancestors of new ideas and eventually new data. Indeed, this reticulating phylogeny of data and ideas muddies the waters between “data” and “ideas” in some cases. We need both, and different researchers fall into different positions along a spectrum. I see some scientists who take an “r selection” approach to ideas, throwing them out in a shotgun approach (sometimes with little or no peer review to control their quality) and hoping that some stick, adhering to supportive data. In contrast, other scientists fall closer to the “K selection” extreme, slowly nurturing ideas with cautious care, focusing on building up mountains of rigorous data to test those ideas with, until together they are ready to leave the academic nest and be published.
The integration of data and ideas from old research plays a variable role in that evolution of data and ideas– some of those scientists (falling on any point along that r-K spectrum) rely more on careful reading of past scientific literature to give their work firm historical footing and inspiration, whereas others mostly pluck a few references that they need to cite once they write up their work, not so keen on spending their time keeping up with the literature and thus focusing more on their own internal thought processes or other sources of inspiration. Different strokes for different folks…
What I’d like to close with, as a roughly second point of this post, is to question the inherent value of scientific ideas. I emphasize that I am unable to provide any easy answers here. What is the value of a good idea that needs testing by some kind of data? The source of inspiration may be immaterial to that evaluation; where one got one’s ideas may not matter here, it’s more about the value of the idea at hand– be it a hypothesis, a general question, a “what’s up with that?” (my personal favourite kind of research question); whatever.
For example, I can think of many cases in my career where a certain paper or grant owed hugely to an idea I had; without that idea, which wasn’t initially obvious, we’d still be stuck at some lower scientific base-camp, and big papers or grants or whatever would not have happened, and careers might not have blossomed the way they did (who knows!). My job as a senior researcher is often to “give away” ideas to those I mentor and collaborate with, and I love doing that. It’s seldom one-sided, with me playing the parthenogenetic parent and that’s it; normally these processes are intensely collaborative and thus multiparental hybrids. But I can usually trace back where the lineages of ideas came from and weigh their merits accordingly, and sometimes as scientists we have to do that.
However, it’s not just about ideas, either– a great scientific idea can be wonderfully valuable, but until it is tested its value might only be speculated upon. It takes the infamously time-consuming and technically challenging procedure of scientific data collection and analysis to test most ideas, and different collaborators may play lesser or greater roles in that process vs. the ancestral idea-generating process(es). Along the way, we must think of ideas for how to test the main idea itself: what methods might work, what has or hasn’t been tried before to tackle similar problems, and is the method we’ve chosen even working as the scientific work proceeds or must we switch approaches? That gets messy; ideas and data begin to become entangled, and contributions of individuals intermingled, but that’s how science works.
This leads to the flip side of the value of scientific ideas: in many cases they aren’t worth that much. They may be dead-ends for one reason or another– just foolish ideas; or untestable with current tools/data; or so obvious that anyone could have come up with them; or boring and not really worth trying to test. I’ve found it common to publish a paper and then hear, at some point before or after publication, another researcher say (in reference to some major or minor aspect of the paper) something like “Hey, I mentioned that idea in this paper/book/blog post!” More often than not, I don’t want to say it in retort but my reaction is “Well, duh. It’s a pretty obvious idea”, and/or “That’s great, but you didn’t test it; that’s the hard bit”. Cheap ideas by definition aren’t worth much fuss. To abuse Shakespeare, “The science is the thing; wherein we’ll catch which idea in science is king.” (sorry!)
A common example I run across that falls within this theme of cheap ideas is encountering a colleague (e.g. a new student, maybe one with lax supervision) who describes their new research project in which they apply some sort of fancy technique, like computer modelling/simulation, to an animal such as a nice dinosaur fossil– doing what some previous study/studies had done, but with a new species. Uncomfortably often, when asked, their justification for applying that method to that animal is that they can, and that they happen to have that animal accessible, rather than that there is an urgent, exciting question that must be answered for which that method and specimen are ideally suited. It’s not worthless, but… more emphasis on the value of ideas and less on climbing Mt. Everest because it’s there might have been rewarding?
Returning to the main thrust of this post conveyed by the title, then: it’s not easy to evaluate the value of an idea in science, but it’s something that we all have to learn to do as researchers. It can bring out the best and worst of our humanity as scientists, perhaps leading to conflict, or it can simply end with an unsatisfyingly muddled answer. So tread carefully on that scholastic landscape, and think about how you choose your way across it– there are many routes, but I think we can generally agree that the prize of discovery (whether incrementally small or uncommonly large) is a big part of why we dare the journey.
I’d love to hear your thoughts, your stories, and other insights here– it’s a very broad topic and lots of room for discussion!
Mixing metaphors (as you do) seems a pretty good model for how new scientific ideas often originate. A thing I sometimes notice is that someone mis-states what they’re trying to explain, or the hearer/reader misinterprets what’s been said/written, and thus an idea is born in the spark-gap between terminals, without either person being responsible. (Whenever two people converse, there are at least two conversations.) This isn’t restricted to science but is seen in the origin of jokes, poetic metaphors etc.
Well put– and this is one of the justifications for the increasing trend to have multidisciplinary research/teaching programs, schools, funding, etc! “Hybrid vigour” would be an evolutionary metaphor? https://en.wikipedia.org/wiki/Heterosis