Much of science is a kind of puzzle-solving activity. You, the scientist, are presented with a phenomenon whose causes and underlying mechanisms are not yet understood, and your task is to elucidate them. That this succeeds at all inspires awe. That it succeeds fairly regularly and efficiently requires an explanation.
There are two issues to be understood, broadly speaking: (1) how we can tell that a scientific hypothesis is probably true (this is usually called “justification”) and (2) how we come up with hypotheses in the first place (usually called “discovery”). Both stages are crucial. The best tester of hypotheses is helpless if she has nothing to test. And the most creative hypotheses are of limited use if we cannot assess their truth. Importantly, the efficiency of science must depend to a large extent on discovery: on the fact that candidate hypotheses can be produced quickly and reliably.
Not so long ago, philosophers of science believed that discovery was mostly intractable: a matter of happy guesses and creative intuitions. In recent decades, however, it has been argued that systematic insight into scientific hypothesis generation is possible. A particularly nice and approachable example of this type of thinking in the philosophy of biology is given in a recent book by Carl Craver and Lindley Darden (based on their earlier research). They argue that scientists invent new mechanisms by using three main strategies: (1) they transfer mechanism schemata from related problems (schema instantiation); (2) they transfer mechanism components from related problems (modular subassembly); (3) they investigate how known components or interactions can link up (forward/backward chaining). A somewhat broader and more historical (but less problem-oriented) perspective is given by Jutta Schickore in the Stanford Encyclopedia of Philosophy.
In a new paper, my co-author Kärin Nickelsen and I present our own contribution to the discovery debate. Our work is in the Craver/Darden tradition, but we look in detail at two historical cases, oxidative phosphorylation and the Calvin-Benson cycle, to advance the state of the art a bit (by about a paper’s worth). We focus on three areas:
First, we consider “hard cases” of discovery from the history of science. By this we mean achievements of acknowledged originality that no one would describe as mere extrapolations of previous knowledge. If a particularly spectacular scientific discovery can be explained in terms of a certain set of discovery strategies, then this speaks to the usefulness and power of these strategies: less complex cases should present no problem for them. Hard cases thus support our claim that much of scientific creativity is ultimately explicable in terms of the skillful and diligent use of basic heuristics.
Second, we are interested in whether discovery strategies are “substrate neutral” or “domain specific”. Are there general rules for discovering scientific hypotheses, or do strategies apply only to particular fields of inquiry, or even to particular kinds of empirical problems within disciplines? We think that the truth lies, for once, in the middle: discovery strategies seem to be somewhat general, but they need to be applied to highly domain-specific previous knowledge. We discuss instances of this in the paper.
Third, the existing literature does not pay enough attention to the way in which the space of possible hypotheses is explored systematically. In one of our cases, for instance, a particularly interesting scientific hypothesis was arrived at, in part, by simple causal combinatorics. It was known that two types of events, A and B, were correlated. This allowed the following (exhaustive) set of hypotheses to be explored: Does A cause B? Does B cause A? Or do A and B have a common cause? While this procedure may sound simple, its results are anything but.
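To make the combinatorial step concrete, here is a minimal, purely illustrative Python sketch (not taken from the paper, and not how the historical actors worked) that simply enumerates the exhaustive set of candidate causal structures compatible with an observed correlation between two event types:

```python
# Illustrative sketch only: list the exhaustive candidate explanations for a
# correlation between two event types, labelled here as A and B.

def causal_hypotheses(x: str, y: str) -> list[str]:
    """Return the exhaustive set of causal hypotheses for a correlation of x and y."""
    return [
        f"{x} causes {y}",                     # direct causation, one direction
        f"{y} causes {x}",                     # direct causation, reverse direction
        f"{x} and {y} share a common cause",   # a confounding third factor
    ]

if __name__ == "__main__":
    for i, hypothesis in enumerate(causal_hypotheses("A", "B"), start=1):
        print(f"H{i}: {hypothesis}")
```

The point of the sketch is only that the candidate space is small, fixed, and exhaustively enumerable, which is what makes this kind of systematic exploration tractable in the first place.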
The paper has just appeared in History and Philosophy of the Life Sciences, and our penultimate draft is available on Pitt’s PhilSci archive.