“Words, words, words, I’m so sick of words!”

Many scientists seem to agree with a favorite adage of the best film directors: “show, don’t tell”. A look at the latest articles in Nature will often reveal that half of the available space is devoted to pictures and diagrams rather than text. Supplemental materials may even consist exclusively or almost exclusively of diagrams.

While historians of science have long paid attention to visual representations, philosophers have by and large ignored them. But this is slowly beginning to change. In the last decade or so there have been recurring bursts of philosophical interest in diagrams, including a project directed by William Bechtel and Adele Abrahamsen at UCSD under the lovely acronym Worgods (Working Group on Diagrams in Science). While I was at the Pittsburgh Center with the two of them, I quickly recognized not only how important diagrams are in scientific practice, but also that diagrams had figured prominently in much of my previous research. I had just never stopped to consider them as objects of inquiry in their own right.

Take this diagram as an example:

A figure from Fire et al., Nature 391 (1998), p. 808.
The figure appeared in a 1998 paper in Nature by Fire et al. It shows fluorescence micrographs of green fluorescent protein (GFP) in C. elegans. In a and b, GFP is expressed in a larva and in an adult, respectively. In d and e, the expression is suppressed. In g and h, the expression is suppressed in the nucleus, but not in mitochondria. What is the point of figures like this one?

In a forthcoming paper, I give a pretty straightforward answer. Many of the diagrams in a routine scientific publication depict what I call “causal contrasts”. They show what happens to a particular outcome variable if a specific intervention is performed, comparing this to a control in which the intervention is not performed. In the diagram above, the point is to show that an intervention with double-stranded RNA can suppress the expression of sequence-homologous genes (compare a and d, b and e). What is more, the diagram shows the specificity of this effect: if the double-stranded RNA is targeted only against the nuclear GFP gene, then the expression of mitochondrial GFP remains unaffected (compare a and g, b and h). For their demonstration of this extremely effective technique for gene suppression, Andrew Fire and Craig Mello, two of the paper’s authors, received the 2006 Nobel Prize in Physiology or Medicine.

I argue in the paper that many diagrams show causal contrasts, even though they differ significantly on the surface. Causal contrasts appear in many guises, some more obvious than others. They also appear in many scientific contexts, from the experimental to the observational to the purely theoretical.

Causal contrast diagrams are philosophically significant. They are a window into one of the key practices of scientific epistemology: causal inference. I suggest that this goes a long way toward explaining why scientists, when reading a paper, turn to the diagrams first: a study’s key results can often be found there. Intriguingly, diagrams are often much more than a preferred representational tool for causal inferences. Diagrams themselves often constitute evidence: think of the ubiquitous photographs of electrophoresis gels in molecular biology, or the fluorescence micrograph shown above.

I call the paper “Spot the difference: Causal contrasts in scientific diagrams”. A preprint is available on the PhilSci Archive, and the finished paper is about to come out in Studies in History and Philosophy of Biological and Biomedical Sciences.

Towards a methodology for integrated history and philosophy of science

It has been claimed that the integration of history and philosophy of science is nothing but a marriage of convenience. I think this is wrong: it is really a passionate romance, and I argue why in a recent co-authored paper. Beyond a discussion of what is to be gained by integrated HPS in principle, we focus particularly on the methodology of integration in practice: how should we relate philosophical concepts to historical cases, and vice versa? Our penultimate draft is now on the PhilSci Archive.

The paper is forthcoming in a collected volume titled The Philosophy of Historical Case Studies, which was co-edited by Tilman Sauer and me and will appear in the Boston Studies in the Philosophy and History of Science.

How to think new thoughts

Much of science is a kind of puzzle-solving activity. You, the scientist, are presented with a phenomenon whose causes and underlying mechanisms are not yet understood, and your task is to elucidate them. That this succeeds at all inspires awe. That it succeeds fairly regularly and efficiently requires an explanation.

There are two issues to be understood, broadly speaking: (1) how we can tell that a scientific hypothesis is probably true (this is usually called “justification”) and (2) how we come up with hypotheses in the first place (usually called “discovery”). Both stages are crucial. The best tester of hypotheses is helpless if she has nothing to test. And the most creative hypotheses are of limited use if we cannot assess their truth. Importantly, the efficiency of science must depend to a large extent on discovery: on the fact that candidate hypotheses can be produced quickly and reliably.

Not so long ago, philosophers of science believed that discovery was mostly intractable: a matter of happy guesses and creative intuitions. In recent decades, however, it has been argued that systematic insight into scientific hypothesis generation is possible. A particularly nice and approachable example of this type of thinking in the philosophy of biology is given in a recent book by Carl Craver and Lindley Darden (based on their earlier research). They argue that scientists invent new mechanisms by using three main strategies: (1) they transfer mechanism schemata from related problems (schema instantiation); (2) they transfer mechanism components from related problems (modular subassembly); (3) they investigate how known components or interactions can link up (forward/backward chaining). A somewhat broader and more historical (but less problem-oriented) perspective is given by Jutta Schickore in the Stanford Encyclopedia of Philosophy.
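The chaining strategies, in particular, have an algorithmic flavor that a toy sketch can bring out. The following is only an illustration, not anything from Craver and Darden’s book, and every mechanism step in it is a hypothetical placeholder: given a library of known conversions, forward chaining extends a candidate mechanism from a known starting point, while backward chaining asks what could have produced a known product.

```python
# A toy library of known mechanism steps: each entity maps to the
# entities it is known to produce. All names are hypothetical.
KNOWN_STEPS = {
    "substrate_X": ["intermediate_Y"],
    "substrate_W": ["intermediate_Y"],   # an alternative route to Y
    "intermediate_Y": ["product_Z"],
}

# Invert the library so we can also reason from products to precursors.
PRECURSORS = {}
for source, products in KNOWN_STEPS.items():
    for product in products:
        PRECURSORS.setdefault(product, []).append(source)

def forward_chain(start):
    """Hypothesize mechanism paths by extending forward from a start point."""
    paths = []
    def extend(path):
        successors = KNOWN_STEPS.get(path[-1], [])
        if not successors:
            paths.append(path)
        for nxt in successors:
            extend(path + [nxt])
    extend([start])
    return paths

def backward_chain(goal):
    """Hypothesize mechanism paths by asking what could produce the goal."""
    paths = []
    def extend(path):
        precursors = PRECURSORS.get(path[0], [])
        if not precursors:
            paths.append(path)
        for prev in precursors:
            extend([prev] + path)
    extend([goal])
    return paths

print(forward_chain("substrate_X"))  # one forward hypothesis: X -> Y -> Z
print(backward_chain("product_Z"))   # two candidate routes: via X or via W
```

The only point of the toy is that the same background library supports both directions of search, and that backward chaining naturally surfaces alternative candidate routes (here, two ways of arriving at product_Z).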

In a new paper, my co-author Kärin Nickelsen and I present our own contribution to the discovery debate. Our work is in the Craver/Darden tradition, but we look in detail at two historical cases, oxidative phosphorylation and the Calvin-Benson cycle, to advance the state of the art a bit (by about a paper’s worth). We focus on three areas:

First, we consider “hard cases” of discovery from the history of science. By this we mean achievements of acknowledged originality that no one would describe as mere extrapolations of previous knowledge. If a particularly spectacular scientific discovery can be explained in terms of a certain set of discovery strategies, then this speaks to the usefulness and power of these strategies: less complex cases should present no problem for them. So hard cases support our claim that much of scientific creativity is ultimately explicable in terms of the skillful and diligent use of basic heuristics.

Second, we are interested in whether discovery strategies are “substrate neutral” or “domain specific”. Are there general rules for discovering scientific hypotheses, or do strategies only apply to particular fields of inquiry, or even to particular kinds of empirical problems within disciplines? We think that the truth, for once, lies in the middle: discovery strategies seem to be somewhat general, but they need to be applied to highly domain-specific previous knowledge. We discuss instances of this in the paper.

Third, the existing literature does not pay enough attention to the way in which the space of possible hypotheses is explored systematically. In one of our cases, for instance, a particularly interesting scientific hypothesis was arrived at, in part, by simple causal combinatorics. It was known that two types of events, A and B, were correlated. This allowed the following (exhaustive) set of hypotheses to be explored: Does A cause B? Does B cause A? Or do A and B have a common cause? While this procedure may sound simple, its results are anything but.
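For what it’s worth, the combinatorial step is simple enough to write down mechanically. Here is a deliberately minimal sketch (not from the paper; the event types are abstract placeholders) that enumerates the exhaustive hypothesis set licensed by a single observed correlation:

```python
def causal_hypotheses(a, b):
    """Enumerate the exhaustive causal explanations of a correlation
    between event types a and b (setting aside mere coincidence)."""
    return [
        f"{a} causes {b}",                             # one direction
        f"{b} causes {a}",                             # the other direction
        f"a common cause produces both {a} and {b}",   # confounding
    ]

# The two correlated event types from the historical case are
# abstracted here as 'A' and 'B'.
for hypothesis in causal_hypotheses("A", "B"):
    print(hypothesis)
```

The point is merely that the hypothesis space generated by a single correlation is small, exhaustive, and mechanically enumerable; the hard scientific work lies in discriminating among the options.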

The paper has just appeared in History and Philosophy of the Life Sciences, and our penultimate draft is available on Pitt’s PhilSci Archive.

How much work can Mill’s method of difference do?

I have a new paper coming out in the European Journal for Philosophy of Science, and here’s a link to a preprint on the PhilSci Archive.

One of the basic ideas in scientific methodology is that in experiments you should “vary one thing at a time while keeping everything else constant”. This is often called Mill’s method of difference, after John Stuart Mill’s influential formulation of the principle in his System of Logic of 1843. Like many great ideas (think of natural selection), the method of difference can be explained to a second grader in two minutes, and yet the more one thinks about it, the more interesting it becomes.

In his 1991 book on inference to the best explanation (IBE), the late Peter Lipton made the descriptive claim that the method of difference is used widely in much of science, and this seems correct to me. But he also argued that the method is actually much less powerful than we think. In principle, we would like to vary one factor (and one factor only), observe a difference in some outcome, and then conclude that the factor we varied is the cause of the difference. But of course this depends on some rather steep assumptions.
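To fix ideas before turning to those assumptions, here is a minimal sketch of the inference rule, under the simplifying assumption that each experimental setup can be recorded as a dictionary of factor settings plus an outcome (all names are hypothetical, loosely echoing the Snow case discussed below):

```python
def method_of_difference(trial, control, outcome="outcome"):
    """Mill's method of difference as a toy inference rule: if exactly one
    factor differs between trial and control, and the outcomes differ,
    attribute the difference in outcome to that factor."""
    factors = (set(trial) | set(control)) - {outcome}
    differing = [f for f in factors if trial.get(f) != control.get(f)]
    if len(differing) != 1:
        return None  # the inference is licensed only by a single difference
    if trial[outcome] == control[outcome]:
        return None  # no difference in outcome to explain
    return differing[0]

# Hypothetical example: two districts that (supposedly) differ only
# in their water supply.
trial = {"water_supply": "company A", "district": "urban", "outcome": "cholera"}
control = {"water_supply": "company B", "district": "urban", "outcome": "no cholera"}
print(method_of_difference(trial, control))  # -> 'water_supply'
```

Both of Lipton’s problems can be read directly off this sketch: the single-difference guard is exactly what the problem of multiple differences says we can never fully verify, and writing down the factors at all presupposes that the relevant difference is observable.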

First, we need to be sure that only one factor has changed; otherwise the inference does not go through. But how do we ever know that there is only one difference? This is what Lipton called the problem of multiple differences.

Second, we may sometimes wish to conduct experiments where the factor which varies is unobserved or unobservable. For instance, John Snow inferred in the 19th century that local differences in cholera outbreaks in London were caused by a difference in the water supplied by two different companies. However, Snow could not actually observe this difference in the water supply (what we now know was a difference in the presence of the bacterium Vibrio cholerae). So Snow inferred causality even though the relevant initial difference was itself only inferred. This is what Lipton called the problem of inferred differences.

Lipton proposed elegant and clever solutions to both problems. He argued that the method of difference is to some extent mere surface action. Beneath the surface, scientists actually judge the explanatory power of various hypotheses, and this is crucial to inferences based on the method of difference. So Snow may not have known that an invisible agent in part of the water supply caused cholera, or that this was the only relevant difference between the water supplies. But he could judge that if such an agent existed, it would provide a powerful explanation of many known facts. In order to make it easier to discuss such judgments about the “explaininess” of hypotheses, Lipton introduced the “loveliness” of explanations as a technical term. Loveliness on his account comprises many common notions about explanatory virtues: for instance, unification and mechanisms. Snow’s explanation is lovely because it would unify multiple known facts: that cholera rates correlate with water supply, that those who got the bad water at their houses but didn’t drink it didn’t get sick, that the problematic water supply underwent less filtration, and so on. An invisible agent would moreover provide a mechanism for how a difference in water supply could cause a difference in disease outcomes, which would again increase the loveliness of Snow’s explanation. Ultimately, Lipton would argue, Snow’s causal inference relied on these explanatory judgments and not on the method of difference “taken neat” (to use Lipton’s phrase).

I have great sympathy for Lipton’s overall project. But I am also convinced that in many experimental studies there are ways to handle Lipton’s two problems that do not rely on an IBE framework. In my paper, therefore, I take a closer look at his main case study (Semmelweis on childbed fever) to find out how the problems of multiple and inferred differences were actually addressed. The result is that multiple differences can be dealt with to some extent by understanding control experiments correctly, and inferred differences become less of an issue if we understand how unobservables are often made detectable. The motto, if there is one, is that we always use true causes (once found) to explain, but that explanatory power is not our guide to whether causes are true. The causal inference crowd will find none of this particularly deep; but within the small debate about the relationship between the method of difference and IBE, these points seemed worth making.