How much work can Mill’s method of difference do?

I have a new paper coming out in the European Journal for Philosophy of Science, and here’s a link to a preprint on the PhilSci archive.

One of the basic ideas in scientific methodology is that in experiments you should “vary one thing at a time while keeping everything else constant”. This is often called Mill’s method of difference due to John Stuart Mill’s influential formulation of the principle in his System of Logic of 1843. Like many great ideas (think of natural selection), the method of difference can be explained to a second grader in two minutes – and yet the more one thinks about it, the more interesting it becomes.

The late Peter Lipton, in his 1991 book on inference to the best explanation (IBE), made the descriptive claim that the method of difference is widely used across the sciences, and this seems correct to me. But he also argued that the method is actually much less powerful than we think. In principle, we would like to vary one factor (and one factor only), observe a difference in some outcome, and then conclude that the factor we varied is the cause of the difference. But of course this depends on some rather steep assumptions.

First, we need to be sure that only one factor has changed; otherwise the inference fails. But how do we ever know that there is only one difference? This is what Lipton called the problem of multiple differences.
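The inference rule, and the way multiple differences block it, can be sketched in a few lines of code. This is only an illustration: the trial representation and the factor names (loosely modeled on Snow's cholera case, discussed below) are my own, not Mill's or Lipton's.

```python
# A minimal sketch of Mill's method of difference as an inference rule.
# Trials are dicts of factors plus an observed outcome; both trials are
# assumed to record the same set of factors.

def method_of_difference(trial_a, trial_b):
    """Return the factor inferred to cause the differing outcome.

    The inference is licensed only when exactly one factor differs
    between the trials and the outcomes differ; otherwise return None.
    """
    differing = [name for name in trial_a["factors"]
                 if trial_a["factors"][name] != trial_b["factors"][name]]
    if len(differing) == 1 and trial_a["outcome"] != trial_b["outcome"]:
        return differing[0]
    return None  # zero or multiple differences: inference not licensed

control = {"factors": {"water_supply": "Lambeth", "district": "south"},
           "outcome": "low cholera mortality"}
treatment = {"factors": {"water_supply": "Southwark", "district": "south"},
             "outcome": "high cholera mortality"}
cause = method_of_difference(control, treatment)  # → "water_supply"

# The problem of multiple differences: a second varying factor
# blocks the inference, even if one of the two is the true cause.
confounded = {"factors": {"water_supply": "Southwark", "district": "north"},
              "outcome": "high cholera mortality"}
blocked = method_of_difference(control, confounded)  # → None
```

The sketch makes the problem vivid: the rule is only as good as our assurance that the `factors` dict really lists every relevant difference.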

Second, we may sometimes wish to conduct experiments where the factor which varies is unobserved or unobservable. For instance, John Snow inferred in the 19th century that local differences in cholera outbreaks in London were caused by a difference in the water supplied by two different companies. However, Snow could not actually observe this difference in the water supply (what we now know was a difference in the presence of the bacterium Vibrio cholerae). So Snow inferred causality even though the relevant initial difference was itself only inferred. This is what Lipton called the problem of inferred differences.

Lipton proposed elegant and clever solutions to both problems. He argued that the method of difference is to some extent mere surface action. Beneath the surface, scientists actually judge the explanatory power of various hypotheses, and this is crucial to inferences based on the method of difference. So Snow may not have known that an invisible agent in part of the water supply caused cholera, or that this was the only relevant difference between the water supplies. But he could judge that if such an agent existed, it would provide a powerful explanation of many known facts. In order to make it easier to discuss such judgments about the “explaininess” of hypotheses, Lipton introduced the “loveliness” of explanations as a technical term. Loveliness on his account comprises many common notions about explanatory virtues: for instance, unification and mechanisms. Snow’s explanation is lovely because it would unify multiple known facts: that cholera rates correlate with water supply, that those who got the bad water at their houses but didn’t drink it didn’t get sick, that the problematic water supply underwent less filtration, and so on. An invisible agent would moreover provide a mechanism for how a difference in water supply could cause a difference in disease outcomes, which would again increase the loveliness of Snow’s explanation. Ultimately, Lipton would argue, Snow’s causal inference relied on these explanatory judgments and not on the method of difference “taken neat” (to use Lipton’s phrase).

I have great sympathy for Lipton’s overall project. But I am also convinced that in many experimental studies there are ways to handle Lipton’s two problems that do not rely on an IBE framework. In my paper, therefore, I take a closer look at his main case study — Semmelweis on childbed fever — to find out how the problems of multiple and inferred differences were actually addressed. The result is that multiple differences can be dealt with to some extent by understanding control experiments correctly; and inferred differences become less of an issue if we understand how unobservables are often made detectable. The motto, if there is one, is that we always use true causes (once found) to explain, but that explanatory power is not our guide to whether causes are true. The causal inference crowd will find none of this particularly deep; but within the small debate about the relationship between the method of difference and IBE, these points seemed worth making.

Against what method? (Or: Feyerabend in context)

In December 2012 the Blogosphere and Twitterverse became agitated about an opinion piece by Brian Cox and Robin Ince in the New Statesman: Politicians must not elevate mere opinion over science. Cox and Ince were immediately criticized by members of the science studies community such as Rebekah Higgitt and Jack Stilgoe. I don’t wish to get into the many issues of this debate, but I have a straightforward point to make about one aspect of it. It concerns the term “scientific method”.

Cox and Ince refer to the “scientific method” (without going into much detail) and are taken to task for this. For instance, Higgitt writes:

[T]here are many scientific methods and many, when studied in detail, are not particularly methodological.

If Twitter is any indication, mentioning the “scientific method” is considered a sign of naiveté in the science studies scene. Jon Butterworth summarizes this nicely (also in the Guardian) when he says that talking about scientific method is “apparently not the done thing”. True! But where in the technical literature do we find the roots of the apparently deeply held belief that there is no such thing as scientific method, or that it is in any case “not particularly methodological”?

My best guess is that the denial of scientific method traces back in some way to Paul Feyerabend’s famous Against Method. Popularly associated with the slogan “anything goes”, Feyerabend’s book used historical cases to argue that a number of mid-20th-century philosophical beliefs about scientific method are wrong, or at least do not hold universally. These include: the belief that there is one single method that regulates all scientific epistemology; the belief that falsification plays a key role in the progress of science; the belief that ad hoc hypotheses are condemned and rarely occur in good science; the belief that replacing theories always have more empirical content than replaced theories.

Without a doubt Feyerabend’s book was an important milestone. It pointed out many serious problems with widespread mid-20th-century views of scientific epistemology. But it is important to understand that Feyerabend’s argument was more local than his title suggests. The book was not some ingenious, grand reductio argument that showed that no such thing as scientific method can possibly exist. It mostly showed that old proposals of scientific method — then dominant, but now largely abandoned — don’t hold water. Since these old conceptions were largely the product of non-naturalistic, ahistorical armchair philosophizing, this should not be too surprising.

So Feyerabend’s Against Method does not license sweeping claims against scientific method, and if seen in context it should not give comfort to social constructivists. The best explanation of the massive success of the empirical sciences remains the assumption that their theories have some special relationship with nature. In brief, scientists are great at epistemology! The hard problem, however, is to describe and understand the process. Peter Lipton summarized this nicely in his 2004 Medawar Lecture:

It is one thing to be expert at distinguishing grammatical from ungrammatical strings of words in one’s native tongue; it is something quite different to be able to specify the principles by which this discrimination is made. The same applies to science: it is one thing to be a good scientist; it is something quite different to be good at giving a general description of what scientists do. Scientists are not good at the descriptive task. This is no criticism, since their job is to do the science, not to talk about it.

Lipton’s next remark is so good as to deserve special emphasis:

Philosophers of science are not very good at describing science either, and this is more embarrassing, since this is their job.

But the difficulty of the task is no indication of its hopelessness. That would be a bit like denying that organisms grow on the grounds that the causes and mechanisms of developmental biology are incompletely understood.

Hedgehogs and foxes in scientific epistemology

My paper titled “Modeling Causal Structures: Volterra’s struggle and Darwin’s success” recently appeared in the European Journal for Philosophy of Science. The paper was co-authored with Tim Räz. (A draft version is available on the PhilSci archive.)

In the past, philosophical analyses of how the sciences gain theoretical knowledge have tended toward the monistic. This is most easily visible in authors like Hempel or Popper, who suggested that the entire methodological diversity of science ultimately reduces to just one principle (hypothetico-deductivism in the case of Hempel and falsificationism in the case of Popper).1 On the spectrum laid out by the ancient Greek fragment which says that “the fox knows many things, but the hedgehog knows one big thing”, most philosophers of science have lived on the hedgehogs’ side. This is true even for more recent and on the whole more pluralistic authors such as Peter Lipton, who offered inference to the best explanation as at least potentially an explication of all inductive practices in science.2 And if my impression is correct, then modern Bayesians are among the most committed hedgehogs of them all.

In a stimulating 2007 paper in the British Journal for the Philosophy of Science, titled “Who is a Modeler?”, Michael Weisberg asks us to adopt a more fox-like stance. Perhaps the reason why philosophers of science have been unsuccessful in offering a monistic analysis of scientific epistemology is that we must distinguish between several different inductive practices. Perhaps we can say something philosophically and historically insightful about each of them separately.

As a starting point, Weisberg suggests “modeling” and “abstract direct representation” (or ADR) as two different ways of developing scientific theories. His basic idea is that a modeler investigates the world indirectly by constructing a model, exploring its properties, and checking how they relate to the real-world target system. Weisberg’s main example of this is a famous instance of model-based science: The Lotka-Volterra predator-prey model. In what Weisberg calls ADR, by contrast, scientists engage the world directly, without the intermediation of a model. He thinks that Mendeleev’s periodic table of the elements is of this type. Mendeleev did not start out with a constructed model: He simply arranged the elements according to various properties. He thereby gained theoretical knowledge about them, but without using a model. Weisberg concludes his paper by asking why scientists would choose modeling over ADR or other strategies (or more succinctly, having asked “who is a modeler?”, he concludes by asking “why be a modeler?”).
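To make Weisberg’s headline example concrete, here is a minimal simulation of the Lotka-Volterra predator-prey model. The parameter values and the simple Euler integration scheme are my own illustrative choices, not taken from Volterra’s publications; the point is only to show what “exploring a model’s properties” can look like in practice.

```python
# Minimal sketch of the Lotka-Volterra predator-prey model:
#   dx/dt = alpha*x - beta*x*y   (prey grow; are eaten by predators)
#   dy/dt = delta*x*y - gamma*y  (predators grow by eating; die off)
# Integrated with a forward Euler step; parameters are illustrative.

def lotka_volterra(x0, y0, alpha, beta, delta, gamma, dt=0.001, steps=10000):
    """Simulate prey (x) and predator (y) populations over time."""
    x, y = x0, y0
    trajectory = [(x, y)]
    for _ in range(steps):
        dx = (alpha * x - beta * x * y) * dt
        dy = (delta * x * y - gamma * y) * dt
        x, y = x + dx, y + dy
        trajectory.append((x, y))
    return trajectory

# Illustrative run: prey initially decline (many predators),
# predators initially rise; the coupled populations oscillate.
traj = lotka_volterra(x0=10.0, y0=5.0,
                      alpha=1.1, beta=0.4, delta=0.1, gamma=0.4)
```

The modeler studies the behavior of this constructed system (its oscillations, its equilibria) and then asks how those properties relate to real populations, which is exactly the indirect strategy Weisberg has in mind.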

In our paper, we follow Weisberg’s pluralistic approach to scientific epistemology, but we find his distinction between practices unsatisfactory, and so we suggest a different one. Moreover, we give an answer to the question of why scientists choose the strategy of modeling.

A main problem of Weisberg’s paper is that the concept of ADR remains ill-defined. Perhaps it is a useful category for describing some scientific work, but the case remains to be made. We suggest that the more natural counterpart of modeling is causal inference. We argue for this by looking closely at the original publications relating to Weisberg’s main example: the predator-prey model. In particular, we look at a previously unexamined methodological preface to the Italian mathematician Vito Volterra’s Les associations biologiques au point de vue mathématique, published in French in 1935.3 We find that Volterra’s preferred method for investigating the factors determining population sizes and fluctuations would have been the laboratory physiologist’s causal inference: vary one thing at a time and see what changes with it. But as Volterra explained at some length, various factors make causal inference in natural populations difficult: The populations are too large, the time intervals too long, the environmental conditions too changeable for the method to succeed. We summarize this as insufficient epistemic access for applying methods of causal inference. Volterra stated quite explicitly that this insufficient epistemic access is the reason why he chose the modeling strategy. Thus, the distinction between causal inference and modeling offers a possible answer to the question of why scientists model: They do so if causal inference is not possible.4

Our distinction also permits us to reevaluate Weisberg’s second example of “abstract direct representation”, which is Darwin’s explanation of the origin and distribution of coral atolls in the Pacific Ocean. We argue that this, too, should be understood as an instance of modeling. We also use the example of Darwin’s corals to discuss how causal models can be empirically tested if straightforward causal inferences are not possible.

If you’re interested in the details, please go and read the paper. I will argue on another occasion that the distinction between modeling and causal inference can do a good bit of philosophical work. For example, the debate about scientific realism should probably pay more attention to the distinction, since arguments for inductive skepticism with regard to model-based science may not go through with regard to causal-inference-based science (I’ve started to develop the idea in this talk).

Since this is an ongoing project, I will attempt some crowdsourcing. Our thesis about the motivation for modeling – insufficient epistemic access for causal inference – would be challenged by episodes from the history of the sciences where causal inference is possible and modeling is nevertheless chosen as a strategy. If you can think of such episodes, please send them my way.

  1. Hempel’s views are accessibly summarized in his Philosophy of Natural Science, originally published in 1966 and still available. The best primary source for Popper’s views remains his Logic of Scientific Discovery, either the German edition of Logik der Forschung published by Mohr Siebeck or the English translation published by Routledge. Popper’s Conjectures and Refutations (also by Routledge) is another good point of entry. For a textbook-type introduction, I recommend chapters 2–4 in Peter Godfrey-Smith’s Theory and Reality (2003, The University of Chicago Press).
  2. Lipton’s Inference to the Best Explanation (1991/2004, Routledge) is a challenging but rewarding read.
  3. Philosophers of science have not yet paid much attention to Volterra’s explicit methodological discussion. This is probably because the relevant publications were written in French, after Volterra left Rome for Paris because of Mussolini’s rise to power.
  4. For an episode where causal inference dominates, see my Semmelweis paper.


My Semmelweis paper has appeared in SHPS

My paper on Semmelweis’s discovery of the cause of childbed fever has appeared in Studies in History and Philosophy of Science.

Semmelweis’s discovery has been used by philosophers of science for many decades as a case study of scientific method. For example, Carl Hempel used Semmelweis as a “simple illustration” of the hypothetico-deductive method in his Philosophy of Natural Science (1966, p. 3). Peter Lipton used it as an extended case study of Inference to the Best Explanation in his book of the same name (1991). Donald Gillies has argued that the episode needs a Kuhnian (in addition to the Hempelian) reconstruction if we are to make sense of it. And all this philosophical work comes in addition to the work of medical historians, who have long been interested in Semmelweis as a pioneer in the modern study of infectious diseases.

So what more is there to say about Semmelweis’s work? I show in the paper that the philosophical debate has neglected much material that is relevant to Semmelweis’s methods – and if we take this material into consideration, then a reconstruction of his methodology in terms of causal inference and mechanisms suggests itself very strongly.

The argument is partly historical. I show that the passages of Semmelweis’s Etiology of Childbed Fever (published in 1861) which relate to causal inference and mechanisms were omitted from the most widely available English-language edition of the book (K. Codell Carter’s otherwise excellent translation from 1983). This concerns mainly Semmelweis’s numerical tables and the description of his animal experiments.

The argument also has a philosophical component. In the past decade, causal philosophies of science (for example of the mechanistic or interventionist type) have become prominent. One of the promises of these approaches is an accurate description of much work in biology and the biomedical sciences – but it is up to careful historical scholarship to find out how widely and how straightforwardly these new approaches can be used to make sense of actual science. In this context I find it very promising that one of the classical case studies of confirmation follows, on close inspection, such a clear causal and mechanistic logic.

On a meta-level, my paper raises a question which I think should receive more attention from the HPS community: On what grounds do we prefer one philosophical account of the case to another? After all, it would be a mere finger exercise for a philosopher to take my new historical material and incorporate it into an account of Semmelweis’s work in terms of hypothetico-deductivism, inference to the best explanation or what have you. So while it is clear that philosophers have not taken sufficient account of the historical material, historical scholarship on its own also cannot take us all the way to an understanding of the episode.