by Elena Rocca
In the mid-19th century, the Hungarian physician Ignaz Semmelweis noticed from his clinical experience that antiseptic routines in healthcare reduced infections at childbirth. After carrying out some studies on the matter, he proposed that disinfecting hands in the obstetric ward of the Vienna General Hospital, where he worked at the time, would reduce the incidence of puerperal fever. At the time, however, this seemed an implausible suggestion. The germ theory of disease was still unheard of (Pasteur developed it only decades later), and there was therefore no accepted understanding of how disease could be transmitted from one organism to another. Semmelweis's suggestion was consequently rejected by the medical community.
This historical anecdote is often quoted as a reminder that background knowledge and theoretical understanding of causal mechanisms can at any time be wrong or incomplete, and can therefore hinder correct causal inference. How should modern medical research and practice deal with this?
The mainstream strategy comes from the proponents of evidence-based medicine (EBM). Since trying to understand causation in medicine can lead us into all sorts of trouble, we should instead improve our ability to look at correlation data without trying to understand the phenomena, or causal mechanisms, underlying those correlations. What happens when statistical studies give conflicting results? In that case, we should trust the least biased experimental design. In other words, we are better off focusing on judging the quality of the methods used to collect and analyse statistical data, and dropping the attempt to understand the infinitely complex biological phenomena underlying such data.
In a new CauseHealth paper, ‘The Judgements That Evidence-Based Medicine Adopts’, Elena Rocca objects to this strategy, arguing that it is impossible to apply when complex evidence needs to be weighed. When different experimental designs yield conflicting results, we necessarily draw on our background theoretical understanding of phenomena and causal mechanisms in order to judge which study is less biased. For instance, we need such background understanding to judge whether a trial has been successfully randomised. The evaluation of any type of evidence, argues the paper, is based not only on the specific evidence being evaluated, but also on background knowledge, which is built from more general, previously accumulated evidence and from theoretical understanding of phenomena.
The paper demonstrates this thesis by looking at complex cases in which conflicting statistical evidence had to be evaluated, for instance the correlation between exposure to the herbicide glyphosate and a higher incidence of lymphoma.
Clearly, background knowledge can be wrong and incomplete. When explanations are wrong, they will probably hinder, rather than aid, correct causal evaluation. However, as this article attempts to demonstrate, such explanations are irreducibly embedded in the medical sciences. This fallibility, concludes the author, is therefore ‘a motivation for increasing our enquiry on causal explanations, rather than for dismissing it’.