How efficient is it really to exclude individual variability, context sensitivity and plurality of causes from lab models?
by Elena Rocca
Faith in medical research has decreased recently. Despite all the expectations of ‘personalized medicine’ and ‘tailored drug research’ since the dawn of the post-genomic era, pre-clinical medical research has remained pretty much the same. Voices of discontent have been raised both in the peer-reviewed and in the popular scientific literature, pointing out puzzling truths: 90% of the drugs entering preclinical and clinical trials never make it to the market, and it takes an average of 13 years and up to over 2.5 billion USD to launch a new drug. We might ask how this is possible, given the great advances in basic research, especially genetics, over the last decade.
It has been suggested that the new discoveries in genetics have not yet been properly incorporated into pharmacological research, and that, once they are, we will get more potential drug targets and more focused clinical trials. This might seem reasonable. However, since we are talking about insufficient translatability from medical research to the real world, it also seems reasonable to first question the way the real world is modelled within a lab experiment.
Disease modelling, or any type of animal modelling in the lab, is based on extreme context isolation and complexity reduction. This is part of the basics: lab animals are so strictly controlled with respect to their weight, state of health, environmental conditions and genetics that it becomes easy to repeat the experiment many times and to obtain clear results. It gives us tight control over the variables and how they change. This is essential if we want to establish a causal link between a single variation and an outcome.
The question is: what is this telling us about human biology? Or even about the biology of a common field rat? Or even, in many cases, about the biology of a lab rat of a different inbred genetic strain?
The more we standardize an experiment, the more difficult we make it to translate the result to a different context. It happens all the time. To take one example out of many: around 300 molecules cure Alzheimer’s disease (AD) in mouse models, yet none of them works in humans. A study indicates that the cure should be initiated earlier in the course of the disease. This might be true, but before investing even more resources to check this hypothesis, it seems reasonable to question what these lab animals are “models” for. Some AD researchers are calling for a paradigm shift: it is reasonable at this point to question whether the disease is really the result of the accumulation of a single protein (the one modelled in lab mice). There is plenty of evidence that other factors could contribute: neuroinflammatory processes, accumulation of iron in specific brain regions, bioenergetic dysfunction, exposure to specific types of radiation.
How can such a plurality of causes, individual variability and context sensitivity be tested under total isolation and context simplification?
On the other hand, there is the question I am often asked when talking about this with scientists (and that I often ask myself): how else can we possibly do it?
Paradigm shifts are not easy. Things are the way they are for a good reason. However, acknowledging the weaknesses of the present system and questioning its theoretical premises have been necessary steps for many crucial advances.
“Just as the purity of the chemical assures the pharmacist of the proper filling of the doctor’s prescription, so the purity of the mouse stock can assure a research scientist of a true and sure experiment. The use of in-bred genetic material is just as necessary as the use of anti-septic precautions in surgery” (Dr C. C. Little)
This sentence was written in 1937, about 16 years before Watson and Crick discovered the structure of DNA. Is it perhaps time to open the discussion again?