I think of In the Pipeline’s Derek Lowe as a small ‘m’ marketarian. He has more confidence in the “invisible hand” of markets than I do, but he’s not a believer in Rand’s Market Divine (the market that can do no evil, so long as government snakes are avoided). He combines critiques of big pharma CEOs with a robust defense of the antibiotic development process.
Which may explain why he sort of calls for more government funding of basic research, without quite getting there…
A Terrific Paper on the Problems in Drug Discovery | In the Pipeline
… Jack Scannell and Jim Bosley … “These kinds of improvements should have allowed larger biological and chemical spaces to be searched for therapeutic conjunctions with ever higher reliability and reproducibility, and at lower unit cost … in contrast many results derived with today’s powerful tools appear irreproducible; today’s drug candidates are more likely to fail in clinical trials than those in the 1970s … some now even doubt the economic viability of R&D in much of the drug industry [22] [23].
The contrasts … between huge gains in input efficiency and quality, on one hand, and a reproducibility crisis and a trend towards uneconomic industrial R&D on the other, are only explicable if powerful headwinds have outweighed the gains [1], or if many of the “gains” have been illusory …
Shaywitz and Taleb wrote something similar about ten years ago (via Hensley, WSJ, emphases mine)…
… “The molecular revolution was supposed to enable drug discovery to evolve from chance observation into rational design, yet dwindling pipelines threaten the survival of the pharmaceutical industry,” say consultant David Shaywitz and Nassim Nicholas Taleb, author of “The Black Swan: The Impact of the Highly Improbable.”
“What went wrong?” they ask in the opinion pages of the Financial Times. “The answer, we suggest, is the mismeasure of uncertainty, as academic researchers underestimated the fragility of their scientific knowledge while pharmaceuticals executives overestimated their ability to domesticate scientific research.”
When you get right down to it, Shaywitz and Taleb say, we still don’t understand the causes of most disease. Even when we think we do, because someone found a relevant gene, we’re not very good at turning the knowledge into a treatment. “Spreadsheets are easy; science is hard,” they tell Big Pharma.
I lived through this, including the second failure of the genomic revolution. In retrospect, the years from 1945 through the 1970s were a Golden Age of medicine. I did my medical science in 1982; for my generation the Golden Age was the baseline. We thought we understood so much …
By 2008 we all knew we had a problem. I’d been long out of practice and was having to catch up on 7 years of medicine for my licensing exam. That turned out to be easier than expected. I wrote then about medications…
- Lots of new combinations of old drugs, maybe due to co-pay schemes
- Many new drugs have suicidal ideation as a side effect.
- Lots of failed immune-related drugs repurposed, with limited focal impact on a few disorders.
- Probably some improvements in seizure meds. Lots of new Parkinson’s and diabetes meds, but they’ve had limited value. (Metformin was a home run, but that was more than 7 years ago.)
- Really lousy progress in antibiotics; there are fewer useful therapies now than 7 years ago. Actually, fewer every year.
… this paper is also a great source for what others have had to say about these issues, too (and since it’s in PLoS, it’s open access). But the heart of the paper is a series of attempts to apply techniques from decision theory/decision analysis to these problems …

… Let’s all say “Alzheimer’s!” together, because I can’t think of a better example of a disease where people use crappy models because that’s all they have. This brings to mind Bernard Munos’ advice that (given the state of the field) drug companies would be better off not going after Alzheimer’s at all until we know more about what we’re doing, because the probability of failure is just too high …

… I’ve long thought that a bad animal model (for example) is much worse than no animal model, and I’m glad to see some quantitative backup for that view. The same principle applies all the way down the process, but the temptation to generate numbers is sometimes just too high, especially if management really wants lots of numbers. So how does that permeability assay do at predicting which of your compounds will have decent oral absorption? Not so great? Well, at least you got it run on all your compounds …

… there’s no cure for the physical world, either, at least until we get better informed about it, which is not a fast process and does not fit well on most Gantt charts. Interestingly, the paper notes that the post-2012 uptick in drug approvals might be due to concentration on rare diseases and cancers that have a strong genetic signature …

… in drug discovery, we have areas where our models (in vitro and in vivo) are fairly predictive and areas where they really aren’t …
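Lowe’s point that a bad model is worse than no model can be illustrated with a toy simulation (mine, not from the Scannell and Bosley paper): rank a batch of candidates by a screen whose score correlates with true clinical value at some level rho, then advance the top scorer. A screen with rho = 0 is just a random pick; a misleading screen (rho < 0) does worse than random, while even modest validity helps enormously. A minimal Python sketch with made-up numbers:

```python
# Toy illustration (not from the paper): how screening-model validity (rho)
# changes the true quality of the candidate you end up advancing.
import numpy as np

rng = np.random.default_rng(0)

def mean_quality_of_pick(n_candidates, rho, n_trials=5000):
    """Average true quality of the top-scoring candidate when the screen's
    score correlates with true clinical quality at level rho."""
    picks = []
    for _ in range(n_trials):
        true_quality = rng.standard_normal(n_candidates)
        noise = rng.standard_normal(n_candidates)
        # Construct a score with correlation rho to the true quality.
        score = rho * true_quality + np.sqrt(1.0 - rho**2) * noise
        picks.append(true_quality[np.argmax(score)])
    return float(np.mean(picks))

for rho in (0.9, 0.5, 0.1, 0.0, -0.3):
    print(f"validity rho={rho:+.1f}: mean quality of advanced candidate = "
          f"{mean_quality_of_pick(100, rho):.2f}")
```

With rho near 0.9 the advanced candidate is close to the best in the batch; at rho = 0 the screen adds nothing over chance; at negative rho it reliably steers you toward below-average compounds, which is the quantitative sense in which a bad model is worse than no model at all.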
See also:
1 comment:
Thanks for this reference, as I am an economist with an interest in policies to boost public R&D...