Author Archive

FDA: Steps Toward Quantitative Safety Assessment

In an article published in Nature last June, FDA’s Darrell Abernethy, Lawrence Lesko and Janet Woodcock outlined the agency’s desire to incorporate “mechanism-based drug safety assessment and prediction” into the regulatory approval process. FDA’s collaboration with MolecularHealth, a bioinformatics company, is a meaningful step in that direction.

Historically, FDA has relied on observation and statistical analysis to detect side effects during clinical development, and on the spontaneous reporting of such events through its Adverse Event Reporting System (AERS) after a drug's approval. New technologies, however, offer an increasingly sophisticated look at how a drug candidate, or a combination of drugs, is processed by the human body. Read more »
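The article does not spell out the statistical methods FDA applies, but a standard disproportionality technique used on spontaneous-report databases such as AERS is the reporting odds ratio (ROR). The sketch below is only illustrative; the function name and all counts are hypothetical.

```python
# Illustrative sketch, not FDA's documented method. The ROR compares the odds of a
# specific adverse event being reported with a drug versus with all other drugs,
# using a 2x2 table of spontaneous-report counts.

import math

def reporting_odds_ratio(a, b, c, d):
    """ROR from a 2x2 table of spontaneous reports.

    a: reports with the drug and the event of interest
    b: reports with the drug and other events
    c: reports with other drugs and the event of interest
    d: reports with other drugs and other events
    """
    ror = (a / b) / (c / d)
    # Approximate 95% confidence interval computed on the log scale
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(ror) - 1.96 * se_log)
    upper = math.exp(math.log(ror) + 1.96 * se_log)
    return ror, (lower, upper)

# Hypothetical counts: 40 drug+event, 960 drug+other, 200 other+event, 98800 other+other
ror, ci = reporting_odds_ratio(40, 960, 200, 98800)
print(f"ROR = {ror:.1f}, 95% CI = ({ci[0]:.1f}, {ci[1]:.1f})")
```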

Where is Variability Coming From and What Have We Done to Minimize It?

This blog post was written by Lynn D. Torbeck

Textbooks and journal articles treat common cause variation as if it were an inevitable fact of nature, beyond our control: “In any production process, regardless of how well-designed or carefully maintained it is, a certain amount of inherent or natural variability will always exist. This natural variability or ‘background noise’ is the cumulative effect of many small, essentially unavoidable causes” [Emphasis added]. This attitude discourages any attempt to reduce variation. Yet, with some reflection, several ideas and techniques emerge that can begin to reduce common cause variation. Read more »
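As a rough illustration (not taken from the post itself): if common cause variation really is the cumulative effect of many small, independent sources, total variance is simply the sum of the component variances, so shrinking even a couple of the larger contributors measurably reduces the overall "background noise." All numbers below are hypothetical.

```python
# Illustrative sketch with hypothetical numbers.
# For independent sources, sigma_total^2 = sigma_1^2 + sigma_2^2 + ... + sigma_k^2,
# so the "inherent" variability shrinks whenever any component is reduced.

import math

# Hypothetical standard deviations of ten small sources of common cause variation
sources = [0.30, 0.25, 0.20, 0.15, 0.10, 0.10, 0.08, 0.05, 0.05, 0.03]

total_sd = math.sqrt(sum(s**2 for s in sources))
print(f"Total common cause SD: {total_sd:.3f}")          # ~0.497

# Suppose a targeted study halves the two largest contributors
improved = [0.15, 0.125] + sources[2:]
improved_sd = math.sqrt(sum(s**2 for s in improved))
print(f"After halving the two largest sources: {improved_sd:.3f}")  # ~0.365
```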