
Survival analysis for LD50 and vaccine challenge assays

LD50 assays and challenge assays both measure the time to an event, often death, for each animal. However, the standard statistical approach to the analysis of the data treats the endpoint as binary (alive/dead) at a given point in time. This is an inefficient use of the data. In this blog we recommend a technique called “survival analysis”, the gold standard for the analysis of these kinds of endpoints in clinical studies.

So what?

Using a more statistically efficient approach for this type of analysis can reduce the number of animals required per assay by 30% or more, resulting in major cost and time savings. No change in laboratory techniques is required.

Our original research and methodology were published in BioScience (Yellowlees et al., 2013), but this blog is intended as a brief overview of how the technique can be applied. There is a link to the full publication below should that be of interest.

LD50 assays are designed to measure the potency of a toxin, pathogen, or radiation by estimating the dose required to kill 50% of a population of animals. In many cases, a reference standard is also tested for comparison. Challenge assays are designed to measure the potency of a vaccine relative to a reference standard (the relative potency, or RP). In both cases, groups of animals are treated with a series of doses of the toxin or vaccine. In the challenge assay, the animals are then challenged with the target disease.

In both cases, death (or humane termination) is the endpoint. Historically, the proportion of animals surviving to the end of the study in each dose group has been used to graph percentage survival versus dose and thus estimate the dose associated with 50% survival (i.e. the LD50). In mathematical terms, a logit or probit model is fitted, and the same model can also be used to estimate the RP.
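To make the traditional calculation concrete, here is a minimal sketch in Python (using the statsmodels package) of a probit fit of end-of-study mortality against log10(dose), from which the LD50 is read off. The doses, group sizes and death counts are invented for illustration only.

```python
# A minimal sketch of the traditional endpoint analysis: a probit fit of the
# proportion of animals dead at the end of the study against log10(dose).
# The dose values, group sizes and death counts below are illustrative only.
import numpy as np
import statsmodels.api as sm

doses = np.array([1.0, 3.2, 10.0, 32.0, 100.0])   # hypothetical dose series
n_animals = np.full(5, 16)                          # animals per dose group
n_dead = np.array([1, 4, 9, 13, 16])                # deaths at end of study

log_dose = np.log10(doses)
X = sm.add_constant(log_dose)

# Binomial GLM with a probit link, fitted to (deaths, survivors) per group
model = sm.GLM(np.column_stack([n_dead, n_animals - n_dead]), X,
               family=sm.families.Binomial(link=sm.families.links.Probit()))
fit = model.fit()

# The LD50 is the dose at which the predicted probability of death is 0.5,
# i.e. where intercept + slope * log10(dose) = 0.
intercept, slope = fit.params
ld50 = 10 ** (-intercept / slope)
print(f"Estimated LD50: {ld50:.2f}")
```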

How can we improve the efficiency of the traditional approach?

The percentage survival at the end of an experiment for a dose group is just a sample from the overall distribution of survival times for animals at that dose. The problem with this traditional snapshot method of estimation is that it provides no information about the shape of the underlying distribution and therefore gives a relatively imprecise estimate of the true value. By recording the actual time of death, the underlying distribution can be taken into account and, for a given number of animals, a more precise estimate of the true LD50 (or RP) is obtained.

The statistical technique known as survival analysis can be used to do this.  Survival analysis is the gold standard for the analysis of this kind of endpoint in clinical studies.
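For comparison, here is a minimal sketch of a survival-analysis fit of the same kind of data, using the Weibull accelerated failure time (AFT) model from the Python lifelines package with log10(dose) as a covariate. This is a generic survival regression rather than necessarily the exact model in our publication; the animal records are invented, and animals still alive at the end of the study are treated as censored.

```python
# A minimal sketch of the survival-analysis alternative: a Weibull AFT model
# fitted to one record per animal. All values below are hypothetical.
import numpy as np
import pandas as pd
from lifelines import WeibullAFTFitter

# One row per animal: log10(dose), time of death (or study end), and an event
# flag (1 = died, 0 = still alive at the 14-day study end, i.e. censored).
df = pd.DataFrame({
    "log_dose": np.repeat(np.log10([1.0, 10.0, 100.0]), 6),
    "time_days": [14, 14, 12, 14, 14, 13,    # low dose: most survive
                  9, 11, 14, 8, 10, 14,      # mid dose: mixed
                  3, 4, 5, 3, 6, 4],         # high dose: early deaths
    "died": [0, 0, 1, 0, 0, 1,
             1, 1, 0, 1, 1, 0,
             1, 1, 1, 1, 1, 1],
})

aft = WeibullAFTFitter()
aft.fit(df, duration_col="time_days", event_col="died")
aft.print_summary()   # the log_dose coefficient quantifies the dose-response
```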

If the level of precision (equivalently, the width of the confidence interval for the reportable value) obtained with the traditional analysis is adequate, a survival analysis approach will allow the number of animals required for that level of precision to be reduced, usually substantially. The approach thus reduces costs as well as aligning with 3Rs objectives.

To illustrate the difference between the analysis techniques, we simulated a relative potency assay, based on real assay data, where the true RP was 1.0.  For animal group sizes ranging from 16 downwards, we simulated 100 assays and calculated the RP estimate twice for each simulated assay:  (i) using probit analysis and (ii) using survival analysis.
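The sketch below shows how such a simulation can be set up, under simplified assumptions of our own choosing (three doses, Weibull death times, a 14-day study, true RP = 1) rather than the settings of the published study. Each simulated assay is analysed twice, once by probit using only the end-of-study dead/alive counts and once by a Weibull AFT survival model using the times of death, and the spread of the two sets of RP estimates is then compared.

```python
# Hypothetical simulation comparing probit and survival-analysis RP estimates.
# All settings (doses, Weibull parameters, study length) are illustrative
# assumptions of our own, not those of the published study.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from lifelines import WeibullAFTFitter

rng = np.random.default_rng(0)
doses = np.array([2.0, 8.0, 32.0])
study_end = 14.0            # days
n_per_group = 16
n_assays = 100

def simulate_assay(n_per_group):
    """One simulated assay: reference and test animals, true RP = 1."""
    rows = []
    for sample in (0, 1):                        # 0 = reference, 1 = test
        for d in doses:
            scale = 50.0 / np.sqrt(d)            # assumed dose-response
            t = scale * rng.weibull(1.5, n_per_group)
            rows.append(pd.DataFrame({
                "log_dose": np.log10(d),
                "test": sample,
                "time": np.minimum(t, study_end),
                "died": (t <= study_end).astype(int),
            }))
    return pd.concat(rows, ignore_index=True)

def rp_probit(df):
    """RP from end-of-study dead/alive counts only (parallel-line probit)."""
    X = sm.add_constant(df[["log_dose", "test"]])
    fit = sm.GLM(df["died"], X,
                 family=sm.families.Binomial(sm.families.links.Probit())).fit()
    return 10 ** (fit.params["test"] / fit.params["log_dose"])

def rp_survival(df):
    """RP from individual times of death via a Weibull AFT model."""
    aft = WeibullAFTFitter().fit(df, duration_col="time", event_col="died")
    coefs = aft.params_.loc["lambda_"]
    return 10 ** (coefs["test"] / coefs["log_dose"])

probit_rp, surv_rp = [], []
for _ in range(n_assays):
    df = simulate_assay(n_per_group)
    probit_rp.append(rp_probit(df))
    surv_rp.append(rp_survival(df))

print("SD of probit RP estimates:  ", round(float(np.std(probit_rp)), 3))
print("SD of survival RP estimates:", round(float(np.std(surv_rp)), 3))
```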

The two histograms below summarise the RP estimates: blue shows the results from the traditional (probit) analysis, and red shows the results from the survival analysis.

With 16 animals per group, the standard deviation of the RP estimates obtained using the survival model was 0.17; for the traditional model it was 0.28. For the same number of animals, the survival analysis therefore gives much better precision.

This animation shows what happens as the number of animals per assay is reduced from 16. You will see that when the number of animals per assay reaches 6, the precision of the survival analysis matches that of the traditional analysis using 16 animals.

So survival analysis can increase the assay precision, or reduce the number of animals required (or a mixture of both).

Case studies of the effect of changing to survival analysis

Typically the only change to the laboratory aspects of the assay method would be to formalise the recording of the time of death:  in our experience this is usually already recorded, at least to within a few hours.  Changes would be required to the presentation of the assay results and the statistical analysis methodology.

A number of our clients have taken up this more efficient analysis.

Client A: This client reduced the number of animals from 16 to 12 per group and improved the precision of the RP estimate.

Client B:  This client was able to improve the precision of the RP estimate by a factor of approximately 2.

Client C: This client was able to improve the pass rate for their FDA-required test of parallelism (similarity) of the test samples versus the reference standard.  With a probit model the failure rate was 25%; with the survival model parallelism failures were reduced to 5%.

The Client C example illustrates a further benefit of using survival analysis: more precise estimates of the slope of the dose-response are obtained, impacting equivalence tests for parallelism.
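To see why this matters, recall that an equivalence test of parallelism requires the confidence interval for the departure from parallelism (for example, the difference in dose-response slopes between test and reference) to fall entirely within pre-specified equivalence bounds, so a smaller standard error makes a genuinely parallel pair of samples much more likely to pass. The numbers in the sketch below (coefficient, standard errors and bounds) are invented purely to show the mechanics.

```python
# Hypothetical illustration of an equivalence-style parallelism check: the
# confidence interval for the slope difference must sit inside fixed bounds.
# The estimates, standard errors and bounds are invented for illustration.
import scipy.stats as st

slope_diff = 0.05            # estimated test-minus-reference slope difference
bounds = (-0.20, 0.20)       # pre-specified equivalence margins

for label, se in [("probit (end-of-study data)", 0.12),
                  ("survival (time-to-death data)", 0.07)]:
    z = st.norm.ppf(0.975)
    lo, hi = slope_diff - z * se, slope_diff + z * se
    passes = bounds[0] < lo and hi < bounds[1]
    print(f"{label}: 95% CI ({lo:.2f}, {hi:.2f}) -> "
          f"{'parallelism accepted' if passes else 'parallelism not shown'}")
```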

Reducing animal numbers is not only humane, but also reduces laboratory housing requirements and costs and can increase throughput.

To learn about the specifics of running the analysis we recommend you read our Clinical Trial blog on Survival Analysis here or read the full BioScience publication on our publications page.

Of course, if you are working on a challenge assay or LD50 assay where you feel Quantics can help you improve precision and/or reduce the number of animals required, we would be very happy to speak with you further.


About the Author

Ann Yellowlees

Company Founder and Director of Statistics. With a degree in mathematics and a Master's in statistics from Oxford University, and a PhD in Statistics from Waterloo (Canada), Ann has spent her entire professional life helping clients with statistical issues. From 1991 to 1993 she was Head of the Mathematics and Statistics section of Shell Research, then joined the Information and Statistics Division of NHS Scotland (ISD). Starting as Head and Principal Statistician of the Scottish Cancer Therapy Network within ISD, she rose to become Assistant Director of ISD before establishing Quantics in 2002. Ann has extensive experience of ecotoxicology, medical statistics, statistics within a regulatory environment, and bioassay.

