Relative Potency Analysis


Bioassay Development

Involving appropriate statistical support from the early stages of bioassay development helps keep the development pathway on track by optimising the experimental designs and thus minimising repeated work. Mathematical simulation can replace the need for some laboratory work altogether. Together, these can save considerable time and expense over the course of a bioassay development process from bench to product.


Relative Potency

With small molecule chemical drugs, potency is fairly well related to how much drug is in the preparation, and that can be measured with good accuracy. With a biologic, potency is related not so much to the amount of material in the preparation as to the biological activity of the preparation, and that has to be measured in a biological system (a bioassay) that is itself variable.

Quantics are world-leading experts in this field and have been helping clients with relative potency assays for 15 years.

In 2017, we launched QuBAS, a feature-rich statistical analysis software package designed with Continuous real time Validation (CrtV) for regulatory use, offering all of the commonly required statistical methods for relative potency assays.

Find out more!

Parallelism Testing

Parallelism testing is fast becoming a regulatory requirement. Quantics can advise on the best method to use.

If the test material is simply acting as a dilution of the reference material (i.e. it is a similar biological entity) the dose response curves will be parallel. One curve is just a horizontal shift of the other.

In practice, the curves are never exactly parallel: each curve is just a best fit to the data points, with associated confidence limits. So how parallel is parallel enough?

A number of methods are available, and choosing the best one is not simple. Account must be taken of the data available, likely future data variability, the statistical model used and regulatory requirements.

Common testing methods are based on significance testing (e.g. the “F” test) or equivalence testing.

Choosing the wrong test, or setting the pass/fail criteria incorrectly, may result in unnecessary assay failures.
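
As a minimal sketch of the significance-testing route, the extra-sum-of-squares F test below compares straight-line fits with separate slopes against fits sharing a common slope. The linear model, synthetic data and function name are illustrative assumptions for the sketch, not Quantics' or QuBAS's implementation:

```python
import numpy as np

def parallelism_f_test(log_dose, y_ref, y_test):
    """Extra-sum-of-squares F statistic for equality of slopes of two
    straight-line dose-response fits (illustrative linear-model sketch)."""
    n = len(log_dose)

    def residuals(x, y):
        X = np.column_stack([np.ones_like(x), x])   # intercept + slope
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return y - X @ beta

    # Full model: each preparation gets its own intercept and slope.
    rss_full = (residuals(log_dose, y_ref) ** 2).sum() + \
               (residuals(log_dose, y_test) ** 2).sum()

    # Reduced model: separate intercepts but a single common slope.
    x = np.concatenate([log_dose, log_dose])
    y = np.concatenate([y_ref, y_test])
    grp = np.concatenate([np.zeros(n), np.ones(n)])  # preparation indicator
    X = np.column_stack([np.ones_like(x), grp, x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss_reduced = ((y - X @ beta) ** 2).sum()

    df_full = 2 * n - 4          # 4 parameters in the full model
    return (rss_reduced - rss_full) / (rss_full / df_full)
```

A large F (relative to an F(1, 2n−4) critical value) flags non-parallelism; equivalence testing inverts the logic, asking instead whether the slope difference lies within a pre-specified margin.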

See Quantics paper: Parallelism in Practice: Approaches to Parallelism in Bioassays, and associated flowchart to guide users to the best option for parallelism testing. Download HERE


A major pharmaceutical manufacturer had been running a bioassay for GMP batch release for some years without formal parallelism testing. The FDA had warned them that the method would need updating. Quantics was instrumental in:

• The development of a new analysis method that included formal parallelism testing
• Optimising the experimental design to significantly reduce laboratory costs
• Establishing new system and sample suitability criteria for the new process
• Formally validating the new process
• Documenting all the analysis changes for FDA submission

The subsequent FDA submission was successful, allowing the company to continue production of the biologic.


Statistical Model Choice

Quantics have years of experience helping you choose the optimal statistical model. We can advise on data transforms, outlier management and handling of LOQ values to ensure the model fits the data well and is stable to data variability.

Why does model choice matter?

The statistical model has a major impact on assay accuracy, sensitivity, reliability, and cost. Some models are more likely to fail a test item if the RP is low, or the data are highly variable.

Common models for continuous response (e.g. optical density) are:

  • 4 Parameter Logistic (4PL)
  • 5 Parameter Logistic (5PL)
  • Linear*


*The linear model is mathematically simpler, but may result in (unnecessary) assay failures with low RP due to failure of parallelism tests.
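
For illustration, a constrained-parallel 4PL fit can be sketched with scipy: the two curves share asymptotes and slope, and the test curve is modelled as a horizontal shift of the reference whose size is the log relative potency. The parameterisation and function names here are assumptions for the sketch, not a prescribed method:

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(logx, a, d, b, logc):
    """4PL on log dose: a = lower asymptote, d = upper asymptote,
    b = slope, logc = log EC50."""
    return a + (d - a) / (1.0 + np.exp(-b * (logx - logc)))

def fit_relative_potency(logx, y_ref, y_test):
    """Constrained-parallel 4PL pair: shared a, d, b, logc; the test
    curve is the reference shifted horizontally by log(RP)."""
    x_all = np.concatenate([logx, logx])
    is_test = np.concatenate([np.zeros_like(logx), np.ones_like(logx)])
    y_all = np.concatenate([y_ref, y_test])

    def model(xi, a, d, b, logc, logrp):
        x, t = xi                       # t = 1 for test-item points
        return four_pl(x + t * logrp, a, d, b, logc)

    p0 = [y_all.min(), y_all.max(), 1.0, np.median(logx), 0.0]
    popt, _ = curve_fit(model, np.vstack([x_all, is_test]), y_all,
                        p0=p0, maxfev=10000)
    return np.exp(popt[-1])             # relative potency
```

On synthetic data generated as a 2-fold dilution of the reference, a fit of this form should recover an RP close to 0.5.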

Common models for quantal (“yes/no”) response bioassays are:

  • Probit
  • Logit

Assays involving dose-time-response measurement (e.g. challenge assays) can also use a survival analysis model. Survival analysis can result in a significant reduction in animal use per assay over conventional logit or probit models.
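
As a toy sketch of the probit idea (assuming no dose group responded at exactly 0% or 100%; the helper name is ours, not a standard API), observed response proportions can be mapped to the probit scale, where the dose-response relationship becomes approximately linear:

```python
import numpy as np
from scipy.stats import norm

def probit_log_ed50(log_dose, n_responders, n_total):
    """Crude probit fit: regress norm.ppf(observed proportion) on log
    dose; the log ED50 is where the fitted line crosses zero.
    Assumes no group at exactly 0% or 100% response."""
    p = np.asarray(n_responders, float) / n_total
    slope, intercept = np.polyfit(log_dose, norm.ppf(p), 1)
    return -intercept / slope
```

A full analysis would use maximum-likelihood probit regression with confidence limits, but the transform-and-regress view shows why quantal data are often plotted on a probit scale.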



“Overall Quantics helped us minimise the number of assays required but also helped us understand the importance of the equivalence test and how it should be applied. The end result was that the FDA accepted the submission which was the goal. I will make sure to recommend Quantics if any further work comes up.”

Mark Kuy – Ipsen

Variability and DoE in Bioassays

Bioassays, both in vitro and, more particularly, in vivo, can be very variable. This variability impacts the response and hence the precision of the reportable value. Quantics can help you understand the contributions of the various factors to the overall variability, in order to design an assay with the required performance.

The variability inherent in bioassays arises from a range of sources. Discovering and controlling the important ones can be made much more efficient using a formal Design of Experiments (DoE) approach, which provides statistically efficient analysis of the data, optimising the speed of development and minimising resource and costs. In some situations, simulation can be used to reduce the laboratory work required.

Instead of studying one factor at a time, these techniques provide an efficient multi-factor approach that generally requires fewer experiments to achieve optimisation, and also provides insight into interactions between factors that affect bioassay performance.
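
The contrast with one-factor-at-a-time experimentation can be seen in a full factorial enumeration; the factor names and levels below are made-up examples, purely for illustration:

```python
from itertools import product

def full_factorial(factors):
    """Enumerate every combination of factor levels (a full factorial
    design); `factors` maps factor name -> tuple of levels."""
    names = list(factors)
    return [dict(zip(names, combo)) for combo in product(*factors.values())]

# Hypothetical bioassay factors, purely for illustration.
design = full_factorial({
    "incubation_h": (2, 4),
    "cell_density": ("low", "high"),
    "serum_pct": (5, 10),
})
# 2 x 2 x 2 = 8 runs cover every combination, and the same 8 runs
# support estimation of main effects and factor interactions.
```

Fractional factorial designs cut the run count further when higher-order interactions can reasonably be assumed negligible.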

Dose Group Optimisation

In bioassay development, dose groups and the required number of replicates are typically determined by experimentation.

Quantics can save you time and experimental costs by using simulation studies to determine the design that will optimise assay performance.

Simulation can explore the impact of dose group spread and distribution and the number of replicates; for in vivo studies, this may include unequal numbers of animals in the dose groups.
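
A Monte Carlo sketch of the idea shows how simulation can rank candidate designs before any laboratory work; the noise model (i.i.d. normal error on log RP) and the numbers are invented for illustration:

```python
import numpy as np

def simulated_rp_cv(n_reps, n_assays=500, log_rp_sd=0.08, seed=1):
    """Monte Carlo sketch: %CV of a relative potency estimate when each
    simulated assay averages n_reps noisy log-RP measurements.
    The noise model is a toy assumption, not a validated error model."""
    rng = np.random.default_rng(seed)
    log_rp_est = rng.normal(0.0, log_rp_sd,
                            size=(n_assays, n_reps)).mean(axis=1)
    rp = np.exp(log_rp_est)             # true RP is 1 in this toy model
    return 100.0 * rp.std(ddof=1) / rp.mean()
```

Comparing, say, `simulated_rp_cv(2)` with `simulated_rp_cv(8)` quantifies the precision gained per extra replicate, so a design can be chosen against a target %CV rather than found by trial and error.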


Bioassay Optimisation

In bioassay development, the final method is normally locked when adequate precision and accuracy have been achieved, formally documented as part of the statistical validation step. Once validation is complete, the assay may remain in use in the same format for many years.

During development, assays often evolve in a complex way. In the hunt for acceptable accuracy and precision many different designs are tried out, and there are many choices to be made.

Once the biology is stable and well characterised, it is time to re-evaluate the design – think about "leaning" your assay BEFORE validation. Stop, step back and review whether all the elements of your design are actually required before continuing.

Read more about Bioassay Optimisation

Bioassay Validation

Quantics can help ensure that your assay validation study follows the appropriate regulatory guidance in all its statistical aspects, both design and analysis.

What is bioassay validation?

Assay validation is the process of demonstrating and documenting that the performance characteristics of the procedure and its underlying method meet the requirements for the intended application and that the assay is thereby suitable for its intended use.

Find out more!

Reference Bridging and Method Transfer

Bridging and transfer studies aim to demonstrate that an old and a new method (or laboratory location) have equivalent performance. A number of factors may need to be considered, including defining the important characteristics to compare for accuracy and precision.

Quantics can help design bridging studies, including:
  • types and numbers of samples
  • lots / runs required
  • establishing acceptance criteria

Several statistical approaches are available, and Quantics will advise on the best approach for the particular circumstances, taking into account known regulator preferences.
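
One common acceptance approach is a two-one-sided-tests (TOST) equivalence check on the mean difference between methods. The sketch below (Welch standard error, 90% confidence interval by default; the function name and margin are our assumptions) illustrates the idea:

```python
import numpy as np
from scipy import stats

def tost_equivalent(old, new, margin, alpha=0.05):
    """TOST sketch: declare the methods equivalent if the (1 - 2*alpha)
    CI for the difference in means lies entirely within +/- margin.
    Uses Welch's standard error and degrees of freedom."""
    old = np.asarray(old, float)
    new = np.asarray(new, float)
    v_old = old.var(ddof=1) / len(old)
    v_new = new.var(ddof=1) / len(new)
    se = np.sqrt(v_old + v_new)
    # Welch-Satterthwaite degrees of freedom
    df = (v_old + v_new) ** 2 / (
        v_old ** 2 / (len(old) - 1) + v_new ** 2 / (len(new) - 1))
    t_crit = stats.t.ppf(1.0 - alpha, df)
    diff = new.mean() - old.mean()
    return bool(diff - t_crit * se > -margin and diff + t_crit * se < margin)
```

The equivalence margin itself is a scientific and regulatory decision, fixed before the study, not derived from the data.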

All work can be carried out to GMP.


Ongoing Monitoring and Statistical Process Control


It is important to monitor the performance of an assay over time. Quantics can help you to implement a monitoring protocol for your bioassay.

Simple monitoring by plotting data over time may be adequate in development situations, but in GMP manufacturing regulators are starting to expect a more formal approach known as statistical process control (SPC). This methodology typically charts suitable parameters of the Reference Standard response curve and QC samples or test samples, and applies a number of statistically derived rules that trigger warning or action alarms if the assay is showing signs of shifting or drifting. SPC control limits are set based on an analysis of historical data, and Quantics can help you to implement a suitable SPC monitoring protocol for your bioassay.
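
As a minimal sketch of the rule-based side of SPC, a Shewhart-style chart with 2-sigma warning and 3-sigma action limits can be derived from historical in-control data. The rule set here is a simplified example, not a recommended monitoring protocol:

```python
import statistics

def spc_rules(history):
    """Derive Shewhart-style chart parameters from historical
    in-control runs and return a point classifier."""
    centre = statistics.fmean(history)
    sd = statistics.stdev(history)

    def classify(value):
        deviation = abs(value - centre)
        if deviation > 3 * sd:
            return "action"       # outside 3-sigma control limits
        if deviation > 2 * sd:
            return "warning"      # outside 2-sigma warning limits
        return "in control"

    return classify
```

Real SPC schemes add run rules (e.g. Western Electric rules) to catch the gradual drifts and shifts that single out-of-limit points would miss.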

Statistical Review

We understand that statistics can be difficult, particularly in the highly variable and complex world of bioassay. That’s why we offer the opportunity for our team of expert statisticians to check statistical methods to ensure they are fit for purpose.

Quantics can provide a completely independent review of your study protocols to address any statistical concerns:

  • Is the overall study design appropriate?
  • Is the protocol sufficiently consistent and well specified to allow results to be calculated without ambiguity?
  • Is the analysis method in line with regulatory guidance?