
Bioassay Optimisation: Save Time, Save Money


Bioassays can be expensive, both in terms of time and money. It is, therefore, sensible to consider bioassay optimisation: ways to design assays which minimise resource use while maximising throughput. This derives from the principle of lean manufacturing: processes which reduce waste while simultaneously increasing productivity and maintaining effective procedures.

What is optimisation in bioassay?

In the context of bioassays, the goal of optimisation is to maximise the throughput of our assay while minimising waste. It is, of course, also important that the assay remains effective in its ultimate aim – to provide an accurate and precise estimate of the reportable value.

Priority one must be to ensure that our assay is effective at discriminating between good and bad samples using appropriate suitability criteria. Passing bad samples is to be avoided for obvious reasons, but failing samples inappropriately is also problematic. An inappropriately failed assay could mean an entire batch of product discarded, reagents wasted, and hours of laboratory work down the drain, all for no reason.

Next on the list is ensuring that we are using plate real estate efficiently. We want to squeeze the maximum amount of information out of each sample and dilution. We clearly need enough measurements for our assay to work, but using more dilutions or replicates than are strictly required can be a waste of resources.

Importantly, we do not want our assay-leaning process to require additional lab work – that’s what we’re trying to minimise in the first place! The good news is that the ideas we’ll put forward here can be achieved through paper-based (or computer-based) exercises: statistical experiments using historical data which can evaluate the prospects of leaning strategies without the need for more time in the lab.

How can we apply ideas about lean manufacturing to optimise bioassays? We’re going to run through a few simple strategies which could help you save time and money on your next assay.

Optimisation Strategy 1: Reduce Unnecessary Doses/Dilutions

For our first scenario, imagine we are running an assay consisting of a 10-point dilution series for each of the reference standard and test sample. And the results are, well, great! As we can see in Figure 1 (Left), our assay returns a very well-characterised 4PL curve, with points along both plateaus and the central portion of the curve.

But do we need 10 points in our dilution series? Perhaps we could achieve similar results using fewer dilutions. This would mean using less reagent, reference standard, and sample stock for each assay.

An example of how this might be achieved is shown in Figure 1 (Right). The dilution series covers the same range as before, but is divided into 7 steps instead of 10.

Figure 1: Reducing to a 7-point dilution series means we can easily include an additional sample on the plate.

We can see that the 4PL curves from a 7-point dilution series are similarly well-characterised. If we achieved similar precision in our results using the reduced number of dilutions – which could be determined using simulated assays – then this would be a successful leaning strategy for this assay.

Importantly, reducing dilutions can free up plate real estate, which can allow for the inclusion of additional samples on a plate. In this example, we can double our throughput using the additional space freed up by using 7 points rather than 10 in our dilution series.
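As a flavour of what such a paper-based exercise might look like, here is a minimal simulation sketch in Python. The 4PL parameters, dose range, and noise level below are illustrative assumptions rather than values from any real assay – in practice, they would be estimated from your historical data.

```python
# A minimal sketch of a simulated-assay comparison, assuming a 4PL
# response on the log-dose scale with additive Gaussian noise. The
# parameter values, dose range, and noise level are illustrative
# assumptions only -- in practice they would come from historical data.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(42)

def four_pl(log_dose, lower, upper, log_ec50, hill):
    """4PL curve in its logistic parameterisation on the log-dose scale."""
    return lower + (upper - lower) / (1 + np.exp(-hill * (log_dose - log_ec50)))

TRUE = (0.1, 2.0, 0.0, 1.2)  # lower, upper, log(EC50), hill (assumed)

def ec50_cv(n_doses, n_sims=1000, noise_sd=0.05):
    """Simulate assays on an n_doses-point series; return the %CV of the fitted EC50."""
    log_dose = np.linspace(np.log(0.01), np.log(100), n_doses)  # same range, fewer steps
    estimates = []
    for _ in range(n_sims):
        y = four_pl(log_dose, *TRUE) + rng.normal(0, noise_sd, n_doses)
        popt, _ = curve_fit(four_pl, log_dose, y, p0=[0, 2, 0, 1], maxfev=10000)
        estimates.append(np.exp(popt[2]))  # back-transform log(EC50)
    estimates = np.array(estimates)
    return 100 * estimates.std() / estimates.mean()

for n in (10, 7):
    print(f"{n}-point series: EC50 %CV = {ec50_cv(n):.1f}%")
```

If the %CV for the 7-point design comes out close to that of the 10-point design, the leaner design is a plausible candidate to take forward into the lab.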

Optimisation Strategy 2: Reduce Unnecessary Replication

Replication is a cornerstone of statistics as it can increase the precision of results (provided the replication strategy is appropriate) and allows for an assessment of the variability of an experiment. You can, however, have too much of a good thing. Consider the assay in Figure 2.

On the left, we have six replicates per dilution. This level of replication might be required in the case of an extremely variable assay, but we can see in the plot that the responses within each dose group are very similar. This indicates that the variability in the response is fairly low, so we might consider reducing the number of replicates. This is shown on the right of Figure 2.

We can see that the 4PL model fit in the 4-replicate case is very close to – if not visually identical with – the fit in the 6-replicate case. Once again, we would need to check that the precision of our results remains acceptable, but this represents another large resource saving. As in the previous example, the reduced-replicate design leaves space for an additional sample on the plate, so our throughput would be increased as well as resources saved.

Figure 2: Left: An assay with 6 replicates per dilution. Right: The same assay with only 4 replicates per dilution.
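The same kind of simulated-assay check applies to replication. A sketch, again with assumed parameter values: holding an 8-point dilution series fixed, does dropping from 6 to 4 replicates per dilution materially widen the spread of the EC50 estimates?

```python
# Sketch of the analogous replication check, with assumed values:
# an 8-point dilution series, comparing 6 vs 4 replicates per dilution.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(7)

def four_pl(log_dose, lower, upper, log_ec50, hill):
    """4PL curve in its logistic parameterisation on the log-dose scale."""
    return lower + (upper - lower) / (1 + np.exp(-hill * (log_dose - log_ec50)))

log_dose = np.linspace(np.log(0.01), np.log(100), 8)

def ec50_cv(n_reps, n_sims=1000, noise_sd=0.08):
    """%CV of the fitted EC50 when each dilution is measured n_reps times."""
    x = np.repeat(log_dose, n_reps)  # replicate each dilution n_reps times
    estimates = []
    for _ in range(n_sims):
        y = four_pl(x, 0.1, 2.0, 0.0, 1.2) + rng.normal(0, noise_sd, x.size)
        popt, _ = curve_fit(four_pl, x, y, p0=[0, 2, 0, 1], maxfev=10000)
        estimates.append(np.exp(popt[2]))  # back-transform log(EC50)
    estimates = np.array(estimates)
    return 100 * estimates.std() / estimates.mean()

for reps in (6, 4):
    print(f"{reps} replicates: EC50 %CV = {ec50_cv(reps):.1f}%")
```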

Optimisation Strategy 3: Remove Inappropriate Suitability Criteria

So far, we have mainly focused on ways the set-up of an assay can be optimised to conserve time and resources. Further efficiencies can be found at the other end of the assay process when we determine whether an assay has passed or failed. Namely, we can ensure that we are including appropriate suitability criteria. This is all about making sure that we are passing and failing assays correctly – as mentioned previously, we absolutely want to minimise the number of assays we’re failing unnecessarily.

The first thing to do is to make sure that we’re not including too many tests in our assay design. Now, it’s important to make sure that we’re checking properties which are critical to the performance of the assay, like parallelism for relative potency assays, but we also don’t want to test everything under the sun. Every suitability test has a chance of failing purely by chance, so including more tests means that the likelihood of a chance failure – and of failing an assay unnecessarily – is higher. It is, therefore, important to carefully consider which suitability tests are important to include in your design, and which might best be left out.
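To see how quickly chance failures stack up, suppose – purely for illustration – that each suitability test independently has a 1% chance of failing a good assay:

```python
# Assumed: k independent suitability tests, each with a 1% chance of
# failing a good assay. The chance that at least one fails by chance
# is 1 - (1 - 0.01)**k, which grows quickly with k.
alpha = 0.01
for k in (1, 3, 5, 10):
    print(f"{k:>2} tests: P(at least one chance failure) = {1 - (1 - alpha)**k:.1%}")
```

With ten such tests, nearly one good assay in ten would be failed on chance alone.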

One way to narrow down your suitability criteria is to check whether any are related to one another and therefore likely to pass and fail together. For example, if you were to set suitability criteria on both the upper and lower asymptotes of a 4PL assay and on the assay range, you could effectively be measuring the same thing twice. If there is a change in one of the asymptotes, this is likely to also affect the assay range, since the range is simply the difference between the asymptotes. So, an optimisation might be to set a suitability criterion on either the assay range or the asymptotes themselves going forward, but not both.
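A quick numerical illustration of the redundancy, with made-up asymptote values:

```python
# Illustrative values only: the assay range is the difference between
# the asymptotes, so a shift in either asymptote moves the range by
# exactly the same amount -- the two criteria track each other.
lower, upper = 0.1, 2.0
print(f"range = {upper - lower:.2f}")   # 1.90

upper = 1.7                             # upper asymptote drops by 0.3...
print(f"range = {upper - lower:.2f}")   # ...and the range drops to 1.60
```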

Similarly, the choice of metric itself can be inappropriate. One check for parallelism is to set equivalence limits on the ratio of model parameters between the test sample and reference standard. If this is done for an asymptote which approaches zero, the ratio can become very large – even when the absolute difference is negligible – because the value we are dividing by is so small.

Figure 3 shows an example of this. Visually, the lower asymptotes appear to be very similar while there is a small amount of divergence in the upper asymptotes. So, if we were to perform a parallelism test using the ratios of parameters, we might expect a failure on the upper asymptote. However, this assay actually fails on the lower asymptote: the asymptotes may be close together, but they are also close to zero, meaning their ratio turns out to be large.

Figure 3: An assay which fails a parameter-ratio parallelism test due to an asymptote being close to zero.

In such a case, it might be prudent to set a criterion on the difference between the asymptotes rather than the ratio, since this tends to behave more predictably when considering very small – or indeed very large – numbers.
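A toy example in Python (with numbers assumed rather than taken from Figure 3) makes the contrast concrete:

```python
# Toy numbers, assumed for illustration: two lower asymptotes that are
# practically identical on the response scale, but both close to zero.
# The difference behaves sensibly; the ratio blows up and would fail
# typical equivalence limits (e.g. 0.8 to 1.25).
ref_lower, test_lower = 0.02, 0.05

print(f"difference: {test_lower - ref_lower:+.3f}")  # +0.030 -> negligible
print(f"ratio:      {test_lower / ref_lower:.2f}")   # 2.50   -> well outside limits
```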

When to Optimise

We have covered a few different ways a bioassay might be optimised to minimise resource usage while maximising throughput. The question that remains is: when in the assay lifecycle is the best time to make these optimisations? This choice is going to be informed by trading off three main factors:

Information collected: How much historical data have we collected about our assay which we can use to make decisions about optimisation?

Regulatory resistance: How difficult will it be to gain approval for changes to the assay design?

Risk of failure: How likely is it that the product will make it to market?

Broadly, there are three times when we might consider leaning our assay:

During early development

At this stage, there is very little regulatory input, meaning it will be much easier to make changes to our assay design. On the other hand, we generally have little historical data from our assay, meaning we may not have a strong understanding of its behaviour. Products also have a higher failure rate at this stage, so investment in optimisation is less certain to pay off.

Before validation

By the time an assay is ready to be validated we typically have more confidence that the associated product might reach the market, so an investment in optimising the assay is also more likely to pay off. We also have more historical information to inform our optimisations, and there will be little regulatory resistance since the assay has not yet been validated. In general, this stage is a good time to consider optimising an assay design.

During routine use

The biggest obstacle to optimisation at this stage is regulatory: as the assay has now been validated, any changes to the design might require a costly revalidation or bridging study. Conversely, the wealth of historical data and the greater certainty of the product reaching market – or indeed the presence of the product on the market already – means that investment in optimisation at this stage can be worth it if the savings are large enough.

Save Time, Save Money

Bioassays are always going to be an expensive undertaking, but that doesn’t mean the financial firehose should be applied without thought. As we’ve seen, the careful application of resources and consideration of the subtleties of assay design can lead to outsized savings in both time and money. Whether by reducing dilutions or choosing appropriate replication strategies and suitability criteria, bioassay optimisation should be considered throughout the assay life cycle.

Even better is ensuring your design is optimised from the very start! At Quantics, we always recommend involving a statistician as early as possible in the assay development process. This can help make sure that your assay never strays too far from the optimal design, leading to fewer headaches and greater savings down the line.


About the Authors

  • Matthew Stephenson

    Matthew Stephenson is Director of Statistics at Quantics Biostatistics. He completed his PhD in Statistics in 2019, and was awarded the 2020 Canadian Journal of Statistics Award for his research on leveraging the graphical structure among predictors to improve outcome prediction. Following a stint as Assistant Professor in Statistics at the University of New Brunswick from 2020-2022, he resumed a full-time role at Quantics in 2023.

  • Jason Segall

    Jason joined the marketing team at Quantics in 2022. He holds master's degrees in Theoretical Physics and Science Communication, and has several years of experience in online science communication and blogging.
