As a slight change to our regular bioassay blog series, where we last looked at the choice of statistical model when dealing with continuous response data, this time we give a brief overview of our paper published in BioScience this month.
The paper discusses values below the limit of quantitation (“LOQs” – sometimes called “non-reportables”) which are a common issue with bioassay data, and specifically how to deal with them.
As discussed in the last bioassay blog, there are two different types of assay data: continuous and quantal. For quantal data, measuring the response is usually straightforward: for example, if the response is alive/dead, we simply need to count the number of living and dead organisms. For continuous assays, however, the measurement process can be more complicated, and when the response is very low it may not be possible to distinguish it from no response at all.
There are a couple of ways of stating the lowest response that can be measured: a limit of quantitation (LOQ) is the minimum response which can be reliably quantified, whereas a limit of detection (LOD) is the minimum response which can be detected, without necessarily being able to attach a number to it. In both cases the result is the same: if a response is very low, we won’t know its actual value.
The substitution method
So how can we calculate a relative potency for an assay with LOQs? The simplest “solution” is just to ignore them. But this is clearly biased: we would end up ignoring low responses, but not high ones, so we would conclude that the response is higher than it actually is. A more sophisticated approach, and one that is commonly used in bioassays, is the “substitution method”. This consists of replacing the unknown values with a particular low value – with the same value being used for every unknown response. The value used is usually one half or one third of the limit of quantitation.
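As a concrete illustration of the substitution step, here is a minimal sketch using hypothetical data (we assume responses are held in a NumPy array, with NaN marking the non-reportables; the values and representation are our own, not taken from the paper):

```python
import numpy as np

loq = 1.0  # limit of quantitation
# Responses at one dose; NaN marks a value recorded only as "below the LOQ"
responses = np.array([1.8, 1.3, np.nan, np.nan, 1.1])

# Replace every non-reportable with the same fixed value: half the LOQ
substituted = np.where(np.isnan(responses), loq / 2, responses)
```

Every unknown response gets the identical value 0.5, regardless of its dose, which is exactly the behaviour the two objections below take issue with.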
The substitution method is illustrated in the figure below. In this example the LOQ is 1. The reference standard responses, shown in black, are well above 1. However, the response for the test sample, shown in grey, is generally lower, and nine of its responses are below 1. These have been substituted with the value 0.5 (half the LOQ) and are shown as triangles. When more than one LOQ value is recorded for a particular dose, they are lined up horizontally.
At first sight, the substitution method seems reasonable: we know the unknown response must be somewhere between 0 and the LOQ, so why not choose the point half-way between? There are, however, two problems with this approach:
- We don’t expect the response in an assay to be the same at every dose – it should increase at higher doses. It would make more sense to substitute higher responses at higher doses. But how much higher?
- Bioassay responses are variable. It’s very unlikely that all the unknown responses are exactly the same, even within a dose group. It would make more sense to spread them out somehow – but how?
These are not just problems in principle, but in practice as well. Using the substitution method can lead to unnecessary parallelism failures. We can already see the problem in the figure above: this assay was simulated to be exactly parallel, but the fit obtained with the substitution method clearly isn’t. At the lowest dose of the test sample, seven of the ten replicates were below the LOQ and were substituted with the value 0.5 – but in reality they were all above 0.7, so this is an underestimate. The underestimate makes the best-fit line for the test (shown in grey) much steeper than it should be. Depending on the suitability criteria used, this might well lead to an incorrect parallelism failure, and the assay would have to be re-run.
Mathematically, an assay with LOQs is similar to a clinical trial where the endpoint is survival – for example, a study of a cancer treatment. In a bioassay, the exact value of a response which is an LOQ is unknown, but we do know that it is at most a particular value (the LOQ). Similarly, in a clinical trial the survival time of a patient who is still alive at the end of the trial is unknown, but we do know it is at least a particular value: the duration of the trial.
In clinical trials, a method called survival analysis is usually used to handle this issue. For bioassays we can use a method which is almost the same (mathematically) as survival analysis: this is called Tobit analysis, after the economist James Tobin.
Tobit analysis makes only two assumptions:
- The responses are normally distributed.
- They follow a dose-response curve – in the examples in this blog, this is a linear model, but it could just as well be a four-parameter logistic.
Note that these assumptions apply to all the responses – both those above and those below the LOQ – it’s just that the values of the responses below the LOQ are unknown. These assumptions also answer the two questions about the LOQs we asked above:
- How much higher should the responses be at higher doses? Answer: they should increase according to the dose-response curve.
- How should the LOQs be spread out? Answer: according to a normal distribution, with the same standard deviation as the other responses.
Given these assumptions, Tobit analysis then calculates the most likely values of the parameters – for a linear model, the slope and the intercept – given the responses above the LOQ, the doses at which the LOQs are present, and how many LOQs there are at each dose.
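To make this concrete, here is a minimal sketch of a Tobit fit for the linear model, using simulated data we made up for illustration and SciPy for the optimisation (this is not the code from the paper). Responses above the LOQ contribute a normal density to the likelihood; responses below it contribute only the probability of falling below the LOQ:

```python
import numpy as np
from scipy import optimize, stats

def tobit_negloglik(params, x, y, loq):
    """Negative log-likelihood for a left-censored linear model."""
    a, b, log_sigma = params          # intercept, slope, log(sd) to keep sd > 0
    sigma = np.exp(log_sigma)
    mu = a + b * x
    censored = y <= loq               # these responses are only known to be <= LOQ
    ll = np.where(
        censored,
        stats.norm.logcdf((loq - mu) / sigma),           # P(response <= LOQ)
        stats.norm.logpdf((y - mu) / sigma) - log_sigma,  # density of observed value
    )
    return -ll.sum()

# Hypothetical simulated assay: true line y = 1.0 + 0.9*log10(dose), sd 0.3, LOQ = 1
rng = np.random.default_rng(0)
log_dose = np.repeat(np.log10([1, 2, 4, 8, 16]), 10)
y_true = 1.0 + 0.9 * log_dose + rng.normal(0, 0.3, log_dose.size)
loq = 1.0
y_obs = np.maximum(y_true, loq)   # values below the LOQ are recorded only as "<= 1"

res = optimize.minimize(
    tobit_negloglik, x0=[0.0, 1.0, 0.0],
    args=(log_dose, y_obs, loq),
    method="Nelder-Mead", options={"maxiter": 2000},
)
a_hat, b_hat, log_sigma_hat = res.x
```

Note that the censored points enter the fit only through the dose at which they occur and the fact that they are below the LOQ – no substituted value is ever invented for them.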
This usually gives much more accurate results than the substitution method. The figure below shows the same data as the first figure, but this time Tobit analysis has been used instead of the substitution method. The best-fit line for the test sample is not as steep now, and the assay would be more likely to pass parallelism.
In our paper published in BioScience…
… we used simulations to look into how the Tobit method compares to the substitution method. One of our more interesting results is shown in the figure below. This shows how many of the simulated assays passed a parallelism test. When the relative potency is near 100%, there are very few LOQs in the simulations, and all three methods give similar results: nearly all the simulated assays pass the parallelism test. But for less potent test samples, there is a clear difference between the methods: Tobit analysis gives a much higher pass rate for the parallelism test than the substitution method.
Our paper also discusses the effect of using Tobit analysis on the accuracy and precision of the relative potency estimate.
That’s all for this blog. Next time we will return to the choice of statistical model, discussing how models are optimised to get the best fit to the data.