My colleague Brooke and I attended the BEBPA US virtual meeting last week, and we have reported back on the presentation and round-table discussions on relative potency software.
The key driver for these sessions was the experience many have had of different software packages, and different versions of the same software, giving different results, and the problems this causes for RP values and for system and sample suitability criteria when transferring assays between labs.
The discussions focused on the concept of bridging between software packages using test data sets. However, the reasons for the differences seen were not clearly explained; they are in fact purely mathematical. No “bridging” exercise, in the sense of running a few data sets through both systems as suggested, is required. In fact, doing so may simply provide a false sense of security.
In light of these discussions, Brooke and I worked with the team here at Quantics to put together a series of three short blogs. They are structured to look at the key issues, so that software users can understand what is required to “bridge” software mathematically, and why differences may still appear.
Blog 1. Equations and algorithms.
During the presentation there was a degree of confusion between the terms “equation” (or “formula”) and “algorithm”, but they are different, and the difference is important for understanding. An equation or formula is a mathematical relationship or rule expressed in symbols. An algorithm is a finite sequence of well-defined instructions, typically used to solve a class of specific problems or to perform a computation.
The presenter stressed that there are several different formulae in use for the 4PL and that these give rise to different results. We were told that:

- It was important that both sets of software use the same formula.
- The formula chosen should be the one that the regulators support.
This is incorrect.
It is true that the USP, the PhEur and other mathematical references give examples of apparently different equations for the 4PL, but the key point is that they are all mathematically identical and will therefore always give identical results. The formulae differ only in how the parameters in the equation are defined. This is called parameterisation, and the regulators will accept any parameterisation, because it makes no difference to the results (RP and confidence intervals).
It is a bit like a recipe for your favourite brownies. In the formula (ingredients and temperatures), the quantities can be expressed in ounces or grams, and the temperature in degrees Celsius or Fahrenheit. Once you know this, it is simple to “translate” from one to the other. You don’t have to do a bridging exercise making the brownies both ways (unless you really like brownies).
| | Metric parameterisation | US parameterisation |
|---|---|---|
| Oven temperature | 180 °C | 356 °F |
| Caster sugar | 125 g | 4.41 oz |
| Dark soft brown sugar | 125 g | 4.41 oz |
| Salt | 0.4 g (1 pinch) | 0.014 oz |
| Plain flour | 115 g | 4.06 oz |
| Vanilla extract | 2 tsp | 0.33 fl oz |
| Pecan nuts | To taste | To taste |
So, let’s start with the simple linear “dose response” model.
Actually, this is linear only when we consider response against log (dose).
So if y = response, a = intercept, b = slope and x = log(dose), the model is:

y = a + b·x

If we want to express this using the raw dose (let’s label it z, so that x = log(z)), we get:

y = c + d·log(z)

Note that the symbols used are changed to reflect the changed definitions. In this case, to bridge between these two we use:

c = a, d = b and x = log(z)
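For readers who like to see it in code, a minimal Python sketch (with made-up values for the intercept and slope) confirms that the two ways of writing the straight line predict identical responses at every dose:

```python
import math

# Intercept and slope for y = a + b*x, where x = log(dose); values made up
a, b = 1.5, 0.8

# The same line written in terms of the raw dose z, with relabelled
# symbols: y = c + d*log(z), bridged by c = a and d = b
c, d = a, b

for z in [0.1, 1.0, 10.0, 100.0]:
    x = math.log(z)                      # x = log(dose)
    y_from_x = a + b * x                 # parameterised in x
    y_from_z = c + d * math.log(z)       # parameterised in z
    assert math.isclose(y_from_x, y_from_z)

print("Identical predictions at every dose - no bridging experiment needed")
```

The assertion can never fail, which is precisely the point: the two parameterisations are the same line.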
4 Parameter Logistic
The same goes for the 4PL, but it is a bit more complicated!
The PhEur 5.3 section 3.4 has, in terms of x = ln(dose):

y = δ + (α − δ) / (1 + e^(−β(x − γ)))

And the USP has, in terms of the raw dose z:

y = D + (A − D) / (1 + (z/C)^B)
The difference is that the PhEur expresses the formula in terms of x = ln(dose), whilst the USP expresses the formula in terms of z = dose.
So the “bridging” is simple algebra. Substituting x = ln(z) into the PhEur form gives e^(−β(x − γ)) = (z/e^γ)^(−β), so the two parameterisations describe exactly the same curve when:

A = α, B = −β, C = e^γ and D = δ
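The parameter mapping can be checked numerically in a few lines of Python. This is only a sketch: the function forms and symbols below follow the parameterisations discussed above (x = ln(dose) for the PhEur style, raw dose z for the USP style), and the parameter values are made up:

```python
import math

def pheur_4pl(x, alpha, beta, gamma, delta):
    """PhEur-style 4PL in terms of x = ln(dose) (symbols as assumed here)."""
    return delta + (alpha - delta) / (1 + math.exp(-beta * (x - gamma)))

def usp_4pl(z, A, B, C, D):
    """USP/Gen5-style 4PL in terms of the raw dose z."""
    return D + (A - D) / (1 + (z / C) ** B)

# Made-up PhEur parameters, bridged to USP by simple algebra:
# A = alpha, B = -beta, C = e^gamma, D = delta
alpha, beta, gamma, delta = 2.0, 1.3, math.log(5.0), 0.1
A, B, C, D = alpha, -beta, math.exp(gamma), delta

for z in [0.5, 1.0, 5.0, 25.0]:
    assert math.isclose(pheur_4pl(math.log(z), alpha, beta, gamma, delta),
                        usp_4pl(z, A, B, C, D))

print("Same curve, different parameterisation")
```

Any dose you plug in gives the same response from both forms, which is exactly why a data-set bridging study adds nothing.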
And there you have it – no bridging experiment is required. All you need is a mathematician to do the algebra. Once you have this it is a simple matter to convert all your suitability criteria to the new parameterisation, update the SOP, put the new values in the template, and you are done. Time for tea and brownies…
Whatever the parameterisation, there is zero impact on the model fit, the RP or its confidence interval.
During the presentations, some time was spent trying to relate the sign of the Hill slope (B) to the slope as usually described for inhibition and absorption assays. Again, this is just maths: there is no need to guess or to try data sets to work this out. Just differentiate the formulae and you get:
PhEur parameterisation: tangent to the curve at the midpoint (slope) = β(α − δ) / 4

USP parameterisation: tangent to the curve at the midpoint (slope) = B(D − A) / (4C)
It is now easy to work out which of D and A is the top asymptote and which the bottom. So, for example, it was pointed out that Gen5 software (which uses the USP parameterisation) fixes B to be positive. Using the slope equation above for the USP form, this means that for a positive-slope assay (D − A) must be positive, so D is the upper asymptote and A the lower. For a negative-slope assay it is the other way round.
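Again this can be verified numerically. The sketch below (USP-style form and made-up parameter values, assumed for illustration) compares the analytic midpoint slope B(D − A)/(4C) with a finite-difference derivative, and shows that with B > 0 and D > A the curve rises towards D, the upper asymptote:

```python
import math

def usp_4pl(z, A, B, C, D):
    # USP-style 4PL in terms of the raw dose z (form assumed as discussed)
    return D + (A - D) / (1 + (z / C) ** B)

A, B, C, D = 0.2, 1.7, 4.0, 3.0   # B fixed positive, as in Gen5; values made up

# Analytic tangent at the midpoint z = C...
analytic = B * (D - A) / (4 * C)

# ...versus a central finite difference at the same point
h = 1e-6
numeric = (usp_4pl(C + h, A, B, C, D) - usp_4pl(C - h, A, B, C, D)) / (2 * h)
assert math.isclose(analytic, numeric, rel_tol=1e-4)

# With B > 0, the sign of the slope is the sign of (D - A).
# Here D > A, so the assay has a positive slope and D is the upper asymptote:
assert usp_4pl(1e9, A, B, C, D) > usp_4pl(1e-9, A, B, C, D)

print("midpoint slope =", analytic)
```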
Some of the “different” results presented at BEBPA are entirely predictable and just a matter of maths. Bridging can be done as an exact transfer so there is no impact on validation of the assay.
However, in practice some differences in results are seen despite this direct mathematical conversion. So why is this?
The answer is that it is to do with the algorithms used to calculate the parameters for the curve fit and this is the subject of the next mini blog.