At the forefront of medical development lies the world of cell and gene therapy. Unlike traditional small-molecule therapeutics, treatments based on biologics can treat, cure, or even prevent genetic conditions such as sickle cell anaemia and muscular dystrophy. They can also be tailored to individual patients, offering the potential for personalised treatments for certain cancers.
As with all medical treatments, cell and gene products require stringent testing, both to prove their safety and efficacy and to check potency for batch release. In the latter case, potency bioassays are typically used, but cell and gene products can often prove challenging subjects to use in a bioassay. Here, we’re going to explore some of these challenges, and outline some solutions we’ve encountered over Quantics’ time working with cell and gene therapies.
What Are Cell and Gene Therapies?
While the two are often grouped together, cell therapy and gene therapy are actually two rather distinct techniques.
Cell therapy involves introducing viable cells into the patient. Often these are stem cells, introduced with the intention of repairing or restoring damaged tissues to working condition: for example, repairing damaged heart tissue after a heart attack, or reversing neurological degeneration due to spinal cord trauma. In other cases, the introduced cells can release factors which produce the therapeutic effect. Examples include T-cells introduced to find and kill cancer cells in an immunotherapy treatment.
By contrast, gene therapies aim to edit the genome of cells to produce a desired therapeutic effect, either before the cells are introduced to the body or within the body itself. This gene editing is typically done either using techniques such as CRISPR or by delivery to a host cell via viral vectors. A famous example of a successful gene therapy is the treatment for sickle cell anaemia, which was approved by the FDA in December 2023.
Bioassay Challenges
As biological therapeutics, cell and gene therapies involve extremely complex production processes, and several key challenges arise when testing them in a bioassay setting. These include:
- Limited lot size/material for testing
Producing biologics is often time-consuming and expensive, so it can be economically impractical to produce large batches of product. This is especially true of products which treat rare diseases or which are tailored to individual patients. As a result, material for replication is limited, and failed assays must be avoided at all costs because of the materials they waste.
- A lack of appropriate reference materials
In a similar way, it can often be difficult to identify and source appropriate reference materials for a cell or gene therapeutic. For relative potency to make sense, the test product must behave as a dilution of the reference, meaning it must exhibit a similar biological response. For biologics, such a reference might be difficult to acquire or simply might not exist, particularly for those products which are derived from cells from a single patient.
- Complex mechanism(s) of action
Many assay designs – particularly reporter assays – require a detailed knowledge of the mechanism of action (MOA) of the therapeutic: essentially how the product causes its therapeutic effect. The MOAs of many cell and gene therapy products are extremely complex, meaning a fully optimised assay design is difficult or impossible.
- Inherent variability in manufacturing
Most cell and gene products involve some form of biological process, such as culturing cells. These are, by their nature, variable processes which can be difficult to control. This means that the manufacture of biological therapeutics will almost always have a higher degree of inherent variability than that of other drug products.
These challenges mean that potency assays for cell and gene therapy products often exhibit far higher variability than assays for other types of products. Indeed, USP <1047> suggests that it is not uncommon to observe coefficients of variation of 30%-50% in such assays: a claim which is anecdotally supported by our experience here at Quantics.
The Impact of High Variability
Such high variability can lead to severe difficulties when regular batch release testing of products is required. Below, we list simulated probabilities of a relative potency assay result falling outside a range of 70%-143% by chance for different variabilities (note that 143% ≈ 100/70%, so the range is symmetric on the log scale). We assume that we have an unbiased assay, and that the true RP of the sample is 100% (as if we were testing the reference against itself):
| %GCV | Probability a Randomly Selected Result Falls Outside (70%, 143%) |
|------|-------------------------------------------------------------------|
| 15%  | 1%  |
| 30%  | 17% |
| 50%  | 38% |
We can see here that assay failure rates can become unmanageably high at the levels of variability associated with cell and gene therapy products. This is particularly problematic for cell and gene therapy given the scarcity of reference materials and the complexity and expense of producing the products in the first place.
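These figures are easy to reproduce. Below is a minimal sketch of the kind of simulation involved, assuming RP estimates are log-normally distributed around the true RP of 100%, with log-scale SD obtained from the %GCV as σ = ln(1 + GCV):

```python
# Minimal sketch of the simulation behind the table above. Assumes RP
# estimates are log-normally distributed about a true RP of 100%, with
# log-scale SD sigma = ln(1 + GCV).
import numpy as np

rng = np.random.default_rng(42)
n_sims = 1_000_000
lower, upper = 0.70, 1.43  # release range, as fractions of 100% RP

for gcv in (0.15, 0.30, 0.50):
    sigma = np.log(1 + gcv)                      # %GCV -> log-scale SD
    rp = np.exp(rng.normal(0.0, sigma, n_sims))  # simulated RP results
    p_outside = np.mean((rp < lower) | (rp > upper))
    print(f"%GCV = {gcv:.0%}: P(outside range) ~ {p_outside:.1%}")
```

Running this returns probabilities of roughly 1%, 17%, and 38%, matching the table.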
High variability also affects other aspects of running a potency assay. For example, the table below outlines the number of samples required in a well-characterised reference bridging study for different degrees of variability.
| %GCV | Number of Samples |
|------|-------------------|
| 15%  | 7  |
| 30%  | 20 |
| 50%  | 45 |
Once again, we see that high variability can lead to unwieldy reference bridging studies. These would be problematic enough in a scenario where the reference material was abundant, but the issue is exacerbated in the case of cell and gene therapy products. Scarce reference materials mean that reference bridging studies need to take place more frequently, which sets up a vicious cycle when those bridging studies require large amounts of reference material due to high variability!
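The sample sizes above depend on the study's specific acceptance criteria, which we haven't detailed here, but the way the required n scales with variability can be illustrated with a hypothetical precision-based rule: choose the smallest n for which the 95% confidence interval on the mean log-RP has a precision factor no wider than some target. The sketch below uses a target of 1.25 purely as an illustrative assumption, not a recommendation; it produces numbers of the same order as the table, though not identical, since real bridging criteria differ.

```python
# Illustrative only: actual bridging-study sample sizes depend on the
# study's acceptance criteria. This sketch uses a hypothetical
# precision-based rule (95% CI precision factor <= target) to show how
# the required n scales with assay %GCV.
import math

def n_for_precision(gcv: float, target_pf: float = 1.25, z: float = 1.96) -> int:
    """Smallest n whose 95% CI on the mean log-RP has precision factor
    <= target_pf (known-sigma approximation)."""
    sigma = math.log(1 + gcv)  # log-scale SD from %GCV
    # CI precision factor = exp(2 * z * sigma / sqrt(n)); solve for n.
    return math.ceil((2 * z * sigma / math.log(target_pf)) ** 2)

for gcv in (0.15, 0.30, 0.50):
    print(f"%GCV = {gcv:.0%}: n ~ {n_for_precision(gcv)}")  # ~7, ~22, ~51
```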
Reducing Variability
Clearly, then, finding ways to minimise variability in potency assays is vital for a successful cell or gene therapy product. We’re going to outline some strategies which represent the low-hanging fruit from our experience here at Quantics.
Ensure that the appropriate dose-response model is fit to the data
This might seem an obvious statement, but the choice of model extends beyond the class of model you've chosen. Even if you're confident that a 4PL model is the right choice, are there aspects of your analysis which you've neglected?
Consider, for example, the data shown in the plot below. This is an example of data which shows low variability at low doses, and high variability at high doses. As we have discussed on several occasions on this blog, this is a classic case where a response transformation can be useful.

[Figure: dose-response data showing low variability at low doses and high variability at high doses]
Indeed, some form of transformation is arguably essential for ensuring homogeneity of variance, but it turns out to be a helpful tool for managing the overall variability of the assay, too.
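In practice, applying the transformation is a small change to the fitting step. Here is a minimal sketch using scipy, with hypothetical dose and response arrays standing in for real assay data, fitting a 4PL to the log-transformed responses:

```python
# Sketch of fitting a 4PL to log-transformed responses with scipy.
# `dose` and `response` are hypothetical placeholder data.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(log_dose, lower, upper, log_ec50, slope):
    """4PL on the log-dose scale; asymptotes are on whatever scale y is supplied in."""
    return lower + (upper - lower) / (1 + np.exp(slope * (log_ec50 - log_dose)))

dose = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0, 30.0])
response = np.array([10.0, 12.0, 20.0, 55.0, 160.0, 420.0, 600.0, 650.0])

# Fit to log(response) rather than the raw responses, so the noisy
# high-dose responses no longer dominate the least-squares fit.
popt, _ = curve_fit(four_pl, np.log(dose), np.log(response),
                    p0=[np.log(10), np.log(700), 0.0, 1.0])
print(dict(zip(["lower", "upper", "log_EC50", "slope"], popt)))
```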
We have summarised relative potency estimates from two runs of this test sample in the table below. In both cases, we have compared the results from the raw response data to those when a log transform is first applied to the responses.
Note how the confidence intervals on the RP estimate are narrower when the responses are on the log scale. The easiest way to spot this is by looking at the precision factor – this is just the ratio of the upper confidence limit and the lower confidence limit. For assay 1, the precision factor is reduced from 1.184 on the raw scale to 1.148 on the log scale. Similarly, the precision factor decreases from 1.249 to 1.167 for assay 2. In both cases, the reduced precision factor – and, therefore, narrower confidence intervals on the relative potency estimate – is an indication of reduced variability.
| Assay | Response Transformation | RP Estimate | 95% CI | Precision Factor |
|-------|-------------------------|-------------|----------------|-------|
| 1     | Raw                     | 0.816       | (0.750, 0.888) | 1.184 |
| 1     | Log transformed         | 0.898       | (0.837, 0.962) | 1.148 |
| 2     | Raw                     | 1.208       | (1.080, 1.350) | 1.249 |
| 2     | Log transformed         | 1.115       | (1.033, 1.205) | 1.167 |

| Between-Assay Variability | |
|---------------------------|-------|
| %GCV (Raw)                | 32.0% |
| %GCV (Log transformed)    | 16.5% |
This decrease in variability is also apparent between assays. The %GCV of the relative potency estimates between assays 1 and 2 on the raw scale is 32%: this is well into the troublesome range we saw causing problems earlier. On the log scale, however, the %GCV is reduced to just 16.5%: a substantial improvement.
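Both quantities are straightforward to compute. A short sketch reproducing the figures above (up to rounding in the published values):

```python
# Reproducing the precision factors and inter-assay %GCV from the table
# above (up to rounding).
import numpy as np

def precision_factor(ci):
    """Ratio of the upper to the lower confidence limit."""
    return ci[1] / ci[0]

def pct_gcv(rps):
    """%GCV of RP estimates: 100 * (exp(SD of log RPs) - 1)."""
    return 100 * (np.exp(np.std(np.log(rps), ddof=1)) - 1)

print(f"{precision_factor((0.750, 0.888)):.3f}")  # assay 1, raw: 1.184
print(f"{precision_factor((0.837, 0.962)):.3f}")  # assay 1, log: ~1.148
print(f"{pct_gcv([0.816, 1.208]):.1f}%")          # between-assay, raw: 32.0%
print(f"{pct_gcv([0.898, 1.115]):.1f}%")          # between-assay, log: 16.5%
```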
Implement an efficient replication strategy
It is no secret that replication increases the precision of any experiment. The height of a single high school student is an unbiased (i.e. accurate) estimator of the average height across the entire class, but it is a highly variable one: measure a different student and you could get a very different answer. If, instead, you measured the heights of multiple students and took their average, that average would be far less variable.
So, replication is, in general, a good idea. But not all replication is created equal: we want to replicate where the variability is largest. Imagine that, instead of high school students, we were measuring the heights of basketball hoops at a gym. Since basketball hoops are set at a standard height, measuring a single hoop will be nearly as consistent as measuring them all and taking the average. In this case, replication does not noticeably increase the precision of the measurement.
The same principle applies to bioassays, especially those for cell and gene therapies, where making the most of every sample is vital. We can split the overall variability of a bioassay result into two main components: intra-assay variability, which encapsulates the variability of measurements within the same assay; and inter-assay variability, which is the variability observed between different assays.
Understanding which form most of the variability in your assay takes is important for determining an effective replication strategy. Consider the data for two simulated bioassays in the tables below:
**Bioassay 1**

| Variance Source | SD (log scale) | % of Total Variability |
|-----------------|----------------|------------------------|
| Between Assay   | 0.236          | 83.9% |
| Within Assay    | 0.103          | 16.1% |
| Total           | 0.257          |       |

Intermediate Precision (%GCV) = 29.4%

**Bioassay 2**

| Variance Source | SD (log scale) | % of Total Variability |
|-----------------|----------------|------------------------|
| Between Assay   | 0.103          | 16.1% |
| Within Assay    | 0.236          | 83.9% |
| Total           | 0.257          |       |

Intermediate Precision (%GCV) = 29.4%
While the total variability in both bioassays is the same, the breakdown is different in each. In bioassay 1, most of the variability happens between assays, while for bioassay 2, most of the variability is within assay.
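If you have log-RP results from several assays, each containing within-assay replicates, the two components can be estimated with a one-way random-effects ANOVA. Below is a minimal method-of-moments sketch on simulated data; the six-assay, three-replicate layout is a hypothetical illustration, not a recommended design:

```python
# Sketch: decompose log-scale variability into between- and within-assay
# components via one-way random-effects ANOVA (method of moments).
# `data` holds log-RP values: rows = assays, columns = replicates.
import numpy as np

def variance_components(data):
    """Return (between-assay SD, within-assay SD) on the log scale."""
    n_assays, k = data.shape
    ms_within = data.var(axis=1, ddof=1).mean()           # pooled within-assay variance
    ms_between = k * data.mean(axis=1).var(ddof=1)        # between-assay mean square
    var_between = max((ms_between - ms_within) / k, 0.0)  # method-of-moments estimate
    return np.sqrt(var_between), np.sqrt(ms_within)

rng = np.random.default_rng(1)
# Simulate 6 assays x 3 replicates with the bioassay 1 components above.
data = (rng.normal(0, 0.236, size=(6, 1))     # shared per-assay effects
        + rng.normal(0, 0.103, size=(6, 3)))  # within-assay noise
sd_between, sd_within = variance_components(data)
print(f"between ~ {sd_between:.3f}, within ~ {sd_within:.3f}")
```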
For simplicity, let’s limit ourselves to two possible replication options:
- Option 1: Test samples from a single lot multiple times within each assay, with test samples from different lots tested in separate assays.
- Option 2: Include test samples from several lots once in each assay, and replicate over multiple assays.
With no change to replication strategy, the intermediate precision of both assays is 29.4%. The change in %GCV under each replication strategy is listed in the table below:
| Replication Strategy | Bioassay 1 (%GCV, Format Variability) | Bioassay 2 (%GCV, Format Variability) |
|----------------------|-------|-------|
| Option 1 (single test sample per assay run)   | 27.6% | 16.0% |
| Option 2 (all test samples in each assay run) | 16.0% | 18.6% |
For bioassay 1, with low intra-assay variability and high inter-assay variability, we can see that Option 1 – replicating within the assay – has very little effect on the precision, reducing the %GCV by only about 2 percentage points. However, Option 2 – replicating between assays – has a more noticeable effect, reducing the %GCV to 16%.
Conversely, for bioassay 2, with high intra-assay variability and low inter-assay variability, Option 1 is the most effective, reducing the %GCV to 16%. Option 2 was less effective, reducing the %GCV to about 19%.
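The logic behind such figures can be seen with a simple variance-components view: a reportable value averaged over n independent assays, each containing k within-assay replicates, has log-scale variance σ²_between/n + σ²_within/(n·k). The sketch below assumes, for illustration, a budget of three measurements per lot; with the bioassay 1 components it lands on the figures in the table above, though the exact replicate budgets behind the simulated table are not stated here, so treat the outputs as illustrative:

```python
# Format variability under a simple variance-components model: a reportable
# value averaged over n assays with k within-assay replicates each has
# log-scale variance sd_b^2 / n + sd_w^2 / (n * k).
import math

def format_gcv(sd_between, sd_within, n_assays, k_reps):
    var = sd_between**2 / n_assays + sd_within**2 / (n_assays * k_reps)
    return 100 * (math.exp(math.sqrt(var)) - 1)

# Bioassay 1 components, three measurements per lot (an assumed budget):
print(f"{format_gcv(0.236, 0.103, n_assays=1, k_reps=3):.1f}%")  # within-assay replication: ~27.6%
print(f"{format_gcv(0.236, 0.103, n_assays=3, k_reps=1):.1f}%")  # across-assay replication: ~16.0%
```

Note that averaging across assays shrinks both components, while within-assay replication only shrinks the within-assay term: that is why it does so little for bioassay 1, where the between-assay component dominates.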
These results demonstrate that a well-thought-out replication strategy can mean a far more efficient use of scarce and expensive resources. Replication is not a numbers game: don’t just throw as many replicates as you can at the problem without first thinking about how they might best be utilised.
A Cell and Gene Therapy Future
With the number of cell and gene therapies on the market increasing seemingly by the day, the need for effective and reliable potency assays to test those products will grow in lockstep. As we have seen, this can be difficult to achieve with cell and gene therapy products thanks to a variety of challenges inherent to the complex biology involved in such advanced technology.
That’s where statistics comes in: while these challenges cannot be directly overcome, their effects can be mitigated by strategic assay design and analysis. We generally recommend involving your statistician well before finalising your assay design: you could end up making significant resource savings in the long run.