Mar 22

Randomisation for Clinical Trials: A Guide

When James Lind (of Edinburgh, no less) first tested citrus fruit (a source of vitamin C) as a cure for scurvy in 1747, he happened upon an experimental design which has persisted to this day: the clinical trial. A number of sailors were given the proposed treatment, others were given a control, and their outcomes were compared. However, Lind selected the population of his study groups by hand, one aspect of his procedure which has not survived to the modern day. Instead, randomisation is used to populate study groups, minimising the influence of the experimenter on the process.

Perhaps unsurprisingly, randomisation, along with blinding and control groups, forms one of the three methodological pillars of a Randomised Clinical Trial (RCT). Randomisation does what it says on the tin: it randomly assigns subjects to study groups—typically one or more treatment groups and a group which receives a placebo.

There are essentially two types of allocation we can make when forming clinical trial groups. The first is a random allocation, which, as you would expect, is unpredictable: the study group to which the subject is assigned is unknown before the process takes place. The second is formally known as deterministic, but we’re going to refer to it as predictable. That’s because it’s exactly that: a predictable allocation. You know which group the subject is going into before the process begins.

Why Randomisation?

There are several reasons why randomisation is so vital. The first is possibly the most obvious: it reduces the chances of selection bias in the study. This could be easily introduced if an experimenter allocated the study group by hand (this would be an example of deterministic allocation). For example, they could—unconsciously or otherwise—select the patients which they believe would benefit most from receiving the treatment rather than the placebo. In a scenario where, say, the sickest patients selected for the trial are preferentially assigned to the treatment arm, this bias could make the treatment seem less effective than it actually is.

Randomisation also helps those who have direct contact with study participants remain blinded to the group allocation. It is important that researchers do not know to which treatment group any individual trial participant has been assigned, as small differences in behaviour towards participants based on that knowledge could be enough to introduce bias into the trial. A predictable allocation method could make this information easy to access, as we will see later.

Another important reason to randomise allocation is to prevent biases due to known—and, crucially, unknown—confounding factors which might affect subject outcomes. If the performance of a treatment depends on a covariate factor such as, say, blood type, we would want both treatment and placebo groups to have a similar number of subjects of each blood type. This would be easy to ensure if the dependence was known. If it was not, however, randomisation means there’s a greater chance of appropriate blood type distributions within the groups than if they were allocated deterministically, provided the groups were large enough.

As well as reducing the effects of bias, randomisation can also play an important role in the statistical analysis of the study results. A common approach is to use the invoked population model, which assumes an infinite target population with the disease, from which n subjects are selected at random to form the study groups.

What does a good randomisation look like?

Put simply (and perhaps to state the blindingly obvious), a good randomisation is one which helps achieve the goals of the study. This includes ensuring biases are mitigated as far as possible, that the treatment effect (or lack thereof) can be easily determined, and that the procedure isn’t so complex as to risk frequent operator errors. Factors which feed into these aims include:

Balance and Randomness

As we’ve already discussed, we want our clinical trial allocation to be as random as possible. But, we also want it to be balanced. That is, we want the study groups to be of a similar size, or at least to ensure their sizes differ in a controlled way. Balance is usually discussed in reference to an allocation ratio: for example, a 1:1 allocation of treatment and control groups would mean those two groups should be equal in size. Other allocation ratios are possible depending on the design of the study. For a study testing two treatments against a control, a 2:2:1 allocation of treatment 1, treatment 2, and control groups could be used. This requires two subjects in each treatment group for every subject allocated to the control group. Since it is the most common design—and for the sake of simplicity—we’re going to focus on randomisation for a 1:1 allocation ratio for now.

Because study sizes are finite and often small, randomisation procedures are often a trade-off between a guarantee of the desired balance and randomness. An allocation where subjects are assigned to groups completely at random can be very unbalanced. For example, if you simply flip a coin to determine allocation, then there’s nothing stopping you getting heads every time and putting all the subjects in one group. That’s a perfectly random allocation, but not much use for a clinical trial.
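The coin-flip scenario above is easy to simulate. The sketch below (group labels and trial size are arbitrary choices of ours) estimates how often a fully random allocation of 20 subjects leaves the groups differing by six or more; it turns out to be roughly a quarter of the time:

```python
import random

def simple_randomisation(n, seed=None):
    """Assign each of n subjects to treatment 'T' or control 'C' by a fair coin flip."""
    rng = random.Random(seed)
    return [rng.choice("TC") for _ in range(n)]

def imbalance(allocation):
    """Absolute difference between treatment and control group sizes."""
    return abs(allocation.count("T") - allocation.count("C"))

# Simulate many small trials to see how often the groups end up badly unbalanced.
trials = [simple_randomisation(20, seed=s) for s in range(10_000)]
big_imbalance = sum(imbalance(a) >= 6 for a in trials) / len(trials)
print(f"Share of 20-subject trials off by 6 or more: {big_imbalance:.1%}")
```

Perfectly random, but for a 20-subject trial that is a lot of wasted statistical power.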

Conversely, an allocation procedure that alternates group assignments guarantees the groups are perfectly balanced (or only differ by 1, if the total number of participants is odd), but comes at the cost of everyone knowing that the first, third, and fifth participants are in one group while the second, fourth, and sixth are in the other. This would make it very easy to work out to which group each participant was assigned based on when they joined the study, with all the associated problems we described earlier.

Validity and Efficiency

Validity and efficiency are aspects of a clinical trial in which randomisation plays no small part. As with any statistical process, we want to ensure that our trial is valid, meaning it provides correct statistical inference. Particularly important in this case is the prevention of Type I errors, since a high false positive rate could make a treatment appear more effective than it is. Some randomisation procedures can, if implemented incorrectly, lead to an increased false positive rate.

Efficiency, on the other hand, is the statistical power of the trial. An efficient trial is able to effectively tell whether a treatment made any difference to patient outcomes, and to quantify that effect if it exists. Balance feeds into this: simple clinical trials are often most efficient when perfectly balanced, though this can vary with more complex trial designs.

Ok, but how do you do a randomisation?

Before we dive into the specifics of a few randomisation methods, it is worth mentioning that study group allocation doesn’t usually take the form of recruiting all the participants the study will ever need and then assigning them to a group, like choosing football teams on the playground. Instead, recruitment and randomisation are ongoing processes where each new subject is assigned a group when they join, rather than discrete events which take place only once per study. The difference is subtle, but it influences the optimisation of randomisation methods.

So, we have a patient (or, more realistically, their information) in front of us. How do we determine whether to assign them to the treatment group or the control group? For now, we’re going to assume that we want a 1:1 allocation ratio once randomisation is complete.

The simplest and most obvious choice is to simply flip a coin: heads, treatment; tails, control. The obvious solution, however, is often not the best solution. As we mentioned earlier, this method—while completely random and, therefore, immune from selection bias—can result in large imbalances in the size of the two groups, especially if the overall sample size is small.

There are a few coin-like solutions which give, if not a perfect balance, then at least a better balance in most situations. One such method is to simply use an “unfair coin”, that is, one which does not always give heads and tails the same probability. This unfairness is dynamic: for example, in Efron’s Biased Coin Design, the “coin” is fair (p(heads)=p(tails)=50%) until the groups become unbalanced, at which point the probability of assignment to the underrepresented group is increased by a fixed amount. This doesn’t guarantee a perfect balance, but it makes one more likely.

A simple “coin flip” method of randomisation can lead to large imbalances between the two study groups
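The biased coin design described above can be sketched in a few lines of Python. This is a minimal illustration rather than production randomisation code; the bias of 2/3 is Efron’s classic suggestion, but the method works with any probability above 1/2:

```python
import random

def efron_biased_coin(n, p_bias=2/3, seed=None):
    """Efron's Biased Coin Design: fair coin while the groups are level,
    otherwise favour the underrepresented group with probability p_bias."""
    rng = random.Random(seed)
    allocation = []
    for _ in range(n):
        n_t = allocation.count("T")
        n_c = len(allocation) - n_t
        if n_t == n_c:
            p_treatment = 0.5         # groups level: fair coin
        elif n_t < n_c:
            p_treatment = p_bias      # treatment behind: favour it
        else:
            p_treatment = 1 - p_bias  # control behind: favour it
        allocation.append("T" if rng.random() < p_treatment else "C")
    return allocation

alloc = efron_biased_coin(20, seed=1)
print(alloc, "final imbalance:", abs(alloc.count("T") - alloc.count("C")))
```

Each individual allocation remains random, so no assignment can be predicted with certainty, but large imbalances become much less likely than with a fair coin.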

Two methods which do—at least in theory—guarantee a perfect final balance are the Truncated Binomial Design (TBD) and the random allocation rule (Rand). Let’s assume we want a total study population of N subjects, with N/2 assigned to both treatment and control groups. For TBD, assignments are made by flipping a fair coin until one group has its full allocation of N/2 subjects, at which point all subsequent subjects are allocated predictably to the other group.

For Rand, a sequence of assignments is generated by choosing, at random, a set of N/2 numbers from the first N natural numbers (i.e. 1, 2,…,N-1, N). If, say, 2 is chosen for the treatment group, then the 2nd patient recruited is assigned to that group, and so on until both groups are filled.

The Truncated Binomial Design (TBD) allocates subjects randomly until one group is full. After that, all further subjects are allocated to the underrepresented group.

Using Rand, subjects are assigned according to the order in which they are enrolled in the study, using a pre-randomised list of numbers.
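Both procedures are short enough to sketch side by side. The following is a minimal illustration (function names are our own, and N is assumed even, as in the description above):

```python
import random

def truncated_binomial(n, seed=None):
    """TBD: fair coin flips until one group reaches n//2 subjects,
    after which all remaining subjects go to the other group."""
    rng = random.Random(seed)
    allocation = []
    for _ in range(n):
        n_t = allocation.count("T")
        n_c = allocation.count("C")
        if n_t >= n // 2:
            allocation.append("C")        # treatment full: deterministic
        elif n_c >= n // 2:
            allocation.append("T")        # control full: deterministic
        else:
            allocation.append(rng.choice("TC"))
    return allocation

def random_allocation_rule(n, seed=None):
    """Rand: choose n//2 of the positions 1..n at random for treatment;
    everyone else goes to control."""
    rng = random.Random(seed)
    treatment_positions = set(rng.sample(range(1, n + 1), n // 2))
    return ["T" if i in treatment_positions else "C" for i in range(1, n + 1)]

for method in (truncated_binomial, random_allocation_rule):
    alloc = method(10, seed=7)
    print(method.__name__, alloc, alloc.count("T"), alloc.count("C"))
```

Run either for any even N and the final counts always come out equal, which is exactly the guarantee described above.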

Now, while these methods will, in an ideal world, result in perfect balances at the end of the exercise, the world is sadly too messy a place for this to be guaranteed in reality. Subjects regularly drop out of clinical trials (by choice or otherwise). If our Rand-assigned 2nd patient subsequently drops out, then the treatment group would be one short as there could never be another 2nd patient in the sequence.

Imbalances at intermediate stages can also be problematic if there is time drift in a covariate (the most obvious being patient age) as this can lead to chronological bias. That is, if the population of one study group has been in the trial longer than that of the other, then this could influence outcomes in a way which obscures the effect of the treatment.

To solve this, randomisation methods exist which guarantee balance after every allocation is made. Notable among these is the Permuted Block Design (PBD). In this method, subjects are effectively “pre-randomised” within “blocks” of relatively small size before allocation. Instead of allocating subjects one-by-one, we could instead wait until, say, four are recruited and ensure that two from this block of four are assigned to each group.

The block size can be chosen freely (or, indeed, vary through the randomisation), but we’re going to stick with blocks of 4 here. There are six ways—or permutations—to split a block of 4 evenly between treatment and control groups:

(T, T, C, C),
(T, C, C, T),
(T, C, T, C),
(C, C, T, T),
(C, T, T, C),
(C, T, C, T)

For each block of 4, one of these permutations is randomly selected, and the subjects in that block are assigned accordingly based on when they joined the block. So, if the second permutation on our list is chosen, then the first and fourth people allocated to the block are assigned to the treatment group, and the second and third to the control group.

Here we see the trade-off between balance and randomness: to ensure a perfect intermediate balance, we accept that at least some of our allocations will be predictable. For a block size of two, the second allocation within a block can always be predicted from the first. Larger block sizes leave fewer allocations predictable, but come at the cost of waiting longer to fill each block. Nevertheless, PBD is one of the most frequently used randomisation techniques, and is referenced by regulators including the ICH.

Using Permuted Block Design (PBD), subjects are randomised and assigned in blocks, which ensures the two study groups are balanced at all intermediate stages.
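A PBD with blocks of four can be sketched as follows. This is a minimal illustration assuming, as above, a 1:1 ratio and a fixed block size; the six balanced permutations are generated rather than hard-coded:

```python
import itertools
import random

def permuted_block_design(n, block_size=4, seed=None):
    """PBD: for each block, randomly pick one of the balanced permutations
    and assign the block's subjects in that order."""
    rng = random.Random(seed)
    half = block_size // 2
    # All distinct orderings with exactly half 'T' and half 'C'
    # (six of them for a block of four, as listed above).
    permutations = sorted(set(itertools.permutations("T" * half + "C" * half)))
    allocation = []
    while len(allocation) < n:
        allocation.extend(rng.choice(permutations))
    return allocation[:n]

alloc = permuted_block_design(12, seed=3)
# After every complete block, the groups are exactly balanced.
for end in range(4, 13, 4):
    print(f"after {end} subjects: {alloc[:end].count('T')} T, {alloc[:end].count('C')} C")
```

Checking the counts after each complete block shows the guaranteed intermediate balance; within a block, the imbalance never exceeds half the block size.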

An alternative block-based approach is the family of Maximally Tolerated Imbalance (MTI) procedures, which allow for some imbalance at intermediate stages, but not beyond a certain limit. Essentially, subjects are assigned randomly, except when the imbalance reaches a pre-chosen threshold, at which point an allocation is made predictably to the underrepresented group. This results in fewer predictable allocations at the cost of less well-balanced groups throughout the process.
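One of the simplest MTI procedures, known as the big stick design, can be sketched as below. This is an illustration rather than a validated implementation, with an imbalance threshold of 2 chosen arbitrarily:

```python
import random

def big_stick_design(n, mti=2, seed=None):
    """Big stick design (an MTI procedure): flip a fair coin unless the
    imbalance has hit the threshold, then force the lagging group."""
    rng = random.Random(seed)
    allocation = []
    for _ in range(n):
        diff = allocation.count("T") - allocation.count("C")
        if diff >= mti:
            allocation.append("C")        # too many treatments: force control
        elif diff <= -mti:
            allocation.append("T")        # too many controls: force treatment
        else:
            allocation.append(rng.choice("TC"))
    return allocation

alloc = big_stick_design(30, mti=2, seed=5)
imbalances = [abs(alloc[:i].count("T") - alloc[:i].count("C")) for i in range(1, 31)]
print("max intermediate imbalance:", max(imbalances))
```

However the coin flips fall, the running imbalance can never exceed the chosen threshold, and forced (predictable) allocations only occur at that boundary.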

Beyond Balance

As we’ve elucidated here, randomisation is a key part of modern clinical trial design. It helps to mitigate selection bias, makes the treatment effect easier to detect by ensuring a more even mix of covariates between treatment groups, and can even form an important part of statistical analysis. We have also seen that balance in the size of treatment groups can result in more powerful clinical trials, but that balance can be difficult to achieve without sacrificing randomness to at least some degree. All the methods we’ve outlined here face a trade-off between randomness and balance, and the choice of which to favour will depend strongly on the nature of each clinical trial.

There are also randomisation methods which explicitly choose not to balance the size of their study groups. Balance can be difficult to achieve in some cases, for ethical or cost reasons for example, so study groups can be designed to be of differing sizes. Some trials also randomise with covariate factors explicitly in mind in what is known as stratified randomisation, and still others use adaptive randomisation, where allocation probabilities change dynamically depending on the progress of the study. These will be the subject of our next blog about randomisation, so keep your eyes out for that!

The Quantics team have years of experience in a wide range of statistics for clinical trials, including randomisation, sample size calculations, and study design. Visit our services page to find out more.       

About The Author

Jason joined the marketing team at Quantics in 2022. He holds master's degrees in Theoretical Physics and Science Communication, and has several years of experience in online science communication and blogging.