
The Confidence Interval II: Standard Error & Central Limit Theorem


In part 1 of our series covering the confidence interval, we examined some of the basic statistical concepts associated with a bioassay. In particular, we saw that a property of interest – we considered (and will continue to consider!) the EC50 of a test lot – follows a distribution defined by a mean (\mu) and a standard deviation (\sigma). Note again that we shall assume all EC50s are measured on the log scale, as in part 1.

Whenever we measure the EC50, the result is drawn from this distribution – we are more likely to observe EC50s close to \mu, with the likelihood of observing a value decreasing with distance from \mu in a way determined by \sigma.

We also saw that, as properties of the population, \mu and \sigma are fundamentally unknowable. Instead, we use the mean (\bar{x}) and standard deviation (s_x) of measured samples to infer those properties of the distribution from which the sample is drawn.

Here, we’re going to build on these foundations to think about the connection between the properties of our measured samples and the population. This is crucial, since it underpins the very concept of a confidence interval, the construction of which we’ll examine in full in part 3.

The Standard Error

Let’s consider again our example sample of 10 measured EC50s, which we first saw in part 1:

Sample No 1 2 3 4 5 6 7 8 9 10
EC50 (x) 1.80 2.29 2.07 3.04 2.86 3.03 3.04 2.39 3.96 3.30

We found that the mean of this dataset was \bar{x}=2.78. Now: if we were to repeat this experiment – measure 10 new values of our test lot EC50 – would we expect that the new sample mean would be identical? Of course not – this would be very surprising!

This tells us that the sample mean is variable, just like the individual measured values in the sample. We often refer to the variability of the sample mean as the standard error and denote it as s_{\bar{x}}.

How does the standard error behave? One property which is immediately apparent is that the standard error ought to depend on the sample size. For some intuition as to why this is the case, imagine you are tasked with determining the average height of a class of 20 high school students. If you were to use a small sample size – say two students – you would find that the sample mean would vary a lot depending on which two students you picked. If, conversely, you were to use the biggest possible sample – i.e. all 20 students – you would find that the sample mean doesn’t vary at all since the sample would be the same every time.

For a measurement such as an EC50, you will never be able to form a sample which includes the whole population. Nevertheless, the standard error will still depend on the sample size according to:

    \[s_{\bar{x}}=\frac{s_x}{\sqrt{n}}\]

where n is the sample size and s_x is the sample standard deviation. You will notice that the sample size is square-rooted in the denominator: we won’t go into the gory details of why this is the case here (see this derivation). The important takeaway is that the larger the sample size, the smaller the standard error.
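To make this concrete, here is a minimal Python sketch (assuming numpy is available; the population parameters \mu=3 and \sigma=0.5 match the example used later in this post, and the random seed is arbitrary) showing how the standard error shrinks as the sample size grows:

```python
import numpy as np

def standard_error(sample):
    """Standard error of the mean: sample standard deviation divided by sqrt(n)."""
    sample = np.asarray(sample, dtype=float)
    return sample.std(ddof=1) / np.sqrt(len(sample))  # ddof=1 gives the sample SD, s_x

# Illustrative only: draw increasingly large samples from a hypothetical
# log(EC50) population with mu = 3 and sigma = 0.5, and watch the standard
# error fall roughly like 1 / sqrt(n).
rng = np.random.default_rng(42)
for n in (5, 10, 30, 100):
    sample = rng.normal(loc=3.0, scale=0.5, size=n)
    print(f"n = {n:3d}   standard error = {standard_error(sample):.3f}")
```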

The Central Limit Theorem: A Bridge to the Confidence Interval

Why do we care about the standard error? It turns out that it gives us the bridge we are looking for between the properties of our samples and the properties of the population and, therefore, the confidence interval. This follows from a fundamental statistical result known as the Central Limit Theorem (CLT). The derivation of this theorem is well beyond the scope of this blog, so we’ll just give an overview here. For a more complete picture, this series by the maths YouTuber 3Blue1Brown comes with the Quantics seal of approval!

The CLT tells us that, under a set of assumptions we shall detail in a moment, we can treat a sample mean as if it were drawn from a normal distribution. Specifically, a normal distribution whose mean is the population mean and whose standard deviation is the standard error. Let’s just take a moment to think about how powerful this is: the CLT gives us a direct relationship between the underlying distribution of a population and the properties of a sample drawn from that population.
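In symbols: for a sufficiently large sample of size n drawn from a population with mean \mu and standard deviation \sigma, the CLT says we can treat the sample mean as approximately

    \[\bar{x}\sim N\left(\mu,\frac{\sigma}{\sqrt{n}}\right)\]

where N\left(\mu,\frac{\sigma}{\sqrt{n}}\right) denotes a normal distribution with mean \mu and standard deviation \frac{\sigma}{\sqrt{n}} – the quantity we estimate in practice with the standard error s_{\bar{x}}.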

So, when does the CLT apply? There are two key hoops through which the data must jump:

Independence: Knowledge of the results of one experiment must tell us nothing about the result of any other. A coin flip is independent, for example: flipping a heads on one toss gives you no information about the result of the next toss. Independence can be a sticking point in bioassay when using pseudo-replicates – for more see our blog about pseudo-replicates.

Sample size: To use the CLT, our sample size, n, must be sufficiently large. An oft-cited rule of thumb is that n should be greater than 30, but there is nothing fundamental about this limit. Some cases will converge to a normal distribution with smaller sample sizes, while some will require larger sample sizes to appropriately apply the CLT.

Note that these assumptions say nothing about the underlying population distribution. The EC50 (or, more properly, the log(EC50)) is normally distributed, but this need not be the case to use the CLT. The result of a dice roll, for example, is drawn from a uniform distribution – all six possible results are equally likely.

Nevertheless, if you were to plot a histogram of the means of a large enough series of, say, five dice rolls, the result would closely resemble a normal distribution! A simulation of this is shown in the series of plots below: as the number of simulated rolls increases, the observed distribution of means more closely resembles the normal distribution superimposed in orange on each plot.

[Plots: distributions of simulated dice-roll means for 30, 100, and 500 rolls, each with a normal distribution superimposed in orange]
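If you'd like to run a simulation along these lines yourself, here is a minimal Python sketch (assuming numpy and matplotlib; the choice of five rolls per mean and of 30, 100 and 500 repetitions is illustrative, not necessarily the exact settings behind the plots above):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
rolls_per_mean = 5                     # each "sample" is the mean of five dice rolls
repetitions = [30, 100, 500]           # how many such means to simulate per panel

fig, axes = plt.subplots(1, len(repetitions), figsize=(12, 3), sharey=True)
for ax, reps in zip(axes, repetitions):
    # Simulate `reps` sample means, each the mean of five fair dice rolls
    means = rng.integers(1, 7, size=(reps, rolls_per_mean)).mean(axis=1)
    ax.hist(means, bins=15, density=True)

    # Superimpose the normal distribution predicted by the CLT:
    # a single fair die has mean 3.5 and variance 35/12
    mu, se = 3.5, np.sqrt(35 / 12) / np.sqrt(rolls_per_mean)
    x = np.linspace(1, 6, 200)
    ax.plot(x, np.exp(-(x - mu) ** 2 / (2 * se ** 2)) / (se * np.sqrt(2 * np.pi)),
            color="orange")
    ax.set_title(f"{reps} simulated means")

plt.show()
```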

Interpreting the Standard Error

As an example, let’s consider an expanded sample of n=30 EC50s, still drawn from the same population with \mu=3 and \sigma=0.5:

Sample No 1 2 3 4 5 6 7 8 9 10
EC50 (x) 3.03 2.81 4.16 2.99 2.91 3.44 4.07 3.31 3.22 2.46
Sample No 11 12 13 14 15 16 17 18 19 20
EC50 (x) 2.77 3.29 2.13 3.76 2.69 3.32 3.79 2.79 3.2 2.57
Sample No 21 22 23 24 25 26 27 28 29 30
EC50 (x) 3.59 3.32 3.14 3.14 1.84 2.92 3.33 4.08 3.29 3.09

From this, we can calculate:

Sample Mean: \bar{x}=3.15

Sample Standard Deviation: s_x=0.53

Standard error: s_{\bar{x}}=\frac{s_x}{\sqrt{n}}=\frac{0.53}{\sqrt{30}}\approx 0.10
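If you want to check these numbers for yourself, a short Python snippet (assuming numpy; the values are the 30 EC50s tabulated above) reproduces them:

```python
import numpy as np

# The 30 measured (log-scale) EC50s from the table above
ec50 = np.array([
    3.03, 2.81, 4.16, 2.99, 2.91, 3.44, 4.07, 3.31, 3.22, 2.46,
    2.77, 3.29, 2.13, 3.76, 2.69, 3.32, 3.79, 2.79, 3.20, 2.57,
    3.59, 3.32, 3.14, 3.14, 1.84, 2.92, 3.33, 4.08, 3.29, 3.09,
])

mean = ec50.mean()                # sample mean, x-bar
sd = ec50.std(ddof=1)             # sample standard deviation, s_x
se = sd / np.sqrt(len(ec50))      # standard error, s_x / sqrt(n)
print(f"mean = {mean:.2f}, sd = {sd:.2f}, se = {se:.2f}")
# Expected (up to rounding): mean = 3.15, sd = 0.53, se = 0.10
```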

What does this mean? The standard error plays the same role for the sample mean that the standard deviation plays for individual data points. That is, given the data collected, we can (roughly) interpret the standard error as the average difference between the sample mean and the population mean. In other words, the standard error gives us an estimate of the typical difference between the average result of our experiment and the “right” answer.

In the context of the CLT, the standard error tells us about the width of the distribution of sample means. We will always be more likely to observe sample means which are close to the population mean than those further away, but the scale of this difference will depend on the standard error. If the standard error of a sample is small, it is more likely that its mean will be close to the population mean – the distribution of means will be narrow. Conversely, a sample with a large standard error will have a mean which is less likely to be close to the population mean – the distribution of means will be wider.

This has several intuitive properties. For instance, recall that the larger the sample size, the smaller the standard error. So, the larger our sample size, the closer we can expect our sample mean to be to the right answer. This is among the most basic principles of science: larger sample sizes tend to give more accurate results. Indeed, if we were to somehow get our hands on an infinite sample, we would find that the sample mean would be exactly the population mean, and the standard error would be zero.

Between the standard error and the Central Limit Theorem, we now have a way to connect our measured results from samples to the “true” EC50 we want to measure. As we will show in part 3, a good way to do this is to use a confidence interval. We’ll take a deep dive into that next time, so make sure to subscribe so you don’t miss out!

Determining an appropriate confidence interval based on a dataset is just one of the services Quantics can provide. You can find out more on our bioassay services page.


About the Authors

  • Matthew Stephenson is Director of Statistics at Quantics Biostatistics. He completed his PhD in Statistics in 2019, and was awarded the 2020 Canadian Journal of Statistics Award for his research on leveraging the graphical structure among predictors to improve outcome prediction. Following a stint as Assistant Professor in Statistics at the University of New Brunswick from 2020-2022, he resumed a full-time role at Quantics in 2023.

  • Jason joined the marketing team at Quantics in 2022. He holds master's degrees in Theoretical Physics and Science Communication, and has several years of experience in online science communication and blogging.

