Sample sizes are a source of confusion for many people. On one hand, some wonder how you can get any reliable information from a couple of focus groups or by interviewing 30 people. On the other hand, some people are skeptical that surveying a thousand people can truly represent the attitudes or behavior of the population. We researchers can’t win! 😉
In fact, both of those things are true — 30 interviews can sometimes be enough and 1,000 people can represent the entire population. The recommended sample size can vary dramatically depending on the research method you’re using. Generally speaking, quantitative research (e.g., surveys and experiments) involves more participants and qualitative research (e.g., one-on-one interviews and focus groups) involves fewer participants.
For both quant and qual studies, having more participants will give you more confidence in the results. For surveys, a larger sample size will decrease your margin of error (more on that below). For individual or group interviews, more people will help ensure that you uncover and corroborate all of the important themes that arise.
But there are diminishing returns when it comes to sample size. If you survey a thousand people (randomly), you can be pretty confident in the results. You can survey ten times that many people, but it will only make your research about three times as accurate. Once you get beyond a few focus groups or a few dozen interviews (with a particular target audience), each extra focus group or interview will teach you very little.
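That "three times as accurate" figure follows from the square-root law: a survey's margin of error shrinks in proportion to the square root of the sample size, so ten times the respondents only improves accuracy by a factor of √10. A quick sanity check in Python:

```python
import math

# Margin of error shrinks in proportion to 1/sqrt(n), so multiplying the
# sample size by 10 improves accuracy by a factor of sqrt(10).
improvement = math.sqrt(10)
print(round(improvement, 2))  # → 3.16
```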
This is important to know because the number of participants in your study is usually one of the biggest drivers of cost. You don’t want to go too small and get unreliable results, but you also don’t want to go too big and spend money unnecessarily.
So what are the sample sizes we recommend? Here we go!
When people think of research, most of them think of surveys. Indeed, surveys are a very useful research tool and they are at the core of the services we offer at Moonshot Collaborative. As noted earlier, a bigger sample is almost always better when it comes to surveys. A larger sample gives you more confidence in the survey results and the option to dig further into specific segments. Also as noted above, however, there are diminishing returns.
Regardless of how well-designed it is, every survey based on a sample of the population will be an estimate. The accuracy of that estimate is often described by a term you’ve probably heard of: “margin of error.” The margin of error technically doesn’t apply to all surveys*, but it still gives us a useful tool to help understand the diminishing returns of investing in larger sample sizes for your survey:
As you can see in the table above, if you survey 250 people, your margin of error is about ± 6%. That means if your (probability-based) survey found that 50% of respondents ate plant-based bacon in the past three months, you can be confident that, 95 times out of 100, the "true" proportion of plant-based bacon eaters falls between roughly 44% and 56%.
Do the same study with 2,500 people and a corresponding margin of error of ± 2% and that range would tighten to between 48% and 52%. But the second study would cost a heck of a lot more, and as you can tell from the table, each successive bump in sample size buys a smaller improvement in the margin of error.
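These margins of error come from the standard worst-case formula for a proportion at 95% confidence, MoE = 1.96 × √(0.25 / n), which matches the figures quoted here (about ±6% at n = 250 and ±2% at n = 2,500). A minimal sketch, if you want to compute them yourself:

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """95% margin of error for a proportion p estimated from n respondents.

    Uses the worst case p = 0.5, the convention behind most sample-size
    tables (and the one assumed in the figures above).
    """
    return z * math.sqrt(p * (1 - p) / n)

for n in (250, 500, 1000, 2500):
    print(f"n = {n:>5}: ±{margin_of_error(n):.1%}")
# n =   250: ±6.2%
# n =   500: ±4.4%
# n =  1000: ±3.1%
# n =  2500: ±2.0%
```

Note that this applies only to probability-based samples, as the footnote below explains.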
Here’s what Moonshot Collaborative recommends: Try to get at least 400-500 survey respondents and aim for 800-1,000 respondents to give yourself more confidence in the results. Incidentally, a sample size of 800-1,000 is what we provide in our monthly client survey.
There are a few different scenarios in which you might consider investing in a larger sample size. One example is when you’re making an especially big or expensive business decision, because you may need the extra confidence of having more accurate results. Another is when you’re looking for small differences, either between different segments of your sample or over time for tracking surveys.
Want to see the trade-off between sample size and margin of error for yourself? Try this calculator.
* “Margin of error” technically applies only to probability-based sampling, where every member of the population being studied has a random chance of being selected for the sample. Except for election-related polling, the vast majority of surveys you see are NOT probability-based.
Sample sizes for qualitative research are a whole different ball game. You’re not really analyzing the results statistically, so you don’t need to worry about things like margin of error. Instead, you should think about saturation. That is, how many people do you need to interview before you’ve generally exhausted the research topics of interest and you’re no longer learning anything new?
The answer can be hard to pin down and experts have different opinions on the matter. It also depends on the specific kind of qualitative study you’re undertaking.
Moonshot Collaborative recommends that studies involving one-on-one interviews include at least 25-30 people, but typically not more than 50-60. For focus groups, 3-4 groups are usually sufficient, with each group including 5-8 participants.
There are some cases where investing in more interviewees is beneficial. These might include having a long list of research topics that can’t be covered in a single interview (not recommended) or needing to interview two or more very distinct consumer categories. For focus groups, which we generally recommend only when participant interaction is needed, never rely on just a single group: plan to conduct at least two, and 3-4 if resources allow.