I’m often asked how many participants, or what response rate, is needed for a customer or contact feedback survey. There isn’t one definitive answer. A range of factors needs to be considered when designing a survey, and the number of participants isn’t the only important aspect. In fact, a large group of participants can still be biased if they differ substantially from those who don’t participate, or if they aren’t relevant to your survey. This means you could make incorrect decisions for your business – wasting your resources.

Many factors have been found to influence whether a person invited to a survey will actively participate. Research into why one person participates while another doesn’t is conflicting: what works for one person might not work for another. Considerations include:
- Ability to be reached
- Advance warning of the survey
- Who is conducting the research
- Time
- Incentives
- Interest in the topic & sense of obligation
For me, it is largely about offering the right number and type of hooks to encourage participation, and that is estimated by understanding a bit about your tribe. After the survey, the relevance of those who participated is best established if you already have good information about those invited to participate. This is why a feedback cycle with internal data collection is so important.
Ability to be reached
How up to date is your database? If you don’t have current contact details in your database, then you won’t be reaching as many people as you thought you could. Where are they? Is it best to contact them via email, telephone, paper survey in the post, distribution of a QR code, or social media? There are many other considerations in relation to their ability to be reached, including:
- Email
- Will your email be blocked by a spam filter?
- Don’t include the survey with a newsletter, as it will get lost among the other content and you won’t reach those you’d like to have participate.
- Phone
- Can you get through to them with a direct number?
- If you’re calling a mobile, they may have blocked private numbers.
- The interviewer has a big impact on participation levels.
- General
- Some companies have a policy to not allow staff to participate in surveys.
- If there are sensitive topics then prioritise privacy in the design and in the invitation letter.
Advance warning of the survey
Giving advance warning of the survey and sparking some interest in its importance will improve participation. The survey will then have context for those you’re inviting to participate.
Who is conducting the research
Depending on the content, the method of data collection and the invitee, the options are: conduct the research yourself, or have the survey conducted externally – and then there is a range of options for which organisation that is. There are situations when having the feedback collected externally is better, especially if sensitive information is being collected or the survey is by telephone interview.
Time
How long the survey will take to complete, and whether the participant has the time available, are factors both in deciding to participate and in dropping out during the survey. The higher the desire to participate, the longer the survey can be; however, a survey shouldn’t take more than 20 minutes to complete, and a 5–10 minute survey is preferable. Be honest in the invitation about the length of the survey, otherwise you’ll damage your reputation. Consider times when invitees are available to participate. Think about the impact of school holidays, public or seasonal holidays, a conference that many could attend, peak seasons for that industry, exam times for students, or end-of-year pressures.
Incentives
There is a range of views about incentives and their effect on participation. The decision isn’t just whether to include an incentive but also which type of incentive to include. An incentive doesn’t guarantee honest answers, nor does it guarantee the participants you need. In fact, incentives might encourage rushed or erroneous answers from people who just want to reach the end of the survey to collect the incentive. Incentives also have different effects on different groups. The general arguments for incentives are:
- the payment or gift is viewed as a counter to the burden of participation
- the gift stimulates a sense of obligation in the person to respond in kind
Incentives can be contingent upon completion of the survey or non-contingent (provided in advance). The groups that incentives do benefit are those traditionally under-represented: minorities, the lower-educated, or those less interested in the study’s topic.
Interest in the topic & sense of obligation
People more involved in the topic are more likely to participate than those who aren’t. Those not interested in the topic are more influenced by the other factors in the survey invitation (mentioned above) when deciding whether to participate.
It isn’t about just ‘throwing a number in the air’
It is for all these reasons that a market or social researcher will often take a breath when you ask: how many participants do we need to get reliable feedback from a customer/contact feedback survey? A low response rate can be an important finding in itself. If you have considered these factors and participation is still low, then your database may not be engaged, and that highlights other issues in your business that you need to consider.
Just wanted a number?
Then use the table in this post on the Vovici blog. In case you were wondering, I wrote the first line of this post before getting the link to include here, although I’ve referred many people to the post before. The question is asked that often!
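For those who prefer to compute a figure rather than look one up, the number behind tables like that typically comes from the standard margin-of-error formula for a proportion. Below is a minimal sketch in Python, assuming 95% confidence (z = 1.96), the most conservative proportion (50%), and a finite population correction. The function names and the example 25% response rate are my own illustrations, not from this post or the linked table:

```python
import math

def sample_size(population, margin_of_error=0.05, z=1.96, proportion=0.5):
    """Completed responses needed for a given margin of error, using the
    standard formula for a proportion with a finite population correction."""
    # Base sample size for an effectively infinite population
    n0 = (z ** 2) * proportion * (1 - proportion) / (margin_of_error ** 2)
    # Adjust downward for a finite population
    return math.ceil(n0 / (1 + (n0 - 1) / population))

def invitations_needed(required_completes, expected_response_rate):
    """Work backwards from completed responses to invitations,
    given an assumed response rate."""
    return math.ceil(required_completes / expected_response_rate)

# A contact database of 1,000 people, aiming for ±5% at 95% confidence
completes = sample_size(1000)            # 278 completed surveys
invites = invitations_needed(completes, 0.25)  # 1,112 invitations at a 25% response rate
```

Note how the required completes don’t scale with population size: a database of a million needs barely more completes than one of ten thousand. The part you control – and the part this post is really about – is the assumed response rate in the second step.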