

Everything you need to know about quantitative DIY surveys.
A survey will help you:
— test your ideas and hypotheses and select the most promising ones
— prioritise ideas
— choose which options to implement
— evaluate the results of a product launch or an ad
— define areas for improvement

Define your survey objective

Clearly define what problem you are trying to solve and what decisions need to be made as a result. To do this, determine which stage of the cycle you are currently at: "Idea — Testing — Development — Evaluation". A survey can be an effective research method at all stages, except for the very beginning when you are searching for ideas (observation and interviews can help here).

Define the target audience

The target audience of your product is all potential users who need it, at least in theory, and who are physically able to use it. Read why it's important to consider your target audience broadly.

Describe your target audience in terms of:
— demographics
— geography
— their key needs

Demographics are important when you search for respondents: as you know, preferences are strongly influenced by demographics :). As for geography, when working in different countries you should always test in each country separately, because the market situation varies greatly from country to country. You can determine whether your TA has a need either by asking a direct question ("do you have a need for XXX?") or through behaviour ("did you do XX in the last XX?").

For example, you are making an app for generating recipes depending on what is available in your fridge. The UK is the market you want to research. Then the description of your target audience might look like this:

— Male/Female 20−50 years old.
— UK national representative* sample.
— Those who have been cooking at home for the last month.

*A nationally representative sample means that the population of interest is the entire population of the country and that the sample should reflect this in its structure: e.g. the proportions of men and women match those in the real population, the percentage in each age group or each region matches the national statistics, etc.

If you want to interview your current users, you don't usually need to limit the target audience any further. However, depending on your objectives you may want to select particular users. For example, you may want to interview your "heavy" users specifically to find out who they are exactly and clarify their profile and preferences. Or vice versa, you may want to talk to your "lapsed" users to understand why they stopped using your product.

Define the decision criteria that will shape your main survey questions

A survey is about numbers, and the decisions you make will also have to be based on numbers. To do that, you need to know your decision criteria in terms of those numbers.

Below are some examples of criteria and relevant questions, depending on the problem you need to solve:
(*) The threshold values of the indicators are given as an example. For indicators such as "likability" or NPS, there are market norms — benchmarks of "what is good" based on a large number of cases.

How to write a great questionnaire

Start your survey with the screening questions. Their job is to select only those who fit the description of your target audience. Next, ask the main questions, using the table above. Also ask questions that will help you explain the quantitative figures (for example, open-ended "why?" questions).

General recommendations:
- Start with general questions and then move on to more specific ones.

- Be straightforward and clear with your wording. Articulate your questions in a way that anyone can understand. Don't use terminology or slang. Test the questionnaire on friends who are not involved in marketing or market research.

- Stay neutral. Check that the wording of the questions is not biased and does not force a point of view on the respondent. For example, this question is likely to prompt a biased answer: "Many people have already donated to charity, have you donated in the past month?"

- Ask only ONE question at a time. Avoid asking 2-in-1, for example: "Do you like your job and do you think it pays well?"

- For multiple-choice questions, make sure that there is a suitable answer for each person. If you are unsure that the list covers all the options, add "other" at the end.

- Questions that help you evaluate something, e.g. "how clear" (… how likable, simple, fast, friendly, likely …), are best asked as closed questions with a five-point scale, for example:
How clear is this idea?
— Very clear
— Rather clear
— Partly clear, partly not very clear
— Not very clear
— Not clear at all
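Once the answers are collected, a scale like this is usually summarised as a single percentage. Below is a minimal sketch with hypothetical answers; the "top-2-box" share (the two most positive options) is one common convention, not the only one:

```python
from collections import Counter

# Hypothetical answers to "How clear is this idea?" on the five-point scale above
answers = [
    "Very clear", "Rather clear", "Very clear", "Not very clear",
    "Rather clear", "Partly clear, partly not very clear", "Very clear",
    "Rather clear", "Not clear at all", "Rather clear",
]

counts = Counter(answers)
n = len(answers)

# "Top-2-box" score: the share of respondents choosing one of the
# two most positive options on the scale.
top2 = (counts["Very clear"] + counts["Rather clear"]) / n
print(f"Top-2-box clarity: {top2:.0%}")  # 7 of 10 respondents -> 70%
```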

Sample size

How many people need to be interviewed in order to draw reliable conclusions? In simple terms, the sample size depends on the required accuracy, i.e. your tolerance for error, the acceptable probability of drawing the wrong conclusion, and the dispersion of the variable you are evaluating.

For most practical problems, a sample of around 200 interviews works well. Let's explain with an example.

Let's say you need to estimate how many people currently work from home in London. If you interview 200 people, the sampling error will be between ±4% and ±7%, depending on the dispersion. Let's assume that in your survey you found that 10% of Londoners work remotely. Taking the sampling error into consideration, you can assume that the real proportion of Londoners working from home is in the range between 6% and 14%.

You conducted another survey in which 50% of respondents from Hamburg said that they cook at home at least once a week. You can argue that the proportion of Hamburg residents cooking at home ranges from 43% to 57%. This level of accuracy is sufficient for most practical problems. If you need more precision to solve your particular problem, you will have to increase the sample size. Keep in mind, however, that to reduce the sampling error by half (to ±2% … ±3.5%), you will have to increase the sample five-fold (to 1,000 interviews).
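The figures above follow from the standard margin-of-error formula for a proportion, z·√(p(1−p)/n). A quick sketch to reproduce them (the 1.96 multiplier assumes a 95% confidence level):

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% sampling error for a proportion p measured on n interviews."""
    return z * math.sqrt(p * (1 - p) / n)

# With 200 interviews the error ranges from about +-4% (for shares near 10%)
# to about +-7% (worst case, shares near 50%), as in the examples above.
print(f"n=200, p=0.10: +-{margin_of_error(0.10, 200):.1%}")    # ~ +-4.2%
print(f"n=200, p=0.50: +-{margin_of_error(0.50, 200):.1%}")    # ~ +-6.9%
# Quintupling the sample shrinks the error by a factor of sqrt(5) ~ 2.2:
print(f"n=1000, p=0.50: +-{margin_of_error(0.50, 1000):.1%}")  # ~ +-3.1%
```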

If you have to choose between two versions of the same product or feature, two landing pages, or two ads, you can do that in two ways. The first: recruit 200 people, show each person both options, and ask which one they like best. If one option is preferred by 55% or more of the respondents, you can conclude that your target audience prefers that option. Otherwise, assume that people do not see the difference. This is a direct comparison test.

The second way is to show each person only one option and ask questions such as "do you like it?" or "will you use it?". Interview 400 people: 200 will evaluate the first option and the other 200 will evaluate the second. This is a so-called 'monadic test'. Compare the percentage of those who answered "like" for the option they viewed. If the gap is more than 10 percentage points, you can conclude that one option was liked by the target audience more than the other. The larger the gap, the greater the difference in preference. This method helps to eliminate biases and provides a 'cleaner', fairer evaluation of the different options you might have in mind.
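The two decision rules can be captured in a few lines. This is just a sketch of the thresholds described above (55% for a direct comparison, a 10-point gap for a monadic test), applied to hypothetical scores:

```python
def pick_winner_direct(share_a: float, share_b: float) -> str:
    """Direct comparison test: each respondent saw both options.
    Declare a winner only if one option is preferred by at least 55%."""
    if share_a >= 0.55:
        return "A"
    if share_b >= 0.55:
        return "B"
    return "no clear preference"

def pick_winner_monadic(like_a: float, like_b: float) -> str:
    """Monadic test: separate cells of respondents rated A and B.
    Declare a winner only if the 'like' gap exceeds 10 percentage points."""
    gap = like_a - like_b
    if gap > 0.10:
        return "A"
    if gap < -0.10:
        return "B"
    return "no clear preference"

print(pick_winner_direct(0.58, 0.42))   # -> A
print(pick_winner_monadic(0.62, 0.55))  # 7-point gap -> no clear preference
```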

Quotas and representativeness

The idea behind quotas is very simple. Suppose you have two product or advertising ideas. You want to compare them and choose the best one. To do this, you conduct a survey: you show the first idea to a hundred people, the second idea to a hundred other people, and compare the scores. In monadic testing, as you just learned, one person sees only one idea. But what if the first idea was rated only by women, and the second only by men? Or the first only by young people, the second only by older respondents? The first by Londoners, the second by those who live in the countryside? Obviously, such results cannot be compared like for like. So it is crucial that both ideas are shown to people with similar profiles. Similar requirements apply, for example, when conducting A/B tests.

We suggest setting quotas on age, sex and region that reflect nationally representative statistics. To set quotas means to fix the percentage of people with certain characteristics in both of your samples (those who are shown idea 1 and those who are shown idea 2). For example: M — 50%, F — 50%; 20−30 years old — 35%, 31−40 years old — 35%, 41−50 years old — 30%, and so on.
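Translating those percentages into recruitment targets is simple arithmetic. A sketch, assuming the hypothetical quota grid above and a 200-person sample (both the idea-1 and idea-2 cells get the same targets):

```python
# Hypothetical quota grid from the example above, applied to a 200-person sample.
quotas = {
    "sex": {"M": 0.50, "F": 0.50},
    "age": {"20-30": 0.35, "31-40": 0.35, "41-50": 0.30},
}

sample_size = 200
for dimension, shares in quotas.items():
    # Recruitment target per group: sample size times the population share.
    targets = {group: round(sample_size * share) for group, share in shares.items()}
    print(dimension, targets)
# sex {'M': 100, 'F': 100}
# age {'20-30': 70, '31-40': 70, '41-50': 60}
```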

Quotas are also used to ensure the representativeness of a survey. This is necessary when, based on the results of the study, you want to measure what percentage of the target audience has a particular characteristic. Examples include a political poll to predict the outcome of an election, or a study of the habits and usage patterns of your target audience.

Conducting a survey

When everything else is ready, you need to think about where to look for research participants and how to recruit them. You can read about it here (coming soon). The questionnaire itself can be programmed using one of the available specialised services. With fastuna.com you can either choose one of our standardised solutions to test your product or advertising ideas, or use Fastuna DIY to program your own questionnaire.

Analyse the data

At this stage you need to look at the percentages and analyse the distribution of answers. Keep the research problem in mind: your analysis should be focused on finding information in accordance with the decision criteria formulated above. Below are a few examples of typical cases:

1) You need to estimate the proportion of people with a certain characteristic. In the example above, you interviewed 200 London residents and found that 10% of them work remotely. You may need this proportion to calculate the potential size of your market. Keep in mind that due to random sampling error, the real share can range from 6% to 14%. Use these boundaries to estimate the size of the market.

2) You need to understand people's preferences and select the best option. In our example, you had two ideas and tried two approaches: 1) each person was shown both ideas and asked which one they liked best; 2) each person was shown only one idea and asked if they liked it. In the first case, select the idea that scored more than 55%. In the second case, choose the idea that got at least 10 percentage points more likes than the other. If the difference is smaller, choose either idea, or choose none. It is also possible that none of the ideas was good enough (see the next point).

3) You need to evaluate the potential of an idea. Not only do you need to select the best idea from several, but you also need to evaluate its potential. You need to decide whether it's worth spending time working on it at all and pushing it further down the innovation funnel. You conducted a survey and asked two questions: a) Is the description clear? b) Will you use this product/service? If at least one of these parameters scores well below 70%, consider going back to the previous stages and refining your idea.
Think of "I'll use it" (so-called trial interest) as a measure of how likely it is that your idea will be successful. If 70% of people said they would use it, it does not mean that 70% of the target audience will buy your product. Real sales will be influenced by many other factors: the specific implementation of the idea, your marketing, distribution, price, the presence and activity of direct and indirect competitors, and so on. At this stage, your task is to make sure that your idea is appealing, i.e. clear and interesting to a large share of your target audience. Use 70% as a benchmark. This is a market average, based on a large number of tests of other ideas and verified by a large number of real product launches.

4) You need to measure customer and user satisfaction. There are many different satisfaction metrics; NPS and CSAT are the two most common. What you need to pay attention to:

- Absolute values of the metric. In theory, they show how happy your customers or users are. One caveat is that these metrics are always relative to a specific brand, company or situation. Without context, it is hard to say whether NPS = 30 is good or bad, or whether 70% of people saying they are completely satisfied with your service is a good result. So always use benchmarks when reporting such outputs. This can be either a "market norm" (publicly available for some markets) or a comparison with your competitors (for this you need to research the customers of your competition).

- Trends. Satisfaction measurements should be carried out regularly. Satisfaction metrics are good predictors of future problems: if satisfaction sees a gradual downward trend, financial consequences will inevitably follow.
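For reference, NPS itself is computed from the standard 0–10 "how likely are you to recommend us?" question: the share of promoters (9–10) minus the share of detractors (0–6). A minimal sketch with hypothetical scores:

```python
def nps(scores: list[int]) -> int:
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6),
    on the standard 0-10 'likelihood to recommend' scale."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Hypothetical wave of 10 answers: 5 promoters, 2 passives, 3 detractors
print(nps([10, 9, 9, 10, 9, 8, 7, 6, 5, 3]))  # -> 20
```

Tracking this number wave over wave is what gives you the trend discussed above; a single reading only becomes meaningful against a benchmark.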

Innovate with confidence
Validate product and marketing decisions with real people in 24h using our hassle-free testing tool