Choosing an Idea: Which Approach to Use

Aleksandra Taranova
Client Success Manager at Fastuna
In new product development, the hardest moment often isn’t generating ideas — it’s choosing which ones deserve investment.

Most teams face the same reality: you have multiple promising routes, a limited budget, and pressure to make a decision fast. The right research approach depends on what you need from the test: a quick winner, a defensible KPI read, learning to refine the idea, or all of the above.
Fastuna solutions cover the full cycle of NPD — from early insights and raw ideas to finished concepts and marketing materials. Methodologically, there are four practical ways to evaluate and select ideas:
  • Direct comparison.
  • Monadic testing.
  • Sequential monadic.
  • Combined approach.
Let’s break down what each one is best for — and how to choose.
1. Direct comparison: when you need a quick shortlist
What it is
Direct comparison means respondents see multiple options and evaluate them side-by-side, typically by ranking them or picking the best one.
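For intuition, here is a minimal sketch of how side-by-side rankings can be rolled up into a shortlist using a simple points (Borda-style) tally. The idea names, sample data, and scoring scheme are illustrative assumptions, not how any specific tool computes its results.

```python
# Illustrative only: a simple Borda-style tally for turning ranked choices
# into a shortlist. Idea names and weighting are assumptions, not any
# particular tool's scoring method.
from collections import defaultdict

# Each respondent ranks the ideas from best (first) to worst (last).
rankings = [
    ["idea_C", "idea_A", "idea_B", "idea_D"],
    ["idea_A", "idea_C", "idea_D", "idea_B"],
    ["idea_C", "idea_B", "idea_A", "idea_D"],
]

scores = defaultdict(int)
for ranking in rankings:
    n = len(ranking)
    for position, idea in enumerate(ranking):
        scores[idea] += n - position  # 1st place gets n points, last gets 1

# Sort by total points to get the relative shortlist.
shortlist = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
print(shortlist)  # e.g. [('idea_C', 11), ('idea_A', 9), ('idea_B', 6), ('idea_D', 4)]
```

Note that the output is purely relative: it tells you which idea wins within this set, which is exactly the strength and the limitation described below.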
When it’s the best choice
  • You have many ideas and need to narrow them down fast.
  • You want a clear relative winner (“which one wins vs the others?”).
  • You’re in an early stage and need to build a shortlist for deeper testing.
Strengths (pros)
  • Efficient shortlisting: great for turning up to 10 ideas into a manageable top set.
  • Fast and cost-effective compared to running a monadic test.
  • Easy to communicate internally: results answer a simple question — what wins.
Trade-offs (cons)
  • Context effects can influence choice: what an option is shown next to matters.
  • Often less diagnostic: it tells you what wins, but not always why or how to improve it.
  • Normative benchmarking is typically not applicable: direct comparison shows which option wins within your set, but it doesn’t reliably indicate how good or bad an option is versus the broader market.
Fastuna solutions using this approach:
2. Monadic testing: when you need KPI-based decisions
What it is
In monadic testing, each option is evaluated independently: respondents see one idea (or one stimulus) and rate it on a set of KPIs (e.g., appeal, relevance, uniqueness, clarity, purchase intent, brand relevance, trust).
When it’s the best choice
  • You need a go/no-go read with strong methodological credibility.
  • You want to compare ideas fairly with less interference from side-by-side context.
  • You’re closer to execution and need to understand how an idea performs on its own.
Strengths (pros)
  • Pure measurement: scores aren’t biased by what an idea happens to be shown next to.
  • Absolute KPIs (“is this good enough?”), not only relative ones (“better than the others”).
  • More realistic for many real-world exposures (people often encounter one design/message at a time).
Trade-offs (cons)
  • Testing many alternatives monadically is heavier on sample and cost (see the arithmetic sketch after this list).
  • It doesn’t automatically create a “winner moment”: several options can come back with similar scores, so you may still face a choice.
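To make the sample trade-off concrete, here is a small back-of-the-envelope sketch. The cell sizes are assumptions chosen for illustration, not recommended norms.

```python
# Illustrative arithmetic only: cell sizes below are assumptions, not norms.
# It shows why testing many options monadically gets heavy on sample
# compared with a single side-by-side comparison cell.
ideas = 6
respondents_per_monadic_cell = 150   # assumed cell size for a readable KPI
comparison_cell = 200                # assumed single cell that ranks all ideas

monadic_total = ideas * respondents_per_monadic_cell
print(f"Monadic: {monadic_total} interviews ({ideas} cells x {respondents_per_monadic_cell})")
print(f"Direct comparison: {comparison_cell} interviews (one cell ranks all {ideas} ideas)")
# Monadic: 900 interviews (6 cells x 150)
# Direct comparison: 200 interviews (one cell ranks all 6 ideas)
```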
Fastuna solutions using this approach:
3. Sequential monadic: when learning matters as much as choosing
What it is
A sequential monadic test is a method where participants evaluate multiple stimuli one after another in a controlled sequence.

This approach lets you compare your materials without order bias (using the monadic, first-exposure scores) while also building a larger base for subgroup analysis (on the consolidated data across all exposures).

Alternatively, it lets you test a large number of materials with a smaller sample: you skip the monadic (first-exposure) analysis and rely on the consolidated data across many stimuli, which keeps costs down.
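A minimal sketch of that arithmetic, assuming a balanced rotation in which each respondent sees a subset of the stimuli; all numbers are illustrative, not fieldwork norms.

```python
# A minimal sketch of the sample arithmetic behind a sequential monadic
# design. All numbers are assumptions for illustration, not fieldwork norms.
total_respondents = 300
stimuli = 6
seen_per_respondent = 3   # each respondent evaluates 3 of the 6, in rotated order

# With a balanced rotation, exposures spread evenly across stimuli:
total_evaluations_per_stimulus = total_respondents * seen_per_respondent // stimuli
first_exposures_per_stimulus = total_respondents // stimuli  # pure "monadic" reads

print(f"Consolidated base per stimulus: {total_evaluations_per_stimulus}")      # 150
print(f"First-exposure (monadic) base per stimulus: {first_exposures_per_stimulus}")  # 50
```

The same fieldwork therefore yields both a clean monadic read (smaller base) and a larger consolidated base; which of the two you lean on depends on whether learning or screening is the priority.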
Strengths (pros)
  • More diagnostic: reveals why something works or fails (since you can ask multiple questions about every stimulus).
  • Cost-effective.
  • Efficient for screening many options without overloading respondents.
Trade-offs (cons)
  • Interview length can become an issue as the number of stimuli grows.
  • You always have to balance data quality against the number of KPIs per stimulus: keeping the interview manageable usually means less profound diagnostics per stimulus.
Fastuna solutions using this approach:
4. Combined approach: when you need a winner and a defensible story
What it is
A combined approach blends two layers in one survey:
  • monadic evaluation of each option (KPIs + diagnostics),
  • and a direct comparison that forces a choice (e.g., pick the best / strongest).
It’s used when teams feel they still need to choose even if monadic data shows parity.
Pros & cons
  • You get all the benefits of pure monadic measurement, plus a forced choice that names a winner.
  • But it still requires at least 100 interviews per stimulus, so the total sample scales with the number of options (see the arithmetic sketch below).
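A quick sketch of what that floor implies, assuming four concepts in the test; the concept count is an illustrative assumption.

```python
# Illustrative sketch of what "at least 100 interviews per stimulus" implies
# for a combined design; the concept count is an assumption.
concepts = 4
min_per_stimulus = 100

monadic_layer = concepts * min_per_stimulus   # each concept needs its own monadic cell
print(f"Monadic layer: {monadic_layer} interviews ({concepts} x {min_per_stimulus})")
# Monadic layer: 400 interviews (4 x 100)
# In one common design, the direct-comparison layer is asked within the same
# survey after the monadic block, so it adds questions, not extra respondents.
```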
Fastuna solutions using this approach: