Pricing is the NPD decision with the highest commercial impact — and the one most often made on gut feel. Most F&B brands set their retail price based on a combination of cost-plus margin, competitor benchmarking, and a category manager’s sense of what the shelf will tolerate.
That approach works until it doesn’t. A price that’s too high kills trial. A price that’s too low leaves margin on the table and signals “budget” in a premium category. And the gap between those two points is often narrower than brands assume.
Traditional price sensitivity research — focus groups, Van Westendorp surveys, Gabor-Granger studies — takes weeks to field and costs tens of thousands of dollars. For a founder-led brand in the $5M–$250M range, that timeline and budget often mean pricing decisions get made without consumer data at all.
There’s a faster way. Discrete choice experiments can isolate price sensitivity in under two hours, with results that are directly actionable for your next retail pitch or range review.
Table of Contents
- Why Focus Groups Fail at Pricing
- The Discrete Choice Approach to Price Testing
- The Isolation Rule: Test Price and Nothing Else
- What Price Sensitivity Data Actually Tells You
- When to Run a Price Sensitivity Test
- Common Mistakes in Price Testing
- How to Set Up Your First Price Test
Why Focus Groups Fail at Pricing
Focus groups are useful for understanding how consumers talk about products, what language resonates, and what emotional associations they hold. They are not useful for pricing.
The problem is structural. In a focus group, when you ask “how much would you pay for this product?” you get stated preference, not revealed preference. Consumers systematically understate what they’d actually pay — partly because they’re anchored by the question format, and partly because there’s no real consequence to their answer. Nobody’s actually spending money.
There are also social dynamics at play. In a group setting, one vocal participant can anchor the entire room. If the first person says “I wouldn’t pay more than three dollars,” everyone else adjusts downward. Your pricing data now reflects one person’s opinion amplified by conformity bias.
Traditional survey approaches like Van Westendorp (“at what price would this be too expensive / too cheap / a bargain / getting expensive?”) are an improvement over focus groups, but they still rely on stated preference and they don’t account for competitive context. A consumer saying your product is “too expensive” at a certain price doesn’t tell you whether they’d still choose it over the alternatives actually available on shelf.
The Discrete Choice Approach to Price Testing
Discrete choice experiments solve the stated-preference problem by forcing a trade-off. Instead of asking “what would you pay?” they present the consumer with the actual product at a specific price and ask: “would you choose this product, or one of these alternatives?”
By varying the price across multiple rounds while holding everything else constant, you build a demand curve — a map of how preference changes as price increases. The result isn’t a single “willingness to pay” number. It’s a curve that shows you exactly where preference starts to drop, how steeply it falls, and where the revenue-maximising point sits.
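As a sketch of how the aggregation works: each choice round records the price shown and which option the consumer picked, and the demand curve is simply the share of rounds your product won at each price. Everything below — the option names (`ours`, `comp_a`, `comp_b`), the prices, and the choices — is hypothetical, for illustration only:

```python
from collections import Counter

# Hypothetical choice rounds: our product shown at one of four test prices
# alongside the same two competitors. Each tuple is (price_shown, option_chosen).
rounds = [
    (4.49, "ours"), (4.49, "ours"), (4.49, "comp_a"),
    (4.99, "ours"), (4.99, "comp_a"), (4.99, "ours"),
    (5.49, "comp_a"), (5.49, "ours"), (5.49, "comp_b"),
    (5.99, "comp_a"), (5.99, "comp_b"), (5.99, "comp_b"),
]

def preference_curve(rounds):
    """Share of choice rounds won by our product at each tested price."""
    shown = Counter(price for price, _ in rounds)
    won = Counter(price for price, choice in rounds if choice == "ours")
    return {price: won[price] / shown[price] for price in sorted(shown)}

curve = preference_curve(rounds)
for price, share in curve.items():
    print(f"${price:.2f}: {share:.0%} preference share")
```

A real experiment would have hundreds of respondents per price point, but the read is the same: the dictionary is the demand curve, and the interesting question is where the share starts to fall and how fast.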
This is the same methodology used by the major consumer research firms. The difference is speed and cost. With synthetic consumer validation, you can run a price sensitivity experiment with 500+ census-representative consumers in under two hours — no recruitment, no scheduling, no fieldwork.
The Isolation Rule: Test Price and Nothing Else
This is the single most important principle in price testing, and it’s the one most commonly violated: when testing price, hold every other variable constant.
That means:
- Same product description across all price points. Same ingredients, same weight, same format.
- Same front-of-pack claims. Don’t change the protein claim or the ingredient transparency statement between price variants.
- Same packaging. Don’t show a premium-looking pack at the higher price and a basic one at the lower price.
- Same competitive set. If you’re testing against two competitor products, keep them identical across rounds.
If you change the claims AND the price at the same time, you cannot tell whether a shift in preference was driven by the price change or the claim change. The data becomes uninterpretable.
We see this anti-pattern frequently in traditional concept testing: a survey that tests three “tiers” — a basic version at a low price, a standard version at a mid price, and a premium version at a high price. This doesn’t test price sensitivity. It tests three different products. The price variable is confounded with the product variable.
To test price, you need one product — fully defined, claims locked — at three to four different price points. Nothing else changes.
What Price Sensitivity Data Actually Tells You
A well-designed price test gives you four things:
1. The Preference Curve
At each price point, what percentage of consumers would choose your product over the alternatives? This is your demand curve. It always slopes downward — higher price means lower preference — but the shape of that curve is what matters. A gentle slope means you have pricing power. A steep drop at a specific point means you’ve found the price ceiling.
2. The Revenue-Maximising Point
Revenue is price multiplied by volume. The highest price isn’t always the best price — if a 15% price increase causes a 30% drop in preference, you’ve lost revenue. The data lets you model this trade-off and find the point where price × preference share is maximised.
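The trade-off can be modelled in a few lines once you have the curve. The preference shares below are illustrative placeholders, not results from a real test:

```python
# Hypothetical preference curve from a price test: price -> share of
# consumers choosing the product. All numbers are made up for illustration.
curve = {4.49: 0.42, 4.99: 0.39, 5.49: 0.31, 5.99: 0.18}

# Revenue index per price point: price x preference share. The absolute
# value is arbitrary; what matters is which price maximises it.
revenue = {price: price * share for price, share in curve.items()}
best_price = max(revenue, key=revenue.get)

for price, r in sorted(revenue.items()):
    flag = "  <- revenue-maximising" if price == best_price else ""
    print(f"${price:.2f}: index {r:.2f}{flag}")
```

In this made-up example the lowest price wins on preference, but the mid price wins on revenue — exactly the distinction the section describes.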
3. Segment-Level Differences
Price sensitivity varies by consumer segment. Health-forward consumers may tolerate a premium for organic certification that mainstream shoppers won’t. Parents buying snack bars for kids may be more price-sensitive than adults buying for themselves. Synthetic consumer experiments can break results down by demographic, attitudinal, and behavioural segments — revealing whether your pricing strategy should vary by channel or audience.
4. Competitive Vulnerability
When your price goes up, where do consumers go? If they switch to a specific competitor, that tells you who your real competitive threat is. If they switch to “none of these” (opting out entirely), that suggests the category has a price ceiling, not just your product.
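A minimal sketch of that switching read-out, using entirely made-up response counts at the highest tested price:

```python
from collections import Counter

# Hypothetical choices at the highest tested price: where did consumers go
# once our product moved out of their acceptable range? Counts are invented.
choices_at_high_price = (
    ["ours"] * 18 + ["comp_a"] * 34 + ["comp_b"] * 9 + ["none"] * 39
)

switch = Counter(choices_at_high_price)
total = sum(switch.values())
for option, n in switch.most_common():
    print(f"{option}: {n / total:.0%}")

# Reading this: a large "comp_a" share would point at the real competitive
# threat; a large "none" share suggests a category-wide price ceiling.
```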
When to Run a Price Sensitivity Test
Price testing is valuable at three specific points in the NPD process:
| NPD Stage | Pricing Question | Why It Matters |
|---|---|---|
| Stage 1 — Concept | Can this product support a premium price in its category? | Determines whether the product concept is financially viable before formulation begins |
| Stage 3 — Development | Does a formulation change (e.g., switching to organic ingredients) justify a price increase? | Quantifies the consumer-perceived value of a specific input cost increase |
| Stage 6 — Post-Launch | Can we raise the price without losing velocity? How much will a promotion drive incremental volume? | Informs trade discussions and range review defence with consumer data |
The most overlooked opportunity is Stage 3. When R&D comes back and says "switching to organic pea protein will raise COGS", the question isn't whether the ingredient is better. The question is whether consumers will pay the difference. That's a testable question, and you can have the answer in two hours.
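That question reduces to simple margin arithmetic once you have preference shares at both prices. All figures below — prices, COGS, and shares — are hypothetical, purely to show the shape of the calculation:

```python
# Hypothetical Stage 3 trade-off: the organic reformulation adds $0.40 to
# COGS and is repriced $0.50 higher. Shares come from a price test.
current = {"price": 4.99, "cogs": 2.10, "share": 0.39}   # baseline
organic = {"price": 5.49, "cogs": 2.50, "share": 0.35}   # reformulated

def margin_index(p):
    """Unit margin weighted by preference share (a rough profit proxy)."""
    return (p["price"] - p["cogs"]) * p["share"]

print(f"current: {margin_index(current):.3f}")
print(f"organic: {margin_index(organic):.3f}")
```

In this invented scenario the reformulation doesn't quite pay for itself: the extra margin per unit is eaten by the preference drop. With real test data the same two-line comparison answers the R&D question directly.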
Common Mistakes in Price Testing
| Mistake | Why It’s Wrong | Fix |
|---|---|---|
| Testing price alongside claims or packaging | Can’t isolate what drove the preference shift | Fix everything except price. One variable only. |
| Using abstract tiers (“premium vs value”) | Not actionable — you need a specific number for your retailer pitch | Use real retail prices anchored around your current or intended price |
| Testing only two price points | Two points make a line, not a curve. You can’t see where the drop happens. | Test three to four price points in 10–15% increments |
| Ignoring the competitive set | Consumers don’t evaluate price in isolation — they compare to what else is on shelf | Include 1–2 real competitor products at their actual retail prices |
| Asking “what would you pay?” directly | Stated preference ≠ revealed preference. Consumers understate WTP. | Use discrete choice (forced trade-off) format instead |
How to Set Up Your First Price Test
- Define your product completely. Write the full product description — ingredients, weight, claims, packaging format. This is the constant. It does not change between price variants.
- Lock your front-of-pack claims. If you haven’t validated your claims yet, do that first (see our guide to front-of-pack claims testing). Price testing assumes the product positioning is already decided.
- Choose three to four price points. Anchor around your current or intended retail price. Test incrementally — not wildly different prices that no consumer would believe. The goal is to find the slope of the curve, not the extremes.
- Include real competitors. Add one to two competitor products at their actual shelf prices. This gives consumers a realistic competitive context and ensures your results reflect real-world trade-offs.
- Run the experiment. 500 consumers is the minimum for stable results across 3–4 price points. With synthetic consumer validation, this takes under two hours.
- Read the curve, not just the winner. The lowest price will always “win” on preference. The insight is in the shape of the decline — where it steepens, where segments diverge, and where revenue peaks.
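The price-point step above can be sketched as a small helper that generates candidates around an anchor price. The ~12% step (inside the 10–15% guideline) and the .x9 rounding are assumptions for illustration, not part of the method:

```python
def price_points(anchor, step=0.12, n=4):
    """Generate n candidate retail prices around an anchor price,
    stepping ~12% apart, starting one step below the anchor."""
    prices = [anchor * (1 + step) ** (i - 1) for i in range(n)]
    # Round to retailer-friendly .x9 endings (a common shelf convention,
    # assumed here for illustration).
    return [round(round(p, 1) - 0.01, 2) for p in prices]

print(price_points(4.99))
```

For a $4.99 anchor this yields one point below, the anchor itself, and two above — close enough in spacing that every price is believable on shelf, which is the point of testing increments rather than extremes.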
Test Your Pricing Before Your Next Range Review
If you’re heading into a retailer pitch or preparing for a range review, bring pricing data — not pricing assumptions. Saucery runs price sensitivity experiments with 500+ census-backed synthetic consumers, typically in under two hours. Configure your product, set your price points, and see exactly where the demand curve bends. Start a free experiment.
