By Andrew Mac, Founder of Saucery — I’ve run pricing experiments across snack bars, functional beverages, plant-based dairy, and meal replacements. The pattern is always the same: brands overestimate how price-sensitive their consumers are on the low end and underestimate where the ceiling actually sits. A well-designed price test takes two hours and can be worth six figures in margin you’d otherwise leave on the table.
Pricing is the NPD decision with the highest commercial impact — and the one most often made on gut feel. Most F&B brands set their retail price based on a combination of cost-plus margin, competitor benchmarking, and a category manager’s sense of what the shelf will tolerate.
That approach works until it doesn’t. A price that’s too high kills trial — consumers see it on shelf, compare to the products around it, and move on. A price that’s too low leaves margin on the table and signals “budget” in a premium category, undermining the positioning you spent months building. And the gap between those two points is often narrower than brands assume. In some categories, the difference between optimal pricing and margin erosion is as little as fifty cents per unit.
Traditional price sensitivity research — focus groups, Van Westendorp surveys, Gabor-Granger studies — takes weeks to field and costs tens of thousands of dollars. For a founder-led brand in the $5M–$250M range, that timeline and budget often mean pricing decisions get made without consumer data at all. Our market research cost benchmarks show that a single traditional conjoint study runs $15,000–$50,000.
There’s a faster way. Discrete choice experiments can isolate price sensitivity in under two hours, with results that are directly actionable for your next retail pitch or range review.
Table of Contents
- Why Focus Groups Fail at Pricing
- The Discrete Choice Approach to Price Testing
- Why Discrete Choice Outperforms Other Pricing Methodologies
- The Isolation Rule: Test Price and Nothing Else
- What Price Sensitivity Data Actually Tells You
- Real-World Pricing Lessons from F&B Categories
- When to Run a Price Sensitivity Test
- Common Mistakes in Price Testing
- The Psychology Behind F&B Pricing Decisions
- How Geography Shapes Price Sensitivity in F&B
- How to Set Up Your First Price Test
- How AI Search Is Changing Price Discovery and Comparison
- Frequently Asked Questions
Why Focus Groups Fail at Pricing
Focus groups are useful for understanding how consumers talk about products, what language resonates, and what emotional associations they hold. They are not useful for pricing.
The problem is structural. In a focus group, when you ask “how much would you pay for this product?” you get stated preference, not revealed preference. Consumers systematically understate what they’d actually pay — partly because they’re anchored by the question format, and partly because there’s no real consequence to their answer. Nobody’s actually spending money.
There are also social dynamics at play. In a group setting, one vocal participant can anchor the entire room. If the first person says “I wouldn’t pay more than three dollars,” everyone else adjusts downward. Your pricing data now reflects one person’s opinion amplified by conformity bias.
Traditional survey approaches like Van Westendorp (“at what price would this be too expensive / too cheap / a bargain / getting expensive?”) are an improvement over focus groups, but they still rely on stated preference and they don’t account for competitive context. A consumer saying your product is “too expensive” at a certain price doesn’t tell you whether they’d still choose it over the alternatives actually available on shelf.
Research in the Journal of Consumer Research has documented this gap repeatedly: stated willingness-to-pay correlates poorly with actual purchase behaviour. The methodology matters more than the sample size — a principle that applies equally to concept testing questions and pricing studies.
The Discrete Choice Approach to Price Testing
Discrete choice experiments solve the stated-preference problem by forcing a trade-off. Instead of asking “what would you pay?” they present the consumer with the actual product at a specific price and ask: “would you choose this product, or one of these alternatives?”
By varying the price across multiple rounds while holding everything else constant, you build a demand curve — a map of how preference changes as price increases. The result isn’t a single “willingness to pay” number. It’s a curve that shows you exactly where preference starts to drop, how steeply it falls, and where the revenue-maximising point sits.
This is the same methodology used by major consumer research firms like NielsenIQ, Ipsos, and Kantar for their conjoint pricing studies. The difference is speed and cost. With pre-launch testing using modelled shoppers, you can run a price sensitivity experiment with 500+ census-representative consumers in under two hours — no recruitment, no scheduling, no fieldwork.
The academic foundation for this approach is well-established. Daniel McFadden won the Nobel Prize in Economics in 2000 for developing the statistical framework behind discrete choice modelling. The methodology has been used for decades in transport economics, healthcare, and environmental policy — and it translates directly to consumer product pricing.
Why Discrete Choice Outperforms Other Pricing Methodologies
It is worth pausing on why discrete choice modelling produces more reliable pricing data than the alternatives — because the methodological distinction is not just academic. A 2019 meta-analysis published in the Journal of Marketing Research found that discrete choice experiments predicted actual market shares within 5–8 percentage points, while direct elicitation methods (asking “what would you pay?”) deviated by 20–30 points. The gap is not marginal. It is the difference between a pricing recommendation your CFO can act on and one that falls apart the moment product hits shelf.
The statistical engine behind discrete choice — multinomial logit and its extensions — estimates a utility function for each attribute of the product. When price is the only variable changing, the model isolates a price coefficient that tells you exactly how much each dollar of price increase costs you in preference share. This is the same analytical framework used in pre-launch testing for food and beverage innovation, where the goal is to decompose a complex purchase decision into its constituent trade-offs. The output is not a single “optimal price” but a continuous function that maps price to predicted market share — giving you the flexibility to model revenue at any price point, not just the ones you explicitly tested.
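To make the mechanics concrete, here is a minimal sketch in Python of how a multinomial logit model converts option utilities into preference shares. The coefficients below are illustrative inventions, not estimates from any real study; in practice they are fitted to observed choice data.

```python
import math

def logit_shares(utilities):
    """Convert option utilities into choice-probability shares (multinomial logit)."""
    exps = [math.exp(u) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

# Illustrative, invented coefficients -- NOT estimates from a real study.
# BETA_PRICE < 0: each extra dollar of price reduces an option's utility.
BETA_PRICE = -0.8

def utility(base_appeal, price):
    return base_appeal + BETA_PRICE * price

# Your product at $4.49 against two competitors at fixed shelf prices,
# plus a "none of these" option normalised to utility 0.
shares = logit_shares([
    utility(3.2, 4.49),  # your product
    utility(2.9, 3.99),  # competitor A
    utility(3.0, 4.29),  # competitor B
    0.0,                 # none of these
])
print([round(s, 3) for s in shares])
```

Raising your price from $4.49 to $4.99 lowers your option's utility by `BETA_PRICE × 0.50` and shifts share toward the alternatives, which is exactly the trade-off the experiment measures.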
There is a practical benefit that often goes unappreciated: discrete choice data translates directly into the language that retail buyers understand. When you walk into a range review and say “at $4.49, our model predicts 23% preference share against [Competitor A] at $3.99 and [Competitor B] at $4.29 — but at $4.99, that drops to 14%,” you are giving the buyer a revenue model, not an opinion. Research from Harvard Business Review’s pricing strategy archive consistently shows that data-backed pricing recommendations receive faster buyer approval and fewer margin concessions during negotiation. For emerging brands competing against incumbents with decades of IRI and Nielsen data, this kind of evidence levels the playing field considerably. The ability to accelerate product development decisions with AI-driven validation means even lean teams can bring enterprise-grade pricing evidence to the table.
The Isolation Rule: Test Price and Nothing Else
This is the single most important principle in price testing, and it’s the one most commonly violated: when testing price, hold every other variable constant.
That means:
- Same product description across all price points. Same ingredients, same weight, same format.
- Same front-of-pack claims. Don’t change the protein claim or the ingredient transparency statement between price variants. If you haven’t decided on claims yet, run a claims hierarchy test first.
- Same packaging. Don’t show a premium-looking pack at the higher price and a basic one at the lower price.
- Same competitive set. If you’re testing against two competitor products, keep them identical across rounds.
If you change the claims AND the price at the same time, you cannot tell whether a shift in preference was driven by the price change or the claim change. The data becomes uninterpretable.
We see this anti-pattern frequently in traditional concept testing: a survey that tests three “tiers” — a basic version at a low price, a standard version at a mid price, and a premium version at a high price. This doesn’t test price sensitivity. It tests three different products. The price variable is confounded with the product variable.
To test price, you need one product — fully defined, claims locked — at three to four different price points. Nothing else changes. This is the same “one product, one experiment, one decision type” principle we apply to all concept testing.
Ready to test your pricing? Saucery runs discrete choice price sensitivity experiments with 500+ census-calibrated AI shoppers — results in under 2 hours. Define your product, set 3–4 price points, and see exactly where the demand curve bends. Get started at saucery.ai
What Price Sensitivity Data Actually Tells You
A well-designed price test gives you four things:
1. The Preference Curve
At each price point, what percentage of consumers would choose your product over the alternatives? This is your demand curve. It almost always slopes downward — higher price means lower preference — but the shape of that curve is what matters. A gentle slope means you have pricing power. A steep drop at a specific point means you’ve found the price ceiling.
2. The Revenue-Maximising Point
Revenue is price multiplied by volume. The highest price isn’t always the best price — if a 15% price increase causes a 30% drop in preference, you’ve lost revenue. The data lets you model this trade-off and find the point where price × preference share is maximised.
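As a sketch, with made-up preference shares standing in for experiment output, the revenue-maximising point is simply the tested price where price × share peaks:

```python
# Hypothetical experiment output: tested price -> measured preference share.
demand = {3.99: 0.28, 4.49: 0.27, 4.99: 0.20, 5.49: 0.12}

# Revenue index = price x preference share. The peak identifies the
# revenue-maximising point among the prices actually tested.
revenue = {price: price * share for price, share in demand.items()}
best_price = max(revenue, key=revenue.get)
print(best_price, round(revenue[best_price], 2))  # -> 4.49 1.21
```

Note that the highest preference share ($3.99) does not win: 3.99 × 0.28 ≈ 1.12, below 4.49 × 0.27 ≈ 1.21.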
3. Segment-Level Differences
Price sensitivity varies by consumer segment. Health-forward consumers may tolerate a premium for organic certification that mainstream shoppers won’t — a dynamic we see consistently in emerging food trend categories. Parents buying snack bars for kids may be more price-sensitive than adults buying for themselves. AI shopper experiments can break results down by demographic, attitudinal, and behavioural segments — revealing whether your pricing strategy should vary by channel or audience.
4. Competitive Vulnerability
When your price goes up, where do consumers go? If they switch to a specific competitor, that tells you who your real competitive threat is. If they switch to “none of these” (opting out entirely), that suggests the category has a price ceiling, not just your product.
Real-World Pricing Lessons from F&B Categories
Price sensitivity varies dramatically across food and beverage categories, and the patterns are instructive for anyone designing a pricing study. Below are lessons drawn from real experiments across four high-growth segments — each illustrating a different pricing dynamic that applies well beyond its own category.
High-protein snacks: specificity creates pricing power
In our high-protein snack analysis, we found that specific, quantified claims (“11g Protein Per Bar”) outperformed aspirational claims (“Plant-Based Protein Power”) by 2.3x in purchase preference. The pricing implication: products with specific, verifiable claims can command higher prices because consumers perceive them as more trustworthy and differentiated. If your front-of-pack says “high protein” (generic) instead of “30g protein” (specific), you have less pricing power than you think.
Functional beverages: occasion determines price tolerance
In the functional beverages space, consumers show dramatically different price sensitivity depending on the consumption occasion. A morning energy drink competes with coffee ($3–5 range). An evening adaptogenic drink competes with wine or cocktails ($6–12 range). The same functional ingredients can command very different prices depending on which occasion you’re targeting — which is why stage-gate validation should test positioning before pricing.
GLP-1 meal replacements: medical context shifts the ceiling
In the GLP-1 meal replacement category, consumers are spending $1,000+/month on medication. In that context, a $4 meal replacement shake is almost irrelevant as a cost consideration — but a $12 premium prepared meal faces scrutiny because it’s competing with “just skip the meal.” The competitive set for pricing isn’t other shakes; it’s the alternative of not eating. This is why including realistic alternatives (including “none of these”) in your price test matters.
Plant-based alternatives: the mainstream penalty
Plant-based products like pistachio milk and plant-based snacks consistently face a “plant-based premium penalty” — consumers expect plant-based to cost slightly more than conventional, but there’s a ceiling. In our data, that ceiling is typically 15–25% above the conventional equivalent. Price beyond that and preference drops steeply, regardless of ingredient quality or certification.
When to Run a Price Sensitivity Test
Price testing is valuable at three specific points in the NPD process:
| NPD Stage | Pricing Question | Why It Matters |
|---|---|---|
| Stage 1 — Concept | Can this product support a premium price in its category? | Determines whether the product concept is financially viable before formulation begins |
| Stage 3 — Development | Does a formulation change (e.g., switching to organic ingredients) justify a price increase? | Quantifies the consumer-perceived value of a specific input cost increase |
| Stage 6 — Post-Launch | Can we raise the price without losing velocity? How much will a promotion drive incremental volume? | Informs trade discussions and range review defence with consumer data |
The most overlooked opportunity is Stage 3. When R&D comes back and says “switching to organic pea protein adds cost to the COGS” — the question isn’t whether the ingredient is better. The question is whether consumers will pay the difference. That’s a testable question, and you can have the answer in two hours. For a complete walkthrough of how validation fits into each stage, see our stage-gate consumer validation guide.
Common Mistakes in Price Testing
These are the errors I see most frequently when brands attempt to test pricing — whether using traditional panels or AI shoppers:
| Mistake | Why It’s Wrong | Fix |
|---|---|---|
| Testing price alongside claims or packaging | Can’t isolate what drove the preference shift | Fix everything except price. One variable only. |
| Using abstract tiers (“premium vs value”) | Not actionable — you need a specific number for your retailer pitch | Use real retail prices anchored around your current or intended price |
| Testing only two price points | Two points make a line, not a curve. You can’t see where the drop happens. | Test three to four price points in 10–15% increments |
| Ignoring the competitive set | Consumers don’t evaluate price in isolation — they compare to what else is on shelf | Include 1–2 real competitor products at their actual retail prices |
| Asking “what would you pay?” directly | Stated preference ≠ revealed preference. Consumers understate WTP. | Use discrete choice (forced trade-off) format instead |
A sixth mistake worth highlighting: testing wildly unrealistic price points. If your protein bar category ranges from $2.49 to $4.99, testing at $1.00 and $8.00 wastes two of your data points on extremes no retailer would consider. Anchor around your intended price and test in realistic increments (10–15% steps). The goal is to map the curve in the commercially relevant range, not to confirm that cheaper products are preferred.
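One simple way to build a realistic ladder is to step multiplicatively around your anchor price. The helper below, `price_ladder`, is a hypothetical illustration of that approach, not a prescribed formula:

```python
def price_ladder(anchor, step=0.12, n_below=1, n_above=2):
    """Generate test prices around an anchor in ~12% multiplicative steps."""
    return [round(anchor * (1 + step) ** k, 2) for k in range(-n_below, n_above + 1)]

# One step below the anchor, the anchor itself, and two steps above.
print(price_ladder(3.99))  # -> [3.56, 3.99, 4.47, 5.01]
```

All four points sit inside the commercially plausible range for a $2.49–$4.99 category, so every data point earns its keep.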
Heading into a retailer pitch? Bring pricing data, not pricing assumptions. Run a price sensitivity experiment with 500+ AI shoppers and show your buyer exactly where the demand curve supports your recommended retail price. Start at saucery.ai
The Psychology Behind F&B Pricing Decisions
Understanding why consumers respond to price the way they do requires stepping back from the spreadsheet and into behavioural science. Research published in the Journal of Marketing Research and the broader behavioural economics literature has documented several psychological effects that shape how consumers perceive price in food and beverage categories — and each one has direct implications for how you design your pricing experiment.
Anchoring and the power of the first number
The first price a consumer sees for a product category becomes their anchor. If the category leader is priced at $4.99, every other product is evaluated relative to that number. A $5.49 product feels “slightly premium.” A $6.99 product feels expensive. A $3.49 product feels cheap — possibly suspiciously so. This is why your price test must include real competitor prices: they set the anchor against which your product is judged. Without that anchor, your data tells you what consumers think about your price in isolation, which is not how anyone shops.
The “quality signal” threshold
In many food categories, pricing too low actually reduces purchase intent. This is especially pronounced in premium and health-oriented segments — high-protein snacks, functional beverages, organic baby food — where consumers use price as a quality heuristic. A $1.99 “artisan” protein bar contradicts its own positioning. Consumers don’t think “what a bargain”; they think “what’s wrong with it?” Your price test will reveal this if you include a price point below the category floor. Watch for the non-linear response: if preference actually increases when you move from your lowest to your second-lowest price point, you’ve found the quality signal threshold.
Reference price effects across channels
Consumers carry mental reference prices that shift by retail channel. The same kombucha at $3.49 in a grocery aisle and $4.99 in a convenience store doesn’t trigger the same price sensitivity response — because the consumer’s reference set is different. Grocery shoppers compare to other grocery prices. C-store shoppers compare to other grab-and-go options. If you’re launching into multiple channels, you may need separate price experiments for each, or at minimum a version that frames the competitive set in the appropriate channel context. This is one area where understanding category trends helps you define the right comparison frame before running your experiment.
Promotion sensitivity vs. base price sensitivity
A common trap: brands assume that strong promotional response (“we sell 3x more when we’re on deal”) means they have a price sensitivity problem. It might mean the opposite. High promotion sensitivity often indicates that consumers see the product as a treat or impulse purchase — they’ll buy at the deal price because it lowers the psychological barrier to trial, but they wouldn’t switch to a competitor at the base price either. The distinction matters: if your trade promotion is driving trial rather than substitution, your base price may be exactly right.
Discrete choice experiments can tease apart this dynamic by testing both base and promoted price scenarios. Run one experiment at everyday shelf prices, then a second with a promoted price for your product while competitors stay at their base. If your preference share jumps dramatically on promotion but holds reasonably at the base price, your pricing is sound — you just need a smarter promotion cadence.
How Geography Shapes Price Sensitivity in F&B
Price sensitivity doesn’t just vary by category — it varies by market, and the differences can be substantial. A price point that works in the US Midwest may fail on the coasts. A price that clears easily in central London may struggle in regional UK supermarkets. And the dynamics in Australia, with its concentrated retail duopoly and premium import positioning, differ from both.
When we run pricing experiments on Saucery, we use census-calibrated personas across seven markets — meaning the AI shopper population reflects actual income distributions, household sizes, and regional shopping patterns. This matters for pricing because a household earning $45,000 in rural Texas has a fundamentally different relationship with a $5.99 snack bar than a dual-income household in San Francisco earning $180,000.
For brands expanding internationally — from the US into the UK, or from Australia into Southeast Asia — a single global price strategy is almost always wrong. The category competitive sets differ, the retail channel economics differ, and consumer reference prices differ. Running a separate price experiment per target market takes the same two hours per market and prevents the expensive mistake of launching at a price point that signals “budget” in one market and “premium” in another.
Our platform currently covers the US, UK, Australia, Germany, Japan, Brazil, and India. For each market, the discrete choice experiment uses locally calibrated consumer personas, local currency pricing, and a competitive set appropriate to that market’s retail landscape. See our cost benchmarking guide for how this compares to fielding traditional pricing research in multiple geographies simultaneously.
How to Set Up Your First Price Test
1. Define your product completely. Write the full product description — ingredients, weight, claims, packaging format. This is the constant. It does not change between price variants.
2. Lock your front-of-pack claims. If you haven’t validated your claims yet, do that first (see our guide to front-of-pack claims testing). Price testing assumes the product positioning is already decided.
3. Choose three to four price points. Anchor around your current or intended retail price. Test incrementally — not wildly different prices that no consumer would believe. The goal is to find the slope of the curve, not the extremes.
4. Include real competitors. Add one to two competitor products at their actual shelf prices. This gives consumers a realistic competitive context and ensures your results reflect real-world trade-offs.
5. Run the experiment. 250–500 consumers gives you stable results across 3–4 price points. With pre-launch testing on Saucery, this takes under two hours.
6. Read the curve, not just the winner. The lowest price will usually “win” on preference. The insight is in the shape of the decline — where it steepens, where segments diverge, and where revenue peaks.
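To illustrate reading the curve, here is a minimal sketch (with hypothetical results) of finding where the curve bends: the largest drop in preference between adjacent tested prices.

```python
# Hypothetical results: preference share at each tested price point.
prices = [3.56, 3.99, 4.47, 5.01]
shares = [0.30, 0.28, 0.26, 0.15]

# The largest drop between adjacent price points marks where the curve bends.
drops = [(prices[i], prices[i + 1], shares[i] - shares[i + 1])
         for i in range(len(prices) - 1)]
lo, hi, drop = max(drops, key=lambda d: d[2])
print(f"Preference falls sharpest between ${lo} and ${hi}")
```

In this invented example the curve is gentle up to $4.47 and then falls off a cliff: the price ceiling sits somewhere between $4.47 and $5.01.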
How AI Search Is Changing Price Discovery and Comparison
There is an emerging dynamic that affects pricing strategy — one that most brands haven’t caught up to yet: how consumers use AI tools like ChatGPT and Perplexity to compare products and evaluate prices before purchasing. This shift has real implications for how brands think about price transparency, competitive positioning, and the data that’s available to AI-mediated discovery.
When a consumer asks an AI assistant “What’s the best protein bar under $3?”, the AI synthesises product information, nutritional data, and pricing from across the web. Products with clear, specific pricing information on their product pages are more likely to appear in these results. Products with vague “premium” positioning and no specific price reference are invisible to price-conscious AI queries.
The implication for brands: your pricing transparency on your own website and product pages matters more than ever. If your product page says “30g protein, $2.99 per bar” with specific nutritional data, it gets cited in AI-mediated product comparisons. If it says “premium protein bar — find at a retailer near you,” it doesn’t.
This also means that your pricing needs to be defensible in the context of direct comparison. AI tools will surface your product alongside competitors at their specific price points. If you’re charging 40% more than an equivalent product, you need claims data to justify that premium — which brings us back to the importance of validating your claims before setting your price.
There’s a second-order effect worth noting: AI search tools are training consumers to expect transparent, comparable pricing information. Brands that publish specific price-per-unit data, nutritional breakdowns, and direct comparison tables on their product pages are building the kind of structured data that AI tools prefer to cite. This is already reshaping how the functional beverages and high-protein snack categories appear in AI-generated product recommendations. Brands that are invisible to AI search today will face a growing discovery disadvantage as these tools become the default product research channel for younger consumers.
Frequently Asked Questions
What is price sensitivity testing?
Price sensitivity testing measures how changes in price affect consumer preference and purchase intent. Unlike simply asking “what would you pay?”, rigorous price testing uses discrete choice methodology — presenting consumers with a product at different price points alongside competitors and measuring which they’d choose. The output is a demand curve showing exactly where preference drops, where the revenue-maximising point sits, and how different consumer segments respond to price changes.
How many price points should I test?
Three to four price points in 10–15% increments, anchored around your current or intended retail price. Two points are too few (they give you a line, not a curve — you can’t see where the drop-off happens). More than five points dilute the sample across too many conditions without adding meaningful precision. For most F&B products, testing at your target price, one step below, and one or two steps above is sufficient to map the commercially relevant portion of the demand curve.
Can I test price and claims at the same time?
No. This is the most common mistake in price testing. If you change both the price and the claims between conditions, you cannot tell which variable drove the shift in preference. Test claims first (using a claim hierarchy experiment), lock the winning claims, and then test price as a separate experiment with everything else held constant. Two separate experiments, each taking under two hours, will give you cleaner data than one combined experiment that takes twice as long and produces uninterpretable results.
How does price sensitivity differ across food categories?
Price sensitivity varies significantly by category, consumer segment, and purchase occasion. High-protein snacks with specific, verifiable claims have more pricing power than generic alternatives. Functional beverages show different price tolerance depending on the consumption occasion (morning energy vs evening relaxation). GLP-1 meal replacements face unique dynamics where the competitive alternative is not eating at all. The only way to know your category’s specific price curve is to test it.
What sample size do I need for a pricing study?
250–500 respondents gives you stable preference shares across 3–4 price points. At n=250, you can detect meaningful shifts in preference between price levels and identify the revenue-maximising price with confidence. At n=500, you can begin to segment results by demographics, household income, or consumer attitudes — revealing whether price sensitivity differs between, say, health-forward millennials and price-conscious parents. Below n=100, the differences between price points are rarely statistically significant, and you risk making pricing decisions on noise rather than signal. For most founder-led F&B brands, 250 is the right starting point — it balances statistical reliability with speed and cost.
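For intuition on why sample size matters, here is a back-of-envelope check using a standard two-proportion z-test. The shares are illustrative, and the full discrete choice model is more nuanced than this, but the scaling effect is real:

```python
import math

def two_prop_z(p1, n1, p2, n2):
    """z statistic for the difference between two observed preference shares."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# The same 27% vs 19% preference gap at two sample sizes.
# |z| > 1.96 is the conventional 95% significance threshold.
print(round(two_prop_z(0.27, 100, 0.19, 100), 2))  # below 1.96: not significant at n=100
print(round(two_prop_z(0.27, 250, 0.19, 250), 2))  # above 1.96: significant at n=250
```

An 8-point gap that reads as noise at n=100 per price point clears the conventional significance bar at n=250.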
Should I include competitor prices in my test?
Yes — and this is one of the most important design decisions in your pricing experiment. Consumers don’t evaluate price in isolation; they compare your product to what else is on shelf. Include 1–2 real competitor products at their actual retail prices as constant alternatives throughout all price conditions. This ensures your results reflect real-world trade-offs, not hypothetical willingness to pay. It also reveals competitive vulnerability: when your price goes up, do consumers switch to a specific competitor, or do they opt out of the category entirely? Always include a “none of these” option so you can measure category exit as well as brand switching. For guidance on building competitive sets, see our approach to concept testing question design.
How is pre-launch testing different from a traditional pricing study?
The methodology is the same — both use discrete choice experiments to map demand curves. The difference is speed, cost, and iteration ability. Traditional pricing studies take 4–8 weeks and cost $15,000–$50,000. Pre-launch testing using modelled shoppers delivers results in under 2 hours at a fraction of the cost. The rank ordering of price preferences (which price point maximises revenue) is consistent between methods; the absolute preference percentages may differ. For pre-launch pricing decisions where directional confidence matters more than publication-grade precision, pre-launch testing is the faster path to an informed decision. For a full comparison of methodologies and costs, see our market research cost analysis.
Test your pricing before your next range review. Saucery runs price sensitivity experiments with census-calibrated AI shoppers — configure your product, set your price points, and see exactly where the demand curve bends. Results in under 2 hours. Start your experiment at saucery.ai
About the author: Andrew Mac is the founder of Saucery, a pre-launch testing platform for food and beverage brands. He works with founder-led F&B companies in the $5M–$250M range to validate product concepts, claims, and positioning using modelled shoppers before they commit to production. Connect with Andrew on LinkedIn.
Subscribe for F&B Consumer Insights
Data-driven insights on food & beverage consumer preferences, straight to your inbox.
