Where Consumer Validation Fits in Your Stage Gate Process

Row of ceramic bowls with raw ingredients on a wooden cutting board — peanuts, chocolate chips, pea protein, oat flour, cranberries — representing the stage gate NPD process

By Andrew Mac, Founder of Saucery — I’ve worked with founder-led F&B brands through every stage of the NPD cycle. The pattern is always the same: sensory testing gets done, market sizing gets done, production trials get done — but the claims, pricing, and positioning decisions that determine whether a product succeeds on shelf get made on instinct. Consumer validation is the missing gate in most stage gate processes, and it’s the one with the highest commercial leverage.


Most F&B brands follow some version of the stage gate process for new product development. The stages have different names depending on the company — Discovery, Scoping, Business Case, Development, Testing, Launch — but the structure is remarkably similar across the industry. Originally developed by Robert G. Cooper in the 1980s, the framework has been adopted by virtually every consumer goods company from startups to multinationals.

What’s also remarkably similar is where most brands skip a step: consumer validation. Not sensory testing (most brands do that). Not market sizing (everyone does that). We mean testing the actual front-of-pack claims, pricing, and positioning decisions that determine whether a product succeeds on shelf — before those decisions get locked in.

When we ran a claims validation experiment on a plant-based protein bar — 500 census-representative US consumers, randomised claim combinations — the gaps between what an internal team would likely choose and what consumers actually preferred were significant. “Plant-Based Protein Power” felt like the obvious lead claim. It scored 17.6%. “11g Protein Per Bar” — a simple, specific, quantified claim — scored 40.4%.

That kind of gap doesn’t show up in a brand workshop or an internal claims review meeting. It shows up on shelf, after packaging is printed and retailer commitments are made. And by then, the cost of being wrong — reprinting packaging, repositioning with retailers, losing velocity in the critical first 90 days, defending shelf space at the next range review — can be orders of magnitude higher than the cost of testing.

Table of Contents

  1. The Standard Stage Gate Process for F&B
  2. Where the Validation Gap Lives
  3. Stage 1: Concept Validation — Testing Before You Formulate
  4. Stages 2–3: Development & Testing — Validating as You Build
  5. Stage 6: Post-Launch — Optimising What’s Already on Shelf
  6. What to Test at Each Stage
  7. The Cost of Skipping Consumer Validation
  8. AI-Powered vs. Traditional Validation Methods
  9. How AI Search Is Changing the Validation Conversation
  10. Real-World Validation Across F&B Categories
  11. Action Items for Your Next Range Review
  12. Common Anti-Patterns in Stage Gate Validation
  13. Frequently Asked Questions

The Standard Stage Gate Process for F&B

The stage gate model breaks new product development into discrete phases, each separated by a decision point (the “gate”) where a project is either approved to continue, sent back, or killed. For food and beverage brands, the typical structure looks like this:

  1. Concept Validation. What happens: ideation, trend analysis, initial consumer research, feasibility screening. Key decision: Is there a real market for this product?
  2. Business Case. What happens: detailed market sizing, competitive mapping, preliminary formulation, cost modelling. Key decision: Can we make money on this?
  3. Development & Testing. What happens: formulation, bench-top testing, sensory panels, packaging development, claims substantiation. Key decision: Does the product meet the brief?
  4. Validation. What happens: production trials, shelf-life testing, retailer pitches, final costing. Key decision: Can we produce this at scale?
  5. Launch. What happens: production, distribution, marketing activation, retail placement. Key decision: Execute.
  6. Post-Launch Review. What happens: sales velocity tracking, repeat purchase analysis, line extension planning. Key decision: Optimise, extend, or kill?

This framework works. It forces discipline and reduces the risk of launching products that can’t be manufactured, costed, or distributed. Research from McKinsey estimates that structured NPD processes reduce time-to-market by 30–40% compared to ad-hoc development. But there’s a gap in the standard model — and it’s a consequential one.

Where the Validation Gap Lives

Look at the table above and notice what gets validated at each stage: market viability (Stage 1), financial viability (Stage 2), product quality (Stage 3), manufacturing viability (Stage 4). What doesn’t get systematically validated? Consumer response to the specific positioning decisions — the claims, the packaging hierarchy, the price point for the target segment.

Sensory testing tells you if the product tastes good. It doesn’t tell you whether “11g Protein Per Bar” or “Plant-Based Protein Power” will drive more purchase intent on shelf. Market sizing tells you the category is growing. It doesn’t tell you whether your specific claim combination resonates with your target consumer.

This gap matters because the positioning decisions — front-of-pack claims, price point, dietary callouts, ingredient transparency messaging — are often the single biggest driver of trial purchase. In our protein bar experiment, ingredient transparency accounted for 26.3% of the purchase decision, and protein framing accounted for 25.7%. Brand tagline? Just 7.8%. For the full experiment data and claims hierarchy, see our plant-based snacks analysis.

The validation gap is especially acute for founder-led brands in the $5M–$250M range. Enterprise CPG companies like Nestlé and PepsiCo have internal consumer insights teams that run concept tests and conjoint studies as standard practice. Growth-stage brands typically don’t — they rely on the founder’s intuition, feedback from a handful of early customers, and whatever the packaging designer recommends. The result is that the most commercially important decisions in the NPD process are the ones made with the least data.

Stage 1: Concept Validation — Testing Before You Formulate

This is where consumer validation has the highest leverage. At Stage 1, nothing is locked in. Formulation hasn’t started. Packaging hasn’t been designed. You’re still deciding what to build.

The decisions that benefit from validation at this stage:

  • Claim combinations: Which claims should lead on pack? Which claims stack well together, and which cancel each other out? A claims hierarchy experiment can rank 5–10 claim dimensions in a single study.
  • Price sensitivity: What’s the price ceiling for your target segment? Does a $0.50 premium for “organic” justify the certification cost?
  • Packaging format: Does your target consumer prefer a single-serve bar, a multipack, or a resealable pouch?
  • Dietary positioning: Does “Vegan + Gluten-Free” outperform “Organic + Non-GMO” for your category?
  • Category framing: Is your product a “snack” or a “meal replacement”? The answer determines your competitive set and your price anchor.

Testing these decisions with 250+ consumers before formulation starts means you’re building to a validated brief — not an assumed one. The cost of changing direction at Stage 1 is essentially zero (you haven’t built anything yet). The cost of changing direction at Stage 4 — after formulation, packaging design, and production trials — can run into six figures.

This is where understanding what food trends mean for your category intersects with validation. Trend analysis tells you there’s growing interest in a category (e.g., plant-based protein snacks growing 3,757% YoY). Consumer validation tells you which specific product positioning within that trend will resonate with your target consumer. Both are necessary; neither is sufficient alone.


Starting a new product concept? Saucery runs concept validation experiments with 500+ census-calibrated AI shoppers — test your claims, positioning, and pricing before committing to formulation. Results in under 2 hours. Get started at saucery.ai


Stages 2–3: Development & Testing — Validating as You Build

Once formulation begins, trade-offs emerge. The R&D team discovers that hitting 20g of protein per bar requires a whey concentrate that pushes the ingredient count from 6 to 11. The clean label claim is now in tension with the protein claim.

These are exactly the decisions where consumer data changes the conversation. Instead of the NPD lead and the marketing director debating preferences in a meeting room, you can test the trade-off directly:

  • Formulation trade-offs: “11g protein + 6 ingredients” vs “20g protein + 11 ingredients” — which does the target consumer prefer? Our high-protein snack analysis found that specificity matters more than magnitude — “11g Protein Per Bar” outperformed “High Protein Snack” by 2.2x.
  • Reformulation A/B testing: Does switching from whey to pea protein change purchase intent, and by how much?
  • Claim substantiation priority: If you can only certify one claim (organic or non-GMO), which one drives more value?
  • Sugar messaging: “Less Than 8g Sugar” vs “No Added Sugar” vs “Naturally Sweetened” — which framing wins?

In our experiment, “Less Than 8g Sugar” won at 41.4% preference — outperforming every other sugar messaging variant. That’s the kind of signal that resolves an internal debate in minutes rather than weeks.

Stages 2–3 are also where pricing decisions crystallise. If a formulation change adds cost, you need to know whether consumers will absorb the price increase. A separate price sensitivity experiment — testing 3–4 price points with claims held constant — gives you the demand curve to model the revenue trade-off. The data replaces the guesswork in your cost-plus pricing model.
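The revenue trade-off at the end of that process is simple arithmetic once you have the demand curve. Here is a minimal sketch; every share and price below is an illustrative assumption, not a result from the experiments described in this article:

```python
# Hypothetical demand curve from a price sensitivity experiment:
# each share is the fraction of respondents choosing the product at that
# price point, with claims held constant across every price cell.
# All numbers are illustrative.
demand_curve = {
    3.00: 0.52,
    3.50: 0.48,
    4.00: 0.31,
    4.50: 0.18,
}

def expected_revenue_per_100(demand):
    """Expected revenue per 100 shoppers at each tested price point."""
    return {price: round(price * share * 100, 2) for price, share in demand.items()}

revenue = expected_revenue_per_100(demand_curve)
best_price = max(revenue, key=revenue.get)

for price in sorted(revenue):
    print(f"${price:.2f} -> ${revenue[price]:.2f} revenue per 100 shoppers")
print(f"Revenue-maximising tested price: ${best_price:.2f}")
```

Note that in this illustration $3.50 wins on revenue even though fewer shoppers choose it than at $3.00. That is exactly the trade-off a cost-plus pricing model never surfaces.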

Stage 6: Post-Launch — Optimising What’s Already on Shelf

Consumer validation isn’t just for new products. For brands preparing for a range review, the same approach applies to existing SKUs:

  • Line extensions: Should the new flavour variant be a sub-brand or sit under the core brand? What functional ingredient trends should inform the extension strategy?
  • Repositioning: Would changing the lead claim from “organic” to “high protein” improve shelf velocity?
  • Price repositioning: Can you justify a price increase by adding a premium certification badge?
  • Limited editions: Which seasonal variant will cannibalise the core line least?
  • GLP-1 adjacency: Should your product line include a GLP-1 meal replacement variant? What claims would resonate with that consumer?

At Stage 6, the cost of making the wrong decision is lower than at Stage 1 — but the frequency of these decisions is higher. Brands in range review cycles face these choices quarterly. Running a quick validation experiment before each major range decision means you’re accumulating consumer data over time, building an evidence base that compounds across product decisions.

The pistachio milk example: validating a category-creation play

Post-launch validation is especially important when you’re creating a new sub-category rather than entering an existing one. Consider pistachio milk — a product that sits between established nut milks (almond, oat) and premium dairy alternatives. At Stage 6, a pistachio milk brand needs to validate positioning decisions that are fundamentally different from what worked at launch: Does the “pistachio” descriptor attract enough curiosity on its own, or does it need a functional qualifier (“Pistachio Milk with Added Protein”)? Should the line extend into flavoured varieties, or does that dilute the premium positioning? These are testable questions with discrete choice experiments — and the answers directly inform whether the next SKU builds the brand or fragments it.

What to Test at Each Stage

  • Stage 1 (Concept). Validate: claim hierarchy, price sensitivity, packaging format, dietary positioning. Example experiment: test 5 claim combinations for a new protein bar across 500 consumers. Impact: build to a validated brief, not an assumed one.
  • Stages 2–3 (Development). Validate: formulation trade-offs, ingredient swaps, claim substantiation priority. Example experiment: test “11g protein + clean label” vs “20g protein + longer ingredient list”. Impact: resolve internal debates with consumer data.
  • Stage 6 (Post-Launch). Validate: line extensions, repositioning, price increases, seasonal variants. Example experiment: test 3 line extension concepts against the core SKU. Impact: defend shelf space with evidence at range review.

The methodology across all three stages is the same: discrete choice experiments that force consumers to make trade-offs rather than rate everything positively. The variable changes (claims at Stage 1, formulation trade-offs at Stage 2–3, line extensions at Stage 6) but the principle — test one decision type at a time, hold everything else constant — remains. For a deeper dive on how to design effective concept testing questions, see our guide.
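To make the mechanics concrete, here is a minimal sketch of how preference shares fall out of a forced-choice experiment: each respondent picks exactly one option per task, and the share is the fraction of picks, with a standard margin of error attached. The raw counts below are hypothetical, chosen so the leading shares match the 40.4% and 17.6% figures quoted earlier in this article:

```python
import math

# Hypothetical choice counts from one claims experiment (n = 500 respondents,
# each forced to pick exactly one claim variant per task).
choices = {
    "11g Protein Per Bar": 202,
    "Plant-Based Protein Power": 88,
    "High Protein Snack": 95,
    "Clean Energy, Simplified": 115,
}

def preference_shares(counts, z=1.96):
    """Share of picks per option, with a 95% margin of error for each share."""
    total = sum(counts.values())
    out = {}
    for option, k in counts.items():
        p = k / total
        moe = z * math.sqrt(p * (1 - p) / total)  # standard error of a proportion
        out[option] = (round(p * 100, 1), round(moe * 100, 1))
    return out

for option, (share, moe) in sorted(preference_shares(choices).items(),
                                   key=lambda kv: -kv[1][0]):
    print(f"{option}: {share}% ± {moe}%")
```

At n = 500, the margin of error on the winning claim is roughly ±4 points, which is why a 40.4% vs 17.6% gap is a decision-grade signal rather than noise.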

The Cost of Skipping Consumer Validation

What does it actually cost when a brand skips consumer validation and gets a positioning decision wrong? The expenses cascade:

  • Packaging reprints: If you discover after launch that your lead claim isn’t resonating, redesigning and reprinting packaging for a production run typically costs $15,000–$50,000 depending on SKU count and packaging complexity.
  • Lost velocity in the launch window: Retailers evaluate new products in the first 90–180 days. If your positioning misses, your velocity data looks weak at the first range review — and you may lose distribution entirely.
  • Wasted trade spend: Promotional dollars spent driving trial for a poorly positioned product have lower ROI than the same spend behind a product with validated positioning.
  • Opportunity cost: The NPD cycle for a physical food product is 6–18 months. A mispositioning that requires a reset burns that entire timeline again.

By comparison, a single pre-launch testing experiment costs a fraction of any single line item above and delivers results in under two hours. The ROI math is straightforward: even if validation only saves you from one mispositioning per year, it pays for itself many times over. See our market research cost benchmarks for detailed cost comparisons between traditional and pre-launch testing methods.
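The ROI arithmetic is worth sketching explicitly. Every figure below is a placeholder assumption you should replace with your own costs; in particular, the per-experiment cost is illustrative, not a quoted price:

```python
# All figures are illustrative assumptions -- swap in your own numbers.
reprint_cost = 30_000          # mid-range packaging reprint, per the $15k-$50k band above
experiments_per_year = 12      # assume one validation experiment per month
cost_per_experiment = 1_000    # placeholder assumption, not an actual price

annual_validation_cost = experiments_per_year * cost_per_experiment

# Break-even: how many avoided mispositionings pay for the whole programme?
mispositionings_to_break_even = annual_validation_cost / reprint_cost
print(f"Annual validation spend: ${annual_validation_cost:,}")
print(f"Avoided mispositionings needed to break even: {mispositionings_to_break_even:.2f}")

# And the multiple if a single reprint is avoided:
roi_multiple = reprint_cost / annual_validation_cost
print(f"ROI multiple from one avoided reprint: {roi_multiple:.1f}x")
```

Under these assumptions, avoiding even half a mispositioning per year covers the entire validation programme, before counting lost velocity or wasted trade spend.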

AI-Powered vs. Traditional Validation Methods

The stage gate model was designed in an era when consumer validation meant focus groups and mall intercepts — methods that took 4–8 weeks and cost $15,000–$50,000 per study. At that speed and cost, it was impractical to validate every positioning decision. You had budget for maybe one or two studies per year, so you reserved them for the highest-stakes launches.

Pre-launch testing changes that equation fundamentally. Using modelled shoppers calibrated to census demographics, you can run a discrete choice experiment with 250–500 respondents in under two hours. The methodology is the same one used by NielsenIQ and Ipsos for conjoint pricing and claims studies — the delivery mechanism is what’s different.

This means validation can become a routine gate requirement rather than an occasional luxury. Instead of validating only the highest-risk launch each year, you can validate every claims decision, every pricing decision, every line extension. The cumulative effect is that your positioning decisions across the entire portfolio are data-informed rather than assumption-driven.

Factor by factor, the comparison looks like this:

  • Timeline: 4–8 weeks for traditional validation (recruitment, fieldwork, analysis); under 2 hours for pre-launch testing.
  • Cost per study: $15,000–$50,000 traditional; a fraction of that for pre-launch testing.
  • Sample size: 200–500 traditional (recruitment-constrained); 250–1,000+ on demand for pre-launch testing.
  • Studies per year for a typical brand: 1–2 traditional; 10–20+ with pre-launch testing.
  • Methodology: discrete choice / conjoint in both cases.
  • Census representativeness: varies by panel quality for traditional; calibrated to census demographics for pre-launch testing.

The rank ordering of consumer preferences — which claim wins, which price point maximises revenue — is consistent between AI-powered and traditional methods. The absolute percentages may differ. For pre-launch positioning decisions where directional confidence matters more than publication-grade precision, pre-launch testing is the faster path to an informed decision.
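If you want to check that rank-ordering consistency against your own paired studies, Spearman rank correlation is the standard tool. A self-contained sketch with hypothetical shares (this simple form assumes no tied scores; rho of 1.0 means identical ordering, -1.0 means fully reversed):

```python
def spearman_rho(xs, ys):
    """Spearman rank correlation for two score lists (assumes no ties)."""
    n = len(xs)
    def ranks(vals):
        # rank 1 = highest score
        order = sorted(range(n), key=lambda i: vals[i], reverse=True)
        r = [0] * n
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - (6 * d2) / (n * (n ** 2 - 1))

# Hypothetical preference shares for the same five claims from two methods:
ai_shares          = [40.4, 17.6, 15.2, 14.8, 12.0]
traditional_shares = [36.1, 20.3, 16.5, 15.9, 11.2]

rho = spearman_rho(ai_shares, traditional_shares)
print(f"Spearman rho: {rho:.2f}")  # 1.0 means the methods agree on the ordering
```

In this illustration the absolute percentages differ between methods but the ordering is identical, which is the property that matters for a go/no-go positioning decision.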

There’s a practical implication for how you structure your stage gate process: if validation takes weeks and costs $15,000+, it makes sense to reserve it for the final go/no-go decision. If it takes two hours and costs a fraction of that, it makes sense to validate early and often — at concept stage, at formulation stage, and at every major range review. The economics of pre-launch testing fundamentally change which positioning decisions are “worth testing” from “only the biggest bets” to “every bet.”

This speed advantage also changes how validation integrates with the broader innovation pipeline. When consumer validation takes weeks, it sits outside the day-to-day development rhythm — it’s a separate workstream managed by a research agency. When it takes hours, it becomes part of the development sprint itself. An NPD team can formulate in the morning, test the positioning implications in the afternoon, and adjust course the next day. This is the operational reality behind AI-accelerated food innovation: it’s not just faster research, it’s a fundamentally different cadence of learning. As Greenbook’s analysis of AI in market research notes, the organisations gaining the most from AI-powered methods are those that restructure their decision-making processes around faster feedback loops — not those that simply swap the research vendor and keep the same 8-week timeline. For stage gate processes specifically, this means validation stops being a bottleneck that delays gate approvals and becomes an enabler that accelerates them.


Preparing for a range review? Validate your claims and pricing before you brief the packaging designer. Saucery runs discrete choice experiments with census-calibrated AI shoppers — test 5–10 claim dimensions and see exactly which messages drive purchase intent. Start at saucery.ai


How AI Search Is Changing the Validation Conversation

There’s an emerging reason to take consumer validation more seriously: AI search tools like ChatGPT and Perplexity are changing how consumers discover and compare products. When a consumer asks “What’s the best high-protein bar under $4?”, the AI synthesises product claims, nutritional data, and positioning from across the web.

Products with specific, validated claims (“11g Protein Per Bar, Only 6 Ingredients, $3.50”) are more likely to be cited in these AI-generated recommendations than products with vague positioning (“Clean energy, simplified”). The specificity that wins in consumer experiments also wins in AI-mediated product discovery — which means validating your claims serves double duty: it improves on-shelf performance and AI search visibility.

For F&B brands building their stage gate process, this is another argument for making claims validation a gate requirement. The claims you choose don’t just go on packaging — they appear on your website, in your retail submissions, in your press materials, and increasingly in AI-generated product comparisons. Getting them right has compounding returns across every channel.

This also creates a new dimension for Stage 6 (post-launch) validation: testing whether your existing product’s online presence — the claims, descriptions, and structured data on your product pages — is optimised for AI search discovery. A product page that says “11g protein, 6 ingredients, $3.50” with specific nutritional data gets cited by AI assistants. A page that says “Premium plant-based energy — find at a retailer near you” doesn’t. The same specificity principle that wins in discrete choice experiments also wins in AI-mediated product discovery.

Real-World Validation Across F&B Categories

The stage gate validation framework applies across every food and beverage category, but the specific questions change depending on the product type and competitive context:

High-protein snacks

In the high-protein snack category, the key Stage 1 validation question is claim hierarchy: does protein content or ingredient transparency lead on pack? Our data shows ingredient count claims outperforming protein magnitude claims in some contexts — a counterintuitive result that would never emerge from a brand workshop alone.

Functional beverages

For functional beverages, Stage 2–3 validation is critical because formulation changes (adaptogens, nootropics, CBD) directly affect which regulatory claims you can make. Testing consumer response to “contains ashwagandha” vs “stress-reducing formula” vs “calm and focus blend” before committing to formulation ensures your ingredient choice and your claims strategy are aligned from the start.

GLP-1 meal replacements

The emerging GLP-1 meal replacement category is a case study in why Stage 1 validation matters for new category creation. The competitive set is ambiguous (is the alternative another shake, a prepared meal, or not eating?), the price anchors are undefined, and the claims territory is new. Brands entering this space without consumer validation are flying blind in a category where the rules haven’t been established yet.

Freeze-dried snacks

The freeze-dried snacks category presents a different validation challenge — one centred on format justification rather than ingredient claims. Consumers understand what a protein bar is. They understand what a chip is. Freeze-dried fruit, meat, and vegetable snacks sit in a format grey zone: is this a healthy snack, a hiking food, a kids’ lunchbox filler, or a premium ingredient? The answer determines everything from shelf placement to price anchor to front-of-pack messaging. At Stage 1, a freeze-dried brand needs to validate which category frame resonates most with their target consumer before committing to packaging that locks them into one positioning. A brand that frames itself as “premium healthy snacking” competes with KIND and RXBAR at $4–$6 per unit. A brand that frames itself as “outdoor adventure fuel” competes with trail mix at $2–$3. The same product, different frame, entirely different commercial outcome — and discrete choice experiments can quantify which frame drives higher purchase intent before a single package is printed.

The compounding value of cross-category validation data

One pattern that emerges across these category examples is that validation data compounds. A brand that runs claim hierarchy experiments at Stage 1, formulation trade-off experiments at Stage 2–3, and line extension experiments at Stage 6 isn’t just making better individual decisions — it’s building a proprietary dataset about how its target consumers make choices. Over 12–18 months, a brand that validates every major positioning decision accumulates a consumer preference map that no competitor has. This is the strategic argument for pre-launch testing as a core innovation capability rather than an occasional project expense. According to Harvard Business Review, companies that embed AI-driven research into their decision-making processes gain a compounding advantage over competitors who treat research as episodic. In F&B, where product cycles are short and range reviews are quarterly, that compounding effect materialises faster than in almost any other industry.

Action Items for Your Next Range Review

  1. Map your current stage gate process. Identify where consumer-facing positioning decisions are currently made — and whether they’re based on data or assumption.
  2. Identify your highest-risk positioning decision. Which claim, price point, or packaging choice has the most revenue impact if you get it wrong?
  3. Run a claims validation experiment before the packaging brief. Test your top 3–5 claim combinations with 250+ consumers. The data will either confirm your instinct or save you from an expensive mistake.
  4. Use the results to brief your packaging designer. Instead of “we think ‘Plant-Based Protein’ should lead on pack,” you can say “40.4% of consumers preferred ‘11g Protein Per Bar’ — lead with that.” Data beats opinion in every design review.
  5. Build validation into your stage gate template. Add a consumer validation checkpoint between Stage 1 and Stage 2, and between Stage 3 and Stage 4. Make it a gate requirement, not an optional step.
  6. Test price separately from claims. Run a price sensitivity experiment after locking your claims. One variable at a time gives you clean data; mixing variables gives you noise.

Common Anti-Patterns in Stage Gate Validation

Even brands that do include consumer validation in their NPD process often make structural mistakes that reduce the value of the data they collect:

Validating too late

The most common anti-pattern is running consumer research at Stage 4 (production validation) instead of Stage 1 (concept). By Stage 4, the formulation is locked, the packaging brief is written, and the claims have been chosen. At that point, consumer data can only confirm or deny — it can’t redirect. If the data says your lead claim is wrong, changing it means restarting the packaging process, delaying launch, and potentially renegotiating with suppliers. Validation at Stage 1 avoids all of this because nothing has been committed yet.

Using qualitative methods for quantitative questions

Focus groups are excellent for understanding how consumers talk about a category, what language resonates, and what emotional associations they hold. They are not reliable for ranking claims or testing price sensitivity. A focus group of 8 people cannot tell you whether “11g Protein Per Bar” outperforms “High Protein Snack” — the sample is too small and conformity bias distorts the results. Quantitative questions need quantitative methods: discrete choice experiments with 250+ respondents.
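The 250+ threshold isn’t arbitrary. A two-proportion z-test shows why a gap like the 40.4% vs 17.6% result cited earlier is comfortably detectable at n = 250 but statistically meaningless in an 8-person group. This sketch uses the pooled-proportion approximation for two shares from equal-sized samples (shares from the same forced-choice task are correlated, so treat it as a rough check, not an exact test):

```python
import math

def two_proportion_z(p1, p2, n):
    """Approximate z statistic for the gap between two preference shares,
    each estimated from a sample of size n (pooled-variance form)."""
    p_pool = (p1 + p2) / 2
    se = math.sqrt(2 * p_pool * (1 - p_pool) / n)
    return (p1 - p2) / se

# The claims gap cited in this article, evaluated at n = 250:
z = two_proportion_z(0.404, 0.176, 250)
print(f"z at n = 250: {z:.1f}  (|z| > 1.96 is significant at the 95% level)")

# The same gap at focus-group scale:
z_small = two_proportion_z(0.404, 0.176, 8)
print(f"z at n = 8: {z_small:.1f}  (indistinguishable from noise)")
```

The same 23-point gap that is overwhelming evidence at n = 250 doesn’t come close to significance at n = 8 — which is the quantitative reason focus groups can’t rank claims.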

Testing too many variables at once

When brands do run quantitative tests, they often try to answer every question in one experiment — testing claims, pricing, and packaging format simultaneously. This makes the data uninterpretable because you can’t isolate which variable drove the preference shift. The principle is: one experiment, one decision type. Test claims first. Lock the winners. Then test price. Then test format. Three clean experiments beat one noisy one. For more on this principle, see our concept testing guide.

Frequently Asked Questions

What is a stage gate process in food product development?

A stage gate process breaks new product development into discrete phases — typically Concept, Business Case, Development, Validation, Launch, and Post-Launch Review. Each stage ends at a “gate” where the project is evaluated and either approved to continue, sent back for revision, or killed. The framework forces discipline and reduces risk by ensuring each critical dimension (market viability, financial viability, product quality, manufacturing viability) is validated before committing further resources. The gap in most implementations is consumer validation of positioning decisions — claims, pricing, and packaging hierarchy.

At which stage should I run consumer validation?

The highest-leverage point is Stage 1 (Concept Validation), before formulation begins — because every downstream decision builds on the positioning you set here. At Stage 1, changing direction costs nothing; at Stage 4, it costs tens of thousands of dollars. The second most valuable point is between Stages 2–3 (Development), when formulation trade-offs emerge and you need to resolve tensions between competing claims — for example, whether switching to a premium ingredient justifies the cost increase in consumer perception. Post-launch validation (Stage 6) is valuable for line extensions, repositioning, and range review preparation. Ideally, validation becomes a routine gate requirement at all three points, with each experiment building on the learnings from the previous one.

How long does a consumer validation experiment take?

With pre-launch testing using modelled shoppers, a typical claims hierarchy or price sensitivity experiment takes under 2 hours from setup to results. Traditional methods (focus groups, online panels) take 4–8 weeks including recruitment, fieldwork, and analysis. The speed difference is what makes it practical to validate at every gate rather than reserving validation for one or two launches per year.

What’s the difference between sensory testing and consumer validation?

Sensory testing evaluates whether consumers like the taste, texture, and aroma of a physical product — it requires a finished product sample. Consumer validation (as discussed in this article) evaluates how consumers respond to the positioning, claims, and pricing of a product concept — it does not require a physical product. Both are important, but they answer different questions: sensory testing asks “does it taste good?” while consumer validation asks “would you choose this over the alternatives on shelf?” They should complement each other in the NPD process, not substitute for each other.

Can I use consumer validation for existing products, not just new launches?

Yes — and many brands get more immediate ROI from validating existing products than new ones. If you have a product already on shelf that’s underperforming, a claims experiment can reveal whether the positioning is the problem rather than the product itself. Testing alternative lead claims, dietary callout combinations, or price points for an existing product can identify quick wins that improve shelf velocity without requiring any reformulation. Common post-launch validation questions include: “Should we change the lead claim from ‘organic’ to ‘high protein’?”, “Would adding a ‘less than 8g sugar’ callout improve purchase intent?”, and “Is our price ceiling higher than we think?” All of these can be answered with a single discrete choice experiment before your next range review meeting.

How many consumers do I need for a reliable validation experiment?

250 respondents gives you stable preference shares across 3–5 claim or price variations. At this sample size, you can detect the kind of meaningful gaps we see in real experiments (e.g., 40.4% vs 17.6% for protein claims). At 500, you can segment results by demographics or consumer attitudes. For most stage gate decisions at founder-led F&B brands, 250 is the right starting point. For detailed cost comparisons at different sample sizes, see our market research cost benchmarks.

What types of positioning decisions can be validated with discrete choice experiments?

Any decision where a consumer is choosing between defined alternatives can be validated with discrete choice experiments. Common applications in F&B include: claims hierarchy (which front-of-pack message should lead — the most common use case), price sensitivity (mapping the demand curve across 3–4 price points), flavour extensions (which new variant has the highest incremental appeal without cannibalising the core), format testing (single-serve vs multipack vs resealable pouch), multipack composition (which combination of flavours in a variety pack maximises appeal), and messaging/positioning (which tagline or positioning statement resonates with the target consumer). The one rule that applies across all these types: test one decision type per experiment, and hold every other variable constant. Mixing decision types in a single experiment — testing claims and price simultaneously, for instance — produces data you can’t interpret cleanly.


Make consumer validation a gate requirement, not an afterthought. Saucery runs discrete choice experiments with census-calibrated AI shoppers — test claims, pricing, and positioning at every stage of your NPD process. Results in under 2 hours. Start your experiment at saucery.ai


About the author: Andrew Mac is the founder of Saucery, a pre-launch testing platform for food and beverage brands. He works with founder-led F&B companies in the $5M–$250M range to validate product concepts, claims, and positioning using modelled shoppers before they commit to production. Connect with Andrew on LinkedIn.

Subscribe for F&B Consumer Insights

Data-driven insights on food & beverage consumer preferences, straight to your inbox.