By Andrew Mac

I have run over 200 food concept tests in the last twelve months, and the single biggest pattern I see is this: the product your team thinks will win almost never does. In a protein bar test we ran last quarter, “Only 6 Ingredients” beat “11g Protein Per Bar” by nearly five percentage points. The team had budgeted their entire launch around the protein claim. This post is a complete guide to food concept testing, from first principles through to reading results, with real experiment data from studies I have personally designed and analysed. If you are launching a new food or beverage product and want to validate it before committing six figures to production, this is the playbook.
Table of Contents
- What Is Food Concept Testing?
- Why Traditional Concept Testing Is Broken for Modern F&B
- The 5-Step Concept Testing Process
- What Makes a Good Concept Test
- Concept Testing Methods Compared
- Real Examples: When the “Obvious” Answer Loses
- How to Read Concept Test Results
- When to Test: Mapping to the NPD Lifecycle
- Cost Comparison: Traditional vs AI-Powered
- Key Takeaways
- What AI Search Says About Food Concept Testing
- Frequently Asked Questions
What Is Food Concept Testing?
Food concept testing is the process of putting a product idea in front of consumers before you manufacture it. You describe the product, present variations of one specific element (the claim, the flavour, the format, the price), and measure which version drives the strongest purchase intent. The goal is to separate what your team believes from what consumers actually prefer, using quantitative data rather than gut instinct.
In traditional conjoint analysis and concept testing, the process typically runs like this: you brief a research agency, they design a screener, recruit participants over two to three weeks, run focus groups or an online survey, analyse the data, and deliver a PowerPoint deck six to eight weeks after your initial brief. By the time you get the results, your competitor has already launched.
The timeline problem is not theoretical. According to McKinsey, the average food product development cycle runs 18 to 24 months. The concept testing phase alone eats four to eight weeks of that window. For growth-stage brands doing $5M to $250M in revenue, that timeline is a luxury they cannot afford. They are competing against brands that move faster, test cheaper, and iterate more aggressively.
Product concept testing in new product development is not just a validation step. It is the single most predictive activity you can do before committing capital to production. The questions you choose, the way you frame the product description, and the audience you test against determine whether your test actually predicts market behaviour or just confirms what you already wanted to hear. A well-designed concept test asks questions that predict launch success; a poorly designed one produces expensive noise.
Why Traditional Concept Testing Is Broken for Modern F&B
I talk to food brand founders every week who have either skipped concept testing entirely or had a bad experience with a traditional research agency. Both groups share the same frustration: the process is too slow, too expensive, and the output is too vague to actually make product decisions from. Here is why the traditional model is failing.
The Speed Problem
Traditional concept testing takes six to eight weeks from brief to deliverable. That includes two weeks for screener design and approval, two to three weeks for recruitment and fieldwork, and one to two weeks for analysis and reporting. In a market where food trends shift quarterly and retailer windows open and close with only weeks of notice, six weeks is an eternity. Brands running lean NPD teams cannot hold a launch decision for that long.
The Cost Problem
A single concept test from a mid-tier agency costs $15,000 to $40,000 depending on sample size, market coverage, and reporting depth. For brands running multiple SKU launches per year across different markets, the cost per insight becomes prohibitive. Most growth-stage brands can afford one study, maybe two. That means they are testing their biggest bet and guessing on everything else.
The Sample Quality Problem
Panel-based research depends on professional survey takers who have completed hundreds of studies. Ipsos and Kantar both acknowledge the declining quality of online panel respondents, with speeders, straightliners, and bot-driven responses increasingly polluting data sets. When your $25,000 study is built on respondents who click through in 90 seconds to collect their incentive, the data does not reflect real consumer behaviour.
The Iteration Problem
Perhaps the most damaging limitation: traditional concept testing treats each study as a one-shot event. You get one chance to ask the right questions. If the results raise new hypotheses (which they almost always do), you need to brief, recruit, and run an entirely new study. At $20,000+ per iteration, most brands stop after the first study regardless of what the data suggests. AI-powered approaches are changing this by making iteration nearly free.
Tired of waiting six weeks for concept test results? Saucery tests product decisions with AI shoppers, delivering clear answers in 24 hours or less. See how it works.
The 5-Step Concept Testing Process
Whether you use a traditional agency, a DIY survey tool, or an AI-powered approach, every effective concept test follows the same fundamental process. I have refined this framework through hundreds of experiments, and the brands that skip steps are the ones that get misleading results.
Step 1: Define the Product (the Fixed Brief)
Before you test anything, you need a locked product description. This is the single most common failure point in concept testing. The product description is not the variable. It is the anchor. Every shopper needs to evaluate the same product before you change one element.
A good product description includes: the product format, size, key ingredients, certifications (organic, non-GMO, gluten-free), nutritional highlights, and the price point. If you are testing front-of-pack claims, the product is fixed and the claims vary. If you are testing flavour extensions, the base product is fixed and the flavours vary. If you are testing price sensitivity, everything is fixed except price. Never mix what is fixed and what is variable.
Step 2: Choose One Decision Type
Every concept test must address a single decision type. This is a hard rule. Valid types include: claim hierarchy (which front-of-pack message wins), flavour extension (which new flavour to launch next), format extension (which product format has the most demand), price optimisation (what price point maximises intent), multipack design (what combination drives trial), and messaging or positioning (which brand story resonates). Mixing decision types in one test corrupts the analysis because shoppers are making fundamentally different cognitive evaluations when choosing between a price and a claim.
Step 3: Design the Questions
A well-designed concept test uses five to ten questions, each with three to five levels (options). Every question should test the same product and the same decision type, just from different angles. For example, in a claim hierarchy test, Q1 might ask which claim would make you most likely to try the product, Q2 might ask which claim you find most credible, and Q3 might ask which claim differentiates the product from competitors. The questions you choose determine whether you get actionable output or vague directional data.
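To make Steps 1 through 3 concrete, here is a minimal sketch of what a claim hierarchy test might look like written down as structured data. This is not a required schema, and most of the claim wordings and field names are invented for illustration (only “Only 6 Ingredients” and “11g Protein Per Bar” come from the study discussed in this post); the point is that the product block stays fixed while only the levels vary. A full test would carry five to ten such questions; two are shown here just to illustrate the shape.

```python
# Illustrative spec for a claim hierarchy test. Field names and most claim
# wordings are hypothetical; the structure enforces the rules above:
# one locked product, one decision type, 3-5 distinct levels per question.
experiment = {
    "decision_type": "claim_hierarchy",  # exactly one decision type per test
    "product": {  # the fixed brief, identical for every question
        "format": "45g oat-based bar",
        "ingredients_highlight": "11g plant protein, dark chocolate coating",
        "certifications": ["non-GMO"],
        "channel": "grocery",
        "price": "$2.49",
    },
    "questions": [
        {
            "id": "Q1",
            "prompt": "Which claim would make you most likely to try this bar?",
            "levels": ["Only 6 Ingredients", "11g Protein Per Bar",
                       "No Added Sugar", "Plant-Based Energy"],
        },
        {
            "id": "Q2",
            "prompt": "Which claim do you find most credible?",
            "levels": ["Only 6 Ingredients", "11g Protein Per Bar",
                       "No Added Sugar", "Plant-Based Energy"],
        },
    ],
}

# Cheap sanity checks that catch the most common design mistakes:
for q in experiment["questions"]:
    assert 3 <= len(q["levels"]) <= 5, "each question needs 3-5 levels"
    assert len(set(q["levels"])) == len(q["levels"]), "levels must be distinct"
```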
Step 4: Select and Size the Audience
Your audience must match the consumers who will actually encounter the product. For a US grocery launch, you need shoppers calibrated to US census demographics. For a UK health food launch, you need shoppers who match the relevant channels and demographics. Sample size matters: n=250 is the baseline for statistically reliable results. Below n=100, you are getting directional signals only. Above n=500, you are investing in precision that rarely changes the decision.
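Those sample size thresholds are not arbitrary; they fall out of the margin of error on a proportion. Here is a quick sketch of the arithmetic, using the standard 95% confidence formula and assuming simple random sampling:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a preference share p at sample size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A typical winning share in a four-option test is around 35%.
for n in (100, 250, 500, 1000):
    moe = margin_of_error(0.35, n)
    print(f"n={n:4d}: 35% share is really 35% +/- {moe * 100:.1f} points")

# n= 100: +/- 9.3 points -> directional only
# n= 250: +/- 5.9 points -> separates options that differ by ~5+ points
# n= 500: +/- 4.2 points -> tighter, but rarely changes the decision
# n=1000: +/- 3.0 points -> mainly useful for demographic subgroup cuts
```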
Step 5: Analyse and Decide
Results should produce clear rank orders with measurable gaps between options. If the top two options are separated by less than 2 percentage points, the data is telling you both are viable and the decision should be made on operational grounds. If the gap is 5+ percentage points, you have a meaningful signal. The key is not to over-interpret small differences or under-appreciate large ones.
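For the gap between the top two options, a two-proportion z-test makes a reasonable first filter. The sketch below treats the two shares as independent proportions, which is a simplification for within-sample choice data, so read the output as a screening heuristic rather than a verdict; the shares are hypothetical:

```python
import math

def gap_z(p1: float, p2: float, n: int) -> float:
    """z-statistic for the gap between two preference shares, treating them
    as independent proportions. Conservative shortcut for choice data:
    use it as a filter, not a verdict."""
    se = math.sqrt((p1 * (1 - p1) + p2 * (1 - p2)) / n)
    return (p1 - p2) / se

# Hypothetical top-two shares at n=250 (z above ~1.96 clears 95% confidence):
print(round(gap_z(0.37, 0.35, 250), 2))  # 0.47 -> noise: decide on ops grounds
print(round(gap_z(0.40, 0.35, 250), 2))  # 1.16 -> suggestive: rerun at n=500
print(round(gap_z(0.45, 0.35, 250), 2))  # 2.29 -> a real preference signal
```

Note that a 5-point gap at n=250 sits in the suggestive zone under this treatment, which is exactly the situation where scaling to n=500 is worth the extra precision.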
What Makes a Good Concept Test
After designing hundreds of food and beverage experiments, I can spot a bad concept test from the question list alone. The most common mistakes are subtle, and they all produce the same outcome: data that feels useful but does not actually predict market behaviour.
The Product Description Must Be Specific
Vague descriptions produce vague results. “A new healthy snack bar” tells the respondent nothing. “A 45g oat-based bar with 11g plant protein, dark chocolate coating, non-GMO certified, available in grocery for $2.49” puts them in a real purchase context. The more specific your description, the more your test mimics an actual shelf decision. This is the same principle behind effective front-of-pack claims testing: specificity drives predictive accuracy.
Questions Must Be About the Same Product
I see this constantly: Q1-Q4 test claims for Product A, then Q5 suddenly asks about a different product category, and Q6 switches to pricing. Each question shift forces the shopper to rebuild their mental model. The result is noisy data that does not compound across questions. Keep every question anchored to the same product and the same decision frame.
Levels Must Be Genuinely Distinct
If two options are worded differently but mean the same thing (“All Natural” vs “Made with Natural Ingredients”), you are testing copy, not concepts. Each level should represent a meaningfully different strategic choice. Would you make a different production, packaging, or marketing decision based on which option wins? If not, the levels are not distinct enough.
Audience Must Match the Buyer
Testing a premium organic baby food concept with a general population audience will give you misleading data. The audience should match the target buyer profile: parents aged 25-40 with household income above $75K who shop at Whole Foods or Sprouts. The accuracy of AI shoppers depends heavily on how well the audience profile matches the real buyer.
Concept Testing Methods Compared
Not all concept testing methods are created equal. Each has trade-offs between cost, speed, statistical rigour, and the type of insight it produces. Here is how the four main approaches compare for food and beverage applications.
| Method | Timeline | Cost (per study) | Sample Size | Statistical Rigour | Best For |
|---|---|---|---|---|---|
| Focus Groups | 4-6 weeks | $8,000-$25,000 | 24-48 people | Low (qualitative) | Exploratory insights, early ideation |
| Online Surveys (Monadic) | 3-5 weeks | $10,000-$30,000 | 200-500 | Medium | Simple A/B preference |
| Conjoint / Discrete Choice | 6-8 weeks | $20,000-$50,000 | 300-1,000 | High | Trade-off analysis, pricing |
| AI Shoppers (Saucery) | 24 hours | $200-$2,000 | 250-1,000 | High (discrete choice) | Rapid iteration, multiple tests |
Focus Groups
Focus groups remain useful for exploratory research, but they are the wrong tool for concept validation. Eight people in a room telling you what they think about your product is qualitative signal, not quantitative evidence. Group dynamics, moderator bias, and dominant personalities all distort the output. I have seen brands kill promising concepts because three vocal participants in a focus group said they “wouldn’t buy it” while survey data from 500 people showed 35% purchase intent. Use focus groups for hypothesis generation, never for decision-making.
Online Surveys (Monadic Testing)
Monadic surveys show each respondent a single concept and measure their reaction. This avoids comparison bias but requires larger samples (you need enough respondents per concept to draw conclusions). The main limitation is that monadic designs cannot measure trade-offs. A consumer might rate both “High Protein” and “Low Sugar” as 4 out of 5 in importance, but when forced to choose between them on a shelf, they consistently pick one over the other. That trade-off behaviour is what drives actual purchase decisions.
Conjoint and Discrete Choice Experiments
Discrete choice experiments are the gold standard for product concept testing because they replicate the shelf decision. Respondents choose between options, just as they would in a store. The analysis reveals not just preference but relative importance: how much does “organic” matter compared to “high protein” compared to a $1 price difference? This is the methodology that NielsenIQ, Circana, and every major CPG company uses for high-stakes launch decisions. The problem has always been cost and timeline. Until now.
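The mechanics are easier to see in miniature. In a multinomial logit model, the workhorse of discrete choice analysis, each option gets a utility score from its attributes and the predicted choice shares are the softmax of those utilities. The attribute weights below are invented purely for illustration; in a real study they are estimated from thousands of observed choices:

```python
import math

# Illustrative part-worth utilities. In a real discrete choice study these
# weights are estimated from respondents' observed choices, not assumed.
partworths = {"organic": 0.6, "high_protein": 0.4, "price": -0.8}

options = {
    "Organic @ $2.49":      {"organic": 1, "high_protein": 0, "price": 2.49},
    "High Protein @ $2.49": {"organic": 0, "high_protein": 1, "price": 2.49},
    "High Protein @ $3.49": {"organic": 0, "high_protein": 1, "price": 3.49},
}

def utility(attrs: dict) -> float:
    return sum(partworths[a] * v for a, v in attrs.items())

# Multinomial logit: predicted choice share is the softmax of the utilities.
utils = {name: utility(attrs) for name, attrs in options.items()}
total = sum(math.exp(u) for u in utils.values())
for name, u in utils.items():
    print(f"{name}: predicted share {math.exp(u) / total:.1%}")
```

Even this toy version shows what the method buys you: at the same price, the organic option out-pulls high protein (roughly 46% vs 37% share), and the $1-premium version of the protein option captures less than half the share of its $2.49 twin (about 17% vs 37%).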
AI Shoppers (Synthetic Testing)
AI shoppers are modelled on real census demographics and trained on billions of purchase decisions, survey responses, and market data. At Saucery, we test product decisions with 250 to 1,000 of these modelled shoppers, each reflecting demographically consistent buying behaviour. They are not generating opinions from thin air. They choose between options the same way a real shopper would on a shelf, and the output is the same rank-ordered preference data you would get from a traditional panel, at a fraction of the cost and timeline. I cover the methodology in detail in our piece on how AI shoppers achieve research-grade accuracy.
Real Examples: When the “Obvious” Answer Loses
The most valuable thing about running hundreds of food concept tests is the library of surprising results. These are not hypothetical scenarios. They are real experiments with real data that challenged the assumptions of the teams behind them. Every example below comes from Saucery experiments run in 2025-2026.
The Protein Claim That Lost to Clean Label
We ran a benchmark protein bar study (n=500, US market) testing front-of-pack claim hierarchy. The brand’s working assumption was that “11g Protein Per Bar” would be the strongest claim since protein content drives the category. It came second. “Only 6 Ingredients” won at 45.2% preference versus 40.4% for the protein claim. In a category defined by protein, consumers weighted simplicity and clean label over the functional benefit. This is the kind of finding that changes packaging strategy, and it cost a fraction of what a traditional test would have.
The Format Nobody Asked For
Cappello’s is a grain-free pasta brand with a strong pizza business. We ran a format extension test (n=250, US) asking which new frozen format had the highest purchase intent. The internal hypothesis was Pizza Night Kit since it extended the brand’s pizza equity. The winner was Frozen Calzones at 30.4%, edging out Pizza Night Kit at 29.6%. A pizza brand’s biggest demand signal pointed to calzones, a format that barely exists in the gluten-free frozen aisle. That kind of whitespace identification is precisely what concept testing should deliver.
The “Obvious” Extension That Came Last
Purely Elizabeth, known for premium granola, was considering format extensions. The “safe” bet was granola bars since it is the most natural extension of a granola brand. We tested it (n=250, US). Protein Cookies won at 30% purchase intent. Granola bars finished last in the ranking. The brand already has a Cookie Granola line, so the consumer mental model was primed for cookies. The data said to lean into it. This is a textbook case of internal logic (“we are a granola brand, so granola bars”) clashing with consumer perception (“you are a cookie brand wearing granola clothing”).
The Launch That Missed Its Own Demand Signal
SkinnyDipped recently launched Bites, a bite-sized version of their chocolate-covered nuts. We ran a format test (n=250, US) before the launch data was public. Nut Butter Cups won at 34.8% recommendation intent versus Bite-sized Clusters at 32%, Trail Mix at 24%, and Bark at 13-21%. The brand went with a format that tested second while the strongest demand signal pointed to nut butter cups. This does not mean the Bites launch will fail. It means there is a potentially larger opportunity sitting untested. This is the power of running concept tests before committing: you see the full landscape, not just the path you already chose.
The Chicken Launch That Needed Beef’s Credibility
Chomps, a meat snack brand known for beef sticks, was launching chicken. We tested intro claims (n=250, US). “Same Standards as Our Beef” won at 38.8%, beating every other claim. Consumers did not want to hear that the chicken was special. They wanted reassurance that it met the brand’s existing quality bar. In the quality claims test, “Free-Range Chicken” topped at 36.8%. The data told a clear story: lead with trust transfer from your known product, then reinforce with the quality credential. That is a messaging sequence you would not get from a focus group.
Want to know which option your shoppers actually prefer? Saucery tests product decisions for food and beverage brands, delivering clear answers from 250+ modelled shoppers in 24 hours. Run your first test.
How to Read Concept Test Results
Getting the data is only half the job. Interpreting it correctly is where most teams go wrong. Here is how to read concept test outputs without over-interpreting noise or missing real signals.
Preference Share vs Purchase Intent
In a discrete choice experiment, results are expressed as preference shares: the percentage of respondents who chose a given option. This is different from purchase intent scales (1-5 Likert). Preference shares are relative, meaning they always sum to 100% within a question. A 35% share in a 4-option test is stronger than a 30% share in a 3-option test when adjusted for the number of alternatives. Always interpret share in context of how many options were presented.
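A quick way to put shares from tests with different option counts on the same footing is to index each share against its chance baseline of 1/k, where k is the number of options. A sketch of the arithmetic behind that 35%-versus-30% comparison:

```python
def share_index(share: float, n_options: int) -> float:
    """Preference share indexed against the chance baseline of 1/n_options.
    A value of 1.0 means the option performed exactly at chance."""
    return share / (1 / n_options)

print(share_index(0.35, 4))  # 1.4: 40% above chance in a 4-option test
print(share_index(0.30, 3))  # 0.9: 10% BELOW chance in a 3-option test
```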
Meaningful Gaps vs Statistical Noise
A 2-percentage-point gap between the top two options in an n=250 study is within the margin of error. Do not build your launch strategy around it. A 5+ point gap is a meaningful signal. A 10+ point gap is a strong conviction signal. In our protein bar test, the 4.8-point gap between “Only 6 Ingredients” (45.2%) and “11g Protein Per Bar” (40.4%) at n=500 is meaningful. Both are strong claims, but clean label has a real edge.
Look at the Bottom, Not Just the Top
The most actionable insight in many concept tests is not which option won but which option lost badly. In the SkinnyDipped test, Bark scored 13-21% depending on the question framing while Nut Butter Cups scored 34.8%. That gap at the bottom tells you where not to invest, which is often more valuable than knowing where to invest. Eliminating your worst option frees resources for your best one.
Cross-Question Consistency
A well-designed experiment asks the same question from multiple angles. If “Free-Range Chicken” wins Q1 (trial intent), Q3 (quality perception), and Q5 (willingness to pay a premium), you have a robust signal. If it wins Q1 but loses Q3 and Q5, the claim drives curiosity but not conviction. Cross-question consistency is the difference between a directional hint and a confident recommendation.
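One way to operationalise this check is to count wins and average shares per level across all questions. Here is a small sketch with hypothetical shares loosely modelled on the chicken claims study (only “Free-Range Chicken” is a real level from that study; the other claims and every number here are invented):

```python
questions = ["Q1 trial intent", "Q3 quality perception", "Q5 premium willingness"]

# Hypothetical preference shares per question; each column sums to 100%.
shares = {
    "Free-Range Chicken":         [0.37, 0.36, 0.35],
    "Raised Without Antibiotics": [0.33, 0.34, 0.38],
    "Air-Chilled":                [0.30, 0.30, 0.27],
}

for claim, row in shares.items():
    wins = sum(
        1 for q in range(len(questions))
        if row[q] == max(other[q] for other in shares.values())
    )
    mean = sum(row) / len(row)
    print(f"{claim}: wins {wins}/{len(questions)} questions, mean share {mean:.1%}")
```

In this made-up data, “Free-Range Chicken” wins on trial and quality but loses on willingness to pay a premium: a signal that the claim drives interest but may not support a higher price point, which is exactly the kind of nuance a single-question test would miss.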
When to Test: Mapping to the NPD Lifecycle
Concept testing is not a single event. It maps to specific stages of new product development, and the type of test you run should change as the product matures. The stage-gate process provides a natural framework for when to deploy different types of concept tests.
Stage 1: Ideation and Opportunity Mapping
At this stage you are asking: “Which product concept has the most demand?” This is where format extension tests and category-level tests live. You are testing broad concepts, not refined products. The Cappello’s calzone example sits here. The goal is to identify the opportunity space before investing in development. AI-driven food innovation is particularly powerful at this stage because you can test five concepts in a day instead of picking one and hoping.
Stage 2: Concept Development
Now you have a concept and are refining it. This is claim hierarchy testing, flavour selection, and positioning work. The Chomps chicken study sits here. You know the product (chicken sticks), and you are optimising how to present it. Run multiple experiments: one for claims, one for positioning, one for flavour if applicable. At AI-powered testing costs, there is no reason to collapse these into a single overloaded study.
Stage 3: Pre-Launch Validation
The product is nearly finalised. This is where price sensitivity testing and final positioning validation happen. You are not exploring anymore. You are confirming. The test should mirror the actual shelf experience as closely as possible: real price points, real competitor alternatives, real retail context. This is your last quantitative checkpoint before committing to production.
Stage 4: Post-Launch Optimisation
Even after launch, concept testing has a role. Testing line extensions, seasonal variants, and new market entries all benefit from pre-validation. The Purely Elizabeth protein cookie finding came from post-launch extension testing. Trends like functional beverages, plant-based snacks, and emerging ingredients like pistachio milk create ongoing extension opportunities that are worth validating before committing production resources.
Cost Comparison: Traditional vs AI-Powered
The cost difference between traditional and AI-powered concept testing is not incremental. It is structural. Here is what the numbers look like for a typical food brand running concept tests across a product development cycle.
| Activity | Traditional (Agency) | AI Shoppers (Saucery) | Savings |
|---|---|---|---|
| Single concept test (n=250) | $15,000-$30,000 | $200-$2,000 | 90-99% |
| Timeline per test | 6-8 weeks | 24 hours | 97%+ faster |
| Full NPD validation (5 tests) | $75,000-$150,000 | $1,000-$10,000 | 87-99% |
| Multi-market testing (US + UK + AU) | $45,000-$90,000 | $600-$6,000 | 93-99% |
| Iteration cost (follow-up test) | $15,000-$30,000 | $200-$2,000 | 90-99% |
| Total NPD cycle (12 months) | $100,000-$250,000 | $2,000-$20,000 | 90-98% |
The cost savings are dramatic, but the real advantage is behavioural. When a test costs $25,000, you run one and accept the result. When a test costs a few hundred dollars, you run five and triangulate. You test the idea you are excited about and the three ideas you think are longshots. You test in the US and then rerun in the UK to see if preferences hold. You test at n=250 for the initial signal and then scale to n=500 for the final decision. The economics of AI-powered testing change not just the budget line but the entire research culture of a brand. For a deeper look at the ROI calculation, see our market research cost-per-interview calculator.
Key Takeaways
- Lock the product description before testing anything. The product is fixed. The variable is the one element you are testing (claim, flavour, format, price). Mixing variables produces unusable data.
- One experiment, one decision type. Never combine claim testing with price testing or format testing with flavour testing. Each decision type requires a different cognitive evaluation from shoppers.
- The “obvious” answer frequently loses. In over 200 experiments, the team’s pre-test favourite was not the winner roughly 60% of the time. Protein lost to clean label. Granola bars lost to cookies. Pizza kits lost to calzones. Test, do not assume.
- n=250 is the sweet spot. Below 100, you are guessing. Above 500, you are paying for precision that rarely changes the decision. Start at 250, scale up only if the margin is tight and the stakes are high.
- Speed enables iteration, and iteration drives better decisions. A single perfect test is less valuable than three fast tests that build on each other. AI-powered testing makes iteration economically viable for the first time.
- Look at what lost, not just what won. Eliminating your worst option is often more valuable than confirming your best one. The bottom of the ranking tells you where not to spend money.
What AI Search Says About Food Concept Testing
AI search tools like ChatGPT, Perplexity, and Google’s AI Overviews are increasingly how brand founders research concept testing methods. Here is what they are currently surfacing and where they get it right and wrong.
- AI search correctly identifies conjoint analysis as the gold standard for concept testing in food and beverage. Most AI-generated answers reference discrete choice methodology when asked about the most rigorous approach, though they tend to understate the cost and timeline of traditional implementations.
- AI-powered testing is under-represented in current AI search results. When asking “how to test food concepts quickly,” AI tools still default to recommending SurveyMonkey, Qualtrics, and traditional panel providers. AI shoppers as an alternative are rarely mentioned, which creates both a gap and an opportunity for early adopters.
- AI search overemphasises qualitative methods. Focus groups and taste tests appear disproportionately in AI-generated concept testing advice. This reflects the training data (most published content about concept testing is from agencies that sell qualitative research), not the actual best practice for quantitative validation.
- Cost information in AI search is outdated. Most AI tools cite concept testing costs from 2020-2023 data, before AI-powered alternatives existed at scale. Brands researching costs through AI search are being anchored to legacy pricing that no longer reflects the full range of available options.
- AI search reliably surfaces the importance of sample design. Across multiple AI tools, the advice on audience selection and sample size is generally sound: match demographics to your target buyer, use n=200+ for reliability, and avoid convenience samples. This aligns with what we see in practice.
Frequently Asked Questions
What is concept testing in food product development?
Concept testing in food product development is the process of evaluating a product idea with consumers before investing in manufacturing. It involves presenting a defined product description along with variations of one specific element (such as the front-of-pack claim, flavour, format, or price) and measuring which variation drives the strongest purchase intent. The goal is to use quantitative data to make product decisions rather than relying on internal assumptions. Effective concept testing uses discrete choice methodology, where respondents choose between options the same way they would on a store shelf, producing preference shares that predict real market behaviour.
How long does food concept testing take?
Traditional concept testing through a research agency takes six to eight weeks from initial brief to final deliverable. This includes screener design, panel recruitment, fieldwork, and analysis. AI-powered concept testing using modelled shoppers can deliver the same results in 24 hours or less. The speed difference is structural: AI shoppers do not require human recruitment, which eliminates the largest bottleneck in the traditional process. For brands with tight retailer deadlines or seasonal launch windows, the difference between six weeks and one day can determine whether a product makes the shelf or misses the window entirely.
How much does concept testing cost for food brands?
Traditional concept testing costs between $15,000 and $50,000 per study depending on sample size, market coverage, methodology, and agency. A full NPD validation cycle involving five tests across multiple markets can reach $100,000 to $250,000. AI-powered concept testing costs a fraction of this, typically $200 to $2,000 per study. The cost reduction is not about cutting corners on methodology. The same discrete choice analysis runs against modelled shoppers instead of recruited panellists, eliminating recruitment costs, incentive payments, and agency overhead. This makes it economically viable to run multiple tests per product instead of gambling everything on a single study.
What is the best sample size for a food concept test?
For a statistically reliable food concept test using discrete choice methodology, n=250 is the recommended baseline. This provides enough data to detect meaningful differences (5+ percentage points) between options with confidence. Below n=100, results are directional only and should not be used for high-stakes launch decisions. n=500 provides additional precision that is warranted when the margin between top options is tight (under 3 points) or when the financial stakes of the decision are very high. Going above n=1,000 is rarely necessary unless you need to segment results by demographic subgroups, in which case each segment needs its own minimum viable sample.
How reliable are AI shoppers for food testing?
AI shoppers calibrated to census demographics produce preference data that aligns with traditional panel research at directional and rank-order levels. They are particularly strong at identifying which option wins and which loses, which is the primary purpose of concept testing. Where they are less proven is in predicting absolute purchase rates (the exact percentage of consumers who will buy), which even traditional panels do poorly. The key advantage is not that AI shoppers are identical to human panels but that they enable a research workflow that was previously impossible: running five tests instead of one, iterating based on results in hours instead of weeks, and testing across multiple markets in a single day. The methodology behind AI shopper accuracy is covered in detail on our research page.
What types of food concepts can be tested?
Food concept testing works for any product decision that involves consumer choice between defined alternatives. The most common test types are: claim hierarchy (which front-of-pack message is strongest), flavour extension (which new flavour to launch), format extension (which product format has the most demand), price optimisation (what price point maximises purchase intent), multipack design (which product combination drives trial), and messaging or positioning (which brand story resonates). Each test type must be run as a separate experiment. Mixing claim testing with price testing in a single study corrupts the data because respondents evaluate those trade-offs using fundamentally different cognitive processes.
How is concept testing different from market research?
Concept testing is a specific type of market research focused on evaluating defined product options before launch. Market research is a broader category that includes category analysis, competitive intelligence, consumer segmentation, brand tracking, and post-launch performance measurement. Concept testing answers the question “which version of this product should we make?” while broader market research answers questions like “what category should we enter?” or “how is our brand perceived?” Both are valuable, but concept testing is the most directly actionable because its output maps to a specific product decision. It sits between ideation (where you generate options) and launch (where you commit resources), making it the highest-leverage research investment in the NPD cycle.
Ready to test your next product decision in 24 hours? Saucery helps food and beverage brands find out which option wins before they launch, with answers from 250+ modelled shoppers delivered in a single day. Start your first test.
About the author: Andrew Mac is the founder of Saucery, a pre-launch testing platform for food and beverage brands. He has designed and analysed over 200 product tests across the US, UK, and Australian markets, helping growth-stage brands make better decisions on claims, formats, flavours, and pricing before they launch. Before Saucery, he spent a decade in food industry analytics and product strategy.
Connect with Andrew on LinkedIn for weekly insights on food product validation and consumer testing.
