
10 Best AI Prompts for Market Research: Approaches That Actually Produce Specific, Actionable Output (2026)

Ten structured AI prompt frameworks that produce specific, actionable market research output — from customer segment analysis and sentiment mapping to A/B testing hypothesis generation.


Last updated: April 2026

This article was prepared by the GrowthX AI team, which builds growth engines for companies like Webflow, Ramp, and Lovable. We use AI-assisted strategy workflows across our client portfolio to speed up positioning, competitive analysis, and GTM planning. For more on building AI-native marketing systems, join AI-Led Growth.

--

Most growth marketers treat AI like a search engine with better grammar. They type "what are the trends in B2B SaaS?" and get back confident-sounding generalities that could apply to any company in any category. The output looks like research, but often says little a practitioner couldn't have written from memory.

The failure mode is usually straightforward: the prompt is too broad, and in many cases it is also missing the context the model needs to produce something specific to your market. As Kyle Poyar documented in his GTM newsletter, most deep research tools won't ask for context; they either make assumptions or stay generic. Jonathan Bland put it more bluntly: when the model has no context, no knowledge of your POV, and no idea of your messaging, the output won't match what the brand actually needs.

This guide covers 10 prompt approaches organized by the research job they are designed to do. Each one includes the full prompt structure, the inputs that make it work, and the conditions where it breaks down.

How We Evaluated These

We selected these approaches using four criteria applied across real research programs and practitioner-reported outcomes:

Output specificity: The prompt produces segment definitions, ranked themes, gap analyses, or experiment hypotheses grounded in your market, your ICP, or your product. The same prompt run by two different companies should return meaningfully different answers.

Reproducibility: Any operator can run this prompt and get structured output they can review and use without specialized prompt engineering skills, as long as they provide specific context.

Research integrity: The prompt includes structural constraints that push the model toward source-grounded analysis and away from hallucinated plausibility. It improves reliability compared with open-ended questions, though it remains imperfect.

Time-to-insight ratio: The approach saves meaningful hours compared to doing the same analysis manually, without sacrificing the depth required for decisions.

These criteria point to the same practical rule. The best prompts in this guide use source material, a clearly defined job, and a structured output format. That is also why we cut the common "summarize this industry report" prompt. Teams use it widely, but it only reflects the quality of the document you feed it. It also encourages people to skip the harder job of forming their own research questions.

TL;DR: Market Research Prompts At A Glance

If you already know the research job you need to do, this table gives you the fastest match. Use it to choose a prompt based on the input material you already have:

  • Customer Segment Analysis — Revealing hidden behavioral groups beyond demographics. Requires: Product description + customer data or CRM notes.
  • Customer Sentiment & Feedback Analysis — Structuring qualitative feedback chaos into prioritized themes. Requires: Reviews, support tickets, or social mentions.
  • Competitor Strategy Breakdown — Reverse-engineering competitor positioning from public sources. Requires: Competitor content + buyer language samples.
  • Market Trend Identification — Separating structural shifts from hype cycles. Requires: Industry signals you provide.
  • Pain Point & Needs Discovery — Uncovering unmet needs between solutions and jobs-to-be-done. Requires: Customer interviews or qualitative data.
  • Buyer Persona Development — Building data-informed personas faster than traditional methods. Requires: Segment profiles + customer examples.
  • Content Gap & Opportunity Analysis — Identifying white space competitors aren't covering. Requires: Competitor content inventory + search data.
  • Customer Journey Mapping — Mapping touchpoints and friction at each buying stage. Requires: Existing customer data or interview transcripts.
  • Market Sizing & Growth Potential — Building defensible TAM/SAM/SOM estimates from buyer constraints. Requires: Buyer definitions and budget parameters.
  • A/B Testing Hypothesis Generation — Turning data observations into testable experiments. Requires: Performance data or analytics exports.

1. Customer Segment Analysis Prompt

Use this prompt to identify behavioral customer groups that demographic filters miss. It lets you move from a broad label like "VP of Marketing at a mid-market company" to a more useful profile such as a team lead who inherited a tech stack they didn't choose and needs to show ROI within 90 days.

Traditional segmentation often stops at firmographics. This approach asks the model to cluster customers by what they were doing before they found you, what outcome drove the purchase, and what nearly killed the deal. Those three constraints push the output past generic job-title groupings into segments you can build campaigns around.

The prompt structure emphasizes role, instructions, and specific output guidance. The purpose statement also orients the model toward how you'll use the output.

The prompt

You are a B2B SaaS customer segmentation analyst. I'm going to give you our product description and data about our customers — CRM notes, deal notes, or customer descriptions. Identify distinct customer segments based on behavioral patterns, not demographics. For each segment, define the following: what they were doing or trying before finding us; the specific outcome that drove the purchase decision; what nearly caused them to abandon the buying process; and the common trigger event that initiated their search. Format each segment as a named profile built around those four dimensions. I'll use this output to prioritize campaign targeting and messaging. Here's our context: [paste product description + customer data]

This prompt is best used in the following situations:

Who it's right for: Teams that have a rough ICP definition but can't explain why some customers expand while others churn after three months.

Where it breaks down: The prompt needs detailed customer evidence. Vague CRM notes like "good fit, closed won" usually lead to vague segments, while deal notes with actual buyer language give you segment definitions you can act on.

Start with your richest deal notes and expand the dataset as segment patterns emerge.

2. Customer Sentiment And Feedback Analysis Prompt

Use this prompt to turn reviews, tickets, and social mentions into a sentiment map that shows where buyer frustration concentrates and where satisfaction peaks.

The method is a form of aspect-based sentiment analysis. Instead of labeling each review as positive, negative, or neutral, it asks the model to identify the specific themes buyers mention and classify sentiment at the theme level. A single G2 review might be positive about onboarding and negative about reporting. That distinction is where the useful insight lives.

The prompt

You are a customer insights analyst specializing in B2B SaaS. I'm going to give you a set of customer feedback — reviews, support tickets, survey responses, or social mentions. Analyze the feedback and do four things. Identify the 8-10 most frequently mentioned themes or topics. For each theme, classify the dominant sentiment as positive, negative, or mixed. Pull 2-3 direct quotes per theme that represent the sentiment in the customer's own language. Then rank themes by frequency of mention. Don't summarize the feedback. Extract the patterns and preserve the original language. Here's the source material: [paste]

The key structural constraint is "don't summarize — extract." Summarization strips out the buyer language that makes sentiment analysis useful for messaging and positioning work. Direct quotes preserve the exact phrasing your customers use. That language feeds directly into differentiation work downstream.

This prompt is best used in the following situations:

Who it's right for: Product marketing teams building messaging architecture from customer language, and CS teams identifying systemic friction patterns across the customer base.

Where it breaks down: AI sentiment classification works well for English-language reviews but loses nuance with sarcasm, industry jargon, and mixed-sentiment passages. Always spot-check the model's sentiment classifications against 10-15 source reviews before trusting the full output.
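The spot-check itself is simple enough to script: label a small sample yourself, then compare your labels to the model's. A minimal sketch, assuming you've collected both label lists (the function name and all labels below are illustrative placeholders, not real data):

```python
def agreement_rate(model_labels, human_labels):
    """Share of items where the model's label matches the human label."""
    if len(model_labels) != len(human_labels):
        raise ValueError("label lists must be the same length")
    matches = sum(m == h for m, h in zip(model_labels, human_labels))
    return matches / len(model_labels)

# Theme-level sentiment labels for the same 5 sampled reviews.
model = ["positive", "negative", "mixed", "negative", "positive"]
human = ["positive", "mixed", "mixed", "negative", "positive"]
print(f"agreement: {agreement_rate(model, human):.0%}")  # prints "agreement: 80%"
```

If agreement on your sample falls much below roughly 80-90%, treat the full output as directional at best and tighten the prompt's theme definitions before re-running.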

Run the spot-check before sharing the output with anyone who will make decisions from it.

3. Competitor Strategy Breakdown Prompt

Use this prompt to surface the gap between how competitors describe themselves and how buyers describe the problem they are trying to solve. That gap often holds the clearest positioning opportunities.

The approach compares two inputs. One is competitor messaging from their website, case studies, and sales collateral. The other is buyer language from reviews, interviews, and forum posts. The model identifies which buyer concerns competitor messaging actively addresses and which ones are missing or lightly covered.

The prompt

You are a competitive intelligence analyst for a B2B SaaS company. I'm going to give you two inputs: (1) competitor messaging — from their websites, case studies, and key claims; and (2) buyer language — from reviews, interviews, or forum posts describing the problem they're trying to solve. Compare the two. Show which buyer concerns are actively addressed in competitor messaging. Show which concerns appear frequently in buyer language but are absent or underrepresented in competitor messaging. Identify the dominant positioning angle each competitor uses, meaning what they want to be known for. Also note the specific language patterns competitors avoid. Format the result as a structured comparison table. Here's the competitor content: [paste]. Here's the buyer language: [paste]

Klue documents a similar battlecard-based analysis approach, including "Why We Win" and "Why We Lose" battlecards for competitive analysis and sales enablement. A clear battlecard structure alongside buyer-language analysis can organize competitive insights.

This prompt is best used in the following situations:

Who it's right for: Teams entering a competitive category or refreshing positioning after 12+ months without a meaningful update.

Where it breaks down: Screenshots and marketing pages usually produce shallow output. G2 reviews, case studies, and sales call transcripts produce better results. One r/ProductMarketing practitioner, Particular_Bet4865, reported that results were so inaccurate they stopped trying the use case entirely. The broader community finding is that AI tools work best for competitive analysis when you have clear frameworks rather than open-ended questions.

Update the inputs quarterly because competitor messaging shifts faster than most teams track.

4. Market Trend Identification Prompt

Use this prompt to sort genuine category shifts from noise.

Many trend-analysis prompts return lists of buzzwords. This one asks the model to classify signals into three buckets. It separates structural shifts that change how buyers make decisions, tactical trends that represent temporary behavior changes, and noise that does not predict much of anything. The definition of structural matters most. A signal only qualifies when it changes buyer decision criteria rather than simply reflecting the features people are talking about.

The prompt

You are a market analyst specializing in B2B SaaS. I'm going to give you a set of industry signals — recent press coverage, analyst reports, social posts, earnings call transcripts, or conference themes. Sort each signal into one of three buckets. Structural shifts are changes that affect how buyers evaluate and purchase solutions, not just what features get attention. Tactical trends are temporary behavior changes that may reverse within 12 months. Noise is coverage that isn't predictive of buyer behavior. For each structural shift, explain what buyer decision criteria it changes, what evidence supports the classification, and what the 12-month and 24-month implications are for a B2B SaaS company in this space. A signal only qualifies as structural if it changes buyer decision criteria. Here's the source material: [paste]

The time-horizon requirement reflects a practical prompt design consideration — making outputs time-bound. That is particularly relevant for B2B SaaS market research, where competitive landscapes shift rapidly and model training cutoffs can make insights stale without current retrieval.

This prompt is best used in the following situations:

Who it's right for: Teams building a content strategy or competitive analysis that needs to hold for 12+ months, not just the next quarter.

Where it breaks down: AI trend analysis is only as current as its training data. For fast-moving categories, supplement with recent source material you provide. Perplexity with its premium data sources like PitchBook and Wiley can help bridge this gap, but always cross-reference against primary sources.

Feed in the most recent signals you can find because the model cannot surface what it has not seen.

5. Pain Point And Needs Discovery Prompt

Use this prompt to uncover unmet needs by analyzing the gap between what current solutions provide and what customers are actually trying to accomplish. It separates feature requests from the jobs the buyer hired the product to do.

The approach asks the model to extract functional, emotional, and social jobs from source material, then map each job against how well existing solutions address it. The gaps between jobs and solutions point to product and positioning opportunities.

The prompt

You are a jobs-to-be-done researcher analyzing B2B SaaS buyer motivations. I'm going to give you customer interviews, support tickets, sales call transcripts, or review data. For each source, extract the functional jobs — the tasks the buyer is trying to complete. Extract the emotional jobs — how the buyer wants to feel during and after the process. Extract the social jobs — how the buyer wants to be perceived by colleagues or leadership. For each job, note how frequently it appears and pull the actual language buyers use to describe it. Then identify which jobs are well-served by current solutions mentioned in the data and which jobs represent gaps — frequently mentioned but poorly addressed. Rank gaps from largest to smallest based on frequency and urgency. Here's the source material: [paste]

This prompt produces some of the most useful output in this guide when you have 10-15 customer interviews to feed it. It produces much weaker output when you do not. CXL's guidance on AI-assisted ICP targeting emphasizes using AI to sharpen targeting through signal detection and faster analysis, while keeping human judgment central.

This prompt is best used in the following situations:

Who it's right for: Product marketing teams building or refreshing messaging architecture, and product teams deciding which capabilities to prioritize on the roadmap.

Where it breaks down: This prompt needs customer language from interviews, tickets, calls, or reviews. Without that material, the model falls back to category-level assumptions that apply to nearly every company in your space.

If you do not have interview transcripts yet, design the research first and come back with data worth analyzing.

6. Buyer Persona Development Prompt

Use this prompt to create data-informed personas that capture decision-making behavior rather than demographic filler. A persona built only on job titles and company sizes tells you very little about why someone buys or walks away.

Andy Crestodina documented the core approach: you can use AI to generate marketing personas and then interview them about triggers, hopes, and fears. But Crestodina also included a critical counterpoint from B2B content strategist Ardath Albee: "Andy, do you really know this is accurate? If you incorporate this perspective into your content, what's the ..." She remained skeptical of trusting AI outputs without careful review and validation with real customers.

The prompt structure we recommend addresses Albee's concern by grounding the persona in real customer data instead of asking the model to generate one from assumptions.

The prompt

You are a B2B persona strategist and market researcher. I'm going to give you data about a specific customer segment — CRM notes, interview excerpts, deal notes, or behavioral data. Create a detailed buyer persona that captures their current situation and what triggered their search. Include the specific outcome they're trying to achieve and the timeline pressure behind it. Rank their decision criteria when evaluating vendors by importance. Identify the internal stakeholders they need to convince and the objections those stakeholders raise. Note the information sources they trust during evaluation. Then show what would cause them to abandon the buying process entirely. After generating the persona, flag which elements are directly supported by the source data and which are inferred. Here's the segment data: [paste]

That final instruction, flagging which elements are data-supported and which are inferred, is what separates evidence-based persona dimensions from speculative fill. It also addresses Albee's critique directly: AI-generated personas should be validated against real customer evidence.

This prompt is best used in the following situations:

Who it's right for: Teams that need to build or update personas quickly but want to avoid the generic outputs that Michelle Sebek describes as ordering at a drive-thru without investing in the thinking required.

Where it breaks down: Synthetic personas still fall short of real human research, especially in B2B contexts where decision certainty is crucial. Use this prompt to speed up synthesis of existing data, not to replace primary research.

Validate the inferred elements against your next five customer conversations before building campaigns on them.

7. Content Gap And Opportunity Analysis Prompt

Use this prompt to identify the topics and angles your competitors are not covering even though your buyers actively search for them. That white space is often where content investment pays off.

A stronger content gap analysis looks past keyword comparison and into intent. This prompt asks the model to analyze competitor content through the lens of buyer questions at each stage of the buying journey, then identify which questions remain unanswered across the competitive set.

The prompt

You are a content strategist specializing in B2B SaaS. I'm going to give you two inputs: (1) a competitor content inventory — URLs, titles, and topic summaries of the top content from 3-5 competitors; and (2) a list of buyer questions, pain points, or search queries our target audience uses during their evaluation process. Analyze the competitor content and identify four things. First, topics that multiple competitors cover thoroughly — saturated zones where differentiation is difficult. Second, topics that one competitor covers but others don't — partial gaps. Third, buyer questions or pain points that no competitor addresses in their content — full gaps. Fourth, content formats that are underrepresented across the competitive set. Rank the full gaps by estimated buyer intent. Prioritize questions that signal active evaluation over general education. Here are the inputs: [paste competitor inventory] / [paste buyer questions]

Ranking by buyer intent is the structural differentiator. A gap in "what is [category]" content matters less than a gap in "[product A] vs. [product B] for [specific use case]" content.
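Once each buyer question is tagged with the competitors that cover it, the first three buckets the prompt defines reduce to plain set logic. A minimal sketch, with placeholder topics and competitor names (nothing here is real coverage data):

```python
# Map each buyer question to the set of competitors that cover it.
coverage = {
    "pricing comparison": {"CompetitorA", "CompetitorB", "CompetitorC"},
    "migration guide": {"CompetitorA"},
    "security review checklist": set(),  # a buyer question no one answers
}

saturated = [t for t, who in coverage.items() if len(who) >= 2]  # differentiation is hard
partial = [t for t, who in coverage.items() if len(who) == 1]    # partial gaps
full_gaps = [t for t, who in coverage.items() if not who]        # white space

print("full gaps:", full_gaps)  # prints "full gaps: ['security review checklist']"
```

The model's real contribution is the tagging and the intent ranking; the bucket assignment itself is mechanical, which makes it easy to audit.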

This prompt is best used in the following situations:

Who it's right for: Content teams building editorial calendars and SEO teams prioritizing topic investments based on competitive white space.

Where it breaks down: The competitor content inventory you provide sets the ceiling for output quality. Blog titles alone produce shallow gap analysis. Full content summaries with coverage depth assessments produce gaps you can actually build a content program around.

Prioritize the full gaps with high buyer intent because those are your highest-ROI content investments.

8. Customer Journey Mapping Prompt

Use this prompt to map touchpoints, emotions, and friction at each buying stage with more consistency than manual journey mapping usually achieves across an entire funnel.

Journey mapping by committee often reflects the team's assumptions rather than the buyer's experience. This version grounds the map in actual customer data such as interview transcripts, sales call notes, and support tickets, then traces the journey through buyer language rather than the internal team's funnel terminology.

The prompt

You are a customer experience researcher specializing in B2B SaaS buying journeys. I'm going to give you customer interviews, sales call transcripts, or support interaction data. Map the complete buying journey. Identify each distinct stage the buyer moves through, from trigger event to post-purchase evaluation. For each stage, show the specific activities and information-seeking behaviors, the questions the buyer is trying to answer, the emotional state — confidence level, anxiety, frustration — and what drives it, the touchpoints where they interact with our brand or competitors, and the friction points or moments where progress stalls or reverses. Format the output as a structured table with one row per stage and columns for each dimension. Here's the source material: [paste]

The emotional state dimension is often skipped in B2B journey mapping because teams assume business purchases are purely rational. Anxiety about choosing the wrong vendor and confidence shifts after a strong demo both shape the journey in ways that content and sales enablement can address.

This prompt is best used in the following situations:

Who it's right for: Teams building or revising their content funnel, sales enablement teams aligning materials to specific buying stages, and product marketing teams mapping the path from awareness to purchase.

Where it breaks down: Journey maps based on three interviews can look plausible without representing your actual buyer base. The model will fill gaps confidently. Cross-reference the output against your CRM stage-transition data to validate whether the stages and friction points match real pipeline behavior.

Start with your highest-value segment and validate the map against pipeline data before extending to other segments.

9. Market Sizing And Growth Potential Prompt

Use this prompt to build market size estimates from buyer constraints you can inspect rather than from a top-down analyst projection. The output gives you an auditable chain of logic.

AI-generated market sizing often fails because it returns a confident figure with no mechanism behind it. This approach walks the model through a bottom-up sizing exercise using constraints you specify. Those constraints include the companies that could plausibly buy, the decision-maker who approves the purchase, the budget that typically funds it, and realistic adoption rates.

The prompt

You are a market analyst building a bottom-up market size estimate for a B2B SaaS product. Do not give me a top-down projection from an analyst report. Instead, work through these constraints I'll provide. Include the total number of companies that match our target profile by industry, size, and geography. Include the percentage of those companies that have the specific problem our product solves. Include the decision-maker who approves this type of purchase and their typical budget authority. Include our average contract value or expected price point. Include realistic adoption rates, separated by early adopters versus mainstream buyers. Build the TAM, SAM, and SOM estimates from these constraints. Show every assumption and calculation step so the logic is auditable. Where you're uncertain about an input, provide a range rather than a point estimate. Here are my constraints: [paste]

The instruction to show every assumption is the structural safeguard against the biggest risk in AI market sizing: a number that looks precise but rests on unstated assumptions that may be wrong by an order of magnitude.
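The bottom-up chain the prompt describes is just multiplication over inspectable constraints, which is what makes it auditable. A minimal sketch with illustrative placeholder figures (the function name, the reachable-share parameter, and every input number are assumptions for demonstration, not market data):

```python
def market_size(companies, problem_rate, acv, reachable_rate, adoption_rate):
    """Return (TAM, SAM, SOM) from bottom-up constraints.

    companies      -- firms matching the target profile
    problem_rate   -- share of those firms with the problem
    acv            -- average contract value per year
    reachable_rate -- share of TAM the go-to-market motion can reach (SAM)
    adoption_rate  -- realistic share of SAM won over the horizon (SOM)
    """
    tam = companies * problem_rate * acv
    sam = tam * reachable_rate
    som = sam * adoption_rate
    return tam, sam, som

# Express uncertain inputs as low/high scenarios, as the prompt suggests.
low = market_size(40_000, 0.30, 18_000, 0.40, 0.02)
high = market_size(60_000, 0.45, 25_000, 0.55, 0.05)
for name, lo, hi in zip(("TAM", "SAM", "SOM"), low, high):
    print(f"{name}: ${lo:,.0f} - ${hi:,.0f}")
```

Because every step is a visible multiplication, changing one assumption and re-running is exactly the stress test recommended below.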

This prompt is best used in the following situations:

Who it's right for: Teams preparing fundraise narratives, content investment justifications, or category ownership arguments where a defensible market size estimate matters more than precision.

Where it breaks down: The output depends on the accuracy of your input constraints. If your buyer definition or budget estimate is wrong, the TAM will be wrong too. Treat the output as a thinking tool that shows which assumptions drive the estimate rather than as an authoritative market figure.

Stress-test the output by changing one assumption at a time and watching how the estimate moves.

10. A/B Testing Hypothesis Generation Prompt

Use this prompt to turn raw performance data into structured, testable hypotheses with clear reasoning chains. It bridges the gap between "something changed in our metrics" and "we know what to test next."

Many A/B testing programs struggle to produce clear hypotheses. Teams know which metrics underperform but cannot explain why, so they test random variations instead of a specific theory about buyer behavior. This approach asks the model to connect observed data patterns to testable explanations.

The prompt

You are a growth experimentation strategist for a B2B SaaS company. I'm going to give you performance data — conversion rates, engagement metrics, funnel drop-offs, or campaign results. For each underperforming metric, generate 3 distinct hypotheses for why the metric is underperforming, with each one grounded in a different assumption about buyer behavior. For each hypothesis, specify the exact test you would run, including the variable to change, the control condition, the success metric, and the minimum sample size needed for statistical significance. Rank the hypotheses by expected impact and ease of implementation. Then identify which hypotheses are mutually exclusive and which could be true simultaneously. Format the output as a testing roadmap with clear prioritization. Here's the performance data: [paste]
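The "minimum sample size" input the prompt requests can be sanity-checked before you trust the model's figure. A minimal sketch using the common two-proportion z-test approximation and Python's standard library (the 3% baseline and 4% target conversion rates are placeholders):

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_variant(p_base, p_variant, alpha=0.05, power=0.80):
    """Approximate visitors per arm to detect p_base -> p_variant."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided significance threshold
    z_beta = z.inv_cdf(power)           # desired statistical power
    variance = p_base * (1 - p_base) + p_variant * (1 - p_variant)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p_base - p_variant) ** 2)

# Detecting a lift from a 3% to a 4% demo-request rate:
print(sample_size_per_variant(0.03, 0.04))  # roughly 5,300 visitors per variant
```

If the model's suggested sample size is far from this kind of back-of-envelope figure, that is a signal to question the rest of the testing roadmap too.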

A practitioner in r/b2bmarketing shared a similar RevOps-framed funnel diagnosis, using a prompt that asked AI to identify funnel bottlenecks and suggest experiments: "When I did this, we found our nurture was qualifying leads too early, misdirecting effort and wasting time. A few tweaks later, stalled deals started to move again."

This prompt is best used in the following situations:

Who it's right for: Growth teams with existing performance data that need to move from observation to experimentation faster, and marketing ops teams building quarterly testing roadmaps.

Where it breaks down: The model generates plausible hypotheses, but plausible is not the same as correct. The highest-value step in this workflow is the review against your team's qualitative knowledge of the product and buyer before you commit test resources.

A hypothesis that sounds logical but contradicts what your sales team hears daily is a waste of a test cycle.

How To Prioritize

Use the prompt that matches your current constraint rather than the one that sounds most interesting. These starting points make the sequence clearer:

Starting from scratch: Run the customer segment analysis and pain point discovery prompts first. Every downstream research job improves when your buyer definition is precise.

Have existing data but need insights: The sentiment analysis and buyer persona prompts extract structured insight from qualitative data you've already collected.

Preparing for a competitive launch: The competitor strategy breakdown and content gap analysis prompts work well as a pair. The first shows where competitors are positioned, and the second shows where they are not producing content.

Optimizing existing channels: The A/B testing hypothesis generation and customer journey mapping prompts help you find friction in what you're already running.

Don't have data yet: The market trend identification and market sizing prompts both work with publicly available source material. Start there, then use the insights to design primary research.

Pick the scenario that matches your current constraint and work forward from there.
