Market Research Industry Terminology

A/B Testing

A randomized controlled experiment that compares two versions (A vs. B) of an experience, stimulus, or message to measure which performs better on a predefined metric.

We ran an A/B test on the headline to lift survey sign-ups.|Variant B increased click-through by 12% versus control.|Allocate 50/50 traffic and keep the test running for two weeks to reach power.
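Comparing the two variants on a conversion metric typically comes down to a two-proportion z-test. A minimal sketch, with hypothetical conversion counts:

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test comparing conversion rates of variants A and B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Illustrative counts: 120/1000 conversions on A, 150/1000 on B
z, p = two_proportion_ztest(120, 1000, 150, 1000)
```

The pooled standard error reflects the null hypothesis that both variants share one true rate; with unequal allocations, the same formula applies unchanged.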


Ad Hoc Research

A one-time, custom study designed to answer a specific question, not part of an ongoing tracking program.

This is an ad hoc study to explore the new concept’s appeal.|Unlike our tracker, the ad hoc project will be one-and-done.|We commissioned ad hoc research to answer the board’s question on pricing.


Attitudinal Data

Information about beliefs, opinions, perceptions, or intentions (e.g., satisfaction, preferences), often collected via surveys or interviews.

Attitudinal data shows strong perceived quality but low value-for-money.|We need both attitudinal and behavioral data for a full view.|The Likert items capture attitudinal measures like trust and satisfaction.


Awareness (Brand Awareness)

The extent to which consumers recognize or recall a brand, commonly measured as unaided, aided (prompted), and total awareness.

Unaided awareness rose five points this wave.|Prompted awareness remains above 80%.|We’ll measure total, aided, and unaided awareness in the tracker.


Behavioral Data

Observed records of actions (e.g., purchases, clicks, views), often sourced from digital analytics, CRM, or transactional systems.

Clickstream behavioral data validates stated purchase intent.|We merged loyalty card transactions with survey data.|Observed behavioral metrics show repeat purchase patterns.


Benchmarking

Comparing metrics against competitors, historical results, or industry standards to contextualize performance.

Let’s benchmark our NPS against category leaders.|Awareness is below the industry benchmark by 7 points.|We used last year’s study as an internal benchmark.


Bias (Response Bias)

Systematic error introduced by survey design or respondent behavior (e.g., leading questions, order effects, social desirability) that distorts results.

Acquiescence bias may inflate agreement on these items.|We randomized answer order to reduce primacy bias.|Social desirability bias can overstate sustainable intentions.


Brand Equity

The value a brand adds to products/services, reflected in consumer perceptions and behaviors (e.g., preference, loyalty, willingness to pay).

Price premium indicates strong brand equity.|Equity drivers include awareness, associations, and preference.|Our conjoint simulates how equity affects share under price changes.


Coding (Qualitative)

The process of categorizing open-ended or qualitative data into themes or codes to enable analysis and quantification.

We developed a codeframe for open-ends about product dislikes.|Two coders double-coded and reconciled discrepancies.|Axial coding yielded five overarching themes.


Concept Test

Research that evaluates new product or ad ideas to estimate appeal, clarity, uniqueness, and likely behavior before further development.

Monadic concept testing avoids interaction effects.|Measures include appeal, uniqueness, and purchase intent.|We’ll screen out low-viability concepts before prototyping.


Confidence Interval

A range around an estimate that likely contains the true population value, typically expressed with a confidence level (e.g., 95%).

The 95% CI for purchase intent is 38–44%.|Wider intervals reflect smaller sample sizes.|We’ll report means with confidence intervals, not just point estimates.
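A normal-approximation (Wald) interval for a proportion is the most common construction. A sketch, with a hypothetical 41% purchase-intent result:

```python
import math

def proportion_ci(successes, n, z=1.96):
    """95% normal-approximation (Wald) CI for a proportion."""
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)  # half-width = margin of error
    return p - half, p + half

lo, hi = proportion_ci(164, 400)  # 41% of a hypothetical n=400
```

For small samples or proportions near 0 or 1, a Wilson interval behaves better than the Wald form shown here.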


Conjoint Analysis

A trade-off method that infers part-worth utilities for attributes by observing choices, enabling simulation of preference and market share.

CBC revealed price and warranty as key utilities.|We built a simulator to forecast share under new feature bundles.|ACBC helped handle the large attribute set.


Data Cleaning

Procedures to improve data quality by detecting and addressing errors, duplicates, low-quality responses, and inconsistencies.

We removed speeders and straightliners from the dataset.|Open-end quality checks flagged bot-like responses.|Deduping by digital fingerprint reduced fraud.


Demographics

Statistical characteristics of populations (e.g., age, gender, income, education) used for sampling, weighting, and profiling.

Quota controls ensured demographic representativeness.|We weighted to census demographics for age and gender.|Segment profiles include demographics and attitudes.


Desk Research (Secondary Research)

Analyzing existing information (reports, databases, literature) rather than collecting new primary data.

We synthesized syndicated reports and government stats.|Secondary sources show category growth slowing.|Desk research informed the discussion guide.


Ethnography

Contextual, immersive research observing people in real environments to understand behaviors, motivations, and context.

In-home ethnos revealed workaround behaviors.|Shop-alongs uncovered shelf navigation pain points.|Diary ethnography captured longitudinal usage patterns.


Experimental Design

The structured plan for assigning treatments and controls (e.g., randomized, factorial, blocked) to enable causal inference.

We used a 2x3 factorial to test price and message.|Blocking controlled for store-level variation.|Randomization guards against selection bias.


Eye Tracking

A technique that measures where and how long people look at stimuli, generating metrics like fixations, saccades, and attention maps.

Heatmaps showed low attention on the call-to-action.|Time-to-first-fixation improved with the new layout.|Shelf tests used gaze plots to refine packaging.


Fieldwork

The operational phase of data collection, including recruiting, interviewing, quota management, and quality control.

Fieldwork starts Monday with a 300-complete quota.|CATI and CAPI teams will handle rural sampling.|We’ll monitor incidence and length of interview daily.


Focus Group

A moderated discussion with a small group (typically 6–8) to explore attitudes, motivations, and reactions to stimuli.

We’ll run six 90-minute focus groups, online.|Stimulus rotation avoids order effects in groups.|The moderator guide includes projective techniques.


Forecasting

Techniques to predict future demand or behavior using historical data, causal models, or simulations.

The base-plus-lift model predicts promo impact.|Diffusion curves estimate adoption over time.|We combined survey and sales data for a hybrid forecast.


Gabor-Granger Pricing

A direct pricing method asking purchase likelihood at multiple price points to build a price–demand curve and estimate optimal price.

Demand drops sharply beyond $19.99 in Gabor-Granger.|We estimated optimal price from the price-response curve.|Results triangulate with Van Westendorp and conjoint.
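Once the price–demand curve is collected, a revenue-maximizing price can be read off by multiplying each tested price by its purchase likelihood. A sketch with hypothetical price points and likelihoods:

```python
def optimal_price(price_points, purchase_likelihood):
    """Pick the tested price maximizing expected revenue = price * demand."""
    revenues = [p * d for p, d in zip(price_points, purchase_likelihood)]
    best = max(range(len(revenues)), key=revenues.__getitem__)
    return price_points[best], revenues[best]

prices = [9.99, 14.99, 19.99, 24.99]      # hypothetical test prices
likelihood = [0.70, 0.55, 0.40, 0.18]     # share saying they would buy
price, revenue = optimal_price(prices, likelihood)
```

In practice stated likelihoods are usually calibrated downward before being treated as demand, since intent overstates behavior.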


Generalizability

The extent to which findings from a sample apply to the broader target population (external validity).

A probability sample improves generalizability.|Nonresponse bias can limit external validity.|We discuss which segments the findings generalize to.


Hawthorne Effect

Behavioral changes that occur when people know they are being observed, potentially biasing results.

Observed improvements may reflect the Hawthorne effect.|We minimized it by passive metering instead of labs.|Awareness of being studied can alter behavior.


Hypothesis Testing

Statistical procedures to assess evidence against a null hypothesis, commonly using p-values and confidence levels.

We tested H0: no difference in means (t-test, p<0.05).|Chi-square showed a significant association by segment.|We pre-registered hypotheses to avoid p-hacking.
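The chi-square test mentioned above compares observed cell counts against the counts expected under independence. A from-scratch sketch for a hypothetical 2x2 segment-by-preference table (df = 1, critical value 3.841 at the 5% level):

```python
def chi_square_stat(observed):
    """Pearson chi-square statistic for a 2-D contingency table."""
    rows = [sum(r) for r in observed]
    cols = [sum(c) for c in zip(*observed)]
    total = sum(rows)
    stat = 0.0
    for i, row in enumerate(observed):
        for j, o in enumerate(row):
            expected = rows[i] * cols[j] / total  # count under independence
            stat += (o - expected) ** 2 / expected
    return stat

table = [[60, 40], [30, 70]]  # hypothetical segment x preference counts
stat = chi_square_stat(table)  # compare against 3.841 for df = 1
```

A statistic above the critical value leads to rejecting the null of no association between segment and preference.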


Incidence Rate (IR)

The proportion of the screened population that qualifies for a study, affecting feasibility, timeline, and cost.

The IR is 18%, so recruitment will be costly.|Low incidence increases field time and incentives.|We used a prescreener to estimate true incidence.
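Incidence feeds directly into feasibility arithmetic: the number of people to screen is the target completes divided by the product of incidence and the expected completion rate among qualifiers. A sketch with hypothetical rates:

```python
import math

def invites_needed(target_completes, incidence, completion_rate):
    """Estimate how many people must be screened to hit a complete target."""
    return math.ceil(target_completes / (incidence * completion_rate))

# 300 completes at 18% incidence and an assumed 80% completion rate
n = invites_needed(300, 0.18, 0.80)
```

This is why low-incidence studies cost more: halving incidence doubles the screening effort at the same completion rate.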


In-Depth Interview (IDI)

One-on-one, semi-structured qualitative interviews that explore motivations, experiences, and decision processes in depth.

We’ll conduct 20 IDIs with heavy users.|Semi-structured IDIs allow probing for why.|Expert IDIs informed our B2B buyer journey map.


Jobs To Be Done (JTBD)

A framework focusing on the functional, emotional, and social 'jobs' customers are trying to accomplish, used to guide innovation.

The core job is “get dinner on the table fast.”|We mapped jobs, pains, and desired outcomes.|Features were prioritized by job importance and satisfaction gaps.


Key Driver Analysis

Analytical methods (e.g., regression, Shapley, SEM) that quantify which attributes most influence an outcome (e.g., satisfaction, NPS).

Relative importance shows service speed is the top driver.|We used Shapley values to decompose impact on NPS.|Controlling for price, quality remains the key driver.


KPI (Key Performance Indicator)

A critical, quantifiable metric used to gauge progress toward objectives (e.g., conversion rate, awareness, NPS).

Top KPIs are awareness, trial, repeat, and NPS.|We set KPI targets for each funnel stage.|Dashboards visualize KPI trends monthly.


Likert Scale

An ordered response scale (commonly 5 or 7 points) measuring agreement, frequency, or importance across statements.

Use a 5-point Likert for agreement items.|We reported top-2 box on the 7-point scale.|Neutral midpoint reduces forced choice bias.
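Top-2 box, mentioned above, is simply the share of responses in the top two scale points. A sketch with hypothetical 5-point agreement ratings:

```python
def top2_box(responses, scale_max=5):
    """Share of responses in the top two scale points (e.g., 4-5 on a 5-pt)."""
    top = [r for r in responses if r >= scale_max - 1]
    return len(top) / len(responses)

share = top2_box([5, 4, 3, 2, 5, 4, 1, 5])  # hypothetical ratings
```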


Longitudinal Study

Research that measures the same subjects repeatedly over time to observe changes and causal relationships.

Panel respondents are followed for 12 months.|Longitudinal data reveals cohort effects.|We track changes pre- and post-campaign.


Margin of Error

Half the width of a confidence interval for a proportion or mean, reflecting sampling variability at a given confidence level.

For n=1,000, MoE is about ±3.1% at 95% confidence.|MoE narrows as sample size increases.|We report MoE alongside percentages.
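The ±3.1% figure for n=1,000 follows from the standard formula, evaluated at the worst-case proportion p = 0.5:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of the normal-approximation CI; p=0.5 is the worst case."""
    return z * math.sqrt(p * (1 - p) / n)

moe = margin_of_error(1000)  # about 0.031, i.e., roughly ±3.1 points
```

Because precision scales with the square root of n, quadrupling the sample only halves the margin of error.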


Market Segmentation

Dividing a market into distinct groups with similar needs or behaviors to enable targeted strategy and messaging.

We identified four needs-based segments.|Personas were built from the segmentation outputs.|Targeting focuses on the “value seeker” segment.


MaxDiff (Best-Worst Scaling)

A trade-off method where respondents pick the best and worst items, producing scaled importance scores across many attributes.

MaxDiff ranks features by relative importance.|Share of preference scores guide the MVP feature set.|We validated MaxDiff with purchase intent.


Net Promoter Score (NPS)

A loyalty metric computed as the percentage of promoters (scores 9–10) minus the percentage of detractors (scores 0–6) on the 0–10 'likelihood to recommend' question.

NPS improved from 12 to 24 this quarter.|We analyze verbatims by promoter/detractor.|Relationship NPS and transactional NPS tell different stories.
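The promoter-minus-detractor arithmetic can be sketched directly from raw 0–10 ratings (the data here are hypothetical):

```python
def nps(scores):
    """NPS = % promoters (9-10) minus % detractors (0-6), on a -100 to 100 scale."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

score = nps([10, 9, 8, 7, 6, 10, 9, 3])  # hypothetical ratings
```

Passives (7–8) count in the denominator but in neither group, which is why adding passives alone drags NPS toward zero.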


Nonresponse Bias

Bias arising when those who do not respond differ meaningfully from respondents, potentially skewing estimates.

Late responders were weighted to reduce bias.|Low response rates can increase nonresponse bias.|Follow-up calls assessed differences in nonrespondents.


Observational Research

Studying behavior by watching subjects in natural or controlled settings, with minimal interference.

We used structured observation in-store.|Digital ethnography observed app usage patterns.|Unobtrusive observation minimized participant effects.


Online Panel

A recruited pool of online respondents available for surveys, typically profiled and incentivized for ongoing participation.

The study uses a double-opt-in online panel.|Panel profiling enables fast B2B targeting.|We monitor panel health for quality and fraud.


Panel (Research Panel)

A standing sample of respondents who participate in multiple studies over time; may be online or offline.

Our customer panel supports monthly trackers.|We refresh the panel to avoid conditioning.|Panels enable longitudinal analyses.


Pricing Research

Methods to assess price sensitivity, willingness to pay, and optimal pricing (e.g., conjoint, Gabor-Granger, Van Westendorp).

We combined Van Westendorp and Gabor-Granger.|Conjoint captured price elasticity by segment.|Price ladders tested thresholds for the premium tier.


Primary Research

Collecting new data firsthand for a specific purpose through surveys, experiments, interviews, or observation.

We’ll gather primary data via an online survey.|Ethnography provided primary qualitative insights.|Primary research fills gaps not covered by syndicated data.


Qualitative Research

Exploratory methods (e.g., IDIs, focus groups, ethnography) that provide depth of understanding, typically with small, nonprobability samples.

Qual explores the why behind behaviors.|We ran IDIs and groups before the quant.|Thematic analysis synthesized qual insights.


Quantitative Research

Structured, numeric measurement and analysis (e.g., surveys, experiments) designed for statistical inference and quantification.

Quant validates findings at scale.|We powered the survey to detect a 5-point lift.|The quant sample allows subgroup analysis.


Regression Analysis

Statistical techniques that estimate relationships between a dependent variable and one or more predictors (e.g., linear, logistic).

Logistic regression modeled conversion likelihood.|We standardized coefficients for comparability.|Controls reduce confounding in driver analysis.
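For the simplest case, a single-predictor linear fit, the least-squares slope and intercept have a closed form. A from-scratch sketch with illustrative data:

```python
def simple_ols(x, y):
    """Slope b and intercept a for y = a + b*x via least squares."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    # b = covariance(x, y) / variance(x), in unscaled (sum) form
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b

a, b = simple_ols([1, 2, 3, 4], [2.1, 3.9, 6.2, 7.8])
```

Multivariate and logistic models generalize this idea but require matrix algebra or iterative fitting, typically handled by a statistics library.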


Response Rate

The proportion of eligible sample members who complete the survey, often reported using standard formulas (e.g., AAPOR RR).

Completed interviews divided by invitations yields response rate.|Incentives improved response by 3 points.|We report AAPOR response rate formulas.


Sample Size

The number of completed observations; larger sizes generally yield more precise estimates and higher statistical power.

We need n=400 per cell for precision.|Sample size drives margin of error.|Power analysis determined minimum n for the test.
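Inverting the margin-of-error formula gives the minimum n for a target precision on a proportion. A sketch at the worst-case p = 0.5:

```python
import math

def sample_size_for_moe(moe, p=0.5, z=1.96):
    """Minimum n so a proportion's margin of error is at most `moe`."""
    return math.ceil(z ** 2 * p * (1 - p) / moe ** 2)

n = sample_size_for_moe(0.05)  # ±5 points at 95% confidence
```

Sizing for a detectable difference between groups (statistical power) requires a separate power calculation, not just this precision formula.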


Statistical Significance

A determination that an observed effect is unlikely due to chance under a null hypothesis, typically assessed via p-values or CIs.

The lift is significant at p<0.05.|We adjusted for multiple comparisons (FDR).|Significance does not imply practical importance.


Tracking Study

A repeated-measures study that monitors key metrics (e.g., awareness, usage, consideration) over time with consistent methods.

Brand tracker fields quarterly.|KPIs trend over waves to show momentum.|We held the sample design constant for comparability.


TURF Analysis

Total Unduplicated Reach and Frequency analysis used to select combinations of items (e.g., messages, SKUs) that maximize audience reach.

TURF shows the 5-flavor combo maximizing reach.|Feature bundles were optimized using TURF.|We balanced reach gains with SKU complexity.
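Exact TURF enumerates all combinations, but a greedy pass — repeatedly adding the item that covers the most not-yet-reached respondents — is a common, fast approximation. A sketch with hypothetical flavor-level appeal data:

```python
def greedy_turf(item_reach, k):
    """Greedily pick k items maximizing unduplicated reach.

    item_reach maps item -> set of respondent ids who would buy it.
    """
    chosen, covered = [], set()
    candidates = dict(item_reach)
    for _ in range(k):
        # Next item = largest incremental (unduplicated) reach
        best = max(candidates, key=lambda i: len(candidates[i] - covered))
        chosen.append(best)
        covered |= candidates.pop(best)
    return chosen, len(covered)

reach = {                       # hypothetical respondent sets per flavor
    "vanilla":   {1, 2, 3, 4},
    "chocolate": {3, 4, 5},
    "mango":     {5, 6},
}
combo, total = greedy_turf(reach, 2)
```

Note the greedy choice skips "chocolate" despite its higher standalone reach, because its audience overlaps "vanilla" — exactly the duplication TURF is designed to avoid.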

