Methodology
We aggregate public review data from platforms such as Google, TripAdvisor, and Yelp via a multi-source API, ingesting continuously and aligning every record to a calendar-month window. Listings are matched to real businesses using name, address, and category signals; duplicates and closed venues are removed to protect panel integrity.
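The report does not specify the matching algorithm, but a minimal sketch of threshold-based fuzzy matching on name and address, using only the Python standard library, illustrates the idea. The Listing type, the thresholds, and the is_same_business helper are illustrative assumptions, not the production implementation.

```python
from dataclasses import dataclass
from difflib import SequenceMatcher

@dataclass
class Listing:
    name: str
    address: str
    category: str

def similarity(a: str, b: str) -> float:
    # Character-level similarity in [0, 1]; a production matcher would
    # typically also normalize abbreviations and geocode addresses.
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

def is_same_business(a: Listing, b: Listing,
                     name_thresh: float = 0.85,
                     addr_thresh: float = 0.80) -> bool:
    # Two listings count as the same venue when name and address both
    # clear their thresholds and the category agrees.
    return (similarity(a.name, b.name) >= name_thresh
            and similarity(a.address, b.address) >= addr_thresh
            and a.category == b.category)

# Example: the same cafe listed on two platforms
a = Listing("Blue Door Cafe", "12 High St", "cafe")
b = Listing("Blue Door Café", "12 High Street", "cafe")
print(is_same_business(a, b))  # True
```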
Every review passes spam and anomaly checks that include burst detection, reviewer history scoring, language verification, and platform cross-validation. We run NLP to identify intent and experience dimensions, map native 0–10 recommendation answers directly to the standard bands (9–10 promoter, 7–8 passive, 0–6 detractor) when present, and otherwise infer promoter, passive, and detractor classes from text and rating context with calibrated thresholds. For each industry and geography, NPS is computed as the percentage of promoters minus the percentage of detractors, with Bayesian smoothing to stabilize low-volume samples and a confidence range published alongside each point estimate.
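The exact smoothing scheme is not disclosed; one common choice consistent with the description is a Dirichlet prior over the three classes, sketched below. The prior pseudo-counts, draw count, and 95% interval level are assumptions for illustration.

```python
import numpy as np

def smoothed_nps(promoters: int, passives: int, detractors: int,
                 prior: tuple[float, float, float] = (1.0, 1.0, 1.0),
                 n_draws: int = 10_000, seed: int = 0):
    # Posterior-mean NPS under a Dirichlet prior; the pseudo-counts in
    # `prior` pull low-volume months toward a neutral score of 0.
    alpha = np.asarray([promoters, passives, detractors], float) + np.asarray(prior)
    shares = alpha / alpha.sum()
    point = 100.0 * (shares[0] - shares[2])  # %promoters - %detractors
    # Credible interval from posterior draws of the class shares.
    draws = np.random.default_rng(seed).dirichlet(alpha, size=n_draws)
    lo, hi = np.percentile(100.0 * (draws[:, 0] - draws[:, 2]), [2.5, 97.5])
    return point, (float(lo), float(hi))

# A low-volume month: 12 promoters, 5 passives, 4 detractors
point, interval = smoothed_nps(12, 5, 4)
```

With the flat prior shown, a month with no reviews falls back to an NPS of 0 with a wide interval, and the interval narrows as review volume grows.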
We normalize categories to a unified industry taxonomy, align time zones, standardize language, and apply weighting so large markets do not overpower smaller ones within a region. Monthly values are reported as exact snapshots to preserve month-to-month comparability, while a rolling three-month view clarifies trend direction. We publish a monthly NPS only when minimum sample size and coverage criteria are met, and methods, thresholds, and known limitations are disclosed in the report's technical notes.
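The weighting rule is not specified. The sketch below assumes review volume as the base weight, capped at each month's 90th percentile so a single large market cannot dominate; the column names, the cap, and the weighted_regional_nps helper are hypothetical.

```python
import pandas as pd

def weighted_regional_nps(df: pd.DataFrame) -> pd.Series:
    # Combine market-level monthly NPS into one regional series.
    # Expects columns: month, market, nps, reviews.
    def combine(g: pd.DataFrame) -> float:
        # Cap each market's weight at the month's 90th percentile of
        # review volume (an illustrative choice, not the report's rule).
        w = g["reviews"].clip(upper=g["reviews"].quantile(0.9))
        return float((g["nps"] * w).sum() / w.sum())
    return df.groupby("month").apply(combine).sort_index()

# Exact monthly snapshots plus the rolling three-month trend view:
# monthly = weighted_regional_nps(reviews_df)
# trend = monthly.rolling(window=3, min_periods=3).mean()
```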
Finally, models and pipelines are audited continuously with backtests and drift checks; any detected bias or change in source behavior triggers a documented review and recalibration.
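The report does not name its drift metric; the Population Stability Index is one widely used check, sketched here for a score distribution compared against a baseline window. The 0.2 threshold in the comment is a common rule of thumb, not a figure from the report.

```python
import numpy as np

def psi(baseline: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    # Population Stability Index between a baseline window and the
    # current month; values above ~0.2 are a common rule-of-thumb
    # trigger for a documented review.
    edges = np.histogram_bin_edges(baseline, bins=bins)
    b, _ = np.histogram(baseline, bins=edges)
    c, _ = np.histogram(current, bins=edges)
    eps = 1e-6  # guards against empty bins (log of zero)
    b_pct = b / b.sum() + eps
    c_pct = c / c.sum() + eps
    return float(np.sum((c_pct - b_pct) * np.log(c_pct / b_pct)))
```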