Pipeline transparency
How GoodPickr publishes.
We publish AI-assisted product comparisons. We are also open about exactly how that happens — the sources we read, the gates a draft has to clear, and the things we explicitly refuse to do.
By the numbers
Pulled live from the production database when this page was rendered. Nothing is fabricated; if a counter is unavailable it shows an em-dash.
The pipeline
From a search query in someone’s head to a /vs/ page in their browser, every step is named and gated.
- 01 Keyword harvest: GSC, Reddit, and free trends data surface comparisons people actually search for. No purchased keyword lists; no auto-suggest mining.
- 02 AI generation (Grok): Grok-3 produces a structured draft against a strict JSON schema. Source URLs are required for every spec it cites (a schema sketch follows this list).
- 03 Spec sanity guard: A deterministic validator flags physically impossible specs (Ryzen AI inside a Galaxy Tab, etc.). Failures get noindex + the admin queue (see the validator sketch below).
- 04 AI Editor review: A second model scores draft quality (factuality, citation density, hallucination signals). Below threshold → noindex + rerun.
- 05 Confidence gate: Each /vs/ row gets a 0–1 confidence. Below the gate, the slug never enters the canonical index and never reaches the sitemap.
- 06 Publish: The page is rendered SSR with Article + FAQ + Breadcrumb schema (a markup sketch follows this list). The author byline links to a real human profile (Helpful Content compliance).
- 07 Indexing: IndexNow pings Bing/Yandex/Seznam/Naver (see the ping sketch below); the Google Indexing API submits high-priority URLs. RSS + sitemap fan-out runs on every push.
- 08 Distribution: Drafts go to a Discord webhook fan-out + an email digest queue. Promotion only after the confidence gate clears; nothing is auto-blasted.
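To make the gates concrete, here are a few sketches in TypeScript. First, step 02's draft contract. This is a simplified illustration, not our production schema; the field and type names (SpecClaim, sourceUrl, and so on) are invented for this sketch. The property it demonstrates is the real one: a spec claim without a source URL is not a valid draft.

```ts
// Illustrative draft contract (names invented for this sketch, not our
// production schema). The enforced property: every spec needs a source URL.

interface SpecClaim {
  label: string;     // e.g. "Display size"
  value: string;     // e.g. "12.4 in"
  sourceUrl: string; // required: the public page this number came from
}

interface ComparisonDraft {
  slug: string; // e.g. "product-a-vs-product-b"
  productA: { name: string; specs: SpecClaim[] };
  productB: { name: string; specs: SpecClaim[] };
  verdict: string;
  faq: { question: string; answer: string }[];
}

// A draft missing a sourceUrl on any spec fails before it ever
// reaches the spec sanity guard.
function hasCitedSpecs(draft: ComparisonDraft): boolean {
  return [...draft.productA.specs, ...draft.productB.specs]
    .every((s) => s.sourceUrl.startsWith("http"));
}
```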
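Step 03 is deterministic by design: no model in the loop, just rules. A minimal sketch, assuming a rule-table shape; the rule ids and patterns here are hypothetical.

```ts
// Minimal sketch of the spec sanity guard (rule ids and patterns are
// hypothetical). A hit force-noindexes the row regardless of confidence
// and records a violation for ops review.

interface SanityRule {
  id: string;
  category: string;  // product category the rule applies to
  forbidden: RegExp; // spec text that is physically impossible there
}

const RULES: SanityRule[] = [
  { id: "tablet-desktop-cpu", category: "tablet", forbidden: /ryzen|core i9|threadripper/i },
  { id: "earbuds-hdmi", category: "earbuds", forbidden: /hdmi/i },
];

// Returns the ids of violated rules. A non-empty result means:
// force noindex + write a row to spec_violations.
function checkSpecs(category: string, specs: string[]): string[] {
  return RULES
    .filter((r) => r.category === category)
    .filter((r) => specs.some((s) => r.forbidden.test(s)))
    .map((r) => r.id);
}
```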
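Step 06's structured data uses standard schema.org types. A trimmed sketch of what a published /vs/ page embeds; every value below is a placeholder.

```ts
// Trimmed JSON-LD sketch (all values are placeholders). Article,
// FAQPage, and BreadcrumbList are standard schema.org types.
const structuredData = [
  {
    "@context": "https://schema.org",
    "@type": "Article",
    headline: "Product A vs Product B",
    author: { "@type": "Person", name: "A Real Human", url: "https://example.com/authors/a-real-human" },
  },
  {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    mainEntity: [{
      "@type": "Question",
      name: "Which one is better for travel?",
      acceptedAnswer: { "@type": "Answer", text: "..." },
    }],
  },
  {
    "@context": "https://schema.org",
    "@type": "BreadcrumbList",
    itemListElement: [
      { "@type": "ListItem", position: 1, name: "Comparisons", item: "https://example.com/vs/" },
      { "@type": "ListItem", position: 2, name: "Product A vs Product B" },
    ],
  },
];
```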
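Step 07's IndexNow submission is one POST to the shared endpoint, which fans out to the participating engines. A sketch with a placeholder host and key:

```ts
// One POST to the shared IndexNow endpoint reaches the participating
// engines (Bing, Yandex, Seznam, Naver). Host, key, and URLs are placeholders.
async function pingIndexNow(urls: string[]): Promise<number> {
  const res = await fetch("https://api.indexnow.org/indexnow", {
    method: "POST",
    headers: { "Content-Type": "application/json; charset=utf-8" },
    body: JSON.stringify({
      host: "example.com",
      key: "your-indexnow-key",
      keyLocation: "https://example.com/your-indexnow-key.txt",
      urlList: urls,
    }),
  });
  return res.status; // 200 / 202 means the submission was accepted
}
```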
Sources we use
If a fact lands on a /vs/ page, it traces back to one of these. Specifications are reconciled against the manufacturer when they conflict.
- DataForSEO: Keyword volume + SERP context for query selection.
- Google Trends: Demand seasonality and emerging-product signal.
- Google Search Console: Real impression / click data; we publish what people search for, not what we wish they searched for.
- Reddit: Owner-reported friction, long-tail comparisons, real complaints.
- Amazon (PA-API + scraped reviews): Rating distribution, recent review velocity, ASIN-level price tracking.
- Best Buy: In-stock signal + price snapshot for North American availability.
- eBay: Used / open-box price floor for trade-in and budget guidance.
- Wikipedia / Wikimedia Commons: Public-domain product imagery and disambiguation context.
- Manufacturer spec sheets: Authoritative spec values when the AI draft references a numeric claim.
What we don’t do
A short list, on purpose. These are the lines we will not cross.
- No fake reviews. We never write owner reviews from a persona we did not interview.
- No hands-on testing claims. We synthesize public data and label it as such — no "I tested this for two weeks" voice.
- No auto-publishing low-confidence content. Anything below the confidence gate ships with noindex and stays out of the sitemap.
- No mass-generated articles without source citations. Every spec claim has to trace back to a public source URL.
- No editorial fingers on the scale for affiliates. Affiliate partnerships do not influence verdicts, weights, or rankings.
- No purchased backlinks, no PBNs, no link schemes. Discovery is earned via real search and direct distribution.
Editorial process
The AI Editor scores every draft on five axes: factuality, citation density, hallucination risk, spec coherence, and reader-task fit. Each axis is 0–1; the overall score is a weighted average, surfaced as a 0–1 confidence on the comparison row.
The publish threshold is 0.8. Below 0.8, the page renders with a noindex,nofollow robots tag, the slug is held out of comparison_slugs, and the URL never reaches the sitemap. The admin queue surfaces these for manual rerun or removal.
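As a sketch, the score and the gate reduce to a weighted sum and a comparison. The weights below are illustrative, not the production values:

```ts
// Illustrative weights; the production values are tuned and not listed here.
const WEIGHTS = {
  factuality: 0.3,
  citationDensity: 0.2,
  hallucinationRisk: 0.2, // scored so that 1.0 means low risk
  specCoherence: 0.15,
  readerTaskFit: 0.15,
} as const;

type Axes = Record<keyof typeof WEIGHTS, number>; // each axis in [0, 1]

// Weighted average of the five axes = the 0..1 confidence on the row.
function confidence(axes: Axes): number {
  return (Object.keys(WEIGHTS) as (keyof Axes)[])
    .reduce((sum, k) => sum + WEIGHTS[k] * axes[k], 0);
}

const PUBLISH_THRESHOLD = 0.8;

// Below the threshold: noindex,nofollow, slug held out of
// comparison_slugs, URL kept out of the sitemap.
function gateDecision(axes: Axes): "publish" | "noindex" {
  return confidence(axes) >= PUBLISH_THRESHOLD ? "publish" : "noindex";
}
```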
When the spec sanity guard flags an impossibility (e.g. a tablet listed with a desktop CPU), the row is force-noindexed regardless of confidence and a violation row is written to spec_violations for ops review.
Verdict flips are tracked in verdict_changes and exposed on the public /verdict-tracker page and inline on each /vs/ page; when our pick changes, we say so.
See the scoring methodology →
Editorial policy →
Verdict tracker →