Pipeline transparency

How GoodPickr publishes.

We publish AI-assisted product comparisons. We are also open about exactly how that happens — the sources we read, the gates a draft has to clear, and the things we explicitly refuse to do.

By the numbers

Pulled live from the production database when this page was rendered. Nothing is fabricated; if a counter is unavailable, it shows an em-dash.

- 503 comparisons cached: every /vs/ page is backed by a real cached row.
- 10 articles published: long-form auto_articles, gated by confidence.
- 79% average confidence: mean AI Editor score across the canonical index.
- 84 spec violations open: awaiting human resolution in the admin queue.
- 5 verdict flips (30d): when new evidence flips our pick, we log it.
- 71.9/day generation rate: rolling 7-day average of new comparisons.
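The daily generation rate is a rolling 7-day average. A minimal sketch of how such a counter could be computed (the function name and data shape are hypothetical; in production this would be a database query):

```python
from datetime import date, timedelta

def rolling_daily_rate(created_dates, as_of, window_days=7):
    """Average new comparisons per day over the trailing `window_days`.

    `created_dates` is an iterable of `date` objects, one per comparison.
    """
    start = as_of - timedelta(days=window_days - 1)
    in_window = sum(1 for d in created_dates if start <= d <= as_of)
    return in_window / window_days

# Example: 503 comparisons spread over the last 7 days
dates = [date(2024, 6, 1) + timedelta(days=i % 7) for i in range(503)]
print(round(rolling_daily_rate(dates, date(2024, 6, 7)), 1))  # → 71.9
```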

The pipeline

From a search query in someone’s head to a /vs/ page in their browser, every step is named and gated.

  1. Keyword harvest

     GSC, Reddit, and free trends data surface comparisons people actually search for. No purchased keyword lists; no auto-suggest mining.

  2. AI generation (Grok)

     Grok-3 produces a structured draft against a strict JSON schema. Source URLs are required for every spec it cites.

  3. Spec sanity guard

     A deterministic validator flags physically impossible specs (a Ryzen AI chip inside a Galaxy Tab, for instance). Failures are noindexed and routed to the admin queue.

  4. AI Editor review

     A second model scores draft quality (factuality, citation density, hallucination signals). Below threshold → noindex + rerun.

  5. Confidence gate

     Each /vs/ row gets a 0–1 confidence. Below the gate, the slug never enters the canonical index and never reaches the sitemap.

  6. Publish

     The page is server-side rendered with Article, FAQ, and Breadcrumb schema. The author byline links to a real human profile (Helpful Content compliance).

  7. Indexing

     IndexNow pings Bing/Yandex/Seznam/Naver. The Google Indexing API submits high-priority URLs. RSS + sitemap fan-out runs on every push.

  8. Distribution

     Drafts fan out to a Discord webhook and an email digest queue. Promotion happens only after the confidence gate clears; nothing is auto-blasted.
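The schema requirement in step 02 (a source URL for every cited spec) can be sketched as a post-generation check. The field names here are hypothetical, not the production schema:

```python
def validate_draft(draft: dict) -> list[str]:
    """Return a list of schema errors; an empty list means the draft passes."""
    errors = []
    # Top-level fields the draft must carry (illustrative)
    for field in ("product_a", "product_b", "verdict"):
        if field not in draft:
            errors.append(f"missing required field: {field}")
    # Every cited spec must point at a real source URL
    for spec in draft.get("specs", []):
        if not spec.get("source_url", "").startswith("http"):
            errors.append(f"spec '{spec.get('name', '?')}' has no source URL")
    return errors

draft = {
    "product_a": "Tablet X", "product_b": "Tablet Y", "verdict": "Tablet X",
    "specs": [{"name": "display", "value": "11 in", "source_url": ""}],
}
print(validate_draft(draft))  # one error: the display spec lacks a source URL
```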
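Step 03's deterministic guard can be sketched as a rule table mapping device classes to plausible CPU families. The rules below are illustrative, not the production rule set:

```python
# Hypothetical rules: CPU families that are physically plausible per device class
PLAUSIBLE_CPUS = {
    "tablet": {"snapdragon", "exynos", "apple m", "dimensity"},
    "laptop": {"ryzen", "core", "apple m", "snapdragon"},
}

def spec_violations(device_class: str, cpu: str) -> list[str]:
    """Flag a CPU that cannot appear in this device class (e.g. a desktop-class
    Ryzen AI chip listed inside a Galaxy Tab)."""
    allowed = PLAUSIBLE_CPUS.get(device_class, set())
    if allowed and not any(family in cpu.lower() for family in allowed):
        return [f"{cpu!r} is implausible in a {device_class}"]
    return []

print(spec_violations("tablet", "Ryzen AI 9 HX 370"))   # flagged
print(spec_violations("tablet", "Snapdragon 8 Gen 3"))  # → []
```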
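Step 07's IndexNow submission is a single JSON POST. The payload below follows the shape of the public IndexNow protocol, with a placeholder host and key:

```python
import json

def indexnow_payload(host: str, key: str, urls: list[str]) -> str:
    """Build the JSON body for an IndexNow batch submission (urlList form)."""
    return json.dumps({
        "host": host,
        "key": key,
        "keyLocation": f"https://{host}/{key}.txt",  # key file served at the site root
        "urlList": urls,
    })

body = indexnow_payload("goodpickr.example", "abc123",
                        ["https://goodpickr.example/vs/tablet-x-vs-tablet-y"])
# POST this body to https://api.indexnow.org/indexnow with
# Content-Type: application/json; the endpoint fans out to the
# participating engines (Bing, Yandex, Seznam, Naver).
print(body)
```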

Sources we use

If a fact lands on a /vs/ page, it traces back to one of these. Specifications are reconciled against the manufacturer when they conflict.

What we don’t do

A short list, on purpose. These are the lines we will not cross.

Editorial process

The AI Editor scores every draft on five axes: factuality, citation density, hallucination risk, spec coherence, and reader-task fit. Each axis is 0–1; the overall score is a weighted average, surfaced as a 0–1 confidence on the comparison row.
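As a sketch, the five-axis weighted average might look like the following. The weights are illustrative assumptions; the real weights are internal to the AI Editor:

```python
# Assumed axis weights (must sum to 1.0); not the production values
WEIGHTS = {
    "factuality": 0.30,
    "citation_density": 0.20,
    "hallucination_risk": 0.25,  # scored so 1.0 means no hallucination signals
    "spec_coherence": 0.15,
    "reader_task_fit": 0.10,
}

def overall_confidence(axes: dict[str, float]) -> float:
    """Weighted average of the five 0-1 axis scores, itself a 0-1 confidence."""
    return sum(WEIGHTS[name] * axes[name] for name in WEIGHTS)

axes = {"factuality": 0.9, "citation_density": 0.8, "hallucination_risk": 0.85,
        "spec_coherence": 0.7, "reader_task_fit": 0.75}
print(overall_confidence(axes))
```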

The publish threshold is 0.8. Below 0.8, the page renders with noindex, nofollow, the slug is held out of comparison_slugs, and the URL never reaches the sitemap. The admin queue surfaces these for manual rerun or removal.
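The gate described above reduces to a small deterministic decision. The field names below are illustrative:

```python
PUBLISH_THRESHOLD = 0.8

def publish_decision(confidence: float) -> dict:
    """Gate a /vs/ row: below 0.8 it gets noindex, stays out of the canonical
    index and the sitemap, and is surfaced in the admin queue."""
    passed = confidence >= PUBLISH_THRESHOLD
    return {
        "robots": "index, follow" if passed else "noindex, nofollow",
        "in_canonical_index": passed,  # membership in comparison_slugs
        "in_sitemap": passed,
        "admin_queue": not passed,     # manual rerun or removal
    }

print(publish_decision(0.79)["robots"])      # → noindex, nofollow
print(publish_decision(0.84)["in_sitemap"])  # → True
```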

When the spec sanity guard flags an impossibility (e.g. a tablet listed with a desktop CPU), the row is force-noindexed regardless of confidence and a violation row is written to spec_violations for ops review.

Verdict flips are tracked in verdict_changes and exposed on the public /verdict-tracker page and inline on each /vs/ page: when new evidence changes our pick, we say so.
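A minimal sketch of the flip log; in production this would be an insert into verdict_changes, and the row shape here is assumed:

```python
from datetime import datetime, timezone

def log_verdict_flip(log: list, slug: str, old_pick: str, new_pick: str, reason: str):
    """Append a verdict_changes-style row only when the pick actually changes."""
    if new_pick != old_pick:
        log.append({
            "slug": slug,
            "from": old_pick,
            "to": new_pick,
            "reason": reason,
            "changed_at": datetime.now(timezone.utc).isoformat(),
        })

changes = []
log_verdict_flip(changes, "/vs/tablet-x-vs-tablet-y", "Tablet X", "Tablet Y",
                 "firmware update fixed Tablet Y's battery drain")
print(len(changes))  # → 1
```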

- See the scoring methodology →
- Editorial policy →
- Verdict tracker →