Methodology

How we make verdicts.
And how we don’t.

The contract behind every recommendation we publish. Six rules, plain English, no marketing fluff.

Last reviewed · auto-updated from git

How we score products.

Every product we evaluate receives a 1–10 score in each of six dimensions. Each dimension is weighted by the use case the user described, not by a fixed global formula.

  • Performance — raw capability against same-tier competitors (typical weight: 25–35%).
  • Value — price-to-performance against products at the same or nearby price (15–30%).
  • Build Quality — materials, durability, fit-and-finish (10–20%).
  • Features — usefulness of included capabilities relative to the stated use case (10–20%).
  • Ecosystem — software support, accessory availability, update history (5–15%).
  • User Experience — setup, day-to-day usability, owner-reported friction (10–20%).

The full scoring framework, including category-specific weight defaults, is on How We Score.
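The weighting step above can be sketched in a few lines. This is an illustrative sketch, not GoodPickr's production code: the weight profile shown is one plausible configuration for a performance-heavy use case, and the function name `weighted_score` is ours.

```python
def weighted_score(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Combine 1-10 dimension scores under use-case weights that sum to 100%."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 100%"
    return round(sum(scores[d] * weights[d] for d in weights), 1)

# Hypothetical gaming-monitor profile: performance-heavy, ecosystem-light.
weights = {"performance": 0.35, "value": 0.20, "build": 0.10,
           "features": 0.15, "ecosystem": 0.05, "ux": 0.15}
scores  = {"performance": 9, "value": 7, "build": 8,
           "features": 8, "ecosystem": 6, "ux": 7}

print(weighted_score(scores, weights))  # 7.9
```

The point of the sketch: the same six dimension scores produce a different overall number under a different weight profile, which is why a product can win one comparison and lose another.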

Every spec is traceable to a source.

Every spec, price, and rating cited on GoodPickr is traceable to one of the following sources. We do not write specs from memory.

  • Best Buy Products API (via Impact) — retail pricing, current availability, manufacturer-supplied specs, and aggregated customer ratings for TVs, laptops, monitors, headphones, gaming hardware, and major appliances.
  • eBay Browse API — live marketplace pricing for new, refurbished, and open-box inventory; used to surface real alternatives when retail stock is thin.
  • Amazon Product Advertising API — pricing, availability, and customer rating signals on Amazon-listed products. ASINs are resolved from canonical product names.
  • CJ Affiliate (Commission Junction) catalog — pricing and product feeds from LG Electronics, BenQ, and other CJ-network retailers.
  • Manufacturer datasheets — official technical specifications cross-referenced against retailer-supplied data.
  • Owner feedback aggregates — public retailer reviews and community-forum sentiment, used to flag recurring real-world issues that on-paper specs miss.

Prices and stock are timestamped on retrieval. Historical prices are persisted so deal claims are checked against trailing 90-day medians, not inflated MSRPs.
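The deal check described above can be sketched as follows. The field names, the 10% default discount, and the helper itself are illustrative assumptions; the page only commits to the trailing 90-day median as the baseline.

```python
from statistics import median
from datetime import datetime, timedelta, timezone

def is_deal(current_price: float,
            history: list[tuple[datetime, float]],
            discount: float = 0.10) -> bool:
    """True if current_price is at least `discount` below the 90-day median.

    `history` is (timestamp, price) pairs persisted on each retrieval.
    """
    cutoff = datetime.now(timezone.utc) - timedelta(days=90)
    window = [price for ts, price in history if ts >= cutoff]
    if not window:
        return False  # no price history -> no deal claim
    return current_price <= median(window) * (1 - discount)
```

Comparing against the median of observed prices, rather than MSRP, is what prevents a product that "always" sells at a discount from being labeled a deal.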

AI is involved. We are direct about that.

GoodPickr is AI-assisted. Here is exactly where the model is used and where it is not:

  • Model: verdicts are generated by xAI’s Grok-3 (with Grok-mini handling lower-stakes summaries to control cost).
  • Inputs: the model receives a structured spec table, current price and stock, aggregated customer-rating signals, and the user’s stated use case. It does not freestyle from training memory.
  • Schema-locked output: every verdict is forced to a JSON schema so each scored dimension, claim, and citation can be validated before render.
  • Confidence gate: we run a four-signal confidence scorer (spec verification, category coherence, engine confidence, image quality) and refuse to persist or index any comparison below a 0.70 threshold. Low-confidence verdicts are flagged or regenerated.
  • Image safety filter: any product image that ends up on a verdict page passes through a safety + relevance filter before display.
  • What humans verify: the editor (Billy G.) reviews flagship articles, buying guides, and any comparison flagged by the confidence gate. Reader-reported corrections are reviewed by hand.
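The confidence gate can be sketched like this. The page names the four signals and the 0.70 floor; how the signals combine is our assumption (a simple mean here), and the signal scores shown are invented for illustration.

```python
THRESHOLD = 0.70  # stated floor: nothing below this is persisted or indexed

def gate(signals: dict[str, float]) -> bool:
    """Each signal is scored 0-1; the mean must clear the 0.70 floor."""
    required = {"spec_verification", "category_coherence",
                "engine_confidence", "image_quality"}
    assert set(signals) == required, "all four signals must be present"
    confidence = sum(signals.values()) / len(signals)
    return confidence >= THRESHOLD

# A verdict with weak spec verification fails the gate and is
# flagged or regenerated rather than published.
print(gate({"spec_verification": 0.3, "category_coherence": 0.9,
            "engine_confidence": 0.8, "image_quality": 0.7}))  # False
```

One strong signal cannot rescue a verdict whose specs could not be verified, which is the behavior the gate is there to enforce.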

Content is updated, not abandoned.

  • Prices and stock — refetched on cache miss; volatile fields refresh more aggressively than slow-moving specs.
  • Comparison verdicts — cached for up to 7 days, then regenerated on next request so the spec table and price snapshot stay current.
  • Articles and buying guides — reviewed at least quarterly. When a product is discontinued or a price floor shifts meaningfully, the article is rewritten and its dateModified schema field is bumped.
  • This methodology page — the “Last reviewed” date above auto-updates from the most recent git commit to this file.
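The staleness rules above reduce to a per-content-type TTL check, sketched here. The 7-day and quarterly windows are stated on this page; the 1-hour price TTL is our placeholder, since the page only says volatile fields refresh "more aggressively."

```python
from datetime import datetime, timedelta, timezone

TTL = {
    "price":   timedelta(hours=1),   # assumed: page gives no exact number
    "verdict": timedelta(days=7),    # stated: regenerated after 7 days
    "article": timedelta(days=90),   # stated: reviewed at least quarterly
}

def needs_refresh(kind: str, fetched_at: datetime) -> bool:
    """True once content of this kind has outlived its TTL."""
    return datetime.now(timezone.utc) - fetched_at > TTL[kind]
```

A refresh is triggered lazily, on the next request that finds stale content, rather than by rewriting everything on a schedule.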

Money never touches editorial.

GoodPickr is funded entirely by affiliate commissions. Affiliate revenue never influences a score, a ranking, or a verdict. We routinely recommend products without affiliate programs over ones with them. The full disclosure, including every network we participate in, is on the Affiliate Disclosure page.

We do not accept sponsored placements, branded content, or paid reviews. If that policy ever changes, sponsored content will be labeled as such on every surface where it appears.

What we don’t do.

  • We do not personally test or handle the products we cover. GoodPickr is a data-aggregation and AI-analysis layer, not a hands-on review lab. Every verdict is built from retailer APIs, manufacturer datasheets, and aggregated owner feedback — not from a reviewer holding the device.
  • We do not accept payment for higher scores or for inclusion in any list, ranking, or comparison.
  • We do not invent benchmarks, urgency, or stock counts. No fabricated “only 2 left” pressure tactics, no fake countdown timers, no invented hands-on quotes.
  • We do not cover medical devices, financial products, or any category where regulatory testing is the only responsible source.

Have a question?

Spot a factual error or want to flag a verdict?

Email billy@goodpickr.com with the URL and the specific claim. Confirmed errors are corrected within 48 hours.