2026-04-12
Retail shelf vision: moving a computer vision audit from prototype to field tests
Practical notes on capture conditions, label noise, and how we decided what “good enough” meant before asking stores to change behavior.
Context
A shelf and merchandising audit concept started as a promising notebook demo: detect gaps, wrong facings, and competitor intrusions from phone photos. In real aisles, store lighting, glare, and angle variance quickly made "accuracy on a validation set" a weak predictor of how the tool actually performed.
What we did
- Defined minimum viable capture: distance, framing, and lighting guidance that staff could follow in under ten seconds per bay (a rough automated quality gate is sketched after this list).
- Built a human-in-the-loop review queue for low-confidence frames instead of pretending the model was fully autonomous on day one (routing sketch below).
- Tracked per-store calibration (camera models, fixture types) so we could spot systematic bias before it polluted executive dashboards (per-store drift check below).
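The capture guidance itself was written for humans, but a cheap automated gate at upload time is a natural complement: reject photos that are obviously unusable before anyone labels or reviews them. Here is a minimal sketch of that idea using OpenCV's variance-of-Laplacian blur score and a mean-brightness exposure check; the thresholds and function names are illustrative assumptions, not values from the pilot.

```python
import cv2
import numpy as np

# Illustrative thresholds, not values from the pilot.
BLUR_MIN = 100.0              # variance of Laplacian below this reads as blurry
BRIGHTNESS_RANGE = (60, 200)  # mean gray level outside this reads as under/overexposed

def capture_ok(image_path: str) -> tuple[bool, list[str]]:
    """Return (accepted, reasons) for a single shelf photo."""
    img = cv2.imread(image_path)
    if img is None:
        return False, ["unreadable file"]
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    reasons = []
    # Blur check: low high-frequency energy usually means motion blur or bad focus.
    if cv2.Laplacian(gray, cv2.CV_64F).var() < BLUR_MIN:
        reasons.append("too blurry")
    # Exposure check: crude guard against glare washout or dark aisles.
    mean_level = float(np.mean(gray))
    if not BRIGHTNESS_RANGE[0] <= mean_level <= BRIGHTNESS_RANGE[1]:
        reasons.append("exposure out of range")
    return len(reasons) == 0, reasons
```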
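For the review queue, the core decision is simply which frames a human must see before their detections count. A minimal sketch of that routing, assuming a flat list of detections per frame and a single confidence threshold; the 0.6 cutoff and the Detection shape are made up for illustration.

```python
from dataclasses import dataclass

REVIEW_THRESHOLD = 0.6  # assumed cutoff; below this a human sees the frame first

@dataclass
class Detection:
    frame_id: str
    label: str          # e.g. "gap", "wrong_facing", "competitor_intrusion"
    confidence: float

def frame_needs_review(detections: list[Detection]) -> bool:
    """A frame goes to the human queue if any detection in it is low-confidence."""
    return any(d.confidence < REVIEW_THRESHOLD for d in detections)

def route_frames(frames: dict[str, list[Detection]]) -> tuple[list[str], list[str]]:
    """Split frame ids into auto-accepted results and the human review queue."""
    auto, review = [], []
    for frame_id, dets in frames.items():
        (review if frame_needs_review(dets) else auto).append(frame_id)
    return auto, review
```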
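For the per-store calibration, one workable signal is how often reviewers overturn the model's detections, grouped by store and camera model (fixture type can join the grouping the same way). A sketch with pandas, assuming one row per reviewed detection; the column names, minimum sample size, and two-sigma flag are assumptions, not the pilot's actual criteria.

```python
import pandas as pd

def flag_biased_stores(reviews: pd.DataFrame, min_frames: int = 50) -> pd.DataFrame:
    """Expects one row per reviewed detection with columns:
    store_id, camera_model, overturned (True if the reviewer rejected the detection)."""
    per_store = (
        reviews.groupby(["store_id", "camera_model"])
        .agg(frames=("overturned", "size"), overturn_rate=("overturned", "mean"))
        .reset_index()
    )
    # Ignore store/camera pairs with too little reviewed volume to judge.
    per_store = per_store[per_store["frames"] >= min_frames]
    fleet_mean = per_store["overturn_rate"].mean()
    fleet_std = per_store["overturn_rate"].std()
    # Flag pairs whose overturn rate sits well above the fleet: likely systematic
    # bias (glare-prone fixtures, unusual optics) rather than random noise.
    per_store["flagged"] = per_store["overturn_rate"] > fleet_mean + 2 * fleet_std
    return per_store.sort_values("overturn_rate", ascending=False)
```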
Outcome
Pilot stores adopted the workflow because the tool asked for small habit changes, not a photoshoot. Merchandising leads received summaries they could defend in regional reviews.
Takeaway
Computer vision in retail lives or dies on capture discipline and honest uncertainty. Ship the review path first, then tighten the model once the data distribution stabilizes.
