ShelfApi

Automatic shelf bay detection and isolation for retail shelf photos. Upload a photo, detect the central bay, correct perspective, and mask surrounding products.

How it works

1. Upload: shelf photo from store
2. Detect: CNN finds bay boundaries
3. Straighten: perspective correction
4. Mask: blur, black, or crop

A trained EfficientNet-B0 model detects eight boundary points — four for left/right (x-axis) and four for top/bottom (y-axis). This defines the product region as a quadrilateral, which is then warped to a rectangle (perspective correction) and surrounding areas are masked. Works for regular shelves, endcap displays, and checkout displays.
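The warp step can be sketched as a standard homography solve: the four corners of the detected quadrilateral are mapped onto the corners of a target rectangle. This is a minimal illustration, not the service's actual code, and the corner coordinates below are made-up values standing in for detected boundary points:

```python
import numpy as np

def homography(src, dst):
    """Solve for the 3x3 perspective transform mapping src -> dst.

    src, dst: four (x, y) corner points each. Uses the standard DLT
    formulation: each point pair yields two linear equations in the
    eight unknown homography entries (the ninth is fixed to 1).
    """
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

# Detected bay quadrilateral in pixel coordinates (TL, TR, BR, BL)
quad = [(123, 0), (875, 0), (859, 1000), (141, 1000)]
# Target rectangle the bay is warped onto
rect = [(0, 0), (800, 0), (800, 1000), (0, 1000)]

H = homography(quad, rect)

# Applying H to a source corner lands on the matching target corner
p = H @ np.array([123, 0, 1.0])
print([round(c) for c in p[:2] / p[2]])  # -> [0, 0]
```

In practice the warp itself (resampling every pixel through H) is what libraries like OpenCV's `warpPerspective` do; the solve above is the geometric core.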

Examples

Green lines show the detected left/right bay boundaries; orange lines show the top/bottom boundaries. The right side of each example shows the result after perspective correction and blur masking.

Shelf — nearly straight-on
Shelf — slight perspective skew
Shelf — significant perspective distortion
Endcap display (kopstelling) — left/right detection
Checkout display (kassameubel) — top/bottom + left/right detection

API endpoints
POST /api/process-shelf
Upload an image, get back the processed result (detected, corrected, masked).
image file (required) · method blur|black|crop · correct_perspective true|false · correct_horizontal true|false · detector cnn|gemini · output_format jpg|png
POST /api/detect-meters
Upload an image, get back boundary detection as JSON.
image file (required) · detector cnn|gemini
GET /api/dataset/stats
Get dataset statistics (total images, verified, annotated).
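For clients that assemble the request URL programmatically, the parameters above map directly onto query fields. A standard-library sketch (the image itself still has to be sent as a multipart form field named "image", as the curl examples show):

```python
from urllib.parse import urlencode

BASE = "https://shelfapi.wiggert.nl"

def process_shelf_url(method="blur", correct_perspective=True,
                      correct_horizontal=False, detector="cnn"):
    """Build the /api/process-shelf URL with query parameters.

    Parameter names and values follow the endpoint listing above;
    booleans are lowercased to match the true|false convention.
    """
    params = {
        "method": method,                              # blur | black | crop
        "correct_perspective": str(correct_perspective).lower(),
        "correct_horizontal": str(correct_horizontal).lower(),
        "detector": detector,                          # cnn | gemini
    }
    return f"{BASE}/api/process-shelf?{urlencode(params)}"

print(process_shelf_url(method="crop"))
# -> https://shelfapi.wiggert.nl/api/process-shelf?method=crop&correct_perspective=true&correct_horizontal=false&detector=cnn
```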
curl examples

Process a shelf photo (returns image):

# Blur masking with perspective correction (default)
curl -X POST -F "image=@photo.jpg" \
  "https://shelfapi.wiggert.nl/api/process-shelf" \
  -o result.jpg

# Crop to detected bay only
curl -X POST -F "image=@photo.jpg" \
  "https://shelfapi.wiggert.nl/api/process-shelf?method=crop" \
  -o cropped.jpg

# Black masking + level shelves
curl -X POST -F "image=@photo.jpg" \
  "https://shelfapi.wiggert.nl/api/process-shelf?method=black&correct_horizontal=true" \
  -o result.jpg

# No perspective correction, PNG output
curl -X POST -F "image=@photo.jpg" \
  "https://shelfapi.wiggert.nl/api/process-shelf?correct_perspective=false&output_format=png" \
  -o result.png

Detect boundaries only (returns JSON):

# CNN detector (fast, ~50ms)
curl -X POST -F "image=@photo.jpg" \
  "https://shelfapi.wiggert.nl/api/detect-meters"

# Example response:
{
  "left_top_pct": 12.3,
  "left_bottom_pct": 14.1,
  "right_top_pct": 87.5,
  "right_bottom_pct": 85.9,
  "left_boundary_pct": 13.2,
  "right_boundary_pct": 86.7,
  "top_boundary_pct": 0.0,
  "bottom_boundary_pct": 100.0,
  "source": "cnn"
}
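Before drawing or cropping, the percentage fields can be converted back to pixel coordinates. A sketch, assuming the x-axis values are percentages of image width and the y-axis values percentages of image height:

```python
def boundary_quad(det, width, height):
    """Turn a /api/detect-meters response into a pixel quadrilateral.

    Returns the bay corners as (x, y) tuples in top-left, top-right,
    bottom-right, bottom-left order.
    """
    x = lambda key: det[key] * width / 100   # percent of width  -> px
    y = lambda key: det[key] * height / 100  # percent of height -> px
    return [
        (x("left_top_pct"),     y("top_boundary_pct")),
        (x("right_top_pct"),    y("top_boundary_pct")),
        (x("right_bottom_pct"), y("bottom_boundary_pct")),
        (x("left_bottom_pct"),  y("bottom_boundary_pct")),
    ]

# Using the example response above on a 1000x800 image:
det = {"left_top_pct": 12.3, "left_bottom_pct": 14.1,
       "right_top_pct": 87.5, "right_bottom_pct": 85.9,
       "top_boundary_pct": 0.0, "bottom_boundary_pct": 100.0}
quad = boundary_quad(det, width=1000, height=800)
print([(round(px), round(py)) for px, py in quad])
# -> [(123, 0), (875, 0), (859, 800), (141, 800)]
```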

Dataset statistics:

curl "https://shelfapi.wiggert.nl/api/dataset/stats"

# Returns: {"total_images": 3814, "annotated": 3408, "verified": 880, ...}
Changelog
v0.6.0 2026-03-11
  • Trained on 1571 verified samples (shelves + endcaps + checkouts + half displays + Lidl bays) — 1.54% mean error
  • Added half display and Lidl bay photo types
  • Annotator: corners mode (C key) — place 4 corner points, lines are extrapolated automatically
  • Gemini pre-annotation pipeline for half displays, checkouts, and Lidl bays
v0.5.0 2026-03-10
  • 8-output model: top/bottom boundary detection for checkout displays (kassameubels)
  • Horizontal leveling: optional rotation to level shelf lines
  • Annotator: top/bottom boundary handles with keyboard shortcuts
  • Landing page: 5 examples (shelves, endcap, checkout), curl examples, changelog
v0.4.0 2026-03-09
  • Trained on 780 samples (shelves + 200 endcap displays) — 1.60% mean error
  • Perspective correction using detected boundary trapezoid
  • Added endcap display (kopstelling) support
  • Docker deployment with CPU-only PyTorch
v0.3.0 2026-03-08
  • Interactive annotation tool with SVG drag handles and keyboard shortcuts
  • Iterative training: Gemini labels → human review → retrain cycle
  • Performance report with error distribution analysis
v0.2.0 2026-03-07
  • EfficientNet-B0 CNN model for fast local inference (~10ms GPU, ~50ms CPU)
  • Replaced Gemini/Claude Vision with trained CNN as default detector
  • Blur, black, and crop masking modes
v0.1.0 2026-03-06
  • Initial release: Gemini Vision-based meter detection
  • FastAPI with process-shelf and detect-meters endpoints