Self-calibrating cross-camera homography for real-time ghost prediction in multi-camera person tracking[P]
The problem: In multi-camera tracking, when camera A loses track of a person but camera B still sees them, naive approaches extrapolate pixel coordinates linearly. This fails immediately because each camera has its own coordinate system: a person at pixel (400, 300) on camera B might appear at (800, 500) on camera A, depending on the cameras' relative positions and viewing angles.

Approach: When both cameras simultaneously observe the same person (matched via 64-dim HSV appearance descriptors, L2-normalized, EM
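The post cuts off before the method details, but the core idea it describes, accumulating simultaneous co-observations of the same person and fitting a planar homography that maps camera B pixels into camera A, can be sketched as follows. This is a minimal illustration under assumptions: the helper names (`estimate_homography`, `map_point`) are hypothetical, a direct linear transform (DLT) via SVD stands in for whatever self-calibration the author actually uses, and in practice one would use RANSAC-style outlier rejection (e.g. OpenCV's `cv2.findHomography`) on noisy detections.

```python
import numpy as np

def estimate_homography(pts_src, pts_dst):
    """Fit a 3x3 homography H with H @ [x, y, 1] ~ [u, v, 1] using the
    DLT algorithm: each point pair contributes two rows to a linear
    system, whose least-squares null vector (via SVD) gives H.
    Requires at least 4 non-degenerate correspondences."""
    rows = []
    for (x, y), (u, v) in zip(pts_src, pts_dst):
        rows.append([-x, -y, -1.0, 0.0, 0.0, 0.0, u * x, u * y, u])
        rows.append([0.0, 0.0, 0.0, -x, -y, -1.0, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so H[2,2] == 1

def map_point(H, pt):
    """Project a pixel from the source camera into the destination
    camera, dividing by the homogeneous coordinate."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return (x / w, y / w)
```

Once enough co-observations have been collected, a "ghost" prediction for a track lost on camera A is simply `map_point(H_b_to_a, last_seen_pixel_on_B)`. The homography assumption holds only for points near a common plane (typically the ground plane, using feet positions), which is the usual caveat for this technique.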