Introduction
Online Multi-Object Tracking (MOT) has wide applications in time-critical video analysis scenarios such as robot navigation and autonomous driving. In tracking-by-detection, a major challenge of online MOT is how to robustly associate noisy object detections in a new video frame with previously tracked objects. In this work, we formulate online MOT as decision making in Markov Decision Processes (MDPs), where the lifetime of an object is modeled with an MDP. Learning a similarity function for data association is equivalent to learning a policy for the MDP, and the policy is learned in a reinforcement learning fashion that combines the advantages of both offline and online learning for data association. Moreover, our framework naturally handles the birth/death and appearance/disappearance of targets by treating them as state transitions in the MDP, while leveraging existing online single-object tracking methods. We conduct experiments on the MOT Benchmark [1] to verify the effectiveness of our method.
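The abstract describes modeling each target's lifetime as an MDP whose birth/death and appearance/disappearance events are state transitions chosen by a learned policy. The sketch below illustrates that idea as a minimal state machine; the state names, action names, and `Target` class are illustrative assumptions for exposition, not the paper's actual implementation.

```python
from enum import Enum


class State(Enum):
    # Hypothetical lifetime states for a target in the MDP
    ACTIVE = "active"      # newly detected, not yet confirmed (birth candidate)
    TRACKED = "tracked"    # followed by an online single-object tracker
    LOST = "lost"          # temporarily missing; may be re-associated later
    INACTIVE = "inactive"  # left the scene; terminal state (death)


class Target:
    """One target whose lifetime follows the MDP sketched above."""

    def __init__(self):
        self.state = State.ACTIVE

    def step(self, action):
        """Apply a policy decision (an action) as a state transition.

        In the paper's framework the action would come from the learned
        policy; here the actions are hand-named for illustration.
        """
        transitions = {
            (State.ACTIVE, "confirm"): State.TRACKED,    # detection accepted -> birth
            (State.ACTIVE, "reject"): State.INACTIVE,    # false positive -> death
            (State.TRACKED, "keep"): State.TRACKED,      # tracker still succeeds
            (State.TRACKED, "lose"): State.LOST,         # target disappears
            (State.LOST, "reacquire"): State.TRACKED,    # re-associated with a detection
            (State.LOST, "stay_lost"): State.LOST,       # remain missing this frame
            (State.LOST, "terminate"): State.INACTIVE,   # lost too long -> death
        }
        self.state = transitions[(self.state, action)]
        return self.state
```

For example, a target that is confirmed, briefly occluded, and then re-associated would follow `confirm -> lose -> reacquire`, ending back in the tracked state.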
Code
- The GitHub repository for this project is here.
References
- L. Leal-Taixe, A. Milan, I. Reid, S. Roth, and K. Schindler. MOTChallenge 2015: Towards a Benchmark for Multi-Target Tracking. arXiv:1504.01942 [cs], 2015.
Acknowledgements
- We acknowledge the support of DARPA UPSIDE grant A13-0895-S002.
Tracking results
Contact : yuxiang at umich dot edu
Last update : 9/20/2015