Abstract

The Visual Object Tracking challenge VOT2016 aims at comparing short-term single-object visual trackers that do not apply pre-learned models of object appearance. Results of 70 trackers are presented, with a large number of trackers having been published at major computer vision conferences and journals in recent years. The number of tested state-of-the-art trackers makes VOT2016 the largest and most challenging benchmark on short-term tracking to date. For each participating tracker, a short description is provided in the Appendix. VOT2016 goes beyond its predecessors by (i) introducing a new semi-automatic ground truth bounding box annotation methodology and (ii) extending the evaluation system with the no-reset experiment. The dataset, the evaluation kit, and the results are publicly available at the challenge website.
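The VOT accuracy measure is built on bounding-box overlap between a tracker's prediction and the ground truth annotation. The sketch below is an illustrative intersection-over-union computation for axis-aligned boxes, not code from the official VOT evaluation kit; the `(x, y, w, h)` box format and the `iou` helper are assumptions for this example.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x, y, w, h)."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    # Width/height of the intersection rectangle (clamped at zero when disjoint)
    iw = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    ih = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    inter = iw * ih
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

# Identical boxes overlap perfectly
print(iou((0, 0, 10, 10), (0, 0, 10, 10)))  # → 1.0
# A box shifted by half its width: overlap 50, union 150
print(iou((0, 0, 10, 10), (5, 0, 10, 10)))  # → 0.333...
```

Per-frame overlaps like this are averaged to obtain the accuracy score, while tracking failures (zero overlap) feed the robustness measure.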

Venue

The 14th European Conference on Computer Vision (ECCV 2016) Workshops, Amsterdam, The Netherlands

Publication Year

2016

Authors

Kristan et al.

Cite Us

@Inbook{Kristan2016,
author="Kristan, Matej
and Leonardis, Ale{\v{s}}
and Matas, Ji{\v{r}}i
and Felsberg, Michael
and Pflugfelder, Roman
and {\v{C}}ehovin, Luka
and Voj{\'i}{\v{r}}, Tom{\'a}{\v{s}}
and H{\"a}ger, Gustav
and Luke{\v{z}}i{\v{c}}, Alan
and Fern{\'a}ndez, Gustavo
and Gupta, Abhinav
and Petrosino, Alfredo
and Memarmoghadam, Alireza
and Garcia-Martin, Alvaro
and Sol{\'i}s Montero, Andr{\'e}s
and Vedaldi, Andrea
and Robinson, Andreas
and Ma, Andy J.
and Varfolomieiev, Anton
and Alatan, Aydin
and Erdem, Aykut
and Ghanem, Bernard
and Liu, Bin
and Han, Bohyung
and Martinez, Brais
and Chang, Chang-Ming
and Xu, Changsheng
and Sun, Chong
and Kim, Daijin
and Chen, Dapeng
and Du, Dawei
and Mishra, Deepak
and Yeung, Dit-Yan
and Gundogdu, Erhan
and Erdem, Erkut
and Khan, Fahad
and Porikli, Fatih
and Zhao, Fei
and Bunyak, Filiz
and Battistone, Francesco
and Zhu, Gao
and Roffo, Giorgio
and Subrahmanyam, Gorthi R. K. Sai
and Bastos, Guilherme
and Seetharaman, Guna
and Medeiros, Henry
and Li, Hongdong
and Qi, Honggang
and Bischof, Horst
and Possegger, Horst
and Lu, Huchuan
and Lee, Hyemin
and Nam, Hyeonseob
and Chang, Hyung Jin
and Drummond, Isabela
and Valmadre, Jack
and Jeong, Jae-chan
and Cho, Jae-il
and Lee, Jae-Yeong
and Zhu, Jianke
and Feng, Jiayi
and Gao, Jin
and Choi, Jin Young
and Xiao, Jingjing
and Kim, Ji-Wan
and Jeong, Jiyeoup
and Henriques, Jo{\~a}o F.
and Lang, Jochen
and Choi, Jongwon
and Martinez, Jose M.
and Xing, Junliang
and Gao, Junyu
and Palaniappan, Kannappan
and Lebeda, Karel
and Gao, Ke
and Mikolajczyk, Krystian
and Qin, Lei
and Wang, Lijun
and Wen, Longyin
and Bertinetto, Luca
and Rapuru, Madan Kumar
and Poostchi, Mahdieh
and Maresca, Mario
and Danelljan, Martin
and Mueller, Matthias
and Zhang, Mengdan
and Arens, Michael
and Valstar, Michel
and Tang, Ming
and Baek, Mooyeol
and Khan, Muhammad Haris
and Wang, Naiyan
and Fan, Nana
and Al-Shakarji, Noor
and Miksik, Ondrej
and Akin, Osman
and Moallem, Payman
and Senna, Pedro
and Torr, Philip H. S.
and Yuen, Pong C.
and Huang, Qingming
and Martin-Nieto, Rafael
and Pelapur, Rengarajan
and Bowden, Richard
and Lagani{\`e}re, Robert
and Stolkin, Rustam
and Walsh, Ryan
and Krah, Sebastian B.
and Li, Shengkun
and Zhang, Shengping
and Yao, Shizeng
and Hadfield, Simon
and Melzi, Simone
and Lyu, Siwei
and Li, Siyi
and Becker, Stefan
and Golodetz, Stuart
and Kakanuru, Sumithra
and Choi, Sunglok
and Hu, Tao
and Mauthner, Thomas
and Zhang, Tianzhu
and Pridmore, Tony
and Santopietro, Vincenzo
and Hu, Weiming
and Li, Wenbo
and H{\"u}bner, Wolfgang
and Lan, Xiangyuan
and Wang, Xiaomeng
and Li, Xin
and Li, Yang
and Demiris, Yiannis
and Wang, Yifan
and Qi, Yuankai
and Yuan, Zejian
and Cai, Zexiong
and Xu, Zhan
and He, Zhenyu
and Chi, Zhizhen",
editor="Hua, Gang
and J{\'e}gou, Herv{\'e}",
title="The Visual Object Tracking VOT2016 Challenge Results",
bookTitle="Computer Vision -- ECCV 2016 Workshops: Amsterdam, The Netherlands, October 8-10 and 15-16, 2016, Proceedings, Part II",
year="2016",
publisher="Springer International Publishing",
address="Cham",
pages="777--823",
abstract="The Visual Object Tracking challenge VOT2016 aims at comparing short-term single-object visual trackers that do not apply pre-learned models of object appearance. Results of 70 trackers are presented, with a large number of trackers being published at major computer vision conferences and journals in the recent years. The number of tested state-of-the-art trackers makes the VOT 2016 the largest and most challenging benchmark on short-term tracking to date. For each participating tracker, a short description is provided in the Appendix. The VOT2016 goes beyond its predecessors by (i) introducing a new semi-automatic ground truth bounding box annotation methodology and (ii) extending the evaluation system with the no-reset experiment. The dataset, the evaluation kit as well as the results are publicly available at the challenge website ( http://votchallenge.net ).",
isbn="978-3-319-48881-3",
doi="10.1007/978-3-319-48881-3_54",
url="https://doi.org/10.1007/978-3-319-48881-3_54"
}
