DOI: 10.1007/978-3-030-68238-5_39
Article

The Eighth Visual Object Tracking VOT2020 Challenge Results

Authors: Matej Kristan, Aleš Leonardis, Jiří Matas, Michael Felsberg, Roman Pflugfelder, Joni-Kristian Kämäräinen, Martin Danelljan, Luka Čehovin Zajc, Alan Lukežič, Ondrej Drbohlav, Linbo He, Yushan Zhang, Song Yan, Jinyu Yang, Gustavo Fernández, Alexander Hauptmann, Alireza Memarmoghadam, Álvaro García-Martín, Andreas Robinson, Anton Varfolomieiev, Awet Haileslassie Gebrehiwot, Bedirhan Uzun, Bin Yan, Bing Li, Chen Qian, Chi-Yi Tsai, Christian Micheloni, Dong Wang, Fei Wang, Fei Xie, Felix Jaremo Lawin, Fredrik Gustafsson, Gian Luca Foresti, Goutam Bhat, Guangqi Chen, Haibin Ling, Haitao Zhang, Hakan Cevikalp, Haojie Zhao, Haoran Bai, Hari Chandana Kuchibhotla, Hasan Saribas, Heng Fan, Hossein Ghanei-Yakhdan, Houqiang Li, Houwen Peng, Huchuan Lu, Hui Li, Javad Khaghani, Jesus Bescos, Jianhua Li, Jianlong Fu, Jiaqian Yu, Jingtao Xu, Josef Kittler, Jun Yin, Junhyun Lee, Kaicheng Yu, Kaiwen Liu, Kang Yang, Kenan Dai, Li Cheng, Li Zhang, Lijun Wang, Linyuan Wang, Luc Van Gool, Luca Bertinetto, Matteo Dunnhofer, Miao Cheng, Mohana Murali Dasari, Ning Wang, Ning Wang, Pengyu Zhang, Philip H. S. Torr, Qiang Wang, Radu Timofte, Rama Krishna Sai Gorthi, Seokeon Choi, Seyed Mojtaba Marvasti-Zadeh, Shaochuan Zhao, Shohreh Kasaei, Shoumeng Qiu, Shuhao Chen, Thomas B. Schön, Tianyang Xu, Wei Lu, Weiming Hu, Wengang Zhou, Xi Qiu, Xiao Ke, Xiao-Jun Wu, Xiaolin Zhang, Xiaoyun Yang, Xuefeng Zhu, Yingjie Jiang, Yingming Wang, Yiwei Chen, Yu Ye, Yuezhou Li, Yuncon Yao, Yunsung Lee, Yuzhang Gu, Zezhou Wang, Zhangyong Tang, Zhen-Hua Feng, Zhijun Mai, Zhipeng Zhang, Zhirong Wu, Ziang Ma
Published: 23 August 2020

Abstract

The Visual Object Tracking challenge VOT2020 is the eighth annual tracker benchmarking activity organized by the VOT initiative. Results of 58 trackers are presented; many are state-of-the-art trackers published at major computer vision conferences or in journals in recent years. The VOT2020 challenge was composed of five sub-challenges focusing on different tracking domains: (i) VOT-ST2020 challenge focused on short-term tracking in RGB, (ii) VOT-RT2020 challenge focused on “real-time” short-term tracking in RGB, (iii) VOT-LT2020 challenge focused on long-term tracking, namely coping with target disappearance and reappearance, (iv) VOT-RGBT2020 challenge focused on short-term tracking in RGB and thermal imagery, and (v) VOT-RGBD2020 challenge focused on long-term tracking in RGB and depth imagery. Only the VOT-ST2020 datasets were refreshed. A significant novelty is the introduction of a new VOT short-term tracking evaluation methodology and of segmentation ground truth in the VOT-ST2020 challenge; bounding boxes will no longer be used in the VOT-ST challenges. A new VOT Python toolkit that implements all these novelties was introduced. The performance of the tested trackers typically far exceeds that of standard baselines. The source code for most of the trackers is publicly available from the VOT page. The dataset, the evaluation kit and the results are publicly available at the challenge website (http://votchallenge.net).
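
The new Python toolkit drives a tracker frame by frame over a TraX-style communication protocol. The following is a minimal, illustrative tracker skeleton in the spirit of the example trackers distributed with the toolkit; it assumes an integration module named `vot` that exposes a `VOT` handle with `region()`, `frame()` and `report()` methods and a binary-mask region format. These names and the exact mask representation may differ between toolkit versions, so treat this strictly as a sketch.

    # Minimal tracker skeleton for the segmentation-based VOT-ST2020 protocol.
    # Assumption: the `vot` integration module shipped with the Python toolkit
    # provides VOT(region_format), handle.region(), handle.frame(), handle.report().
    import cv2
    import numpy as np
    import vot

    handle = vot.VOT("mask")        # request the segmentation-mask region format
    init_mask = handle.region()     # initialization mask for the first frame

    imagefile = handle.frame()      # path of the first frame
    if imagefile:
        image = cv2.imread(imagefile)
        # ... initialize the tracker with `image` and `init_mask` here ...
        while True:
            imagefile = handle.frame()   # next frame, or None at the end of the sequence
            if not imagefile:
                break
            image = cv2.imread(imagefile)
            # ... run the tracker on `image`; this placeholder just repeats the initial mask
            mask = np.array(init_mask, dtype=np.uint8)
            handle.report(mask, 1.0)     # report the predicted mask and a confidence score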



      Published In

      Computer Vision – ECCV 2020 Workshops: Glasgow, UK, August 23–28, 2020, Proceedings, Part V
      Aug 2020
      776 pages
      ISBN:978-3-030-68237-8
      DOI:10.1007/978-3-030-68238-5

      Publisher

      Springer-Verlag

      Berlin, Heidelberg

      Publication History

      Published: 23 August 2020

      Author Tags

      1. Visual object tracking
      2. Performance evaluation protocol
      3. State-of-the-art benchmark
      4. RGB
      5. RGBD
      6. Depth
      7. RGBT
      8. Thermal imagery
      9. Short-term trackers
      10. Long-term trackers

      Qualifiers

      • Article


      Cited By

      • (2024) Refiner: a general object position refinement algorithm for visual tracking. Neural Computing and Applications 36(8), 3967-3981. DOI: 10.1007/s00521-023-09263-9. Online publication date: 1 March 2024.
      • (2023) RGBD1K. Proceedings of the Thirty-Seventh AAAI Conference on Artificial Intelligence and Thirty-Fifth Conference on Innovative Applications of Artificial Intelligence and Thirteenth Symposium on Educational Advances in Artificial Intelligence, 3870-3878. DOI: 10.1609/aaai.v37i3.25500. Online publication date: 7 February 2023.
      • (2023) RGBT tracking based on prior least absolute shrinkage and selection operator and quality aware fusion of deep and handcrafted features. Knowledge-Based Systems 275(C). DOI: 10.1016/j.knosys.2023.110683. Online publication date: 5 September 2023.
      • (2023) Multi-granularity Feature Fusion for Transformer-Based Single Object Tracking. Rough Sets, 311-323. DOI: 10.1007/978-3-031-50959-9_22. Online publication date: 5 October 2023.
      • (2023) Temporal Global Re-detection Based on Interaction-Fusion Attention in Long-Term Visual Tracking. Image and Graphics, 3-15. DOI: 10.1007/978-3-031-46308-2_1. Online publication date: 22 September 2023.
      • (2023) MFT: Multi-scale Fusion Transformer for Infrared and Visible Image Fusion. Artificial Neural Networks and Machine Learning – ICANN 2023, 485-496. DOI: 10.1007/978-3-031-44223-0_39. Online publication date: 26 September 2023.
      • (2023) Siamese Network Based on MLP and Multi-head Cross Attention for Visual Object Tracking. Artificial Neural Networks and Machine Learning – ICANN 2023, 420-431. DOI: 10.1007/978-3-031-44204-9_35. Online publication date: 26 September 2023.
      • (2022) QuadTreeCapsule: QuadTree Capsules for Deep Regression Tracking. Proceedings of the 30th ACM International Conference on Multimedia, 4684-4693. DOI: 10.1145/3503161.3548236. Online publication date: 10 October 2022.
      • (2022) Tracking Small and Fast Moving Objects: A Benchmark. Computer Vision – ACCV 2022, 552-569. DOI: 10.1007/978-3-031-26293-7_33. Online publication date: 4 December 2022.
      • (2022) The Tenth Visual Object Tracking VOT2022 Challenge Results. Computer Vision – ECCV 2022 Workshops, 431-460. DOI: 10.1007/978-3-031-25085-9_25. Online publication date: 23 October 2022.
