I am excited to announce that my first first-author paper, “Visual Tracking by means of Deep Reinforcement Learning and an Expert Demonstrator”, has been accepted at the Visual Object Tracking (VOT) 2019 Challenge workshop. This workshop is the premier annual event in the visual tracking community, and this year it will be held in conjunction with the International Conference on Computer Vision (ICCV) 2019.

Here is the abstract of the paper:

In the last decade, many different algorithms have been proposed to track a generic object in videos. Their execution on recent large-scale video datasets produces a great amount of diverse tracking behaviours. New trends in Reinforcement Learning have shown that demonstrations of an expert agent can be used efficiently to speed up policy learning. Taking inspiration from such works and from the recent applications of Reinforcement Learning to visual tracking, we propose two novel trackers: A3CT, which exploits demonstrations of a state-of-the-art tracker to learn an effective tracking policy, and A3CTD, which takes advantage of the same expert tracker to correct its behaviour during tracking. Through an extensive experimental validation on the GOT-10k, OTB-100, LaSOT, UAV123 and VOT benchmarks, we show that the proposed trackers achieve state-of-the-art performance while running in real-time.
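To give a rough intuition of how expert demonstrations can bootstrap the learning of a tracking policy, here is a minimal, purely illustrative PyTorch sketch of a behaviour-cloning-style update, where a toy policy regresses towards the bounding-box adjustments produced by an expert tracker. All names and design choices here (LearnerPolicy, the loss, the box parametrisation) are my own simplifications and do not reflect the actual A3CT/A3CTD architectures or training procedures described in the paper.

```python
# Illustrative sketch only: imitating an expert tracker's actions.
# Names and shapes are hypothetical, not taken from the A3CT/A3CTD paper.
import torch
import torch.nn as nn

class LearnerPolicy(nn.Module):
    """Toy policy mapping a frame crop to a bounding-box adjustment."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(16, 4)  # (dx, dy, dw, dh)

    def forward(self, crop):
        return self.head(self.backbone(crop))

policy = LearnerPolicy()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)
loss_fn = nn.SmoothL1Loss()

# Dummy batch: frame crops and the box adjustments an expert tracker produced.
crops = torch.randn(8, 3, 64, 64)
expert_actions = torch.randn(8, 4)

# One imitation step: push the learner's action towards the expert's.
pred_actions = policy(crops)
loss = loss_fn(pred_actions, expert_actions)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"imitation loss: {loss.item():.4f}")
```

The point of the sketch is only the general idea stated in the abstract: the expert's outputs serve as a dense, readily available learning signal that can speed up policy learning compared to starting from reinforcement signals alone.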

The preprint can be found on arXiv. Below you can have a look at some videos showing the performance of our proposed trackers.