The efficient tracking of articulated bodies over time is an essential element of pattern recognition and dynamic scene analysis. This paper proposes a novel method for robust visual tracking, based on the combination of image-based prediction and weighted correlation. Starting from an initial guess, neural computation is applied to predict the position of the target in each video frame. Normalized cross-correlation is then applied to refine the predicted target position. Image-based prediction relies on a novel architecture, derived from Elman's recurrent neural network and adopting nearest-neighbor connections between the input and context layers in order to store the temporal information content of the video. The proposed architecture, named 2D Recurrent Neural Network, ensures both limited complexity and a very fast learning stage. At the same time, it guarantees fast execution times and excellent accuracy on the considered tracking task. The effectiveness of the proposed approach is demonstrated on a very challenging set of dynamic image sequences, extracted from the triple jump final at the London 2012 Summer Olympics. The system shows remarkable performance in all considered cases, characterized by changing backgrounds and a large variety of articulated motions.
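The refinement step described above (normalized cross-correlation around a predicted position) can be illustrated with a minimal sketch. This is not the authors' implementation: the function names, the search-window strategy, and the grayscale NumPy representation are assumptions made for illustration only.

```python
import numpy as np

def ncc(patch, template):
    # Zero-mean normalized cross-correlation between two equal-size patches.
    a = patch - patch.mean()
    b = template - template.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def refine_by_ncc(frame, template, predicted_xy, search_radius=5):
    """Refine a predicted target position (x, y) by maximizing NCC
    over a local search window around the prediction (hypothetical
    helper, sketching the predict-then-correlate scheme)."""
    th, tw = template.shape
    px, py = predicted_xy
    best_score, best_xy = -2.0, predicted_xy
    for dy in range(-search_radius, search_radius + 1):
        for dx in range(-search_radius, search_radius + 1):
            x, y = px + dx, py + dy
            # Skip candidate windows that fall outside the frame.
            if x < 0 or y < 0 or y + th > frame.shape[0] or x + tw > frame.shape[1]:
                continue
            score = ncc(frame[y:y + th, x:x + tw], template)
            if score > best_score:
                best_score, best_xy = score, (x, y)
    return best_xy, best_score
```

In a tracking loop, the neural predictor would supply `predicted_xy` for each frame, and this correlation search would snap the estimate onto the best local match before the next prediction.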
Masala, Giovanni Luca Christian; Golosio, Bruno; Tistarelli, Massimo; Grosso, Enrico. 2D recurrent neural networks for robust visual tracking of non-rigid bodies. Paper presented at the 17th International Conference on Engineering Applications of Neural Networks (EANN 2016), held in the United Kingdom, 2016. Vol. 629 (2016), pp. 18-34. doi:10.1007/978-3-319-44188-7_2.