Author(s): Yu, ZY (Yu, Zaiyang); Tiwari, P (Tiwari, Prayag); Hou, LY (Hou, Luyang); Li, LS (Li, Lusi); Li, WJ (Li, Weijun); Jiang, LM (Jiang, Limin); Ning, X (Ning, Xin)

Source: KNOWLEDGE-BASED SYSTEMS  Volume: 283  Article Number: 111200  DOI: 10.1016/j.knosys.2023.111200  Early Access Date: NOV 2023  Published: JAN 11 2024

Abstract: Re-identification (ReID) of occluded persons is challenging due to the information lost in scenes with occlusions. Most existing methods for occluded ReID use 2D network structures to extract representations directly from 2D RGB (red, green, and blue) images, which can reduce performance in occluded scenes. Since a person is a 3D non-grid object, learning semantic representations in a 2D space limits the ability to accurately profile an occluded person, so it is crucial to explore alternative approaches that handle occlusions effectively and leverage the full 3D nature of a person. To tackle these challenges, we employ a 3D view-based approach that fully utilizes the geometric information of 3D objects while leveraging advances in 2D networks for feature extraction. Our study is the first to introduce a 3D view-based method for holistic and occluded ReID. To implement this approach, we propose a random rendering strategy that converts 2D RGB images into 3D multi-view images, together with a 3D Multi-View Transformation Network for ReID (MV-ReID) that groups and aggregates these images into a unified feature space. Compared to 2D RGB images, multi-view images can reconstruct occluded portions of a person in 3D space, enabling a more comprehensive understanding of occluded individuals. Experiments on benchmark datasets demonstrate that the proposed method achieves state-of-the-art results on occluded ReID tasks and competitive performance on holistic ReID tasks. These results suggest that our approach has the potential to address occlusion problems and contribute to the field of ReID. The source code and dataset are available at https://github.com/yuzaiyang123/MV-Reid.
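The abstract describes a view-based pipeline: each person image is converted into several rendered views, every view is encoded by a shared 2D backbone, and the per-view features are aggregated into one embedding used for ReID matching. The sketch below is a minimal illustration of that idea, not the authors' released code; the ResNet-50 backbone, the number of views, and the mean-pooling aggregation are assumptions made here for brevity, whereas the paper's actual random rendering strategy and MV-ReID network are available at https://github.com/yuzaiyang123/MV-Reid.

```python
# Minimal sketch of view-based feature aggregation for ReID (illustrative only).
# Assumptions: views are already rendered offline; ResNet-50 is the shared 2D
# encoder; views are fused by mean pooling rather than the paper's MV-ReID network.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet50


class MultiViewEmbedder(nn.Module):
    """Encode a set of rendered views of one person into a single ReID embedding."""

    def __init__(self, embed_dim: int = 512):
        super().__init__()
        backbone = resnet50(weights=None)       # shared 2D feature extractor
        backbone.fc = nn.Identity()             # keep the 2048-d pooled features
        self.backbone = backbone
        self.proj = nn.Linear(2048, embed_dim)  # project to the ReID embedding size

    def forward(self, views: torch.Tensor) -> torch.Tensor:
        # views: (batch, num_views, 3, H, W), rendered from the 3D representation
        b, v, c, h, w = views.shape
        feats = self.backbone(views.reshape(b * v, c, h, w))  # (b*v, 2048)
        feats = self.proj(feats).reshape(b, v, -1)            # (b, v, embed_dim)
        # Aggregate the views into one identity embedding (mean pooling here;
        # the paper uses a dedicated multi-view aggregation network instead).
        return F.normalize(feats.mean(dim=1), dim=-1)


if __name__ == "__main__":
    model = MultiViewEmbedder()
    dummy_views = torch.randn(2, 6, 3, 256, 128)  # 2 identities x 6 rendered views
    embeddings = model(dummy_views)
    print(embeddings.shape)                       # torch.Size([2, 512])
```

In such a setup the resulting embeddings would be compared with cosine distance for retrieval; the key point the abstract makes is that occluded regions can be recovered in the rendered views before this aggregation step.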

Accession Number: WOS:001125956700001

ISSN: 0950-7051

eISSN: 1872-7409