
TANet: Text region attention learning for vehicle re-identification

Hu, Wenbo; Zhan, Hongjian; Shivakumara, Palaiahnakote; Pal, Umapada; Lu, Yue

Authors

Wenbo Hu

Hongjian Zhan

Palaiahnakote Shivakumara

Umapada Pal

Yue Lu



Abstract

In recent years, the challenge of distinguishing vehicles of the same model has prompted a shift towards leveraging both global appearances and local features, such as lights and rearview mirrors, for vehicle re-identification (ReID). Despite these advancements, accurately identifying vehicles remains difficult, particularly because highly discriminative text regions are underutilized. This paper introduces the Text Region Attention Network (TANet), a novel approach that integrates global and local information with a specific focus on text regions for improved feature learning. TANet captures stable and distinctive features across various vehicle views, and its effectiveness is demonstrated through rigorous evaluation on the VeRi-776, VehicleID, and VERI-Wild datasets. TANet significantly outperforms existing methods, achieving mAP scores of 83.6% on VeRi-776, 84.4% on VehicleID (Large), and 76.6% on VERI-Wild (Large). Statistical tests further validate the superiority of TANet over the baseline, showing notable improvements in mAP and Top-1 through Top-15 accuracy metrics.
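The abstract describes fusing a global appearance branch with a branch that attends to text-bearing regions. The sketch below is an illustrative, hedged interpretation of that idea, not the authors' TANet implementation: the module names (TextRegionAttention, GlobalLocalHead) and the specific fusion strategy are assumptions introduced here for clarity only.

```python
# Minimal sketch of a text-region attention block for vehicle ReID.
# NOT the published TANet architecture; all names and design choices are hypothetical.
import torch
import torch.nn as nn


class TextRegionAttention(nn.Module):
    """Re-weights a CNN feature map with a spatial mask meant to
    emphasize text-bearing regions (e.g. plates, decals, stickers)."""

    def __init__(self, channels: int):
        super().__init__()
        # 1x1 convs predict a single-channel spatial attention mask in [0, 1].
        self.mask_head = nn.Sequential(
            nn.Conv2d(channels, channels // 4, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        mask = self.mask_head(feat)   # (B, 1, H, W) attention over text regions
        return feat * mask + feat     # residual re-weighting keeps global context


class GlobalLocalHead(nn.Module):
    """Fuses a global branch and a text-attended local branch into one embedding."""

    def __init__(self, channels: int, embed_dim: int = 512):
        super().__init__()
        self.text_attn = TextRegionAttention(channels)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fuse = nn.Linear(channels * 2, embed_dim)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        g = self.pool(feat).flatten(1)                   # global descriptor
        t = self.pool(self.text_attn(feat)).flatten(1)   # text-region descriptor
        return self.fuse(torch.cat([g, t], dim=1))       # fused ReID embedding


if __name__ == "__main__":
    backbone_feat = torch.randn(2, 2048, 16, 16)  # e.g. ResNet-50 stage-4 output
    head = GlobalLocalHead(channels=2048)
    print(head(backbone_feat).shape)               # torch.Size([2, 512])
```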

Citation

Hu, W., Zhan, H., Shivakumara, P., Pal, U., & Lu, Y. (2024). TANet: Text region attention learning for vehicle re-identification. Engineering Applications of Artificial Intelligence, 133, Article 108448. https://doi.org/10.1016/j.engappai.2024.108448

Journal Article Type: Article
Acceptance Date: Apr 11, 2024
Online Publication Date: Apr 26, 2024
Publication Date: 2024
Deposit Date: Apr 26, 2024
Publicly Available Date: Apr 27, 2026
Journal: Engineering Applications of Artificial Intelligence
Print ISSN: 0952-1976
Publisher: Elsevier
Peer Reviewed: Peer Reviewed
Volume: 133
DOI: https://doi.org/10.1016/j.engappai.2024.108448