Smart City Gnosys

Smart city article details

Title Bridging The Gap: Multi-Granularity Representation Learning For Text-Based Vehicle Retrieval
ID_Doc 12878
Authors Bo X.; Liu J.; Yang D.; Ma W.
Year 2025
Published Complex & Intelligent Systems, 11(1)
DOI http://dx.doi.org/10.1007/s40747-024-01614-w
Abstract Text-based cross-modal vehicle retrieval has been widely applied in smart city contexts and other scenarios. The objective of this approach is to identify semantically relevant target vehicles in videos using text descriptions, thereby facilitating the analysis of vehicle spatio-temporal trajectories. Current methodologies predominantly employ a two-tower architecture, where single-granularity features from both visual and textual domains are extracted independently. However, due to the intricate semantic relationships between videos and text, aligning the two modalities effectively using single-granularity feature representation poses a challenge. To address this issue, we introduce a Multi-Granularity Representation Learning model, termed MGRL, tailored for text-based cross-modal vehicle retrieval. Specifically, the model parses information from the two modalities into three hierarchical levels of feature representation: coarse-granularity, medium-granularity, and fine-granularity. Subsequently, a feature adaptive fusion strategy is devised to automatically determine the optimal pooling mechanism. Finally, a multi-granularity contrastive learning approach is implemented to ensure comprehensive semantic coverage, ranging from coarse to fine levels. Experimental outcomes on public benchmarks show that our method achieves up to a 14.56% improvement in text-to-vehicle retrieval performance, as measured by the Mean Reciprocal Rank (MRR) metric, when compared against 10 state-of-the-art baselines and 6 ablation studies. © The Author(s) 2024.
Author Keywords Cross-modal; Multi-granularity; Semantic association; Vehicle retrieval
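The abstract describes aligning text and video features at three granularity levels with a contrastive objective. The paper's exact formulation is not given here, so the following is only a generic sketch of a multi-granularity InfoNCE-style loss in pure Python; the function names (`info_nce`, `multi_granularity_loss`), the cosine-similarity scoring, and the per-level weights are illustrative assumptions, not the authors' implementation.

```python
import math

def cosine(u, v):
    # Cosine similarity between two equal-length feature vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce(text_feats, video_feats, temperature=0.07):
    # Symmetric-batch InfoNCE sketch: text i is assumed to match video i;
    # all other videos in the batch act as negatives.
    n = len(text_feats)
    total = 0.0
    for i in range(n):
        sims = [cosine(text_feats[i], video_feats[j]) / temperature
                for j in range(n)]
        m = max(sims)  # subtract max for numerical stability
        denom = sum(math.exp(s - m) for s in sims)
        total += -(sims[i] - m - math.log(denom))
    return total / n

def multi_granularity_loss(text_levels, video_levels,
                           weights=(1.0, 1.0, 1.0)):
    # Weighted sum of contrastive losses over the three assumed levels
    # (coarse, medium, fine); equal weights here are a placeholder.
    return sum(w * info_nce(t, v)
               for w, t, v in zip(weights, text_levels, video_levels))
```

A well-aligned batch (matched text/video pairs most similar) yields a lower loss than a misaligned one, which is the property the contrastive objective optimizes; how MGRL fuses the per-level features before scoring is not specified here.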


Similar Articles


Id | Similarity | Authors | Title | Published
38752 | 0.923 | Alzubi T.M.; Mukhtar U.R. | Mvr: Synergizing Large And Vision Transformer For Multimodal Natural Language-Driven Vehicle Retrieval | IEEE Access, 13 (2025)
39699 | 0.923 | Du Y.; Zhang B.; Ruan X.; Su F.; Zhao Z.; Chen H. | Omg: Observe Multiple Granularities For Natural Language-Based Vehicle Retrieval | IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, 2022-June (2022)
47389 | 0.895 | Sadiq T.; Omlin C.W. | Scene Retrieval In Traffic Videos With Contrastive Multimodal Learning | Proceedings - International Conference on Tools with Artificial Intelligence, ICTAI (2023)
57381 | 0.875 | Sebastian C.; Imbriaco R.; Meletis P.; Dubbelman G.; Bondarev E.; De With P.H.N. | Tied: A Cycle Consistent Encoder-Decoder Model For Text-To-Image Retrieval | IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (2021)