Preprint, Working Paper. Year: 2024

Entity-aware cross-modal pretraining for Knowledge-Based Visual Question Answering

Abstract

Knowledge-Aware Visual Question Answering about Entities (KVQAE) is a recent multimodal retrieval task that aims to answer visual questions about named entities from a multimodal knowledge base. In this context, we focus more particularly on cross-modal retrieval and propose to inject information about entities into the representations of both texts and images as they are built, through two auxiliary pretraining tasks, namely an entity-level masked language modeling objective and an entity type prediction objective. We show competitive performance over existing approaches on three standard KVQAE benchmarks, demonstrating the benefit of raising entity awareness during cross-modal pretraining, specifically for the KVQAE task.
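The abstract does not detail the architecture, but for illustration, here is a minimal PyTorch sketch of how the two auxiliary pretraining objectives could be combined on top of a shared encoder: an entity-level masked language modeling (MLM) head and an entity type prediction head whose losses are summed. The module names, dimensions, number of entity types, and the plain loss summation are assumptions for this sketch, not the authors' implementation.

```python
# Minimal sketch (assumed, not the authors' code) of combining an
# entity-level MLM head with an entity type prediction head during
# pretraining of a shared text/image encoder.
import torch
import torch.nn as nn

class EntityAwarePretrainingHeads(nn.Module):
    def __init__(self, hidden_size=768, vocab_size=30522, num_entity_types=18):
        super().__init__()
        # Head 1: entity-level MLM -- predict the vocabulary ids of masked
        # entity mention tokens from their contextual representations.
        self.mlm_head = nn.Linear(hidden_size, vocab_size)
        # Head 2: entity type prediction -- classify the entity type
        # (e.g. person, building, event) from a pooled representation.
        self.type_head = nn.Linear(hidden_size, num_entity_types)
        self.mlm_loss = nn.CrossEntropyLoss(ignore_index=-100)
        self.type_loss = nn.CrossEntropyLoss()

    def forward(self, hidden_states, pooled_output, mlm_labels, type_labels):
        # hidden_states: (batch, seq_len, hidden) contextual token embeddings
        # pooled_output: (batch, hidden) sequence-level representation
        # mlm_labels:    (batch, seq_len), -100 on non-masked positions
        # type_labels:   (batch,) gold entity type ids
        mlm_logits = self.mlm_head(hidden_states)
        type_logits = self.type_head(pooled_output)
        loss_mlm = self.mlm_loss(
            mlm_logits.view(-1, mlm_logits.size(-1)), mlm_labels.view(-1))
        loss_type = self.type_loss(type_logits, type_labels)
        # The two auxiliary objectives are simply summed here; any weighting
        # used in the paper is not specified in the abstract.
        return loss_mlm + loss_type
```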
Main file

adjali2025preprint_hal.pdf (919.46 KB)
Origin: Files produced by the author(s)

Dates and versions

cea-04910767, version 1 (24-01-2025)

Identifiers

  • HAL Id: cea-04910767, version 1

Cite

Omar Adjali, Olivier Ferret, Sahar Ghannay, Hervé Le Borgne. Entity-aware cross-modal pretraining for Knowledge-Based Visual Question Answering. 2024. ⟨cea-04910767⟩