Vision Transformers Need Registers - Learning visual models from massive data
Report, Year: 2023

Vision Transformers Need Registers

Abstract

Transformers have recently emerged as a powerful tool for learning visual representations. In this paper, we identify and characterize artifacts in feature maps of both supervised and self-supervised ViT networks. The artifacts correspond to high-norm tokens appearing during inference, primarily in low-informative background areas of images, which are repurposed for internal computations. We propose a simple yet effective solution based on providing additional tokens to the input sequence of the Vision Transformer to fill that role. We show that this solution fixes the problem entirely for both supervised and self-supervised models, sets a new state of the art for self-supervised visual models on dense visual prediction tasks, enables object discovery methods with larger models, and most importantly leads to smoother feature maps and attention maps for downstream visual processing.
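The mechanism the abstract describes can be sketched in a few lines: append a small number of extra "register" tokens to the ViT input sequence alongside the [CLS] and patch tokens, run the transformer as usual, and simply discard the registers at the output. The sketch below is illustrative only; the names (`num_registers`, `dim`) are placeholders, not identifiers from the paper's code, and the transformer blocks themselves are elided.

```python
import numpy as np

rng = np.random.default_rng(0)

num_patches, dim = 196, 64   # e.g. a 14x14 grid of patch embeddings
num_registers = 4            # the paper adds only a handful of registers

patch_tokens = rng.standard_normal((num_patches, dim))
cls_token = rng.standard_normal((1, dim))
registers = rng.standard_normal((num_registers, dim))  # learnable in practice

# Input sequence: [CLS] + patch tokens + register tokens
x = np.concatenate([cls_token, patch_tokens, registers], axis=0)
assert x.shape == (1 + num_patches + num_registers, dim)

# ... transformer blocks would process x here, so high-norm "scratchpad"
# computations can land on the registers instead of background patches ...

# At the output, the registers are dropped; the dense feature map is
# rebuilt from the patch tokens only.
out_patches = x[1 : 1 + num_patches]
assert out_patches.shape == (num_patches, dim)
```

The only architectural change is the longer input sequence; downstream heads still consume the [CLS] and patch tokens exactly as before.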
Main file
2309.16588.pdf (4.06 MB)
Origin: Files produced by the author(s)
License: CC BY - Attribution

Dates and versions

hal-04394066, version 1 (15-01-2024)


Identifiers

  • HAL Id: hal-04394066, version 1

Cite

Timothée Darcet, Maxime Oquab, Julien Mairal, Piotr Bojanowski. Vision Transformers Need Registers. Inria; Meta AI. 2023, pp.1-16. ⟨hal-04394066⟩
