Robust Deep Reinforcement Learning Through Adversarial Attacks and Training: A Survey - IRT SystemX
Preprint / Working Paper. Year: 2024


Abstract

Deep Reinforcement Learning (DRL) is an approach for training autonomous agents across a variety of complex environments. Despite strong performance in well-known benchmark environments, it remains susceptible to minor variations in conditions, raising concerns about its reliability in real-world applications. To be usable in practice, DRL must demonstrate trustworthiness and robustness. One way to improve the robustness of DRL to unknown changes in conditions is Adversarial Training: training the agent against well-suited adversarial attacks on the dynamics of the environment. Addressing this critical issue, our work presents an in-depth analysis of contemporary adversarial attack methodologies, systematically categorizing them and comparing their objectives and operational mechanisms. This classification offers detailed insight into how adversarial attacks serve to evaluate the resilience of DRL agents, thereby paving the way for enhancing their robustness.
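To make the idea of an adversarial attack on a DRL agent concrete, here is a minimal sketch (not taken from the survey): an FGSM-style attack on the observations of a toy linear policy. The policy, weights, and epsilon value are all hypothetical; real attacks of this family compute the gradient through a deep network, whereas for a linear policy the gradient of the decision margin is available in closed form.

```python
import numpy as np

# Hypothetical 2-action linear policy: action = argmax(W @ obs).
W = np.array([[1.0, 0.0],
              [0.0, 1.0]])

def act(obs):
    """Greedy action of the linear policy."""
    return int(np.argmax(W @ obs))

def fgsm_observation_attack(obs, epsilon):
    """Perturb obs to shrink the margin logit[a*] - logit[runner-up].

    For a linear policy, the gradient of that margin with respect to
    the observation is exactly W[a*] - W[runner-up], so a single
    signed step (as in FGSM) is the optimal L-infinity attack here.
    """
    logits = W @ obs
    order = np.argsort(logits)
    a_star, runner_up = order[-1], order[-2]
    grad = W[a_star] - W[runner_up]       # d(margin)/d(obs)
    return obs - epsilon * np.sign(grad)  # step against the margin

obs = np.array([0.1, 0.0])
print(act(obs))                                        # -> 0
adv_obs = fgsm_observation_attack(obs, epsilon=0.2)
print(act(adv_obs))                                    # -> 1 (decision flipped)
```

In adversarial training, perturbed inputs such as `adv_obs` would be fed back into the training loop so the agent learns a policy that remains stable under such worst-case variations.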
Main file: 2403.00420(1).pdf (1.47 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04521876 , version 1 (26-03-2024)

Identifiers

Cite

Lucas Schott, Josephine Delas, Hatem Hajri, Elies Gherbi, Reda Yaich, et al. Robust Deep Reinforcement Learning Through Adversarial Attacks and Training: A Survey. 2024. ⟨hal-04521876⟩