Conference paper, 2023

Federated learning compression designed for lightweight communications

Abstract

Federated Learning (FL) is a promising distributed method for edge-level machine learning, particularly for privacy-sensitive applications such as those in military and medical domains, where client data cannot be shared or transferred to a cloud computing server. In many use cases, communication cost is a major challenge in FL due to its inherently intensive network usage. Client devices, such as smartphones or Internet of Things (IoT) nodes, have limited resources in terms of energy, computation, and memory. To address these hardware constraints, lightweight models and compression techniques such as pruning and quantization are commonly adopted in centralised paradigms. In this paper, we investigate the impact of compression techniques on FL for a typical image classification task. Going further, we demonstrate that a straightforward method can compress messages by up to 50% with less than 1% accuracy loss, competing with state-of-the-art techniques.
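As a rough illustration of the kind of message compression discussed in the abstract (and not the authors' actual method, which is detailed in the paper), the hypothetical Python sketch below halves a client update's size by casting float32 weights to float16 before transmission, one simple way to reach a 50% message-size reduction. Function names and structure are illustrative assumptions only.

```python
# Hypothetical sketch, NOT the paper's method: each client casts its float32
# weight update to float16 before sending it to the server, halving the
# communicated payload (a 50% message compression).
import numpy as np


def compress_update(update):
    """Quantize a client's float32 update to float16 before transmission."""
    return {name: w.astype(np.float16) for name, w in update.items()}


def decompress_update(update):
    """Server side: restore float32 precision before aggregation."""
    return {name: w.astype(np.float32) for name, w in update.items()}


if __name__ == "__main__":
    # Toy single-layer update, only to show the size reduction.
    client_update = {"conv1.weight": np.random.randn(64, 3, 3, 3).astype(np.float32)}
    compressed = compress_update(client_update)
    original = sum(w.nbytes for w in client_update.values())
    reduced = sum(w.nbytes for w in compressed.values())
    print(f"message size: {original} B -> {reduced} B "
          f"({100 * (1 - reduced / original):.0f}% smaller)")
```

Pruning or lower-bit quantization, as studied in the paper, would plug in at the same point: wherever the client serializes its update for the server.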

Dates and versions

hal-04251969, version 1 (20-10-2023)


Cite

Lucas Grativol Ribeiro, Mathieu Leonardon, Guillaume Muller, Virginie Fresse, Matthieu Arzel. Federated learning compression designed for lightweight communications. ICECS 2023: IEEE 30th International Conference on Electronics, Circuits and Systems, Dec 2023, Istanbul, Turkey. ⟨10.1109/ICECS58634.2023.10382717⟩. ⟨hal-04251969⟩