Publication Date:
2025
Citation:
A defense mechanism against label inference attacks in vertical federated learning / M. Arazzi, S. Nicolazzo, A. Nocera. - In: NEUROCOMPUTING. - ISSN 0925-2312. - 624:(2025 Apr 01), pp. 129476.1-129476.13. [10.1016/j.neucom.2025.129476]
Abstract:
Vertical Federated Learning (VFL, for short) is a category of Federated Learning that is gaining increasing attention in the context of Artificial Intelligence. According to this paradigm, machine/deep learning models are trained collaboratively among parties with vertically partitioned data. Typically, in a VFL scenario, the labels of the samples are kept private from all parties except the aggregating server, that is, the label owner. However, recent work discovered that, by exploiting the gradient information returned by the server to the bottom models, and with knowledge of only a small set of auxiliary labels on a very limited subset of training data points, an adversary can infer the private labels. These attacks are known as label inference attacks in VFL. In our work, we propose a novel framework called KDk (knowledge distillation with k-anonymity) that combines knowledge distillation and k-anonymity to provide a defense mechanism against potential label inference attacks in a VFL scenario. Through an exhaustive experimental campaign, we demonstrate that applying our approach consistently degrades the performance of the analyzed label inference attacks, by more than 60% in some cases, while keeping the accuracy of the overall VFL model almost unaltered.
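The core idea of combining knowledge distillation with k-anonymity can be illustrated with a minimal sketch: the label owner first softens its labels via a teacher's softmax (distillation), then spreads most of the probability mass uniformly over the k most likely classes, so the gradients it returns to the bottom models no longer single out the true label. The function name and the parameters `k` and `epsilon` below are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np

def kanonymize_soft_labels(teacher_logits, k=3, epsilon=0.1):
    """Hypothetical sketch of a KDk-style label transformation:
    soften the teacher's logits (knowledge distillation), then hide
    the true label among the top-k classes (k-anonymity)."""
    # Distillation step: softmax over the teacher's logits.
    exp = np.exp(teacher_logits - teacher_logits.max())
    soft = exp / exp.sum()
    # k-anonymity step: the k most likely classes share the bulk of
    # the probability mass equally, so the gradient signal no longer
    # identifies which of them is the ground-truth label.
    topk = np.argsort(soft)[-k:]
    anon = np.full_like(soft, epsilon / (len(soft) - k))
    anon[topk] = (1.0 - epsilon) / k
    return anon

# With k=2, the two most likely classes end up indistinguishable.
anon = kanonymize_soft_labels(np.array([4.0, 1.0, 0.5, 3.0, 0.2]), k=2)
```

Training the top model against such anonymized soft labels is what limits the information leaked through the returned gradients.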
IRIS Type:
01 - Journal article
Keywords:
Federated learning; Vertical Federated Learning; VFL; Label inference attack; Knowledge distillation; k-anonymity
Authors:
M. Arazzi, S. Nicolazzo, A. Nocera
Link to full record:
Link to Full Text: