Statistical heterogeneity among distributed data, especially feature distribution skew, is common in practice and poses a challenging problem in federated learning: it can degrade the performance of the aggregated global model. In this paper, we introduce pFedV, a novel approach that takes a variational inference perspective by incorporating a variational distribution into neural networks. During training, we add a KL-divergence term to the loss function to constrain the output distribution of the feature extraction layers, and we personalize the final layer of each client's model. Experimental results demonstrate the effectiveness of our approach in mitigating feature-space distribution shift in federated learning.
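The variational constraint described above can be illustrated with a short sketch. The code below is a minimal illustration of the idea, not the authors' reference implementation: it assumes a Gaussian variational layer inserted after the feature extractor, samples via the reparameterization trick, and adds the closed-form KL divergence against a standard normal to the training loss. The names (`VariationalLayer`, `PFedVNet`, `local_step`) and the KL weight `beta` are hypothetical.

```python
# Sketch of a variational feature layer with a KL-divergence constraint,
# assuming a Gaussian q(z|x) and a standard-normal prior. Illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class VariationalLayer(nn.Module):
    """Maps features to a Gaussian q(z|x) = N(mu, diag(sigma^2))."""

    def __init__(self, in_dim: int, z_dim: int):
        super().__init__()
        self.mu = nn.Linear(in_dim, z_dim)
        self.logvar = nn.Linear(in_dim, z_dim)

    def forward(self, h: torch.Tensor):
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        # Closed-form KL( N(mu, sigma^2) || N(0, I) ), averaged over the batch
        kl = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1).sum(dim=1).mean()
        return z, kl


class PFedVNet(nn.Module):
    """Feature extractor + variational layer (shared) and classifier (local)."""

    def __init__(self, in_dim=784, hidden=256, z_dim=64, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.variational = VariationalLayer(hidden, z_dim)
        self.classifier = nn.Linear(z_dim, n_classes)  # personalized per client

    def forward(self, x):
        z, kl = self.variational(self.features(x))
        return self.classifier(z), kl


def local_step(model, x, y, optimizer, beta=1e-3):
    """One client-side update: task loss plus the weighted KL constraint."""
    logits, kl = model(x)
    loss = F.cross_entropy(logits, y) + beta * kl
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Under these assumptions, a federated round would average only the shared parameters (`features` and `variational`) across clients, while each client's `classifier` stays local, reflecting the personalized final layer described in the abstract.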

Citation:

Y. Mou, J. Geng, F. Zhou, O. Beyan, C. Rong, and S. Decker, “pFedV: Mitigating Feature Distribution Skewness via Personalized Federated Learning with Variational Distribution Constraints,” in Lecture Notes in Computer Science, 2023, pp. 283–294. doi: 10.1007/978-3-031-33377-4_22.


More Information:

DOI: https://doi.org/10.1007/978-3-031-33377-4_22