We propose a prototype-based federated learning method tailored to embedding networks for classification and verification tasks. We focus on scenarios in which each client holds data from only a single class. The central challenge is to learn an embedding network that discriminates between classes while respecting privacy constraints: sharing true class prototypes with the server or other clients could leak sensitive information. To address this, we introduce a proxy class prototype that can be safely shared among clients. Our approach generates each proxy by linearly combining the true class prototype with its nearest neighbors, concealing the true prototype while still enabling clients to learn discriminative embedding networks. We compare our method against alternative techniques, including random Gaussian noise addition and random selection under cosine-similarity constraints. We also evaluate the robustness of our approach against gradient inversion attacks and introduce a prototype leakage measure that quantifies how much private information is revealed when the proposed proxy class prototype is shared. Furthermore, we provide a theoretical convergence analysis of our approach. Empirical results on three benchmark datasets (CIFAR-100, VoxCeleb1, and VGGFace2) demonstrate the effectiveness of the proposed method for federated learning from scratch.
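To make the proxy-generation idea concrete, the following is a minimal sketch of one plausible instantiation: a proxy prototype formed as a convex combination of the true class prototype and its k nearest neighbors (by cosine similarity) among a set of candidate prototypes. The function name, the mixing weight `alpha`, the Dirichlet-sampled neighbor weights, and the choice of k are all illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def proxy_prototype(true_proto, candidate_protos, k=3, alpha=0.5, seed=0):
    """Illustrative sketch (not the paper's exact method): blend the true
    class prototype with its k nearest candidate prototypes so that the
    true prototype is never shared directly."""
    rng = np.random.default_rng(seed)
    # Cosine similarity between the true prototype and each candidate.
    t = true_proto / np.linalg.norm(true_proto)
    c = candidate_protos / np.linalg.norm(candidate_protos, axis=1, keepdims=True)
    sims = c @ t
    nn_idx = np.argsort(-sims)[:k]  # indices of the k nearest neighbors
    # Convex combination: weight alpha on the true prototype, the rest
    # spread randomly over the neighbors (Dirichlet weights sum to 1).
    w = rng.dirichlet(np.ones(k)) * (1.0 - alpha)
    proxy = alpha * true_proto + candidate_protos[nn_idx].T @ w
    return proxy / np.linalg.norm(proxy)  # re-normalize to the unit sphere
```

Under this sketch, a smaller `alpha` hides the true prototype more aggressively at the cost of a less faithful proxy; the paper's prototype leakage measure would quantify that trade-off.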