

Poster

Distributed Active Client Selection With Noisy Clients Using Model Association Scores

Kwang In Kim

# 13
Strong Double Blind: This paper was not made available on public preprint services during the review process.
Thu 3 Oct 7:30 a.m. PDT — 9:30 a.m. PDT

Abstract:

Active client selection (ACS) strategically identifies clients to participate in model updates during each training round of federated learning. In scenarios with limited communication resources, ACS emerges as a superior alternative to random client selection, significantly improving the convergence rate. However, current ACS methodologies face challenges in managing clients that provide erroneous model updates, such as those arising from noisy labels. To address this challenge, we introduce a new ACS algorithm tailored for scenarios with unknown erroneous clients. Our algorithm constructs a client sampling distribution based on the global association among model updates, which quantifies the ability of a client’s model update to align with updates from other clients. Leveraging these model associations, we efficiently identify clients whose updates may contain substantial errors, potentially disrupting the overall model training process. This approach is simple, computationally efficient, and eliminates the need for hyperparameter tuning. Our experiments, conducted on six benchmark datasets that encompass different types of erroneous and potentially malicious clients, demonstrate that conventional ACS methods, not designed for erroneous clients, fail to outperform random selection. In contrast, our approach significantly enhances convergence speed while using the same communication resources.
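The abstract does not spell out the association measure or the exact sampling rule, so the following is only a minimal sketch of the general idea: score each client by the average cosine similarity of its flattened model update to the other clients' updates, then sample clients in proportion to the clipped scores so that poorly associated (likely noisy) clients receive little sampling mass. The function names, the cosine-similarity choice, and the clipping step are illustrative assumptions, not the paper's published algorithm.

```python
import numpy as np

def association_scores(updates):
    """Assumed association score: mean cosine similarity of each client's
    flattened update (rows of `updates`) to all other clients' updates."""
    norms = np.linalg.norm(updates, axis=1, keepdims=True) + 1e-12
    unit = updates / norms                         # row-normalize updates
    sim = unit @ unit.T                            # pairwise cosine similarities
    np.fill_diagonal(sim, 0.0)                     # ignore self-similarity
    return sim.sum(axis=1) / (len(updates) - 1)    # mean similarity to the others

def sampling_distribution(scores):
    """Turn association scores into a client sampling distribution; clients
    whose updates disagree with the consensus get little or no mass."""
    weights = np.clip(scores, 0.0, None)           # suppress anti-correlated clients
    if weights.sum() == 0.0:                       # degenerate case: fall back to uniform
        return np.full(len(scores), 1.0 / len(scores))
    return weights / weights.sum()

# Toy round: 10 clients, 3 of which send noisy (random) updates.
rng = np.random.default_rng(0)
clean = rng.normal(1.0, 0.1, size=(7, 1000))       # roughly aligned updates
noisy = rng.normal(0.0, 1.0, size=(3, 1000))       # erroneous updates
p = sampling_distribution(association_scores(np.vstack([clean, noisy])))
selected = rng.choice(10, size=3, replace=False, p=p)   # pick 3 clients this round
print(p.round(3), selected)
```

In this toy example the three noisy clients end up with near-zero sampling probability, which is the qualitative behavior the abstract describes; the actual method, experimental setup, and guarantees are given in the paper itself.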
