
Friday Jun 13, 2025
Group-based Client Sampling in Multi-Model Federated Learning
Federated learning (FL) allows multiple clients to collaboratively train a model without sharing their private data. In practical scenarios, clients frequently train multiple models concurrently, referred to as multi-model federated learning (MMFL). While concurrent training is generally faster than training one model at a time, MMFL exacerbates traditional FL challenges like the presence of non-i.i.d. data: since each individual client may only be able to train one model in each training round due to local resource limitations, the set of clients training each model changes from round to round, introducing instability when clients have different data distributions. Existing single-model FL approaches leverage inherent client clustering to accelerate convergence in the presence of such data heterogeneity. However, since each MMFL model may train on a different dataset, extending these ideas to MMFL requires creating a unified cluster or group structure that supports all models while coordinating their training. In this paper, we present the first group-based client-model allocation scheme in MMFL that accelerates the training process and improves MMFL performance. We also consider a more realistic scenario in which models and clients can dynamically join the system during training. Empirical studies on real-world datasets show that our MMFL algorithms outperform several baselines by up to 15%, particularly in more complex and statistically heterogeneous scenarios.
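The paper itself specifies the allocation scheme; as a rough illustration of the underlying idea (each client trains at most one model per round, while every model's cohort is spread across all client groups), here is a minimal Python sketch. All names and the round-robin-over-groups rule are assumptions for illustration, not the authors' algorithm.

```python
import random
from collections import defaultdict

def allocate_clients(client_groups, model_ids, clients_per_model, seed=None):
    """Hypothetical group-based client-model allocation (illustrative only).

    client_groups: dict group_id -> list of client_ids, where a group stands
                   in for a cluster of clients with similar data distributions.
    Each client is assigned to at most one model per round, and each model's
    cohort is drawn round-robin over the groups so it covers every cluster.
    """
    rng = random.Random(seed)
    # Shuffled copies so clients are drawn without replacement across models.
    pools = {g: rng.sample(cs, len(cs)) for g, cs in client_groups.items()}
    groups = list(pools.keys())
    allocation = defaultdict(list)
    gi = 0
    for model in model_ids:
        taken = 0
        while taken < clients_per_model:
            group = groups[gi % len(groups)]
            gi += 1
            if pools[group]:                     # skip exhausted groups
                allocation[model].append(pools[group].pop())
                taken += 1
            elif all(len(p) == 0 for p in pools.values()):
                break                            # no clients left anywhere
    return dict(allocation)

# Example: 3 groups, 2 models trained concurrently, 3 clients per model.
groups = {0: ["c1", "c2", "c3"], 1: ["c4", "c5", "c6"], 2: ["c7", "c8", "c9"]}
print(allocate_clients(groups, ["model_A", "model_B"], 3, seed=0))
```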
Zejun Gong, Haoran Zhang, Carnegie Mellon University; Marie Siew, Singapore University of Technology and Design; Carlee Joe-Wong, Carnegie Mellon University; Rachid El-Azouzi, University of Avignon