Gradient Driven Rewards to Guarantee Fairness in Collaborative Machine Learning

Part of Advances in Neural Information Processing Systems 34 pre-proceedings (NeurIPS 2021)


BibTeX download is not available in the pre-proceedings


Authors

Xinyi Xu, Lingjuan Lyu, Xingjun Ma, Chenglin Miao, Chuan Sheng Foo, Bryan Kian Hsiang Low

Abstract

Collaborative machine learning provides a promising framework for different agents to pool their resources (e.g., data) for a common learning task. In realistic settings where agents are self-interested and not altruistic, they may be unwilling to share their data or models without adequate rewards. Furthermore, as the data/models shared by the agents may differ in quality, designing rewards that are fair to them is important so that they feel neither exploited nor discouraged from sharing. In this paper, we adopt federated learning as a gradient-based formalization of collaborative machine learning, propose a novel cosine gradient Shapley value to evaluate the agents' uploaded model parameter updates/gradients, and design theoretically guaranteed fair rewards in the form of better model performance. Compared to existing baselines, our approach is more efficient and does not require a validation dataset. We perform extensive experiments to demonstrate that our proposed approach achieves better fairness and predictive performance.
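To make the gradient-valuation idea concrete, the sketch below computes an exact Shapley value over agents' uploaded gradients, taking a coalition's utility to be the cosine similarity between its aggregated gradient and the grand coalition's aggregated gradient. This utility choice, the function names, and the plain-list gradient representation are illustrative assumptions for exposition, not the paper's exact formulation; exact Shapley computation is exponential in the number of agents, whereas the paper's approach is designed to be more efficient.

```python
import itertools
import math

def cosine(u, v):
    # Cosine similarity between two flat gradient vectors (0.0 if either is zero).
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def aggregate(grads):
    # Element-wise sum of the gradients in a coalition.
    return [sum(col) for col in zip(*grads)]

def cosine_gradient_shapley(gradients):
    """Exact Shapley value per agent, with coalition utility taken as the
    cosine similarity between the coalition's aggregated gradient and the
    grand coalition's aggregated gradient (an illustrative assumption)."""
    n = len(gradients)
    full = aggregate(gradients)

    def utility(coalition):
        if not coalition:
            return 0.0
        return cosine(aggregate([gradients[i] for i in coalition]), full)

    values = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for r in range(len(others) + 1):
            for subset in itertools.combinations(others, r):
                # Standard Shapley weight: |S|! (n - |S| - 1)! / n!
                weight = (math.factorial(len(subset)) *
                          math.factorial(n - len(subset) - 1) /
                          math.factorial(n))
                values[i] += weight * (utility(subset + (i,)) - utility(subset))
    return values
```

With three toy agents whose gradients are `[[1.0, 0.0], [0.9, 0.1], [-1.0, 0.0]]`, the agent whose gradient opposes the aggregate receives a lower (here, negative) value, and by the efficiency axiom the values sum to the grand coalition's utility of 1.0.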