Abstract: We describe a threat model under which a split network-based federated learning system is susceptible to a model inversion attack by a malicious computational server. We demonstrate that the attack can be performed successfully even when the attacker has only limited knowledge of the data distribution. We propose a simple additive noise method to defend against model inversion, finding that it can significantly reduce attack efficacy at an acceptable accuracy trade-off on MNIST. Furthermore, we show that NoPeekNN, an existing defensive method, protects different information from exposure, suggesting that a combined defence is necessary to fully protect private user data.
ICLR 2021 Workshop on Distributed and Private Machine Learning (DPML 2021); https://dp-ml.github.io/2021-workshop-ICLR/
For more details: Practical Defences Against Model Inversion Attacks for Split Neural Networks.
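To make the additive noise idea concrete, the sketch below perturbs the intermediate ("smashed") activations on the client segment of a split network before they are sent to the (potentially malicious) computation server. This is only a minimal PyTorch illustration of the general approach; the layer sizes, the Laplace noise distribution, and the noise scale here are assumptions for demonstration rather than the paper's exact configuration.

```python
# Minimal sketch: additive noise on a SplitNN client's intermediate activations.
# Hypothetical architecture and noise parameters, for illustration only.
import torch
import torch.nn as nn


class NoisySplitClient(nn.Module):
    """Client-side model segment that perturbs its activations before
    sending them to the computation server."""

    def __init__(self, noise_scale: float = 0.1):
        super().__init__()
        # Hypothetical client segment for 28x28 MNIST images.
        self.segment = nn.Sequential(
            nn.Flatten(),
            nn.Linear(28 * 28, 256),
            nn.ReLU(),
        )
        self.noise_scale = noise_scale

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        smashed = self.segment(x)
        # Additive noise defence: perturb the intermediate data so that the
        # server receives a noisy view that is harder to invert to the input.
        noise = torch.distributions.Laplace(0.0, self.noise_scale).sample(smashed.shape)
        return smashed + noise


if __name__ == "__main__":
    client = NoisySplitClient(noise_scale=0.5)
    dummy_batch = torch.randn(4, 1, 28, 28)   # MNIST-shaped dummy batch
    protected = client(dummy_batch)           # what the server would receive
    print(protected.shape)                    # torch.Size([4, 256])
```

In a usage setting like this, the noise scale controls the trade-off the abstract refers to: larger scales degrade the server's ability to reconstruct inputs but also reduce the downstream classification accuracy.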