Pavlos Papadopoulos


Associate Lecturer / PhD Student

Cybersecurity, Distributed Ledger Technology, Privacy-Preserving Machine Learning

Edinburgh Napier University

View My LinkedIn Profile

View My Google Scholar Profile

View My University Portfolio

View My GitHub Profile

Practical Defences Against Model Inversion Attacks for Split Neural Networks

Abstract: We describe a threat model under which a split network-based federated learning system is susceptible to a model inversion attack by a malicious computational server. We demonstrate that the attack can be successfully performed with limited knowledge of the data distribution by the attacker. We propose a simple additive noise method to defend against model inversion, finding that the method can significantly reduce attack efficacy at an acceptable accuracy trade-off on MNIST. Furthermore, we show that NoPeekNN, an existing defensive method, protects different information from exposure, suggesting that a combined defence is necessary to fully protect private user data.
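The additive noise defence described in the abstract can be sketched as follows. This is a minimal illustrative example, not the paper's implementation: it assumes a split network whose client-side half is a single linear layer with ReLU, and it adds Laplacian noise to the intermediate activations before they leave the client (the function names, layer shapes, and noise scale here are all illustrative assumptions).

```python
import numpy as np

def client_forward(x, W, noise_scale=0.1, rng=None):
    """Client-side half of a hypothetical split network: one linear layer
    followed by ReLU. Additive Laplacian noise is applied to the
    intermediate activations before they are sent to the server, so the
    server never sees the clean representation of the private input."""
    if rng is None:
        rng = np.random.default_rng(0)
    h = np.maximum(x @ W, 0.0)  # clean client-side activations
    noise = rng.laplace(scale=noise_scale, size=h.shape)
    return h + noise  # only the noised activations leave the client

# Toy example: 4 input features -> 3 hidden units at the split layer.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))
x = rng.standard_normal((1, 4))

clean = np.maximum(x @ W, 0.0)
noised = client_forward(x, W, noise_scale=0.5, rng=np.random.default_rng(1))
print(noised.shape)  # shape of the data sent to the server: (1, 3)
```

Larger `noise_scale` values make the intermediate data harder to invert but degrade the server-side model's accuracy, which is the trade-off the abstract refers to.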



ICLR 2021 Workshop on Distributed and Private Machine Learning (DPML 2021); https://dp-ml.github.io/2021-workshop-ICLR/

For more details: Practical Defences Against Model Inversion Attacks for Split Neural Networks.