To address the risk that model gradients, neural network weights, and other parameters leak private user information in federated learning, and to support per-user privacy personalization, this article proposes a personalized differential privacy method for the federated learning scenario. Before training, each user uploads a privacy protection level to the central server. The central server determines the amount of noise from the total privacy budget and each user's privacy protection level, and aggregates the training parameters with an aggregation algorithm weighted by privacy protection level. The algorithm is validated on the MNIST and CIFAR-10 datasets and compared with existing localized and centralized differential privacy algorithms, achieving classification accuracies of 94.12% and 45.20% respectively, which demonstrates the usability and rationality of the algorithm in this scenario.
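The abstract does not give the paper's exact noise calibration or weighting formulas, but the two server-side steps it describes (per-user noise sized by privacy level, then level-weighted aggregation) can be sketched roughly as follows. The Laplace mechanism, the proportional weighting scheme, and the function names are assumptions for illustration, not the paper's definitions.

```python
import numpy as np

def add_noise(update, epsilon, sensitivity=1.0):
    """Perturb one client's parameter update with Laplace noise.

    Assumed calibration: noise scale = sensitivity / epsilon, so a
    smaller privacy budget epsilon (stronger protection) yields
    more noise. The paper's actual mechanism may differ.
    """
    scale = sensitivity / epsilon
    return update + np.random.laplace(0.0, scale, size=update.shape)

def aggregate(updates, epsilons):
    """Aggregate client updates weighted by privacy protection level.

    Assumed weighting: weight proportional to epsilon, so less-noisy
    updates contribute more to the global model.
    """
    weights = np.asarray(epsilons, dtype=float)
    weights = weights / weights.sum()
    return sum(w * u for w, u in zip(weights, updates))

# Example round: two clients with different privacy levels.
client_updates = [np.ones(3), np.zeros(3)]
client_epsilons = [1.0, 3.0]
noisy = [add_noise(u, e) for u, e in zip(client_updates, client_epsilons)]
global_update = aggregate(noisy, client_epsilons)
```

Under this assumed scheme the first client, with the tighter budget (epsilon = 1.0), receives more noise and a weight of only 0.25 in the aggregate.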