Curated notebooks on how to train neural networks using differential privacy and federated learning.
Differential Privacy is a set of techniques that prevents a model from accidentally memorizing secrets present in its training data during the learning process.
The key points under Differential Privacy are:
- Make a promise to the data subject: you won't be affected, adversely or otherwise, by allowing your data to be used in any analysis, no matter what other studies, datasets, or information sources are available.
- Ensure that models learning from sensitive data learn only what they are supposed to learn, without accidentally picking up what they are not supposed to learn from that data.
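One common way to enforce the second point during neural network training is differentially private SGD (DP-SGD): clip each example's gradient so no single record can dominate an update, then add calibrated Gaussian noise. A minimal NumPy sketch of one update step, with illustrative hyperparameter values rather than settings from any particular library:

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, clip_norm=1.0,
                noise_mult=1.1, lr=0.1, rng=None):
    """One DP-SGD update: clip per-example gradients, average, add noise."""
    rng = rng or np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale the gradient down so its L2 norm is at most clip_norm,
        # bounding any single example's influence on the update.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    # Gaussian noise calibrated to the clipping bound hides whether any
    # one example was present in the batch.
    noise = rng.normal(0.0, noise_mult * clip_norm / len(per_example_grads),
                       size=mean_grad.shape)
    return params - lr * (mean_grad + noise)
```

Libraries such as Opacus and TensorFlow Privacy implement this pattern inside their optimizers and also track the cumulative privacy budget, which this sketch omits.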
Instead of bringing all the data to one place for training, federated learning brings the model to the data. This allows a data owner to maintain the only copy of their information.
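The basic loop behind this idea is federated averaging (FedAvg): the server sends the current model to each client, every client trains locally on data that never leaves the device, and the server averages the returned weights. A toy NumPy sketch using a linear model with quadratic loss (the model and one-step local training are illustrative assumptions):

```python
import numpy as np

def local_update(weights, data, lr=0.1):
    """One local gradient step on a client's private (X, y) data
    for a linear model with mean squared error."""
    X, y = data
    grad = X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def federated_round(global_weights, client_datasets, lr=0.1):
    """One round of federated averaging: each client trains locally,
    then the server averages the resulting weights. Only weights,
    never raw data, travel to the server."""
    client_weights = [local_update(global_weights.copy(), data, lr)
                      for data in client_datasets]
    return np.mean(client_weights, axis=0)
```

In practice clients run several local epochs per round and the server weights the average by each client's dataset size, but the data-stays-local structure is the same.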