
Commit

Add new paper for awesome-pets/aml (secretflow#553)
* Update attack_defense.md

* Update awesome-pets.md
JdYwz authored May 23, 2023
1 parent 4275227 commit a7af2dd
Showing 2 changed files with 25 additions and 5 deletions.
2 changes: 1 addition & 1 deletion docs/awesome-pets/awesome-pets.md
@@ -37,7 +37,7 @@ Protecting training/inference data

Attacks on machine learning system

1. [General attacks and defense](papers/applications/aml/attack_defense.md) (Contributors: [@zhangxingmeng](https://www.github.com/zhangxingmeng), [@JdYwz](https://github.com/JdYwz))

Multimedia Privacy and Security

28 changes: 24 additions & 4 deletions docs/awesome-pets/papers/applications/aml/attack_defense.md
@@ -1,47 +1,66 @@
## Attack

[An Overview of Federated Deep Learning Privacy Attacks and Defensive Strategies. 2020-04-01](https://arxiv.org/pdf/2004.04676.pdf)

### Backdoor Attack

- [How To Backdoor Federated Learning](https://arxiv.org/pdf/1807.00459.pdf)

- [Can You Really Backdoor Federated Learning?](https://arxiv.org/abs/1911.07963)

- [Attack of the Tails: Yes, You Really Can Backdoor Federated Learning](https://papers.nips.cc/paper/2020/file/b8ffa41d4e492f0fad2f13e29e1762eb-Paper.pdf)

- [DBA: Distributed Backdoor Attacks against Federated Learning](https://openreview.net/pdf?id=rkgyS0VFvr)

- [CRFL: Certifiably Robust Federated Learning against Backdoor Attacks. ICML 2021.](https://arxiv.org/pdf/2106.08283.pdf)

- [NeurIPS 2020 Submission: Backdoor Attacks on Federated Meta-Learning](https://arxiv.org/pdf/2006.07026.pdf)
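
The attacks above share one core move: a malicious client stamps a small trigger pattern onto some of its local samples and relabels them to a target class, so the averaged global model learns the backdoor while clean accuracy stays intact. A minimal sketch of that poisoning step, with illustrative choices throughout (28x28 images in [0, 1], a 3x3 corner trigger, target class 7 — none of these come from a specific paper):

```python
import numpy as np

def poison_batch(images, labels, target_label=7, poison_frac=0.2, rng=None):
    """Stamp a 3x3 white trigger into the corner of a fraction of the batch
    and relabel those samples to the attacker's target class."""
    rng = rng if rng is not None else np.random.default_rng(0)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_frac)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx, -3:, -3:] = 1.0   # the trigger: a white 3x3 corner patch
    labels[idx] = target_label    # attacker-chosen target class
    return images, labels

# The malicious client trains locally on the poisoned batch; scaling the
# resulting update ("model replacement", as in Bagdasaryan et al.) helps
# the backdoor survive averaging.
rng = np.random.default_rng(1)
imgs, lbls = rng.random((32, 28, 28)), np.zeros(32, dtype=int)
p_imgs, p_lbls = poison_batch(imgs, lbls, rng=rng)
print(int((p_lbls == 7).sum()), "of", len(p_lbls), "samples poisoned")
```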

### Gradient Attack

- [Deep Leakage from Gradients](https://arxiv.org/pdf/1906.08935.pdf)

- [Gradient Disaggregation: Breaking Privacy in Federated Learning by Reconstructing the User Participant Matrix](https://arxiv.org/pdf/2106.06089.pdf)

- [iDLG: Improved Deep Leakage from Gradients](https://arxiv.org/pdf/2001.02610.pdf)

- [Inverting Gradients - How easy is it to break Privacy in Federated Learning?](https://arxiv.org/pdf/2003.14053.pdf)

- [CAFE: Catastrophic Data Leakage in Vertical Federated Learning](https://papers.neurips.cc/paper/2021/file/08040837089cdf46631a10aca5258e16-Paper.pdf)

- [Deep Models Under the GAN: Information Leakage from Collaborative Deep Learning.](https://arxiv.org/pdf/1702.07464.pdf)

- [A Framework for Evaluating Gradient Leakage Attacks in Federated Learning](https://arxiv.org/pdf/2004.10397.pdf)

- [Gradient Inversion with Generative Image Prior. NeurIPS 2021.](https://arxiv.org/pdf/2110.14962.pdf)

- [Evaluating Gradient Inversion Attacks and Defenses in Federated Learning. NeurIPS 2021.](https://arxiv.org/pdf/2112.00059.pdf)
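
From DLG onward, these reconstructions follow the same optimization: start from dummy inputs, compute the gradient they would produce on the known model, and minimize its distance to the victim's observed gradient. A toy PyTorch sketch of that loop; the linear model, soft-label trick, and iteration count are illustrative rather than taken from any specific paper above:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
model = torch.nn.Linear(16, 4)  # toy victim model; the attacker knows its weights

# The victim computes a gradient on its private sample; the attacker observes it.
x_true = torch.randn(1, 16)
y_true = torch.tensor([2])
true_grads = torch.autograd.grad(
    F.cross_entropy(model(x_true), y_true), model.parameters()
)

# The attacker optimizes dummy data and a soft dummy label (the DLG trick) so
# that their gradient matches the observed one. Soft targets in cross_entropy
# need a recent PyTorch (>= 1.10).
x_dummy = torch.randn(1, 16, requires_grad=True)
y_dummy = torch.randn(1, 4, requires_grad=True)
opt = torch.optim.LBFGS([x_dummy, y_dummy])

def closure():
    opt.zero_grad()
    loss = F.cross_entropy(model(x_dummy), F.softmax(y_dummy, dim=-1))
    grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
    diff = sum(((g - t) ** 2).sum() for g, t in zip(grads, true_grads))
    diff.backward()
    return diff

for _ in range(20):
    opt.step(closure)
print("reconstruction error:", (x_dummy - x_true).norm().item())
```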

### Model Poisoning Attack

- [Analyzing Federated Learning through an Adversarial Lens](https://arxiv.org/abs/1811.12470)
- [(*) Local Model Poisoning Attacks to Byzantine-Robust Federated Learning. 2019-11-26](https://arxiv.org/pdf/1911.11815.pdf)
- [Data Poisoning Attacks on Federated Machine Learning. 2020-04-19](https://arxiv.org/pdf/2004.10020.pdf)
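
Unlike data poisoning, local model poisoning manipulates the update itself. Fang et al. solve an optimization tailored to the server's (possibly robust) aggregation rule; the sketch below shows only the simplest instance of the idea, a scaled sign-flip of the attacker's estimate of the honest updates, against plain FedAvg. All shapes and constants are illustrative:

```python
import numpy as np

def fedavg(updates):
    """Plain federated averaging of client updates."""
    return np.mean(updates, axis=0)

def poisoned_update(honest_estimate, scale=5.0):
    # Crudest local-model-poisoning: push opposite to the estimated honest
    # direction, scaled up. Fang et al. instead optimize the malicious update
    # against the specific aggregation rule the server uses.
    return -scale * honest_estimate

rng = np.random.default_rng(0)
honest = [rng.normal(0.1, 0.01, size=8) for _ in range(9)]  # 9 honest clients
attacker = poisoned_update(np.mean(honest, axis=0))
print("clean aggregate:   ", fedavg(honest).round(3))
print("poisoned aggregate:", fedavg(honest + [attacker]).round(3))
```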

### Free Rider Attack

- [NeurIPS 2020 submission: Free-rider Attacks on Model Aggregation in Federated Learning](https://arxiv.org/pdf/2006.11901.pdf)
- [Free-riders in Federated Learning: Attacks and Defenses. 2019-11-28](https://arxiv.org/pdf/1911.12560.pdf)
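
The basic free-rider strategy studied in these papers needs no training at all: return the global weights you just received, perturbed with small noise so the "update" is not byte-identical to the broadcast. A sketch, with an illustrative noise scale:

```python
import numpy as np

def free_rider_update(global_weights, sigma=1e-3, rng=None):
    # Skip local training entirely: echo the received global weights plus
    # small Gaussian noise, so the fake update is harder to flag trivially.
    rng = rng if rng is not None else np.random.default_rng(0)
    return global_weights + rng.normal(0.0, sigma, size=global_weights.shape)

w_global = np.linspace(-1.0, 1.0, 6)   # stand-in for broadcast model weights
print(free_rider_update(w_global).round(4))
```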

### Membership Inference Attack

- [Comprehensive Privacy Analysis of Deep Learning: Passive and Active White-box Inference Attacks against Centralized and Federated Learning. S&P 2019.](https://arxiv.org/pdf/1812.00910.pdf)
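
Nasr et al.'s white-box attacks learn membership signals from per-layer gradients and activations, but the underlying observation is simpler: models fit members of the training set better than non-members. The classic black-box baseline is a loss threshold; a toy sketch on synthetic loss distributions (the exponential parameters and threshold are illustrative):

```python
import numpy as np

def loss_threshold_attack(losses, threshold):
    # Predict "member" when the loss is below the threshold: training
    # members tend to be fit better (lower loss) than unseen samples.
    return losses < threshold

rng = np.random.default_rng(0)
member_losses = rng.exponential(0.2, size=1000)     # toy: low loss on members
nonmember_losses = rng.exponential(1.0, size=1000)  # higher loss on non-members
losses = np.concatenate([member_losses, nonmember_losses])
truth = np.concatenate([np.ones(1000, bool), np.zeros(1000, bool)])
preds = loss_threshold_attack(losses, threshold=0.5)
print("attack accuracy:", (preds == truth).mean())
```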

### Feature Inference Attack

### Label Inference Attack

- [Label Inference Attacks Against Vertical Federated Learning. USENIX Security 2022.](https://www.usenix.org/system/files/sec22-fu-chong.pdf)

## Defense

### With DP

- [Differentially Private Federated Learning: A Client Level Perspective. NIPS 2017 Workshop](https://arxiv.org/pdf/1712.07557.pdf)

- [Federated Learning with Bayesian Differential Privacy.](https://arxiv.org/pdf/1911.10071.pdf)
@@ -56,8 +75,8 @@

### With TEE

### Algorithm

- [RSA: Byzantine-Robust Stochastic Aggregation Methods for Distributed Learning from Heterogeneous Datasets, AAAI 2019](https://arxiv.org/abs/1811.03761)

- [Towards Realistic Byzantine-Robust Federated Learning. 2020-04-10](https://arxiv.org/pdf/2004.04986.pdf)
@@ -67,6 +86,7 @@
- [Byzantine-Robust Distributed Learning: Towards Optimal Statistical Rates](https://arxiv.org/pdf/1803.01498.pdf)
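
The common thread in this subsection is replacing the server's mean with a robust statistic, so a bounded fraction of Byzantine updates cannot drag the aggregate arbitrarily far. A sketch of two such rules, coordinate-wise median and trimmed mean (in the spirit of Yin et al.), on synthetic updates:

```python
import numpy as np

def coordinate_median(updates):
    """Coordinate-wise median: aggregate each coordinate as the median of
    that coordinate across clients."""
    return np.median(updates, axis=0)

def trimmed_mean(updates, trim=1):
    """Coordinate-wise trimmed mean: drop the `trim` largest and smallest
    values per coordinate, then average what remains."""
    s = np.sort(updates, axis=0)
    return s[trim:len(s) - trim].mean(axis=0)

rng = np.random.default_rng(0)
honest = rng.normal(0.1, 0.01, size=(9, 5))   # 9 honest client updates
byzantine = np.full((1, 5), -100.0)           # 1 arbitrarily bad update
updates = np.vstack([honest, byzantine])
print("mean:        ", updates.mean(axis=0).round(2))
print("median:      ", coordinate_median(updates).round(2))
print("trimmed mean:", trimmed_mean(updates).round(2))
```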

## Fairness

- [Fair Resource Allocation in Federated Learning. ICLR 2020.](https://arxiv.org/pdf/1905.10497.pdf)

- [Hierarchically Fair Federated Learning](https://arxiv.org/pdf/2004.10386.pdf)
