Indiscriminate Data Poisoning Attacks on Pre-trained Feature Extractors

A Polynomial Time, Pure Differentially Private Estimator for Binary Product Distributions

Not All Learnable Distribution Classes are Privately Learnable

Sorting and Selection in Rounds with Adversarial Comparisons

Distribution Learnability and Robustness

Hidden Poison: Machine Unlearning Enables Camouflaged Poisoning Attacks

Private Distribution Learning with Public Data: The View from Sample Compression

Exploring the Limits of Model-Targeted Indiscriminate Data Poisoning Attacks

Robustness Implies Privacy in Statistical Estimation

New Lower Bounds for Private Estimation and a Generalized Fingerprinting Lemma