I am interested in issues of privacy, bias, explainability, and reliability in machine learning, particularly as they pertain to generative models. A full list of my papers can be found on my Google Scholar profile.
Preprints
Data Attribution-Guided Machine Unlearning (Extended Abstract) [NeurIPS ’24 GenLaw Workshop]
Machine Unlearning Fails to Remove Data Poisoning Attacks [NeurIPS ’24 GenLaw Workshop]
Phenotype Randomization Mechanisms for Private Release of Genome-Wide Association Statistics [RECOMB-PRIEQ ’24]
Private Regression in Multiple Outcomes (PRIMO) [arXiv ’23, In Submission] Presented at TPDP@ICML.
Privacy Risks in Large Language Models: A Survey
Publications
In-Context Unlearning: Language Models are Few Shot Unlearners [ICML ’24]
Feature Importance Disparities for Dataset Bias Investigations [ICML ’24] Code
MusCAT: A multi-scale, hybrid federated system for privacy-preserving epidemic surveillance and risk prediction [2nd Place Grand Prize Winner, 1st Place Phase 1 of the US/UK Privacy Challenge, ’23] White House Announcement
MoPe: Perturbation-based Privacy Attacks Against Language Models [EMNLP ’23, NeurIPS SoLaR Workshop ’23]
On the Privacy Risks of Algorithmic Recourse [AISTATS ’23] Code
Adaptive Machine Unlearning [NeurIPS ’21]
Data Privacy in Practice at LinkedIn [Harvard Business School Case Study ’22]
Descent-to-Delete: Gradient-Based Methods for Machine Unlearning [Algorithmic Learning Theory ’21] Code
Eliciting and Enforcing Subjective Individual Fairness [FORC ’21]
Optimal, Truthful, and Private Securities Lending [ACM AI in Finance ’20, NeurIPS Workshop on Robust AI in Financial Services ’19] Selected for oral presentation!
Differentially Private Objective Perturbation: Beyond Smoothness and Convexity [ICML ’20, NeurIPS Workshop on Privacy in ML ’19]
A New Analysis of Differential Privacy’s Generalization Guarantees [ITCS ’20] Regular talk slot!
The Role of Interactivity in Local Differential Privacy [FOCS ’19]
How to use Heuristics for Differential Privacy [FOCS ’19] Video.
An Empirical Study of Rich Subgroup Fairness for Machine Learning [ACM FAT* ’19, ML track]
- Led development of a package integrated into the IBM AI Fairness 360 toolkit here. An AIF360 development branch is on my GitHub, and a stand-alone package was developed by the AlgoWatch Team.
Fair Algorithms for Learning in Allocation Problems [ACM FAT* ’19, ML track]
Preventing Fairness Gerrymandering: Auditing and Learning for Subgroup Fairness [ICML ’18, EC MD4SG ’18]
Mitigating Bias in Adaptive Data Gathering via Differential Privacy [ICML ’18]
Accuracy First: Selecting a Differential Privacy Level for Accuracy Constrained ERM [NIPS ’17, Journal of Privacy and Confidentiality ’19]
A Framework for Meritocratic Fairness of Online Linear Models [AAAI/AIES ’18]
Rawlsian Fairness for Machine Learning [FATML ’16]
A Convex Framework for Fair Regression [FATML ’17]
Math stuff from College & High School
Aztec Castles and the dP3 Quiver [Journal of Physics A ’15]
Mahalanobis Matching and Equal Percent Bias Reduction [Senior Thesis, Harvard ’15]
Plane Partitions and Domino Tilings [Intel Science Talent Search Semifinalist, ’11]