Vitaly Feldman

I'm a research scientist at Apple ML Research.

I work on theoretical aspects of machine learning and private data analysis. Recent topics include the role of memorization in learning, tools for analyzing generalization, distributed privacy-preserving learning, privacy-preserving optimization, and adaptive data analysis. I have also worked on understanding natural learning systems: learning in the brain and evolution as learning.

Selected works (with a strong bias toward recent ones)
  1. Fast Optimal Locally Private Mean Estimation via Random Projections
    With Jelani Nelson, Huy Nguyen and Kunal Talwar. NeurIPS 2023.
  2. Private Online Prediction from Experts: Separations and Faster Rates
    With Hilal Asi, Tomer Koren and Kunal Talwar. COLT 2023.
  3. Stronger Privacy Amplification by Shuffling for Rényi and Approximate Differential Privacy
    With Audra McMillan and Kunal Talwar. SODA 2023.
  4. Private Frequency Estimation via Projective Geometry
    With Jelani Nelson, Huy Nguyen and Kunal Talwar. ICML 2022.
  5. Optimal Algorithms for Mean Estimation under Local Differential Privacy
    With Hilal Asi and Kunal Talwar. ICML 2022.
  6. Private Stochastic Convex Optimization: Optimal Rates in ℓ1 Geometry
    With Hilal Asi, Tomer Koren and Kunal Talwar. ICML 2021 (oral presentation).
  7. Lossless Compression of Efficient Private Local Randomizers
    With Kunal Talwar. ICML 2021.
  8. Hiding Among the Clones: A Simple and Nearly Optimal Analysis of Privacy Amplification by Shuffling
    With Audra McMillan and Kunal Talwar. FOCS 2021.
  9. When is Memorization of Irrelevant Training Data Necessary for High-Accuracy Learning?
    With Gavin Brown, Mark Bun, Adam Smith and Kunal Talwar. STOC 2021.
  10. Individual Privacy Accounting via a Rényi Filter.
    With Tijana Zrnic. FORC 2021 (non-archival track), NeurIPS 2021.
  11. What Neural Networks Memorize and Why: Discovering the Long Tail via Influence Estimation.
    With Chiyuan Zhang. NeurIPS 2020 (spotlight).
  12. Stability of Stochastic Gradient Descent on Nonsmooth Convex Losses.
    With Raef Bassily, Cristóbal Guzmán and Kunal Talwar. NeurIPS 2020 (spotlight).
  13. Private Stochastic Convex Optimization: Optimal Rates in Linear Time.
    With Tomer Koren and Kunal Talwar. STOC 2020.
  14. Interaction is necessary for distributed learning with privacy or communication constraints.
    With Yuval Dagan. STOC 2020.
  15. PAC learning with stable and private predictions.
    With Yuval Dagan. COLT 2020.
  16. Does Learning Require Memorization? A Short Tale about a Long Tail.
    STOC 2020.
  17. Private Stochastic Convex Optimization with Optimal Rates.
    With Raef Bassily, Kunal Talwar and Abhradeep Thakurta. NeurIPS 2019 (spotlight).
  18. High probability generalization bounds for uniformly stable algorithms with nearly optimal rate.
    With Jan Vondrák. COLT 2019.
  19. Amplification by Shuffling: From Local to Central Differential Privacy.
    With Úlfar Erlingsson, Ilya Mironov, Ananth Raghunathan, Kunal Talwar and Abhradeep Thakurta. SODA 2019.
  20. Privacy Amplification by Iteration.
    With Ilya Mironov, Kunal Talwar and Abhradeep Thakurta. FOCS 2018.
  21. Privacy-preserving Prediction.
    With Cynthia Dwork. COLT 2018.
  22. Calibrating Noise to Variance in Adaptive Data Analysis.
    With Thomas Steinke. COLT 2018.
  23. A General Characterization of the Statistical Query Complexity.
    COLT 2017.
  24. Generalization of ERM in Stochastic Convex Optimization: The Dimension Strikes Back.
    NIPS 2016 (oral presentation).
  25. The reusable holdout: Preserving validity in adaptive data analysis.
    With Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold and Aaron Roth. Science, 2015.
    IBM Research 2015 Best Paper Award.
    Based on the STOC and NIPS papers below. See also my post on this work on the IBM Research blog (republished by KDnuggets).
  26. Generalization in Adaptive Data Analysis and Holdout Reuse.
    With Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold and Aaron Roth. NIPS 2015.
  27. Preserving Statistical Validity in Adaptive Data Analysis.
    With Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold and Aaron Roth. STOC 2015.
    Invited to the SICOMP special issue on STOC.
  28. On the Complexity of Random Satisfiability Problems with Planted Solutions.
    With Will Perkins and Santosh Vempala. STOC 2015.
  29. Sample Complexity Bounds on Differentially Private Learning via Communication Complexity.
    With David Xiao. COLT 2014; SICOMP 2015.
  30. Optimal Bounds on Approximation of Submodular and XOS Functions by Juntas.
    With Jan Vondrák. FOCS 2013; SICOMP 2016, Special issue on FOCS.
    IBM Research 2016 Best Paper Award.
  31. Learning using Local Membership Queries.
    With Pranjal Awasthi and Varun Kanade. COLT 2013, Best Student (co-authored) Paper Award.
  32. Statistical Algorithms and a Lower Bound for Detecting Planted Cliques.
    With Elena Grigorescu, Lev Reyzin, Santosh Vempala and Ying Xiao. STOC 2013; JACM 2017.
  33. Nearly Optimal Solutions for the Chow Parameters Problem and Low-weight Approximation of Halfspaces.
    With Anindya De, Ilias Diakonikolas and Rocco Servedio. STOC 2012; JACM 2014.
    IBM Research 2014 Best Paper Award.
  34. Distribution-Specific Agnostic Boosting.
    ITCS (formerly ICS) 2010.
  35. A Complete Characterization of Statistical Query Learning with Applications to Evolvability.
    FOCS 2009; JCSS 2012 (Special issue on Learning Theory).
  36. Experience-Induced Neural Circuits That Achieve High Capacity.
    With Leslie Valiant. Neural Computation 21:10, 2009.
  37. New Results for Learning Noisy Parities and Halfspaces.
    With Parikshit Gopalan, Subhash Khot, and Ashok Ponnuswami. FOCS 2006; SICOMP 2009, Special issue on FOCS.
  38. Hardness of Approximate Two-level Logic Minimization and PAC Learning with Membership Queries.
    STOC 2006; JCSS 75(1), 2009 (Special issue on Learning Theory).
  39. Attribute Efficient and Non-adaptive Learning of Parities and DNF Expressions.
    COLT 2005, Best Student Paper Award; JMLR 2007, Special issue on COLT.
  40. The Complexity of Properly Learning Simple Concept Classes.
    With Misha Alekhnovich, Mark Braverman, Adam Klivans, and Toni Pitassi.
    FOCS 2004; JCSS 74(1), 2008 (Special issue on Learning Theory).
Slides/recordings for some recent talks
  • Efficient Algorithms for Locally Private Learning with Optimal Accuracy Guarantees. Video of a less technical talk at BIFOLD workshop 2024.
  • Does Learning Require Memorization? A Short Tale about a Long Tail: slides, a long talk video courtesy of the Stanford ISL Colloquium, and a shorter version recorded for STOC 2020.
  • High probability generalization bounds for uniformly stable algorithms with nearly optimal rate. COLT 2019: slides, video.
  • Locally Private Learning without Interaction Requires Separation. Privacy and the Science of Data Analysis workshop (Apr 2019): slides.
  • Amplification by Shuffling: From Local to Central Differential Privacy. Privacy-preserving Machine Learning workshop (Dec 2018); ITA 2019; Simons Institute seminar (Feb 2019): slides.
  • Privacy-preserving prediction. COLT 2018; Privacy in Graphs workshop (Nov 2018); JSM 2019: slides.
  • Generalization bounds for uniformly stable algorithms. Robust and High-Dimensional Statistics workshop (Oct 2018): slides, video. NIPS spotlight video.
  • Stability, Information and Generalization in Adaptive Data Analysis. Google NYC/Princeton/Penn (Apr. 2018): slides.
  • Dealing with Range Anxiety in Mean Estimation. ALT 2017 (Nov 2017): slides.
  • A General Characterization of the Statistical Query Complexity. COLT 2017 (July 2017); NYU (Feb. 2018): slides.
  • Understanding Generalization in Adaptive Data Analysis. Computational Challenges in Machine Learning workshop, EPFL, and Bertinoro (2017): slides, video.
  • On the power of learning from k-wise queries. ITCS 2017: slides, video.
  • Lower bounds against convex relaxations via the statistical query complexity. Caltech/UCLA/Stanford/Harvard/MIT, 2017: slides (with some comments in the notes).
  • Generalization of ERM in stochastic convex optimization. NIPS 2016: slides and video.
  • Generalization and adaptivity in stochastic convex optimization. TOCA-SV 2016: slides (with some comments in the notes).
  • Generalization in Adaptive Data Analysis via Max-Information. Simons Institute workshop on Information Theory, 2016: slides.
  • Preserving Validity in Adaptive Data Analysis. National Academy of Engineering, 2016: slides.
  • Adaptive Data Analysis without Overfitting. Workshop on Learning, NUS, 2015: slides.
  • Preserving statistical validity in adaptive data analysis. STOC 2015: slides.
  • Approximate resilience, monotonicity, and the complexity of agnostic learning. SODA 2015: slides.
  • Sample complexity bounds on differentially private learning via communication complexity. COLT 2014; ITA 2015: slides.
  • Using data privacy for better adaptive predictions. Foundations of Learning Theory workshop @ COLT 2014: slides.
  • On the power and the limits of evolvability. Simons Institute workshop on Computational Theories of Evolution, 2014: slides.
  • Optimal bounds on approximation of submodular and XOS functions by juntas. Simons Institute workshop on Real Analysis and FOCS 2013: slides.

Contact: firstname.edu@gmail.com