Past Events

Workshop 14: Friday, February 17, 2023, University of Chicago (Aloni Cohen)

Presentation 1:

Differential Privacy in the 2020 Census – Assessing Bias through the Noisy Measurements Files

Presenter: Ruth Greenwood


The Census Bureau introduced a new disclosure avoidance system (DAS) for the 2020 Census that first applies differential privacy (DP) to the census edited file (CEF), producing a Noisy Measurements File (NMF), and then applies post-processing to create the 2020 Census products. Academic work reviewing demonstration products released in advance of the 2020 Census contended that the DAS algorithm would introduce bias into the final Census data, and that this bias would likely have disparate effects on communities of color.
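The distinction between noise injection and post-processing can be made concrete with a toy simulation. This is not the Bureau's actual mechanism (the 2020 DAS uses discrete Gaussian noise and the TopDown optimization algorithm); it is a simplified sketch using Laplace noise and a clamp-to-non-negative-integer step, which is enough to show how an unbiased noisy measurement can become biased after post-processing, especially for small block counts:

```python
import math
import random

def add_dp_noise(count, scale, rng):
    """Laplace mechanism: one 'noisy measurement' for a block count."""
    u = rng.random() - 0.5
    return count - scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def post_process(noisy_count):
    """Simplified post-processing: clamp to a non-negative integer.

    The real TopDown algorithm solves a constrained optimization; this
    clamp captures only the non-negativity constraint.
    """
    return max(0, round(noisy_count))

rng = random.Random(0)
true_count = 1  # a small block population, where the effect is most visible
trials = 100_000
noisy_mean = 0.0
processed_mean = 0.0
for _ in range(trials):
    noisy = add_dp_noise(true_count, scale=2.0, rng=rng)
    noisy_mean += noisy / trials
    processed_mean += post_process(noisy) / trials

# The raw noisy measurements are unbiased (mean near the true count of 1),
# while clamping to non-negative integers pulls the mean upward.
print(f"noisy mean: {noisy_mean:.2f}, post-processed mean: {processed_mean:.2f}")
```

This is why access to the NMF matters: the noisy measurements alone let researchers separate the (unbiased) DP noise from whatever bias the post-processing step introduces.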

To determine whether and how much bias was introduced by post-processing (as opposed to by DP itself), the Election Law Clinic (ELC) filed a Freedom of Information Act (FOIA) request with the Census Bureau on behalf of Professor Justin Phillips in July 2022, requesting both the 2010 and 2020 NMFs. The Census Bureau did not respond until after Prof. Phillips filed a lawsuit to enforce the FOIA. This presentation covers the course of the litigation and the implications of the possible findings for the use of the same DAS algorithm in future census products.

Bio: Ruth is the Director of the Election Law Clinic at Harvard Law School. She engages in litigation and advocacy on a variety of election law cases, while training the next generation of election lawyers.

Ruth litigated two partisan gerrymandering cases from the trial level to the Supreme Court of the United States, Gill v. Whitford and Rucho v. Common Cause. She has also litigated minority vote dilution claims under state and federal voting rights acts, racial gerrymandering claims, and cases alleging a burden on the fundamental right to vote. In addition, Ruth has advised dozens of state advocates on drafting and implementing independent redistricting commissions, state voting rights acts, and adopting ranked choice voting.


Presentation 2:

Control, Confidentiality, and the Right to be Forgotten

Presenter: Marika Swanberg


Recent digital rights frameworks give users the right to delete their data from systems that store and process their personal information (e.g., the "right to be forgotten" in the GDPR). How should deletion be formalized in complex systems that interact with many users and store derivative information? We argue that prior approaches fall short. Definitions of machine unlearning [CY15] are too narrowly scoped and do not apply to general interactive settings. The natural approach of deletion-as-confidentiality [GGV20] is too restrictive: by requiring secrecy of deleted data, it rules out social functionalities.

We propose a new formalism: deletion-as-control. It allows users' data to be freely used before deletion, while also imposing a meaningful requirement after deletion, thereby giving users more control. Deletion-as-control provides new ways of achieving deletion in diverse settings. We apply it to social functionalities, and give a new unified view of various machine unlearning definitions from the literature. This is done by way of a new adaptive generalization of history independence.

Deletion-as-control also provides a new approach to the goal of machine unlearning, that is, to maintaining a model while honoring users' deletion requests. We show that publishing a sequence of updated models that are differentially private under continual release satisfies deletion-as-control. The accuracy of such an algorithm does not depend on the number of deleted points, in contrast to the machine unlearning literature.
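The continual-release primitive this result builds on can be illustrated with the classical binary-tree counting mechanism of Dwork et al. and Chan, Shi, and Song. The sketch below is a simplification (Laplace noise, privacy budget split evenly across tree levels), and `BinaryTreeCounter` is a name introduced here for illustration; it is the textbook counter, not the paper's model-maintenance algorithm:

```python
import math
import random

class BinaryTreeCounter:
    """Continual-release counter (the binary mechanism): after each arriving
    bit it releases a running count, adding Laplace noise to O(log T) partial
    sums so that the entire output stream is epsilon-differentially private.
    """

    def __init__(self, T, epsilon, seed=0):
        self.levels = max(1, math.ceil(math.log2(T)) + 1)
        self.scale = self.levels / epsilon  # split epsilon across the levels
        self.rng = random.Random(seed)
        self.partial = [0.0] * self.levels  # true partial sums per level
        self.noisy = [0.0] * self.levels    # noisy partial sums per level
        self.t = 0

    def _laplace(self):
        u = self.rng.random() - 0.5
        return -self.scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

    def step(self, bit):
        """Consume one bit; return the noisy count of the stream so far."""
        self.t += 1
        t = self.t
        i = (t & -t).bit_length() - 1       # lowest set bit of t
        self.partial[i] = bit + sum(self.partial[:i])
        for j in range(i):                  # levels merged into i are reset
            self.partial[j] = 0.0
            self.noisy[j] = 0.0
        self.noisy[i] = self.partial[i] + self._laplace()
        # The prefix sum is recovered from the levels at the set bits of t.
        return sum(self.noisy[j] for j in range(self.levels) if t >> j & 1)

# With a very large epsilon the noise is negligible, so the released
# counts track the true prefix sums closely (for illustration only).
counter = BinaryTreeCounter(T=8, epsilon=1000.0)
stream = [1, 0, 1, 1, 0, 1, 0, 1]
outputs = [counter.step(b) for b in stream]
```

Because every release is noisy in this calibrated way, no single item's presence or absence is pinned down by the output stream, which is the property the deletion-as-control result leverages.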


Based on joint work with Aloni Cohen, Adam Smith, and Prashant Nalini Vasudevan.


Bio: Marika Swanberg is a fourth-year PhD candidate at Boston University, advised by Adam Smith. For the Spring 2023 semester, she is a Visiting Assistant Professor of Computer Science at Reed College. Marika is also a Hariri Institute graduate student fellow and an affiliate of the BU Center for Antiracist Research. Her work with the BU security group spans the theory and practice of privacy, cryptography, and intersections with the law. Recently, Marika has become interested in real-world deployments of differential privacy, especially since her summer internship at Tumult Labs. Her current research is focused on DP machine learning (stay tuned). Prior to joining BU, Marika received her Bachelor of Arts in Mathematics and Computer Science at Reed College in Portland, OR.

Workshop 13: Friday, January 20, 2023, MIT (Dazza Greenwood)

Workshop 12: Friday, December 16, 2022, Boston University (Ran Canetti)

Workshop 11: Friday, November 18, 2022, UCLA (John Villasenor)

Workshop 10: Friday, October 28, 2022, Cornell University (James Grimmelmann)

This is important because decisions based on algorithmic groups can be harmful. If a loan applicant scrolls through the page quickly or fills out the form entirely in lowercase, their application is more likely to be rejected. If a job applicant uses a browser such as Internet Explorer or Safari instead of Chrome or Firefox, they are less likely to be successful. Non-discrimination law aims to protect against comparable harms, for example by guaranteeing equal access to employment, goods, and services, but it has never protected “fast scrollers” or “Safari users.” Granting these algorithmic groups protection will be challenging because the European Court of Justice has historically been reluctant to extend the law to cover new groups.

This paper argues that algorithmic groups should be protected by non-discrimination law and shows how this could be achieved.

Workshop 9: Friday, September 23, 2022, Organized by Northwestern University (Jason Hartline and Dan Linna)

Workshop 7: Friday, April 15, 2022, Organized by MIT (Lecturer and Research Scientist Dazza Greenwood)

Workshop 6: Friday, March 11, 2022, Organized by University of Pittsburgh (Professor Kevin Ashley)

Workshop 5: Friday, February 18, 2022, Organized by University of Chicago (Professor Aloni Cohen)

Workshop 4: Friday, January 21, 2022, Organized by UCLA (Professor John Villasenor)

Workshop 3: Friday, November 19, 2021, Organized by University of Pennsylvania (Professor Christopher S. Yoo)

Workshop 2: Friday, October 22, 2021, Organized by University of California Berkeley (Professors Rebecca Wexler and Pamela Samuelson)

Workshop 1: Friday, September 17, 2021, Organized by Northwestern University (Professors Jason Hartline and Dan Linna)