Past Events

Workshop 16: Friday, April 21, 2023, Berkeley (Rebecca Wexler)

The Integrity of Our Convictions

Workshop 15: Friday, March 17, 2023, UPenn (Christopher Yoo)

One obvious countermeasure would be to require Internet sites to strongly authenticate their users, but this is not an easy problem. Furthermore, while that would provide accountability for the immediate upload, such a policy would cause other problems—the ability to speak anonymously is a vital constitutional right. Nor would it often help identify the original offender: many people download images from one site and upload them to another, adding another layer of indirection.

We instead propose a more complex scheme, based on a privacy-preserving cryptographic credential scheme originally devised by Jan Camenisch and Anna Lysyanskaya. We arrange things so that three different parties must cooperate to identify a user who uploaded an image. We perform a legal analysis of the acceptability of this scheme under the First Amendment and its implied guarantee of the right to anonymous speech, show how this must be balanced against the victim's right to sexual privacy, discuss the necessary changes to §230 (and the constitutional issues with these changes), and the legal standards for obtaining the necessary court orders—or opposing their issuance.
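The key property of the scheme is that no single party can deanonymize an uploader; all three must cooperate. As a toy illustration only (not the actual Camenisch–Lysyanskaya credential construction, which uses zero-knowledge proofs and blind signatures), the cooperation requirement can be modeled as 3-of-3 XOR secret sharing of the user's identifier, with one share held by each party:

```python
import secrets

def split_identity(user_id: bytes, parties: int = 3) -> list[bytes]:
    """Split an identifier into `parties` XOR shares.
    Every share is required to reconstruct (3-of-3 by default):
    any subset of fewer shares reveals nothing about user_id."""
    shares = [secrets.token_bytes(len(user_id)) for _ in range(parties - 1)]
    last = bytes(user_id)
    for s in shares:
        last = bytes(a ^ b for a, b in zip(last, s))
    shares.append(last)
    return shares

def reconstruct(shares: list[bytes]) -> bytes:
    """XOR all shares back together to recover the identifier."""
    out = bytes(len(shares[0]))
    for s in shares:
        out = bytes(a ^ b for a, b in zip(out, s))
    return out

# A court order compelling all three parties yields the identity;
# compromising any one or two parties yields only random-looking bytes.
shares = split_identity(b"uploader-7491")
assert reconstruct(shares) == b"uploader-7491"
```

In the real scheme the shares would be replaced by cryptographic credentials issued and escrowed by the three parties, but the governance property is the same: identification requires the cooperation of all of them, which is what the legal analysis of court orders turns on.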

Workshop 14: Friday, February 17, 2023, University of Chicago (Aloni Cohen)

Workshop 13: Friday, January 20, 2023, MIT (Dazza Greenwood)

Workshop 12: Friday, December 16, 2022, Boston University (Ran Canetti)

Workshop 11: Friday, November 18, 2022, UCLA (John Villasenor)

Workshop 10: Friday, October 28, 2022, Cornell University (James Grimmelmann)

This is important because decisions based on algorithmic groups can be harmful. If a loan applicant scrolls through the page quickly or types only in lowercase when filling out the form, their application is more likely to be rejected. If a job applicant uses a browser such as Internet Explorer or Safari instead of Chrome or Firefox, they are less likely to be successful. Non-discrimination law aims to protect against harms of this kind by guaranteeing equal access to employment, goods, and services, but it has never protected "fast scrollers" or "Safari users". Granting these algorithmic groups protection will be challenging because the European Court of Justice has historically been reluctant to extend the law to cover new groups.

This paper argues that algorithmic groups should be protected by non-discrimination law and shows how this could be achieved. Full paper available at:

Workshop 9: Friday, September 23, 2022, Organized by Northwestern University (Jason Hartline and Dan Linna)

Workshop 7: Friday, April 15, 2022, Organized by MIT (Lecturer and Research Scientist Dazza Greenwood)

Workshop 6: Friday, March 11, 2022, Organized by University of Pittsburgh (Professor Kevin Ashley)

Workshop 5: Friday, February 18, 2022, Organized by University of Chicago (Professor Aloni Cohen)

Workshop 4: Friday, January 21, 2022, Organized by UCLA (Professor John Villasenor)

Workshop 3: Friday, November 19, 2021, Organized by University of Pennsylvania (Professor Christopher S. Yoo)

Workshop 2: Friday, October 22, 2021, Organized by University of California Berkeley (Professors Rebecca Wexler and Pamela Samuelson)

Workshop 1: Friday, September 17, 2021, Organized by Northwestern University (Professors Jason Hartline and Dan Linna)