CS+Law
Research Workshop

When: Third Friday of each month at 1:00 p.m. Central Time (occasionally the fourth Friday; next workshop: Friday, October 25, 1:00 to 3:00 p.m. Central Time)

What: First 90 minutes: Two presentations of CS+Law works in progress or new papers with open Q&A. Last 30 minutes: Networking.

Where: Zoom

Who: CS+Law faculty, postdocs, PhD students, and other students who (1) are enrolled in or have completed a graduate degree in CS or Law and (2) engage in CS+Law research intended for publication.

A Steering Committee of CS+Law faculty from Berkeley, Boston U., U. Chicago, Cornell, Georgetown, MIT, North Carolina Central, Northwestern, Ohio State, Penn, Technion, and UCLA organizes the CS+Law Monthly Workshop. A different university serves as the chair for each monthly program and sets the agenda.

Why: The Steering Committee’s goals include building community, facilitating the exchange of ideas, and getting students involved. To accomplish this, we ask that participants commit to attending regularly.

Computer Science + Law is a rapidly growing area. Researchers in each field increasingly must engage with the other discipline. For example, both fields conduct significant research on the law and regulation of computation, the use of computation in legal systems and governments, and the representation of law and legal reasoning, and interdisciplinary collaborations between CS and Law researchers have grown substantially. Our goal is to create a forum for the exchange of ideas in a collegial environment that promotes community building, collaboration, and research that further develops CS+Law as a field.

Workshop 28: Friday, October 25, 1:00 to 3:00 p.m. Central Time

Please join us for our next CS+Law Research Workshop online on Friday, October 25, 1:00 to 3:00 p.m. Central Time (Chicago Time).


Workshop 28 Organizer:  UC Berkeley (Pamela Samuelson and Rebecca Wexler)


Agenda:

20-minute presentation - Katrina Geddes

10-minute Q&A

20-minute presentation - Peter Henderson

10-minute Q&A

30-minute open Q&A about both presentations

30-minute open discussion 


Presentation 1:


Presenter: Katrina Geddes, Joint Postdoctoral Fellow, Cornell Tech and NYU Law


Paper: How Art Became Posthuman: Copyright, AI, and Synthetic Media


Abstract: 

In response to the threats posed by new copy-reliant technologies, copyright owners often demand stronger rights. Frequently this results in the overprotection of copyrighted works and the suppression of lawful user expression. Generative AI is shaping up to be no different. Owners of copyrighted training data have asked the courts to find AI outputs to be infringing in the absence of substantial similarity, and to prohibit unlicensed training despite its extraction of unprotectable metadata. Service providers automatically block or modify user prompts that retrieve copyrighted content even though fair use is a fact-specific inquiry. These trends threaten to undermine the democratic and egalitarian potential of generative AI. Generative AI has the capacity to democratize cultural production by distributing powerful and accessible tools to previously excluded creator communities. Ordinary individuals can now create sophisticated synthetic media by modifying, remixing, and transforming cultural works without any artistic training or skills. This radically expands the range of individuals who can engage in aesthetic practice, irrespective of the legal status or exchange value of the resulting output. To date, however, the democratic and egalitarian character of generative AI has been relatively under-theorized. Lawmakers are focused on averting two possible outcomes: the extinction of human artists, or the flight of technological capital to low-IP jurisdictions. As copyright owners and technology firms dominate public discourse, relatively little attention is paid to the expressive interests of users. This Article remedies that neglect by directing scholarly attention to the democratizing effects of generative AI. It suggests that jurists should not rush to pacify owners of copyrighted training data by enjoining generative models, or pressuring service providers to adopt unnecessary use restrictions. Instead, Congress should embrace the democratic and egalitarian potential of generative AI by protecting users from the chilling effects of infringement liability. This Article canvasses a range of options directed towards this objective, including a non-commercial use provision, a compulsory licensing regime, a DMCA-style safe harbor, and a presumption of user authorship of AI generations.


Presentation 2:


Presenter: Peter Henderson, Assistant Professor, Princeton


Paper: The Mirage of Artificial Intelligence Terms of Use Restrictions


Abstract:

Artificial intelligence (AI) model creators commonly attach restrictive terms of use to both their models and their outputs. These terms typically prohibit activities ranging from creating competing AI models to spreading disinformation. Often taken at face value, these terms are positioned by companies as key enforceable tools for preventing misuse, particularly in policy dialogues. The California AI Transparency Act even codifies this approach, mandating certain responsible use terms to accompany models.


But are these terms truly meaningful, or merely a mirage? There are myriad examples where these broad terms are regularly and repeatedly violated. Yet except for some account suspensions on platforms, no model creator has actually tried to enforce these terms with monetary penalties or injunctive relief. This is likely for good reason: we think that the legal enforceability of these licenses is questionable. This Article provides a systematic assessment of the enforceability of AI model terms of use and offers three contributions.

First, we pinpoint a key problem with these provisions: the artifacts that they protect, namely model weights and model outputs, are largely not copyrightable, making it unclear whether there is even anything to be licensed.


Second, we examine the problems this creates for other enforcement pathways. Recent doctrinal trends in copyright preemption may further undermine state-law claims, while other legal frameworks like the DMCA and CFAA offer limited recourse. And anti-competitive provisions likely fare even worse than responsible use provisions.


Third, we provide recommendations to policymakers considering this private enforcement model. There are compelling reasons for many of these provisions to be unenforceable: they chill good faith research, constrain competition, and create quasi-copyright ownership where none should exist. There are, of course, downsides: model creators have even fewer tools to prevent harmful misuse. But we think the better approach is for statutory provisions, not private fiat, to distinguish between good and bad uses of AI and restrict the latter. And, overall, policymakers should be cautious about taking these terms at face value before they have faced a legal litmus test.

Join us to get meeting information

Join our group to get the agenda and Zoom information for each meeting and engage in the CS+Law discussion.

Interested in presenting?

Submit a proposed topic to present. We strongly encourage the presentation of works in progress, although we will also consider more polished and published projects.

2023-24 Series Schedule

Tuesday, September 26, 1:00 to 3:00 p.m. Central Time (Organizer: Northwestern)

Monday, October 23, 2:00 to 4:00 p.m. Central Time (Organizer: UCLA)

Friday, November 17, 11:30 a.m. to 1:30 p.m. Central Time (Organizer: Boston University)

Friday, December 15, 1:00 to 3:00 p.m. Central Time (Organizer: Penn)

Thursday, January 18, 1:00 to 3:00 p.m. Central Time (Organizer: Georgetown)

Friday, February 23, 1:00 to 3:00 p.m. Central Time (Organizer: Berkeley)

Friday, March 22, 1:00 to 3:00 p.m. Central Time (Organizer: Cornell)

Friday, April 19, 1:00 to 3:00 p.m. Central Time (Organizer: Ohio State)

Friday, May 17, 1:00 to 3:00 p.m. Central Time (Organizer: Tel Aviv + Hebrew Universities)

Steering Committee

Ran Canetti (Boston U.)

Bryan Choi (Ohio State)

Aloni Cohen (U. Chicago)

April Dawson (North Carolina Central)

James Grimmelmann (Cornell Tech)

Jason Hartline (Northwestern)

Dan Linna (Northwestern)

Paul Ohm (Georgetown)

Pamela Samuelson (Berkeley)

Inbal Talgam-Cohen (Technion - Israel Institute of Technology)

John Villasenor (UCLA)

Rebecca Wexler (Berkeley)

Christopher Yoo (Penn)

Background - CS+Law Monthly Workshop

Northwestern Professors Jason Hartline and Dan Linna convened an initial meeting of 21 CS+Law faculty from various universities on August 17, 2021, to propose a series of monthly CS+Law research workshops. Hartline and Linna sought volunteers to serve on a steering committee. Hartline, Linna, and their Northwestern colleagues provide the platform and administrative support for the series.