CS+Law
Research Workshop

When: Third Friday of each month at Noon Central Time (sometimes fourth Friday; next workshop: Friday, February 23, 1:00 to 3:00 p.m. Central Time) 

What: First 90 minutes: Two presentations of CS+Law works in progress or new papers with open Q&A. Last 30 minutes: Networking.

Where: Zoom

Who: CS+Law faculty, postdocs, PhD students, and other students who (1) are enrolled in or have completed a graduate degree in CS or Law and (2) engage in CS+Law research intended for publication.

A Steering Committee of CS+Law faculty from Berkeley, Boston U., U. Chicago, Cornell, Georgetown, MIT, North Carolina Central, Northwestern, Ohio State, Penn, Technion, and UCLA organizes the CS+Law Monthly Workshop. A different university serves as the chair for each monthly program and sets the agenda.

Why: The Steering Committee’s goals include building community, facilitating the exchange of ideas, and getting students involved. To accomplish this, we ask that participants commit to attending regularly.

Computer Science + Law is a rapidly growing area, and researchers in one field increasingly need to engage with the other. For example, there is significant research in each field on the law and regulation of computation, the use of computation in legal systems and governments, and the representation of law and legal reasoning. Interdisciplinary collaborations between CS and Law researchers have also increased significantly. Our goal is to create a forum for the exchange of ideas in a collegial environment that promotes community building, collaboration, and research that helps further develop CS+Law as a field.

Workshop 23: Friday, February 23, 1:00 to 3:00 p.m. Central Time 

Please join us for our next CS+Law Research Workshop online on Friday, February 23, 1:00 to 3:00 p.m. Central Time (Chicago Time).


Workshop 23 organizer: Cal Berkeley (Rebecca Wexler)

Link to join on Zoom: Will be circulated to the Google Group


Agenda:

30-minute presentation - Colleen V. Chien and Miriam Kim 

30-minute presentation - Yonadav Shavit et al.

30-minute presentation - Sarah Barrington, Hany Farid & Rebecca Wexler

Q&A


Presentation 1:

100 Ways that Generative AI Can Address the Access to Justice Gap (working title) 


Presenters: Colleen V. Chien and Miriam Kim

Abstract: How can AI tools be used to address the access to justice gap, the 90% of low-income Americans who lack adequate legal assistance? To find out, we surveyed 200 legal aid professionals about their usage of and attitudes toward AI tools, and conducted a randomized controlled trial with 91 individuals. While all participants were given free access to paid generative artificial intelligence tools, a randomly chosen subset of participants was also provided “concierge” services, including peer use cases, office hours, and assistance. Before the trial, women in the pilot were less likely to have used the tools or to think they were beneficial. At the end of the trial, however, male and female participants reported almost no significant differences across a wide variety of metrics reflecting usage patterns, benefits, and planned use. In addition, participants who received “concierge” services reported statistically significantly better outcomes on a range of metrics compared to the control group, suggesting that assistance during the rollout of AI tools can improve experiences with them. We discuss our findings and, to support the broader use of these tools, publish a companion database of over 100 helpful use cases and existing use policies, including prompts and outputs, provided by legal aid professionals in the trial.

Presentation 2:

Practices for Governing Agentic AI Systems


Presenter: Yonadav Shavit et al.

Abstract: Agentic AI systems, AI systems that can pursue complex goals with limited direct supervision, are likely to be broadly useful if we can integrate them responsibly into our society. In this talk, we will give an overview of a recent policy whitepaper on best practices for governing agentic AI systems and addressing some of the risks specifically caused by them. We will start by defining agentic AI systems and the parties in the agentic AI system life cycle, and highlight the necessity of agreeing on a set of baseline responsibilities and safety best practices for each of these parties. We will then discuss an initial set of practices for keeping AI agents’ operations safe and accountable, which we hope can serve as building blocks in the development of agreed baseline best practices. The paper itself also enumerates the questions and uncertainties around operationalizing each of these practices that must be addressed before they can be codified, as well as a range of indirect impacts from the wide-scale adoption of agentic AI systems that may not be addressed by such best practices.

If you'd like to read it ahead of time, the whitepaper is here: https://cdn.openai.com/papers/practices-for-governing-agentic-ai-systems.pdf


 

Presentation 3:

AI Baselines in Evidence Law


Presenters: Sarah Barrington, Hany Farid & Rebecca Wexler

Abstract: AI forensic systems -- from face recognition to gunshot detection to probabilistic DNA analysis and more -- have the potential to outperform human expert witnesses in analyzing evidence from crime scenes. Yet they also have the potential to err, with high-stakes consequences for the safety of communities and the life and liberty of the accused. What standards should apply to help ensure that accurate, reliable, and accountable AI evidence can be admitted in court while minimizing the harm from erroneous results? This talk will first present results from a study showing that a state-of-the-art AI photogrammetry system and human expert photogrammetrists both performed worse at estimating a person's height and weight from a photograph than did non-experts hired on Mechanical Turk to perform the same task. The talk will then argue that, before admitting evidence from either AI or human experts, courts should assess not merely the proficiency of the AI or human expert, but also the proficiency of non-experts performing the same task. If AI or human experts do not substantially improve on non-expert baselines, courts should exclude the expert evidence as failing to satisfy Federal Rule of Evidence 702's requirement that experts must "help" the trier of fact.

Join us to get meeting information

Join our group to get the agenda and Zoom information for each meeting and engage in the CS+Law discussion.

Interested in presenting?

Submit a proposed topic to present. We strongly encourage the presentation of works in progress, although we will also consider more polished or published projects.

2023-24 Series Schedule

Tuesday, September 26, 1:00 to 3:00 p.m. Central Time (Organizer: Northwestern)

Monday, October 23, 2:00 to 4:00 p.m. Central Time (Organizer: UCLA)

Friday, November 17, 11:30 a.m. to 1:30 p.m. Central Time (Organizer: Boston University)

Friday, December 15, 1:00 to 3:00 p.m. Central Time (Organizer: Penn)

Thursday, January 18, 1:00 to 3:00 p.m. Central Time (Organizer: Georgetown)

Friday, February 16, 1:00 to 3:00 p.m. Central Time (Organizer: Berkeley)

Friday, March 22, 1:00 to 3:00 p.m. Central Time (Organizer: Cornell)

Friday, April 19, 1:00 to 3:00 p.m. Central Time (Organizer: Ohio State)

Friday, May 17, 1:00 to 3:00 p.m. Central Time (Organizer: Tel Aviv + Hebrew Universities)

Steering Committee

Ran Canetti (Boston U.)

Bryan Choi (Ohio State)

Aloni Cohen (U. Chicago)

April Dawson (North Carolina Central)

Dazza Greenwood (MIT)

James Grimmelmann (Cornell Tech)

Jason Hartline (Northwestern)

Dan Linna (Northwestern)

Paul Ohm (Georgetown)

Pamela Samuelson (Berkeley)

Inbal Talgam-Cohen (Technion - Israel Institute of Technology)

John Villasenor (UCLA)

Rebecca Wexler (Berkeley)

Christopher Yoo (Penn)

Background - CS+Law Monthly Workshop

Northwestern Professors Jason Hartline and Dan Linna convened an initial meeting of 21 CS+Law faculty from various universities on August 17, 2021, to propose a series of monthly CS+Law research workshops. Hartline and Linna sought volunteers to serve on a steering committee. Hartline, Linna, and their Northwestern colleagues provide the platform and administrative support for the series.