The Future of Law in Technology and Governance

Emerging technologies – from artificial intelligence to blockchain to Big Data – pose enormous challenges to the roles and functions of law in society. Spanning governments and governance, this series features big thinking, emerging thinking, and critical thinking about blends of law, computing, markets, and politics.

This seminar series on the Future of Law in Technology and Governance, organized and moderated by Michael Madison, Professor of Law and John E. Murray Faculty Scholar at the University of Pittsburgh School of Law, is co-sponsored by the Future Law Project at the University of Pittsburgh School of Law and the Center for Governance and Markets. Each one-hour seminar includes a 25-minute presentation by the author followed by 30 minutes for questions and discussion. Virtual rooms will remain open for an additional 30 minutes should any participants want to continue the conversation beyond the hour. 

All seminars are open to the public, but registration is required. 

Fall 2023 

September 21, 3 p.m. ET: Salomé Viljoen, University of Michigan 
Researcher Access to Social Media Data: Lessons from Clinical Trial Data Sharing 

As the problems of misinformation, child welfare, and heightened political polarization on social media platforms grow more salient, lawmakers and advocates are pushing to grant independent researchers access to social media data to better understand these problems. Yet researcher access is controversial. Privacy advocates and companies raise the potential privacy threats of researchers using such data irresponsibly. In addition, social media companies raise concerns over trade secrecy: the data these companies hold and the algorithms powered by that data are secretive sources of competitive advantage. This Article shows that one way to navigate this difficult strait is by drawing on lessons from the successful governance program that has emerged to regulate the sharing of clinical trial data. Like social media data, clinical trial data implicates both individual privacy and trade secrecy concerns. Nonetheless, clinical trial data’s governance regime was gradually legislated, regulated, and brokered into existence, managing the interests of industry, academia, and other stakeholders. The result is a functionally successful (if still imperfect) clinical trial data-sharing ecosystem. Part I sketches the status quo of researchers’ access to social media data and provides a novel taxonomy of the problems that arise under this regime. Part II reviews the legal structures governing how clinical trial data is shared and traces the history of scandals, investigations, industry protest, and legislative response that gave rise to the mix of mandated sharing and experimental programs we have today. Part III applies lessons from clinical trial data sharing to social media data and charts a strategic course forward. Two primary lessons emerge: First, law without institutions to implement the law is insufficient, and second, data access regimes must be tailored to the data they make available. 


October 5, 3 p.m. ET: Katherine Haenschen, Northeastern University 
Texting, Texting: The Effect of Text Messages on Voting, Volunteering, and Giving

Text messages have become ubiquitous in the world of politics, but are they actually doing anything other than cluttering up your phone? Political communication scholar Katherine Haenschen shares her own experimental work and that of others showing how text messages are effective at increasing voter turnout, volunteer participation, and donations. 
 

November 9, 3 p.m. ET: Orly Lobel, University of San Diego 
The Equality Machine: Harnessing Digital Technology for a Brighter, More Inclusive Future

At a time when AI and digital platforms are under fire, in “The Equality Machine” Orly Lobel defends technology as a powerful tool we can harness to achieve equality and a better future. Much has been written about the challenges tech presents to equality and democracy. But we can either criticize big data and automation or steer them to do better. Lobel makes a compelling argument that while we cannot stop technological development, we can direct its course according to our most fundamental values. She shows that digital technology frequently has a comparative advantage over humans in detecting discrimination, correcting historical exclusions, subverting long-standing stereotypes, and addressing the world’s thorniest problems: climate, poverty, injustice, literacy, accessibility, speech, health, and safety. Her examples, from labor markets to dating markets, provide powerful evidence for how we can harness technology for good. 


December 7, 3 p.m. ET: Charlotte Tschider, Loyola University Chicago 
Humans Outside the Loop


Artificial Intelligence is not all artificial. After all, despite the need for high-powered machines that can create complex algorithms and routinely improve them, humans are instrumental in every step of creating AI. From data selection, decisional design, training, testing, and tuning to managing AI’s deployment in the human world, humans exert agency and control over these choices and practices. AI is now ubiquitous: it is part of every sector and, for most people, their everyday lives. When AI development companies create unsafe products, however, we might be surprised to discover that very few legal options exist to actually remedy any wrongs. 

This paper introduces the myriad choices humans make to create safe and effective AI products, then explores key issues in existing liability models. Significant issues in negligence and products liability schemes, including contractual limitations on liability, separate the organizations creating AI products from the actual harm, obscure the origin of problems, and reduce the likelihood of plaintiff recovery. Principally, AI offers a unique vantage point for analyzing the relative limits of tort law for these types of technologies, challenging long-held divisions and theoretical constructs and frustrating tort law’s goals. From the perspectives of both businesses licensing AI and AI users, this paper identifies key impediments to realizing tort goals and proposes an alternative regulatory scheme that reframes liability from the human in the loop to the humans outside the loop.
 

Spring 2024
 

January 25, 3 p.m. ET: Kristen Eichensehr, University of Virginia
Resilience in the Digital Age

This presentation identifies tactics to bolster resilience against digitally enabled threats across three temporal phases: anticipating and preparing for disruptions, adapting to and withstanding disruptions, and recovering from disruptions. A resilience agenda is an essential part of protecting national security in a digital age. Digital technologies impact nearly all aspects of everyday life, from communications and medical care to electricity and government services. Societal reliance on digital tools should be paired with efforts to secure societal resilience. A resilience agenda involves preparing for, adapting to, withstanding, and recovering from disruptions in ways that advance societal interests, goals, and values. Emphasizing resilience offers several benefits: 1) It is threat agnostic or at least relatively threat neutral; 2) its inward focus emphasizes actions under the control of a targeted nation, rather than attempting to change behaviors of external adversaries; and 3) because resilience can address multiple threats simultaneously, it may be less subject to politicization. A resilience strategy is well-suited to address both disruptions to computer systems—whether from cyberattacks or natural disasters—and disruptions to the information environment from disinformation campaigns that sow discord. A resilience agenda is realistic, not defeatist, and fundamentally optimistic in its focus on how society can withstand and move forward from adverse events.

 

February 22, 3 p.m. ET: Kim Krawiec, University of Virginia 
Gametes: Commodification and the Fertility Industry

In August 2021, the American Society for Reproductive Medicine published its most recent opinion on the financial compensation of oocyte (egg) donors. For those not steeped in the historical controversy surrounding egg donor compensation in the United States, the document likely appears unexceptional. Within historical context, however, the guidelines represent an important change in conceptions of oocyte commodification. First, and most importantly, the most recent guidelines contain no mention of acceptable or recommended compensation levels, nor do they analogize egg donation to sperm donation for purposes of payment comparison. The guidelines thus showcase the final abandonment of a decades-long attempt by the fertility industry to control egg donor compensation. Second, after more than twenty years of promoting ethical worries about oocyte commodification, the guidelines explicitly acknowledge, for the first time, that commodification concerns are rarely raised in the context of sperm donation. Finally, the guidelines emphasize that a failure to treat egg donors as adult women capable of making their own risk-return tradeoffs regarding their bodies and livelihoods would be demeaning and unfair. This chapter uses the development and eventual abandonment of these ASRM pricing guidelines over more than twenty-five years as a lens through which to understand commodification debates in both the sperm and egg markets.


March 21, 3:30 p.m. ET: Sara Gerke, Penn State University
Liability Aspects of Using Artificial Intelligence in Healthcare

Artificial Intelligence (AI) is rapidly entering healthcare and changing the practice of medicine. But who will likely be held liable for patient harm caused by AI? The physician, hospital, manufacturer, and/or no one? This presentation tries to answer these questions, looking at U.S. tort liability and new developments in the European Union.

 

April 4, 3 p.m. ET: Kristelia Garcia, Georgetown University 
Copyright Enforcement Decision-Making

In private law, private rights of action afford rights holders the authority—but not the obligation—to enforce a claim for remedies against a wrongdoer. This allows different rights holders to make different enforcement decisions in different circumstances and vis-à-vis different wrongdoers. In copyright law, the enforcement decision can be especially variable. Some copyright owners enforce against one alleged infringer, while declining to enforce against another. Some copyright owners delegate their enforcement decisions to an algorithm, which may or may not consistently apply the criteria it is given (and which criteria may or may not comply with legal requirements). Others wield the threat of enforcement to accomplish ends either wholly or largely unrelated to the alleged infringement.  Relatively little scholarly attention has been paid to the enforcement decision-making process. Part of the challenge for study in this area is that private rights of action do not require an explanation; copyright owners may elect to enforce, or forbear, for a variety of reasons, or for no reason.  Does enforcement necessarily imply wrongdoing? Does lack of enforcement necessarily suggest no harm? Is infringement necessarily harmful? Should we be as concerned about enforcement abuses in private law as we are in public law? More concerned?  The theory of selective enforcement developed here reveals the underappreciated role that private parties play in policymaking.

 

Spring 2023

January 19, 3 p.m. ET: Carla Reyes, Southern Methodist University Dedman School of Law
Computational Entities for Regular People

This project explores whether and how regular people, the non-crypto-enthusiast business owners who make up the majority of LLC members, can take advantage of the rise of computational LLCs. The Article argues that the road to mass adoption of computational LLCs runs through entrepreneurs with little to no prior knowledge of coding, computational law, or blockchain technology, rather than through the DAOs that generate the most interest among lawmakers and the media. Arguing that computational LLCs offer benefits to even the smallest business owner, this Article proceeds in three parts. Part I examines the rise of computational LLCs, the new laws designed to enable their formation, and common objections to both. Part II answers those objections by detailing key legal and business advantages of computational LLCs for regular people; it also explores current models for computational LLC code and reveals the obstacles those models present for most entrepreneurs and their lawyers. Part III overcomes those obstacles by introducing a form operating agreement for a single-member computational LLC, written in natural-language code, and then considers the broader implications of computational LLCs for business law and entrepreneurial lawyers.


January 26, 3 p.m. ET: Michael Sinha, St. Louis University School of Law
Data Privacy and Security Concerns after Roe v. Wade

On June 24, 2022, the US Supreme Court issued its opinion in Dobbs v. Jackson Women’s Health Organization, overturning nearly 50 years of precedent established in its 1973 decision in Roe v. Wade. By eliminating the federal constitutional right to abortion, Dobbs effectively returned the question of abortion regulation to the states. Almost immediately, several state statutes took effect, some going as far as to ban abortion and criminalize those who aid or abet the process. In Texas, ordinary citizens are now empowered to surveil pregnant persons through the provision of bounties in exchange for information that leads to prosecution. In Nebraska, a Facebook Messenger conversation between a mother and her daughter as to the proper use of medication abortion led to criminal charges. These instances and others have raised concerns about the extent to which our data – health-related or otherwise – can be accessed and misused for malicious purposes. Major gaps in the current US data privacy infrastructure have far-reaching consequences beyond abortion policy, and I will discuss these issues in the context of broader data privacy reform proposals.

 

February 16, 3 p.m. ET:  Dan Rodriguez, Northwestern University Pritzker School of Law
Judging the Black Box: AI and Administrative Law

With the steady increase in the use of AI/ML mechanisms in regulatory decisionmaking at the federal and state levels, important questions arise about how best to use and adapt administrative law rules to agency decisionmaking. Some reforms look at changing internal processes and structures. Rodriguez's focus is on external oversight, especially the role of reviewing courts in so-called “hard look” review.

 

March 23, 3 p.m. ET: Jessica Silbey, Boston University School of Law; Sarah Newman, Harvard University metaLAB; and Halsey Burgund
Artificial Justice

This is a presentation and discussion on Artificial Justice, an ongoing experimental project that explores the complex intersections of Generative AI and the Law. The project is a collaboration between Professor Jessica Silbey (BU Law), artist and creative technologist Halsey Burgund (MIT Open Docs/metaLAB Harvard), and artist and AI researcher Sarah Newman (metaLAB Harvard/BKC), and is supported by a grant from the Notre Dame Tech Ethics Lab. The work interrogates the intersection of emerging technologies, language, and "justice." As part of the presentation, we ask participants to read short text passages and answer questions about them as they relate to these themes. No expertise is required. We will also share responses from participants in previous workshops.

 

March 30, 3 p.m. ET:  Jane Winn, University of Washington School of Law and University of Pittsburgh School of Law; and Pam Dixon, World Privacy Forum
Using Information Privacy Standards to Build Governance Markets 

 

April 13, 3 p.m. ET:  Ravit Dotan, University of Pittsburgh Center for Governance and Markets
Introduction to AI Ethics 

AI tools can be helpful when used well, but they are dangerous when used irresponsibly. AI ethics is the field that aims to understand and manage the opportunities and risks of using AI. This talk introduces the audience to prominent AI risks, the current state of AI ethics, the landscape of AI regulation worldwide, and what organizations should do to develop and use AI responsibly.

 

Fall 2022

September 15, 2022, 3 p.m. ET: Michal Gal, University of Haifa
Algorithmic Cartels 

Michal Gal is Professor and Director of the Center for Law and Technology at the Faculty of Law, University of Haifa, Israel, and is the elected President of the International Academic Society for Competition Law Scholars (ASCOLA). She was a Visiting Professor at NYU, Columbia, University of Chicago, Georgetown, Melbourne, National University of Singapore, and Bocconi. Professor Gal is the author of several books, including Competition Policy for Small Market Economies (Harvard University Press). She also published numerous scholarly articles on the intersection of competition law and intellectual property, on law and technology, on the effects of the size of the market on regulation, and on algorithms and big data.

 

October 27, 2022, 3 p.m. ET: Felix Chang and Erin McCabe, University of Cincinnati
Modeling the Caselaw Access Project 

Felix B. Chang serves as the Associate Dean for Faculty and Research at the University of Cincinnati College of Law. He is a Professor of Law, Co-Director of the Corporate Law Center, and Director of the Corporate Law Concentration. Professor Chang’s writings span broad aspects of markets, inheritance, and inequality. In antitrust and financial regulation, his prior scholarship examined the balance between competition and systemic risk in the derivatives markets. Along with an interdisciplinary team, he is currently developing new tools for antitrust research through topic modeling.  In the areas of wealth and racial inequality, Professor Chang has written on the redistributive potential of legal rules in trusts and estates, as well as the parallels between Roma inclusion and the U.S. Civil Rights Movement. Currently, he is working on how inheritance laws affect inequality in China and the United States.

Erin McCabe is Digital Scholarship Library Fellow at the University of Cincinnati. She joined the Digital Scholarship Center as the Digital Scholarship Library Fellow (one of several Mellon grant-funded positions supporting research on machine learning and data visualization) in 2018. She now works on several research teams across disciplines and acts as liaison between academic and technology units. She previously worked on data analysis projects with academic publishers at JSTOR and in reference services at the Brooklyn branch of Long Island University. 

 

November 10, 2022, 3 p.m. ET: Valerie Racine, Western New England University
Can Blockchain Solve the Dilemma in the Ethics of Genomic Biobanks? 

Valerie Racine completed her Ph.D. in History and Philosophy of Science at ASU's Center for Biology and Society in 2016. Her dissertation project studied the development of particular research programs in molecular genetics and genomics during the 20th century. After a short stay as a Visiting Fellow at the Konrad Lorenz Institute for Evolution and Cognition Research in Klosterneuburg, Austria, she joined Western New England University as Assistant Professor of Philosophy in 2017. She was tenured and promoted to Associate Professor in 2022 but decided to leave academia soon after. She continues to research topics in bioethics, data ethics, and AI ethics as she pursues a new career trajectory in software development.

 

December 1, 2022, 3 p.m. ET: Emily Postan, University of Edinburgh Law School
Embodied Narratives: Protecting Identity Interests through Ethical Governance of Bioinformation 

Emily Postan is a Chancellor's Fellow in Bioethics at the University of Edinburgh Law School and a Deputy Director of the Mason Institute for Medicine, Life Sciences and the Law, with lead responsibility for the Institute’s policy engagement portfolio. Emily is an interdisciplinary bioethicist with a background in philosophy.  Her main research focus lies in interrogating the roles played by biomedical technologies, personal information, and health informatics in our identities, and in characterizing the ethical significance of these roles. Dr. Postan’s monograph “Embodied Narratives: Protecting Identity Interests through Ethical Governance of Bioinformation” was published by Cambridge University Press in July 2022. This book establishes the ethical imperative for information disclosure practices to take seriously the impacts on our identity-constituting narratives of our encounters with bioinformation about ourselves. 

Spring 2022

January 20, 2022, 3 p.m. ET: M. R. Sauter, University of Maryland
Every Rotten Idea Since Adam: How ERISA Reform Made Modern Venture Capital

 

March 3, 2022, 3 p.m. ET: Teresa Scassa, University of Ottawa
The Surveillant University 


May 5, 2022, 3 p.m. ET: Danielle Citron, University of Virginia
The Fight for Privacy: Protecting Dignity, Identity, and Love in the Digital Age 


April 7, 2022, 3 p.m. ET: Alicia Solow-Niederman, Harvard Law School
Information Privacy and the Inference Economy


Fall 2021

September 9, 2021, 3 p.m. ET: Ryan Abbott, University of Surrey
The Reasonable Robot: Artificial Intelligence and the Law

 

October 7, 2021, 3 p.m. ET: Annette Vee, University of Pittsburgh
NFTs, Digital Scarcity, and the Computational Aura 

 

November 4, 2021, 3 p.m. ET: Sarah Lawsky, Northwestern University
Coding the Code: Catala and Computationally Accessible Tax Law 

 

December 9, 2021, 3 p.m. ET: Saba Siddiki, Syracuse University and Christopher Frantz, Norwegian University of Science and Technology
The Institutional Grammar Research Initiative, Institutional Grammar 2.0: A specification for encoding and analyzing institutional design 

 

Spring 2021

January 21, 2021, 3 p.m. ET: Frank Fagan, EDHEC Business School
Competing Algorithms for Law: Sentencing, Admissions, and Employment 

 

March 4, 2021, 3 p.m. ET: Brett Frischmann, Villanova University
To What End? On Infrastructural Governance 

 

April 1, 2021, 3 p.m. ET: Margaret Hu, Pennsylvania State University
The Big Data Constitution