The Collaborative AI Responsibility (CAIR) Lab

Mission

AI technologies are revolutionizing the world. Because AI systems can be both incredibly helpful and deeply harmful, ensuring that they are developed and deployed in socially responsible ways is a critical challenge.

The Collaborative AI Responsibility (CAIR) Lab aims to empower organizations to work with AI in socially responsible ways, including developing, using, buying, and investing in AI responsibly. We do this by bringing together diverse stakeholders – such as tech companies, investors, public administrators, and academics – and collaboratively creating actionable research and resources that help govern AI in ways that mitigate potential harm and harness it for social good.

Our approach

Ecosystem focus – We believe that responsible AI takes a village. All actors in the ecosystem need to get involved, and we focus on supporting those who develop, buy, and invest in AI.

Engaged research – We collaborate with practitioners at all stages of research, from forming research questions and designing methodologies to analyzing data and drawing conclusions.

Local activism, globally – Creating positive social impact is as important to us as our research impact. We test and apply our theories in our city, aiming to make Pittsburgh an exemplar of responsible AI. In addition, we collaborate with teams worldwide to establish similar initiatives in other regions.

Diverse perspectives – We collaborate with researchers and practitioners in multiple sectors, disciplines, institutions, countries, and demographics.

Our research questions

Developing AI responsibly

  1. How do we know whether an organization is developing AI responsibly?
    We are developing a scale to measure organizations’ responsible AI maturity.
  2. What are effective strategies for helping organizations develop AI more responsibly?
    We will run experiments with organizations to test which interventions increase their responsible AI maturity.
  3. What is the current state of responsible AI maturity?
    We will map ecosystems to create benchmarks for responsible AI maturity.

Investing in AI responsibly

  1. What is the current state of responsible AI investing?
    We will map responsible AI investment strategies among asset managers and owners.
  2. What does it mean to invest in AI responsibly?
    We will develop investment frameworks for different kinds of investors.

Buying AI responsibly

  1. What is the current state of AI procurement?
    We will map AI procurement strategies in the private and public sectors.
  2. What does it mean to procure AI responsibly?
    We will develop a procurement framework for the private and public sectors.

Our team

Lab Director: Ravit Dotan, PhD

Co-Directors: Ilia Murtazashvili, PhD, and Michael Madison, JD

Activities

The CAIR Lab will produce scholarly research, actionable recommendations for practitioners (such as whitepapers and handbooks), and events (such as in-person and online panels and webinars).

Working groups lie at the heart of our mission. These groups provide a forum for collaborators to engage in impactful research, inform policy, and build community. Our current groups are as follows.

Private sector group

This group aims to empower private sector organizations to develop, deploy, and finance AI in socially responsible ways. To achieve this goal, it seeks to understand how AI is governed in tech companies, how to measure the responsibility of that governance, and how it can be improved. The group then builds on these insights to create resources that investors, buyers of AI, and others can use to support companies in developing AI responsibly. Its research is grounded in multiple disciplines, including AI ethics, philosophy, and organizational psychology. Learn more and meet the team.

Public sector group

This group aims to help public administrators ensure that the AI they use generates social value. To achieve this goal, it seeks to understand the extent and consequences of AI adoption in the public sector. Its research is grounded in theories of public administration.

Current projects
Understanding the state of the academic literature on AI in public administration

Future projects
Understanding what public administrators currently do when procuring AI, and what resources they need to incorporate social responsibility considerations into the procurement process

AI governance and knowledge commons group

This group aims to ensure that the governance of knowledge advances the goals of responsible AI development and use. The group focuses on knowledge governance in law, public policy, management, and the practices of communities and collectives that build, use, and rely on AI systems. It uses the Governing Knowledge Commons (GKC) research framework, a well-established tool for investigating the dynamics of shared knowledge and information resources.

Strategy group

The goal of this group is to plan the lab's long-term activities and oversee its progress.