UC Develops Principles to Guide Use of AI

By Brandie Nonnecke, Ph.D.

Faced with tightening budgets and staffing constraints, public universities are increasingly turning to artificial intelligence (AI) to improve the efficiency and effectiveness of their operations. AI-enabled tools are beginning to take on a central role in classrooms, admissions offices, and human resources departments. While AI holds great promise for higher education, ill-conceived deployments may impose disproportionate harm.

To guide the appropriate oversight and implementation of AI within the UC system’s operations, UC President Michael Drake and former UC President Janet Napolitano formed the UC Presidential Working Group on AI in August 2020. The interdisciplinary working group comprised 32 members drawn from all ten UC campuses, including faculty and staff from diverse disciplines and representatives from UC Legal; Ethics, Compliance and Audit Services; Procurement; and the office of the vice president for Information Technology Services.

The working group undertook a thorough literature review of AI governance strategies, conducted interviews with dozens of experts and stakeholders across the UC system, and administered a survey to campus chief information officers and chief technology officers. The interviews and survey results made clear that there is widespread concern about the risks of AI-enabled tools, particularly bias and discrimination, and that there is a need for UC-wide guidance and lesson-sharing across campuses. Guidance on appropriate oversight for the procurement of AI-enabled tools stood out as a priority issue.

In October 2021, the working group published its final report, which outlines a set of “UC Responsible AI Principles” and provides guidance on how to operationalize them in areas that pose high risk to individual rights, including health services; human resources; the student experience, such as admissions, grading, and proctoring; and university policing.

The “UC Responsible AI Principles” draw on a growing consensus on responsible AI across the public and private sectors and will help ensure that UC responsibly procures, develops, and implements AI-enabled tools:

  1. Appropriateness: The potential benefits and risks of AI and the needs and priorities of those affected should be carefully evaluated to determine whether AI should be applied or prohibited.
  2. Transparency: Individuals should be informed when AI-enabled tools are being used. The methods should be explainable to the extent possible, and individuals should be able to understand AI-based outcomes, how to challenge them, and what meaningful remedies are available to address any harm caused.
  3. Accuracy, Reliability, and Safety: AI-enabled tools should be effective, accurate, and reliable for the intended use and verifiably safe and secure throughout their lifetime.
  4. Fairness and Non-Discrimination: AI-enabled tools should be assessed for bias and discrimination. Procedures should be put in place to proactively identify, mitigate, and remedy these harms.
  5. Privacy and Security: AI-enabled tools should be designed in ways that maximize privacy and security of persons and personal data.
  6. Human Values: AI-enabled tools should be developed and used in ways that support human values, such as agency and dignity, and respect for civil and human rights. Where rights could be violated, adherence to civil rights laws and human rights principles must be examined before AI is adopted.
  7. Shared Benefit and Prosperity: AI-enabled tools should be inclusive and promote equitable benefits (e.g., social, economic, environmental) for all.
  8. Accountability: The University of California should be held accountable for its development and use of AI systems in service provision in line with the above principles.

The final report also outlined four priority recommendations, which UC is now taking steps to implement:

  1. Institutionalize the “UC Responsible AI Principles” in procurement and oversight practices
  2. Establish campus-level councils and systemwide coordination to further the principles and guidance from the working group
  3. Develop a risk and impact assessment strategy to evaluate AI-enabled technologies during procurement and throughout a tool’s operational lifetime
  4. Document AI-enabled technologies that pose greater than minimal risk to individual rights in a public database to promote transparency and accountability

As UC moves forward in implementing the UC Responsible AI Principles and priority recommendations, collaboration with UC IT professionals will be integral to success. If you would like to contribute to the discussions and collaborations, please contact Dr. Brandie Nonnecke.

We can all take pride in knowing that UC is demonstrating a strong commitment to establishing appropriate oversight and guardrails so that its community may harness the full potential of this transformative technology.

Brandie Nonnecke, Ph.D., is director of the CITRIS Policy Lab at UC Berkeley and co-chair of the UC Presidential Working Group on AI.
