2024 UC Academic Congress on Artificial Intelligence (AI) – Summary Report

UC AI Congress Panel Discussion

February 2024

Summary Report

In February, UC leaders attended the systemwide Academic Congress on Artificial Intelligence at UCLA. The University of California’s Provost, Katherine S. Newman, and Chief Information Officer, Van Williams, hosted “What the Future Holds: A UC Congress on the Impact and Promise of Artificial Intelligence.” This two-day event featured keynote speakers who presented research on AI’s effects on labor markets and the broader economy, UC’s role in protecting data privacy, algorithmic bias, and how best to prepare students for the workforce given these important new developments.

Conference overview

UC at the vanguard of AI research and application 

UC has been at the vanguard of AI research and application for decades: UC Berkeley professor Stuart Russell, with Peter Norvig, wrote the foundational AI textbook in 1995, now used by more than 1,500 schools worldwide. For over three decades the university has been at the forefront of this field, developing expertise in related areas such as machine learning, robotics and computer vision. The results have made an impact for students, patients and society in areas as diverse as healthcare diagnostics, automated driving and crop harvesting; UC faculty members have been awarded patents for discoveries and applications in each of these fields and more.

Governance to support expanding range of applications

The university has demonstrated its leadership in addressing the governance of AI within the university context, starting with the UC Presidential Working Group on Responsible Artificial Intelligence, launched by President Drake in fall 2020. This effort resulted in a report that set out 10 UC Responsible AI Principles and highlighted four high-impact areas of implementation: (1) student experience; (2) health; (3) human resources; and (4) policing. Since then, the AI Council has continued to refine guidance for the university by involving a larger group of legal, privacy and procurement experts, in addition to faculty members and senior staff.

Opportunities and challenges that lie ahead

AI innovations promise to relieve drudgery and monotony while allowing individuals to extend their creativity and elevate the quality of their work and lives. Alongside this optimism, many concerns remain about AI’s role in job displacement, privacy, ethics, intellectual property rights, human creativity, learning and development, emotional health, and manipulation, among others. “The fears we have today were ignited by earlier revolutions [and they are still] open for abuse,” as one presenter synthesized this list of concerns.

This UC AI Congress session covered the interplay between AI advancements and societal dynamics. While AI holds promise for enhancing productivity, accuracy and efficiency, addressing key challenges such as excessive automation and information monopolization and manipulation is crucial to ensure that AI benefits society as a whole. By fostering a nuanced understanding of AI’s scope, policymakers, businesses, academia and individuals can navigate the future of work, education, research and patient care in the era of AI more effectively.

Conference presentations

Keynote Speaker: Daron Acemoglu, MIT

Author and MIT professor Daron Acemoglu highlighted the significance of AI not only in shaping the economy but also in influencing broader aspects of humanity. As AI continues to advance rapidly, with trillions of dollars being invested in its development, questions arise about its implications for work, economic production, democracy, and societal participation.

The discussion centered on two competing visions regarding the role of AI: autonomous machine intelligence and machine usefulness. While the former envisions machines achieving human-like capabilities with minimal human intervention, the latter emphasizes machines as complementary tools to enhance human decision-making and productivity. Prof. Acemoglu identified four key roadblocks hindering the realization of a more positive vision for AI integration:

  1. Excessive Automation: The rapid adoption of AI-driven automation threatens to replace human labor in various tasks, leading to concerns about job displacement and economic inequality.
  2. Loss of Information Diversity: The proliferation of AI technologies may narrow the diversity of information sources and perspectives, potentially limiting creativity and innovation.
  3. Misalignment of Human-AI Interactions: As AI systems become increasingly autonomous, there is a risk of misalignment between human cognitive processes and AI algorithms, potentially leading to unintended consequences and errors.
  4. Monopolization of Information: The concentration of control over AI technologies and data by a few powerful entities raises concerns about monopolistic practices and unequal access to information.

Despite these challenges, Acemoglu acknowledged the potential benefits of AI in improving human task performance and decision-making. Studies have shown that providing humans with access to generative AI tools can lead to productivity gains in certain tasks. However, he cautioned against unrealistic expectations and emphasized the need for careful consideration of the societal implications of AI integration.

Discussant: Ramesh Srinivasan, UCLA 

Author and UCLA professor Ramesh Srinivasan emphasized the importance of popular democratic will and action by scholars, public institutions such as universities, and an engaged citizenry to shape the development of AI and the social contract. Policy and governance proposals are under discussion that would better align AI with the public good.

The increasing mediation of our lives by data and digital footprints raises concerns about the lack of transparency and accountability in data collection by companies, as well as the potential for corporate and state control over personal data. As this data is fed into AI systems, it is crucial to examine the philosophical question of whether data can truly capture and represent human knowledge. The impact of AI extends beyond the realm of knowledge representation, as algorithmic AI systems invisibly shape critical life outcomes in areas such as loans and healthcare. These systems rely on various types of intelligence, highlighting the need to understand the underlying mechanisms and limitations of different AI approaches.

The development and deployment of AI technologies have significant political and economic implications, including the potential to exacerbate job displacement, expand the gig economy, and widen income inequality. The environmental costs associated with AI infrastructure must also be addressed. Rather than focusing on far-off doomsday scenarios, attention should be directed toward the real, current effects of AI on society. Key questions arise concerning the vision for AI in supporting humanity and who governs and benefits from these technologies. (Proposals such as a form of universal basic income may be part of the solution, compensating individuals who unwittingly contribute to refining these AI systems for the value they create.) The AI conversation ultimately leads to broader questions about the world and values we want: the balance between homogenization and the preservation of plural worlds and diversity, and the governance of technologies to serve public interests rather than a world and its people directed by machines.

To navigate these challenges, public engagement is crucial. Policy and governance proposals already exist to align AI with the public good, and it is through collective effort that society can shape the trajectory of AI and redefine the social contract in the age of artificial intelligence.

First Panel: Research Frontiers

UC is home to some of the leading AI researchers in the world, and its students are actively engaged in developing, testing and using new AI applications. How can researchers in a range of disciplines harness the promise of AI and high-performance computing to accelerate discoveries while also attending to data security and questions of research integrity? Panelists discussed their research in cutting-edge areas including genetics, cybersecurity, and real-time monitoring and prediction of wildfires.

Moderator: Theresa Maldonado (UCOP), Vice President for Research and Innovation.

Panelists: 1) William Wang (UCSB): Director, Center for Responsible Machine Learning; 2) Ilkay Altintas (UCSD, SDSC): Chief Data Science Officer, San Diego Supercomputer Center; 3) Ashish Atreja (UC Davis Health): Chief Information and Digital Health Officer; 4) David Danks (UCSD): Professor, Data Science & Philosophy

Key Takeaways:

1. Leverage UC resources: With ten campuses, three national labs and other entities, UC has substantial infrastructure, which panelists emphasized is crucial but expensive. Providing the right level of services around these resources ensures sustainability and scalability through increased usage.

2. Cost considerations: Significant costs are associated with implementing large language model platforms such as OpenAI’s at scale. While such tools may be free for individuals, enterprise-level usage incurs substantial expenses, underscoring the need to consider costs when scaling infrastructure.

3. Human investment: Beyond physical infrastructure, panelists emphasized the importance of investing in people. This includes providing training, education, and support for individuals integrating AI into their workflows or teaching.

Second Panel: Pedagogy and Innovation Frontiers

Artificial intelligence promises to transform the academic landscape for teachers and learners, both in the classroom and in entrepreneurial activities. How can UC best leverage advances in AI to boost students’ academic success and provide opportunities for them to explore new career paths in innovation? At the same time, AI accelerators, industry partnerships and supportive tech transfer offices can help to amplify the innovative research happening across the system. 

Moderator: Richard Lyons (UCB), Chief Innovation and Entrepreneurship Officer, UC Berkeley; Chair, UC President’s Council for Entrepreneurship. 

Panelists: 1) Jill Miller (UC Berkeley): Professor, Art Practice | Founding Director, Platform Artspace; 2) Rosibel Ochoa (UCR): Associate Vice Chancellor, Technology Partnerships; 3) Brian Spears (LLNL): AI Innovation Incubator; 4) Tamara Tate (UCI): Project Scientist, Digital Learning Lab; 5) Zac Zimmer (UCSC): Associate Professor, Literature

The panel discussion on AI and education, featuring experts from several UC campuses and the Lawrence Livermore National Laboratory, delved into the profound consequences and massive scale of the AI revolution, particularly in the context of pedagogy, innovation, and national security. The panelists emphasized the need for academia to keep pace with industry in terms of ambition and scale while critically examining the ethical implications and unintended consequences of AI.

Panelists shared examples of integrating AI into education, such as Jill Miller’s art class, which explored the representation of diverse objects in 3D model marketplaces, and Zac Zimmer’s writing class, which discussed the optimization of writing for AI audiences. Tamara Tate introduced Papyrus AI, a platform that scaffolds the use of generative AI for writing, while Rosibel Ochoa highlighted the challenges of technology transfer and startup support in the rapidly evolving AI landscape.

Brian Spears brought a national security perspective, discussing the dual-use nature of AI technologies, the need for educating people about boundaries and risks, and the massive investments being made by industry. He called for academic institutions to think at the same scale as industry and engage in initiatives like Frontiers in AI for Science and Security Technology (FASST) to transform large-scale science and prepare students for the AI revolution.

The discussion also touched on the future of writing education, with panelists encouraging collaborative creative processes and emphasizing the importance of developing students’ own voices. Accessibility and inclusion were highlighted as critical considerations, with a call to ensure that AI tools can help people with disabilities express themselves in new ways.

Funding and public-private partnerships for AI research emerged as key challenges, with concerns raised about the allocation of resources and the need for a coalition of national labs and universities to secure access to compute power and data for students and faculty. The threat of publishers curtailing rights to text and data mine academic corpora was also discussed, with panelists emphasizing the importance of pushing back against restrictions on data access.

Actionable suggestions for the UC system included implementing an AI literacy requirement to provide students with a foundation in history, ethics, and operational tools, as well as providing university-controlled AI tools that protect privacy and allow for the building of new models. The ultimate goal is to educate students to be literate in AI and empower them to shape the future as they enter the world.

Third Panel: Application Frontiers

As artificial intelligence becomes increasingly important in higher education, not only in research and teaching but in the operations of the enterprise itself, it raises novel and critical questions about privacy, intellectual property, creativity, and individual rights and equity. How will universities evaluate and incorporate the rapid advances in AI and assess what the technology means for teaching, research, and healthcare services? What are the implications for higher ed institutions and our role in promoting original research and creative work, upholding principles of equity, and ensuring data security and privacy? 

Moderator: Camille Crittenden, CITRIS and the Banatao Institute 

Panelists: 1) Lucy Avetisyan (UCLA): AVC & CIO; 2) Janet Napolitano (UC Berkeley): Director, Center for Security in Politics; 3) Brandie Nonnecke (CITRIS Policy Lab): Director; 4) Jonathan Porat (State of California): Chief Technology Officer

The Application Frontiers panel at the AI Congress delved into the responsible deployment of AI technologies, considering unintended consequences and the need to enhance productivity without sidelining marginalized populations. Panelists from UCLA, the State of California, and the CITRIS Policy Lab, joined by a former Secretary of Homeland Security, shared their experiences and insights on AI governance, risk assessment, and the importance of interdisciplinary collaboration.

Lucy Avetisyan, CIO at UCLA, shared her institution’s journey in exploring AI capabilities, highlighting the challenges faced since OpenAI’s launch of ChatGPT in November 2022. UCLA has taken steps to support its campus community by hosting events with tech giants, raising awareness about data privacy, and planning to launch a number of AI-focused initiatives.

Jonathan Porat, CTO of the State of California, discussed the state’s deliberative approach to implementing generative AI, guided by Governor Newsom’s executive order. The state focuses on guidelines, risk assessment, and outcomes-focused governance, with an emphasis on scalable applications that prioritize workforce support and data privacy.

Janet Napolitano, former UC President and Secretary of Homeland Security, addressed the security implications of AI, warning about the potential for data poisoning and the need for data certification and proactive measures by private actors. Brandie Nonnecke, Director of the CITRIS Policy Lab, cautioned against hyper-focusing on generative AI and neglecting other AI applications. She highlighted the UC Presidential Working Group on AI’s efforts to develop the UC Responsible AI Principles and encouraged leveraging the UC system’s power when working with private sector entities.

During the panel discussion, participants shared promising AI application areas and potential policies or recommendations for the UC system. The panelists also addressed questions from the audience, covering topics such as creating a productive culture around AI, using AI to compare terms and conditions of service, the impact of AI on the job market, validating AI predictions, and engaging the community to identify key AI use cases.

Throughout the panel, the need for responsible AI deployment, considering unintended consequences, educating students, and upskilling the workforce was highlighted. The UC system’s leadership in establishing AI principles and councils, along with its potential to shape AI practices through the “UC effect,” were seen as crucial in navigating the AI revolution. The panelists emphasized the importance of interdisciplinary collaboration, continuous monitoring, and human oversight in ensuring that AI technologies are developed and deployed in a manner that aligns with societal values and promotes the public good.

AI Council Update

Alex Bui, Co-Chair of the UC AI Council, provided an overview of the council’s efforts to institutionalize the UC Responsible AI Principles across the system. The council aims to establish a baseline set of principles for AI governance, coordinate with various stakeholders, harmonize definitions across campuses, and provide a central resource for AI-related information through its newly launched website.

Fireside Chat: Safiya Noble, UCLA

In this engaging fireside chat with UCLA Provost Darnell Hunt, MacArthur Award winner and UCLA professor Safiya Noble shared her intellectual journey and insights on the intersection of race, gender, and technology. Noble’s interest in studying the harmful dimensions of technology on vulnerable people and society stems from her experience in the advertising industry and the disconnect she observed between industry and academia regarding responsible development of emerging technologies like search engines and social media.

Noble’s groundbreaking book, “Algorithms of Oppression,” explores the discrimination and bias embedded in search algorithms, particularly in relation to women of color. Through her research, she demonstrates how algorithms can prioritize certain values over others and perpetuate societal biases. Noble emphasized the importance of intersectionality in understanding the disparate impact of these systems on historically oppressed identities.

The discussion delved into the challenges posed by generative AI and large language models like ChatGPT. Noble highlighted the environmental impact, exploitative labor practices, and potential for misleading or incorrect responses associated with these technologies. She stressed the need for deep expertise to discern the accuracy of the information provided by AI systems.

In the context of education, Noble shared her experiences using search engines to teach critical media literacy and the importance of students being able to differentiate between accurate and inaccurate information. She also touched on the role of AI in creative industries, such as Hollywood, and the significance of the recent actors and writers strike in preserving human creativity.

To mitigate bias and address the challenges posed by AI, Noble emphasized the role of regulatory agencies in creating guardrails and the responsibility of the University of California to critically assess the adoption of these technologies. She advocated for demanding tech companies to pay their fair share of taxes to fund education and public infrastructure and urged the UC system to take a leadership role in shaping the conversation around AI’s societal impact.

The fireside chat also featured thought-provoking questions from the audience, touching on the factors driving problematic search engine results and the balance between regulation and accessibility of AI tools. Noble underscored the importance of considering the societal and environmental costs of AI technologies and recognizing the interdependence of various factors in the ecosystem.

Throughout the discussion, Safiya Noble’s insights illuminated the complex interplay between technology, society, and ethics. Her work serves as a clarion call for critically examining the development and deployment of AI technologies, prioritizing the preservation of human creativity, and actively working towards mitigating the harmful effects of algorithmic bias on vulnerable communities.

Key Takeaways from breakout groups

At the UC AI Congress, participants met in breakout groups to discuss AI topics related to their work and interests. Each group submitted write-ups about their key takeaways and goals for the UC system. Breakout session topics included:

  1. AI and Climate / Ag Tech (led by Josh Viers, UC Merced, AI Institute for Transforming Workforce and Decision Support (AgAID))
  2. AI for National Security and Cybersecurity (led by Brian Spears, LLNL) 
  3. AI for Health Care (led by Alpesh Amin, UC Irvine)
  4. AI for Teaching and Learning (led by Tamara Tate, UCI)
  5. AI and the Creative Sector (led by Jeff Burke, UCLA) 
  6. AI in Computational Research Applications and Research Integrity (led by Ilkay Altintas, UCSD)
  7. Ethical Review and Human Subjects Research Involving AI (led by Ida Sim, UCSF)

Breakout 1 – AI and Climate/Ag Tech (led by Joshua Viers, UC Merced)

1. Leverage AI to become more proactive than reactive in AgTech

  1. Predict invasions of pests, fungi, etc., and genetically enhance crops to be resilient to these threats.
  2. Predict land fallowing and broader impact on communities

2. Increase dispersion of knowledge. Every county could have a tech translator who can make information available for the general public.

3. UC GPT to address data quality

  • Ground our own models in our own institutional knowledge by deploying a UC GPT or retrieval-augmented generation (RAG) tool.
  • Note that this is time-consuming work that is neither incentivized nor compensated. UC could create a requirement for PIs to share that data, and perhaps replicate within ANR a center similar to the Center for Data-Driven Insights for UC Health.
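To make the grounding idea above concrete, the following is a minimal sketch of the RAG pattern: retrieve the most relevant institutional documents for a query, then place them in the prompt sent to a language model. Everything here (the toy corpus, the bag-of-words similarity scoring, the prompt template) is an illustrative assumption, not an actual UC system; a production tool would use embedding-based retrieval over a real document store.

```python
# Hypothetical sketch of retrieval-augmented generation (RAG).
# The corpus, scoring method, and prompt template are illustrative only.
from collections import Counter
import math

# Toy stand-in for an institutional knowledge base.
DOCS = [
    "The UC Responsible AI Principles cover transparency and accountability.",
    "The AI Council coordinates AI governance across UC campuses.",
    "SDSC provides high-performance computing resources for UC researchers.",
]

def _vec(text):
    """Bag-of-words term counts for a lowercase-tokenized string."""
    return Counter(text.lower().split())

def _cosine(a, b):
    """Cosine similarity between two term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=1):
    """Return the k documents most similar to the query."""
    q = _vec(query)
    ranked = sorted(DOCS, key=lambda d: _cosine(q, _vec(d)), reverse=True)
    return ranked[:k]

def build_grounded_prompt(query):
    """Assemble the prompt a RAG tool would send to a language model:
    retrieved context first, then the user's question."""
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

prompt = build_grounded_prompt("Who coordinates AI governance at UC?")
```

Because the model answers only from retrieved institutional text, responses stay grounded in UC's own knowledge rather than the model's general training data, which is the motivation behind a "UC GPT."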

Breakout 2 – AI for National Security and Cybersecurity (led by Brian Spears, LLNL) 

UC needs to invest in:

  1. Transformational infrastructure in response to AI to have world-leading operations and research.
    1. World-leading federated compute resource – tens of thousands of GPUs
    2. Capability for executing quality control on UC’s models and data
    3. Policies to standardize and guide risk assessment and tolerance
  2. A council with a strong organizational structure to guide this investment and keep up with the pace of the field

Breakout 3 – Healthcare (led by Alpesh Amin, UC Irvine)

Integration and Potential Benefits: AI is being actively integrated into healthcare by medical professionals to improve the quality of care, reduce costs, and enhance patient and physician experiences. Current use cases at UC include the use of AI to simplify clinical workflows and the implementation of AI scribes to minimize administrative burdens on healthcare providers.

Challenges and Concerns: While AI offers numerous advantages, there are significant challenges such as potential biases in algorithms, privacy risks, and the over-reliance on AI for medical diagnoses. Healthcare professionals express concerns about the need for human judgment in conjunction with AI to ensure balanced and effective patient care.

Future Directions and Governance: Discussions highlight the need for robust governance frameworks, regulatory oversight akin to an FAA model for AI in healthcare, and the importance of setting clear visions and timelines for AI adoption. The emphasis should be on ensuring safety, consistency, and the ethical use of AI in healthcare, alongside fostering innovation and maintaining patient empowerment. 

Breakout 4 – Learning and Teaching (led by Tamara Tate, UC Irvine)

  • Significant Playground/Sandbox Space in classrooms. There are numerous examples of faculty thoughtfully applying LLM/Generative AI in the curriculum, both as a way to learn the mechanisms of the tools while also promoting critical conversation.  
  • Make sure you are between the AI and your students. Harnessing the AI requires significant scaffolding to be successful, or it can be detrimental. Make students consider: is the AI better than all human options available at that time and place? Help students use AI with purpose, but not reliance. 
  • Scale the thoughtfulness from the first point across the system. We have been using AI for many years (admissions, EdTech vendors, early alerts), yet the system has not provided clarity around equity of access or protocols for use. Leading with compassion and concern is critical.

Breakout 5 – AI and Creativity (led by Jeff Burke, UCLA)

  1. Highlight a systemwide push for AI literacy that has a hands-on component.
  2. Make infrastructure available for faculty and students to think about AI as new creative material that they can participate in designing.
  3. Encourage porosity across disciplines, particularly between STEM and arts/humanities, including funding that enables graduate students from different disciplines to come together.

Breakout 6 – Research Applications (led by Ilkay Altintas, UC San Diego)

  • AI in research spans all aspects of the scientific process, from research proposals to data collection and analysis to publication of results. AI can possibly support research infrastructure and be a research subject itself (e.g., cognitive impact of AI on youth).
  • AI has implications for research integrity, both as something that needs to be validated and as a potential tool for detecting research falsification. In addition, IRBs are not equipped to consider AI’s implications for human subjects.
  • AI infrastructure can be costly: how do we cover the costs of data storage, including opportunities to collect information today that may not be immediately useful but could serve future research? In addition, how do we address pending restrictions from publishers that may limit access to articles, which would reduce the benefits or increase the costs of using AI?

Breakout 7 – Ethical Review and Human Subjects Research Involving AI (led by Ida Sim, UCSF)

  1. The existing Institutional Review Board (IRB) process needs clearer policies and guardrails, including a risk-based framework that could leverage the ongoing work of the Presidential Working Group on AI. In anticipation of more studies involving AI that need additional review, IRBs will need more resources and tools, including faculty expertise, staff support and new expertise that has yet to be developed, so that they can more effectively ensure the ethics of human subjects research involving AI.
  2. The use of AI is experimental, humans (e.g., staff, students, patients) are subjects in that process, and some of the resulting risks are not subject to IRB review. UC should think about how to manage these risks.
  3. Research involving AI also presents risks at the community and societal level. We should consider convening a process to assess and mitigate them.

Closing Remarks with Katherine Newman and Gene Block

Katherine Newman laid out the following considerations to guide us going forward:

  1. Because this space is moving so fast, particularly in health care and other fields, if we want to be a leader we are going to need to change how we do business.
  2. How can we think differently and organize ourselves differently? 
  3. Can we collaborate across campuses and do things in new ways without stymieing innovation at the local campuses?
  4. If we are going to be a world leader, much will be involved in the effort, including creating the necessary infrastructure and public-private partnerships, and significant overall investment. On the other hand, the cost of not getting it right will be so high that it will drive us all out of business.
  5. How do we capture the strengths of the new technology while mitigating its challenges and downsides?

Gene Block, Chancellor of UCLA, shared one of his biggest hopes and dreams for AI: democratizing education. He said, “The dream is we can begin to overcome some of the deficiencies because of the inequities that we have in our educational system and begin to improve outcomes even at later stages, by using the power of AI.”

About the event, its participants and staying connected

This event, the first of its kind at the university, included leaders and experts from UC (UC Regents, faculty, administrative leaders, students, alumni), as well as from government and the private sector. Its objective was to galvanize a community of stakeholders to identify opportunities to leverage AI across a wide range of use cases, while understanding, and responding to, the ongoing challenges of ensuring safe, ethical and equitable approaches to its uses.

Nearly 500 distinguished guests from across the university attended this invitation-only event, about half in person and half streaming online. Attendees included UC Regents and C-suite executives such as chief information officers, as well as department chairs and distinguished faculty across non-technical disciplines, among others.

Learn more about the event and its organizers by visiting the UC AI Congress event landing page. For additional background, review this UC AI Resources list. To join our growing community, visit the UC AI Council landing page and share your contact information and area of interest by completing this UC AI interest form.

Program Committee

  • Chair: Camille Crittenden – Executive Director, CITRIS and the Banatao Institute
  • Lucy Avetisyan – Chief Information Officer, UCLA
  • Jenae Cohn – Executive Director, UC Berkeley Center for Teaching and Learning 
  • Kristin Cordova – Chief of Staff, Information Technology Services, UC Office of the President
  • David Danks – Professor, UC San Diego, data science & philosophy 
  • Yvette Gullatt – Chief Diversity Officer, UC Office of the President, Graduate and Undergraduate Affairs
  • Cora Han – Chief Health Data Officer, UC Office of the President, University of California Health
  • Elizabeth Joh – Faculty Advisory Board, UC Davis Law 
  • Jenny Lofthus – General Compliance Manager, UC Office of the President, Ethics and Compliance
  • Rich Lyons – Chief Innovation and Entrepreneurship Officer, UC Berkeley
  • Katherine S. Newman – Provost and Executive Vice President of Academic Affairs of the University of California
  • Mark Nitzberg – Executive Director of the UC Berkeley Center for Human-Compatible AI, Head of Strategic Outreach at the Berkeley AI Research Lab, and Director of Technology Research at the Berkeley Roundtable on the International Economy (BRIE)
  • Brandie Nonnecke – Director, CITRIS Policy Lab
  • Van Williams – Vice President, Information Technology and Chief Information Officer, University of California

Contributing Staff

  • Fatima Azam – Communications Coordinator for Graduate, Undergraduate and Equity Affairs
  • Whit Bastian – Administrative Specialist, UC Office of the President
  • Stephanie Beecham – Chief of Staff, Academic Affairs, University of California Office of the President
  • Alissa Moe – Director, Outreach Events and Communications, Graduate, Undergraduate and Equity Affairs, UC Office of the President
  • Laurel Skurko – Marketing & Communications Specialist, Information Technology Services, UC Office of the President
  • Ghanya Thomas – Executive Assistant, Information Technology Services, UC Office of the President

Authors

Camille Crittenden, Ph.D., is executive director of CITRIS and the Banatao Institute, and co-founder of the CITRIS Policy Lab and the Women in Tech Initiative at UC.
Laurel Skurko, Marketing & Communications, IT Services, UC Office of the President