White House Executive Order on trustworthy artificial intelligence provides opportunities for higher education 

By Camille Crittenden, Ph.D.

President Biden released an Executive Order (EO) on Monday, October 30, with guidelines to promote “safe, secure and trustworthy artificial intelligence.” The EO comes on the heels of a set of “voluntary commitments” wrested from tech company leaders this summer and follows previous executive orders along the same lines. Monday’s announcement offers details about the Biden administration’s whole-of-society approach involving a range of federal agencies, including the Departments of Energy, Homeland Security, and Commerce, as well as partnerships with funding agencies such as the National Science Foundation and National Institutes of Health to ensure compliance and promote innovation.

Institutions of higher education stand to benefit from the priorities outlined in the EO. Investments in research, education and workforce training, and responsible governance are clear points of mutual benefit where universities can serve the national interest. Less apparent areas of intersection include the calls to relax some immigration policies for students and workers in tech sectors, to encourage support for entrepreneurship and commercialization activities, and to require that institutions demand certification from vendors and other providers of AI technology that their applications have been tested for bias, potential harms, and other vulnerabilities that could threaten individuals’ privacy and civil liberties.

Investing in AI research

The EO invokes the need for research on trustworthy AI throughout the document. One mechanism for advancing this agenda already exists in the National AI Research Institutes; 25 of these multi-campus entities have been established, and the EO calls for four more (5.2.a.iii.). The EO also considers the infrastructure for advanced AI research and requires reporting on large-scale computing clusters and dual-use foundation models (4.2.ii.). Similarly, it encourages support for innovation and workforce development in the semiconductor industry, which underpins AI research (5.3.b.). The University of California and other academic institutions operate supercomputing facilities and nanofabrication laboratories and offer training in semiconductor engineering and research relevant to these priorities.

The EO calls on the President’s Council of Advisors on Science and Technology to “submit to the President and make publicly available a report on the potential role of AI, especially given recent developments in AI, in research aimed at tackling major societal and global challenges” (5.2.h.). Many research centers and academic programs at UC campuses and other higher education institutions are fostering research and training that deploy data science and AI to address such societal challenges; these include UC’s Center for Information Technology Research in the Interest of Society and the Banatao Institute (CITRIS) and UC Berkeley’s new College of Computing, Data Science and Society, among others.

Transforming education

Before students reach a college classroom, their learning may already be shaped by AI applications. AI offers opportunities to transform the delivery of K-12 education and improve equitable access to customized learning. The Secretary of Education is directed to “develop resources, policies, and guidance regarding AI” that will address “safe, responsible, and nondiscriminatory uses of AI in education, including the impact AI systems have on vulnerable and underserved communities, and shall be developed in consultation with stakeholders as appropriate” (8.d.). Academic institutions, especially those with Schools of Education, should be involved in helping to develop the “AI Toolkit” suggested in the EO, and in developing and evaluating new interactive learning platforms.

Strengthening workforce training

Closely tied to the goals of education, the EO describes specific efforts to expand and strengthen the workforce needed to fulfill the promise of AI (while at the same time offering protections for those whose jobs may be displaced by the new technology). The Department of Energy and National Science Foundation are directed to “establish a pilot program to enhance existing successful training programs for scientists, with the goal of training 500 new researchers by 2025 capable of meeting the rising demand for AI talent” in areas of high-performance and data-intensive computing (5.2.b.). Not only does the EO call on educational entities to provide advanced training in AI, but the federal government itself is also expanding its capacity as an employer of workers with AI skills (10.2.). The EO also encourages upskilling current federal employees with training on emerging AI applications (10.2.g.).

Expanding access to AI careers and safeguarding IP rights

One strategy recommended for promoting innovation and enhancing national competitiveness is improving immigration practices, such as shortening waiting times for visa appointments and approvals for applicants with expertise in AI and recommending modifications to the requirements and terms of J-1, F-1 and H-1B visas (5.1.). UC hosts thousands of students and postdocs holding such visas; in 2022 there were 7,000 research scholars, short-term scholars and professors on J-1 visas in California and 37,000 in the U.S. overall.

The EO also recommends clarifying guidelines related to intellectual property and copyright concerns (5.2.c.); UC was granted 570 patents in 2022, more than any other university, and of course UC (through UC Press) and its faculty and staff generate thousands of pages of copyrighted material each year.

Improving AI governance, health equity and operations

Perhaps less apparent are the implications of the EO for university operations. Woven throughout the document are issues of data privacy and cybersecurity, concerns for any large public organization (4.3.), including those working with biological data and generative AI that may pose biosecurity risks (4.4.). Regarding healthcare delivery, the EO aims to accelerate grants to advance health equity and researcher diversity (5.2.e.iii.); UC Health includes six academic health systems and 20 health professional schools and related clinics. Finally, each federal agency is required to designate a Chief AI Officer “who shall hold primary responsibility in their agency, in coordination with other responsible officials, for coordinating their agency’s use of AI, promoting AI innovation in their agency, managing risks from their agency’s use of AI,” and other responsibilities. A few universities have created such positions, and more will likely follow, given the precedent being set here.

AI and the new EO provide UC an opportunity to lead the way

The sweeping EO will shape policies and practices in AI research and applications throughout the public and private sectors in the near term and for years to come. It highlights myriad roles for higher education to accelerate research, strengthen privacy and security, harness the power of innovation, and expand a future-ready workforce. As the largest institution of public higher education in the world in a state that is home to the leading tech companies creating and selling AI products and platforms, the University of California has an opportunity to lead the way in productively addressing the priorities articulated by the Biden administration.

Author

Camille Crittenden, Ph.D.
Executive Director of CITRIS and the Banatao Institute
Co-Founder of the CITRIS Policy Lab and the Women in Tech Initiative at UC

Related Reading

AI Resources shared among colleagues at the University of California

[Please send the UC Tech News team additional recommended reading on AI for the community.]