02.12.2024


Updates

Pursuant to President Biden’s signing of the White House Executive Order on Artificial Intelligence (AI EO), certain federal agencies were given 90 days to complete a variety of safety- and security-related steps. The AI EO is the United States’ most extensive federal policy directive governing the development and use of AI. It directs federal agencies to take a range of actions to address key safety and security risks arising from the use of AI in foreign affairs, national security, and the administration of the federal government.

Recently, the White House published a fact sheet outlining the recent actions taken by the affected federal agencies, which we summarize below.

Mitigating Potential Risks of AI

  • Mandating continuous reporting of vital information from developers of powerful AI. Pursuant to its authority under the Defense Production Act, the U.S. Department of Commerce is targeting AI models that can perform well on a wide range of tasks, some of which may pose serious risks to security, health, or safety, and that require large amounts of data and computing power to train. Companies must provide the government with information, reports, or records on an ongoing basis regarding activities related to training, developing, or producing such AI models; the ownership and protection of the model weights; and the results of any testing of such models’ performance and safety. Additionally, a company, individual, or other entity that acquires, develops, or possesses a potential large-scale computing cluster must report such activity, including the location and total computing power of the cluster.
  • Requiring U.S. cloud companies providing computing power for foreign AI training to report such activity. A proposed rule by the Department of Commerce would require U.S. cloud providers to submit a report when a foreign person transacts with them to train a large AI model that could be used for malicious cyber-enabled activity. The report must include the identity of the foreign person and the existence of any training run of any such powerful AI model.
  • Completing risk assessments covering AI’s use in every critical infrastructure sector. The U.S. Department of Defense (DOD), U.S. Department of Transportation, U.S. Department of the Treasury, and U.S. Department of Health and Human Services (HHS) are among nine agencies that have submitted their risk assessments to the U.S. Department of Homeland Security. The risk assessments will provide the basis for continued federal action to ensure that the United States integrates AI safety into its critical infrastructure, such as the electric grid.

Using AI To Innovate for Good

  • Launching a pilot of the National Artificial Intelligence Research Resource, consisting of 11 federal agency partners and 25 private sector partners. This pilot, managed by the National Science Foundation (NSF), aims to create a public-private partnership for national infrastructure that would deliver computing power, data, software, and access to open source and proprietary models, as well as to provide AI training resources to researchers and students.
  • Commencing new efforts to recruit AI talent to the federal government. The National AI Talent Surge will accelerate the hiring of AI professionals across the federal government, including through a large-scale hiring action for data scientists and government-wide tech talent programs such as the Presidential Innovation Fellows, U.S. Digital Corps, and U.S. Digital Service.
  • Investing in early AI education. The NSF will promote AI-related workforce development through its EducateAI Initiative by providing funding and professional development opportunities to help educators create high-quality, inclusive AI educational opportunities.
  • Funding new AI-focused research and development efforts. The NSF will launch new AI-focused Regional Innovation Engines, which are regional coalitions of public and private sector researchers with long-term funding from the NSF. For example, the Piedmont Triad Regenerative Medicine Engine will use the world’s largest regenerative medicine cluster to create and scale breakthrough clinical therapies through the use of AI.
  • Establishing an AI task force at HHS. Established in collaboration with the DOD and the U.S. Department of Veterans Affairs, the AI task force will develop policies and provide regulatory clarity for AI innovation in healthcare by creating methods of evaluating AI-enabled tools and frameworks for AI’s use in advancing drug development, bolstering public health, and improving healthcare delivery. Additionally, it will publish guiding principles for addressing racial biases in healthcare algorithms.

Takeaways and Next Steps

The foregoing agency actions signal progress toward the AI EO’s mandate to protect the United States from the potential risks of AI systems while promoting innovation in those systems.

 © 2024 Perkins Coie LLP


 
