04.08.2024 | Updates

The Office of Management and Budget (OMB), part of the Executive Office of the President, recently issued a memorandum (the Memorandum) setting out requirements and recommendations for executive agencies regarding their use of artificial intelligence (AI). The Memorandum marks the first effort to establish an AI governance structure for the federal government and was mandated by the October 2023 Executive Order on AI that we have previously discussed here and here. That Executive Order directed OMB to “issue guidance to agencies to strengthen the effective and appropriate use of AI, advance AI innovation, and manage risks from AI in the Federal Government.”

Notably, the Memorandum does not address issues associated with information systems more generally, such as cybersecurity and privacy. Its requirements apply to system functionality that “implements or is reliant on” AI that is “developed, used, or procured by” covered agencies. Agency activity that merely relates to AI, such as regulatory actions regarding nonagency AI use and investigations of AI applications as part of an enforcement action, falls outside the Memorandum’s scope. In this Update, we provide an overview of the Memorandum’s three core areas of focus: governance, innovation, and risk management.

Governance

OMB advises federal agencies that robust governance is necessary to achieve legal and policy compliance while pursuing innovation in the use of AI. To advance agency-level oversight of AI use, the Memorandum requires that covered agencies appoint a chief AI officer (CAIO), who will “bear primary responsibility . . . for implementing the memorandum.” (We note that the U.S. Department of Homeland Security was the first federal agency to take this step, on October 30, 2023, followed by the U.S. Department of Justice on February 22, 2024.) CAIO responsibilities include coordinating their agency’s use of AI, promoting AI innovation, managing AI risk, and addressing the other topics covered by the Memorandum.

In addition, within 60 days of the Memorandum’s issuance, federal agencies covered by the Chief Financial Officers (CFO) Act must convene AI governance bodies that meet at least semi-annually. Within 180 days, each covered agency must publicly release either a plan to comply with the Memorandum or a determination that it does not use, and does not plan to use, covered AI. Agencies (except for national security-related agencies) are required to “individually inventory each of [their] AI use cases” on an annual basis. They must submit these inventories to OMB and must also post a separate public version. Further, these agencies must identify inventoried use cases that are “safety-impacting” or “rights-impacting,” the risks posed by those uses, and how those risks are being managed. More generally, agencies must internally coordinate their adoption and management of AI, choosing coordination mechanisms that reflect each agency’s need for and use of AI, as well as the risks posed by such use.

Innovation

The Memorandum requires agencies to develop, procure, and adopt AI to “benefit the public and increase mission effectiveness” where appropriate. To kick off agency AI use, CFO Act-covered agencies (except for national security-related agencies) must adopt a strategy for removing barriers to responsible AI use and improving AI maturity. The strategies, which must be publicly released, will include each agency’s assessment of its current AI capabilities and plans for developing its capacity for AI innovation.

To enable responsible and efficient AI use, agencies are encouraged to dismantle unnecessary barriers to use and equip practitioners with necessary tools. For example, the Memorandum advises agencies to obtain adequate computing infrastructure and software tools needed to develop and adopt AI applications. Agencies must also develop a capacity to responsibly obtain, share, and curate data, including publicly available data, for training, testing, and operating AI. Additionally, agencies are “strongly encouraged to prioritize recruiting, hiring, developing, and retaining” AI talent.

Lastly, the innovation portion of the Memorandum addresses the sharing of technology and information, both with the public and within the federal government. Noting that the sharing and reuse of AI can enhance innovation and transparency, OMB instructs agencies to share AI code, models, and data. When not otherwise prohibited, agencies must share those assets publicly as open-source software (OSS). And where public release, or release of an entire project, is not possible, agencies should share whatever portions can be released as broadly as possible. Agencies should take sharing of AI resources into consideration when they procure custom-developed code and when they choose data formats. Furthering such information-sharing efforts, OMB and the Office of Science and Technology Policy will convene an interagency council whose goal is to promote consistent and efficient AI adoption among federal agencies.

Risk Management

The Memorandum requires agencies to implement a set of “minimum practices . . . to manage risks from safety-impacting AI and rights-impacting AI” by December 1, 2024, and to stop using any noncompliant AI as of that date. Notably, OMB does not generally require intelligence agencies to implement these practices but encourages them to do so.

To carry this out, agencies must first identify “safety-impacting” and “rights-impacting” AI systems. OMB defines “safety-impacting” AI as AI whose output may significantly impact the safety of: (1) human life, (2) the climate or the environment, (3) critical infrastructure, or (4) strategic assets or resources. OMB defines “rights-impacting” AI as AI whose output may significantly affect: (1) civil rights, civil liberties, or privacy; (2) equal opportunities; or (3) access to, or the ability to apply for, critical government resources or services. OMB also includes in the Memorandum an extensive list of purposes that are presumed to be safety-impacting or rights-impacting.

Agencies must review their current and planned uses of AI and determine which could affect safety or rights. By December 1, 2024, and annually thereafter, agencies must certify these determinations and release them publicly to the extent permitted by law. An agency’s CAIO, along with other officials, may determine that an AI component subject to the presumption does not actually affect safety or rights and is therefore not subject to the minimum practices.

Agencies must adhere to a set of minimum practices for AI affecting rights or safety, including conducting AI impact assessments, documenting the AI’s purpose and benefits, obtaining independent evaluations, continuously monitoring AI systems, conducting regular risk assessments, mitigating emerging risks, training the humans involved, providing additional oversight for significant decisions, and providing public notice and plain-language documentation. Additionally, for rights-impacting AI, OMB requires that agencies assess the AI’s effect on equity and fairness, consult with and incorporate feedback from affected communities, provide opt-out options, conduct ongoing monitoring to mitigate AI-enabled discrimination, and notify negatively affected individuals and offer a remedy process that includes timely human consideration. Agencies must comply with these practices by the deadline and cease use of any noncompliant AI until it is brought into compliance.

OMB also provides recommendations to agencies for “responsible procurement of AI” to supplement the agencies’ required risk management practices for AI affecting safety or rights. For example, OMB recommends agencies ensure that procured AI is “consistent” with the Constitution and other applicable laws and policies, that their procurement practices promote competition among contractors, and that they include risk management requirements in contracts for generative AI.

© 2024 Perkins Coie LLP


 
