04.24.2023

Updates

The recent dramatic growth of artificial intelligence (AI) technologies continues to be a focus of the Biden administration. The National Telecommunications and Information Administration (NTIA), a federal agency within the U.S. Department of Commerce, recently issued a Request for Comment (RFC) that seeks public comment on system accountability measures and policies for AI systems.

As discussed in a previous Update, the “White House Blueprint for an AI Bill of Rights” (Blueprint), published by the Office of Science and Technology Policy (a part of the Executive Office of the President), described a framework of principles for developing safeguards for effective AI systems. The Blueprint calls for protections against algorithmic discrimination and greater transparency in how automated systems function. Consistent with the Blueprint’s goals, the NTIA intends to use the public’s feedback from this RFC to draft a report on AI accountability policy development, which will inform the federal government’s regulatory approach to AI-related risks and opportunities.

In the RFC, the NTIA discusses:

  • The definition and objectives of trustworthy AI.
  • The mechanisms, assessments, and other policy considerations for trustworthy AI.
  • Specific questions on AI accountability.

Comments are due by June 12, 2023.

Defining Trustworthy AI and Determining Objectives

The NTIA solicits input from a variety of experts across different industries and fields on how to develop a productive accountability ecosystem for “trustworthy AI.” The NTIA draws parallels between the Blueprint’s five principles on safety, anti-discrimination, data privacy, notice, and human considerations and the 2019 AI principles published by the Organisation for Economic Co-operation and Development (OECD). In doing so, the NTIA encourages stakeholders to comment on the following:

  • Gaps and barriers to creating adequate accountability for AI systems.
  • Any trustworthy AI goals that might not be amenable to requirements or standards.
  • How accountability measures might mask or minimize risks.
  • The value of accountability mechanisms to compliance efforts.
  • How governmental and nongovernmental bodies might support AI accountability practices.

Considering Accountability Mechanisms, Assessments, and Other Policies for Trustworthy AI

Focusing on accountability, the NTIA discusses AI governance tools that governmental and nongovernmental organizations currently use, the effectiveness of audits and assessments, and potential policy challenges of accountability ecosystems.

Accountability Mechanisms

The NTIA outlines mechanisms that various entities have deployed thus far toward establishing trust between communities and AI systems. The NTIA observes that the United States is beginning to require accountability mechanisms, including through some limited initiatives aimed at addressing data privacy and consumer protection concerns. The NTIA also cites the EU’s Digital Services Act, the draft EU Artificial Intelligence Act, and New York City Local Law 144, all of which use audits or assessments as an accountability mechanism. Outside of governmental entities, the NTIA notes that a growing number of nonprofits and corporations are also developing their own accountability tools and policies.

Audits and Assessments

The NTIA discusses different types and common areas of focus for AI audits and assessments. In particular, the NTIA’s discussion highlights how AI audits and assessments tend to vary:

  • Purpose. AI audits and assessments can focus on a variety of different issues, including bias, effectiveness, data protection, transparency, and explainability. Use-specific areas of focus may include impacts related to miscommunication, disinformation, deep fakes, and other content-related issues.
  • Scope. While audits may be conducted internally or by an independent third party, audit parameters call for varying levels of cooperation by the audited organization and may be subject to public disclosure. Additionally, audits may be conducted by professional experts or by affected laypeople.
  • Context. Audits and assessments may be limited to technical aspects of a model, or they may require broad mapping of the governance, purpose, or interactions with other stakeholders.
  • Establishing a baseline. Accountability mechanisms may be tied to a legal standard or to guidelines and tools developed specifically for AI models.

Policy Considerations

With respect to potential AI accountability ecosystems, the NTIA notes that policymakers must evaluate several trade-offs and challenges, including the following:

  • Validity concerns. Considering the differing goals and deployment contexts for AI accountability mechanisms, the NTIA predicts that the implementation of these mechanisms will vary widely in the near term and notes that narrowly scoped measures risk creating a “false sense of assurance” with AI systems. Given this risk, the NTIA emphasizes that “it is imperative that those performing AI accountability tasks are sufficiently qualified to provide credible evidence that systems are trustworthy.”
  • Potential trade-offs. The NTIA notes that certain accountability mechanisms may conflict with other values, such as data privacy and security, and force policymakers to consider difficult trade-offs.
  • Timing challenges. The NTIA discusses how AI system lifecycles and AI value chains may affect accountability. An AI system at the pre-design stage requires different assessments than one at the post-deployment stage.
  • Standardization challenges. Given the wide range of entities that are implementing or considering AI accountability measures, including various types of government and private organizations, the NTIA is concerned that it will be difficult to harmonize accountability standards.

In light of these considerations, the NTIA asks for input on several proposals by stakeholders to help address the complexities of an AI accountability ecosystem:

  • Mandating impact assessments and audits.
  • Defining “independence” for third-party audits.
  • Setting procurement standards.
  • Incentivizing effective audits and assessments through bounties and subsidies.
  • Creating access to data necessary for AI audits and assessments.
  • Creating consensus standards for AI assurance.
  • Providing auditor certifications.
  • Making test data available for use.

Questions Regarding AI Accountability

To help guide the commentary, the NTIA asks commenters to provide input on a long series of specific questions regarding AI accountability, which span the following six categories:

  1. AI Accountability Objectives
  2. Existing Resources and Models
  3. Accountability Subjects
  4. Accountability Inputs and Transparency
  5. Barriers to Effective Accountability
  6. AI Accountability Policies

The NTIA emphasizes that the list of questions is not exhaustive and encourages commenters to provide any other relevant input, including any specific actionable proposals, rationales, or relevant facts.

Takeaways

The NTIA’s RFC comes at a time of heightened scrutiny for AI systems, as government and private actors alike increasingly turn their attention toward policies to balance the impact of AI technologies. Notably, while the RFC was still hot off the press, Senate Majority Leader Chuck Schumer publicized his efforts to establish a congressional regulatory framework for AI. And this March, the Future of Life Institute circulated a widely publicized open letter calling for a six-month moratorium on developing advanced AI systems.

The NTIA’s broad inquiry aims to capture a wide set of issues and draw commentary from a breadth of expertise, with the ultimate goal of determining the NTIA’s role in establishing AI accountability measures. In its press release accompanying the RFC, the NTIA notes that this commentary “will inform the Biden Administration’s ongoing work to ensure a cohesive and comprehensive federal government approach to AI-related risks and opportunities.” The RFC is not a rulemaking proceeding and will not directly result in binding federal rules published in the Code of Federal Regulations. But it is likely to influence future federal and state regulatory and legislative efforts in the United States, if not in other countries as well.

The RFC represents the most significant effort yet by a U.S. federal agency to consider an AI regulatory regime following the White House’s Blueprint published last year. Given the growing public attention towards AI technologies, further developments from regulators can be expected later this year. For assistance in preparing persuasive comments to file in the RFC proceeding, contact experienced counsel. In the meantime, the Artificial Intelligence, Machine Learning & Robotics industry group at Perkins Coie will continue to monitor changes to the AI regulatory landscape to better help clients manage potential legal and regulatory issues during the development, testing, and launch of AI and ML products and services.

© 2023 Perkins Coie LLP


 
