The U.S. National Telecommunications and Information Administration (NTIA) recently released its Artificial Intelligence (AI) Accountability Policy Report (Report), which presents eight recommendations to improve the accountability of AI developers and deployers. At a high level, the Report calls for improved transparency into AI systems, independent evaluations to verify the claims made about these systems, and consequences for introducing unacceptable risks or making unfounded claims. Noting AI’s transformational potential as well as its risks, NTIA states that strong accountability policies will support the creation of a robust, innovative, and informed marketplace that empowers all participants in the AI ecosystem.

The Report positions the recommendations as a “natural follow-on” to the Biden Administration’s earlier work on AI, including the AI Executive Order (October 2023), the White House Blueprint for an AI Bill of Rights (October 2022), and the National Institute of Standards and Technology’s (NIST’s) AI Risk Management Framework (January 2023). The eight recommendations are divided into three categories: (1) guidance, or the creation of standards and best practices; (2) support, or capacity building and enrichment of resources; and (3) regulatory requirements. Each recommendation is summarized below.


Guidance

Audit practices

NTIA recommends that federal agencies work with stakeholders to establish basic guidelines for AI audits and auditors that address what an independent audit covers and how it is conducted, including the development of audit methodologies and auditor certifications to promote consistency. NTIA also suggests that AI audits consider an AI system’s fitness for purpose, performance, processes, and controls across the AI system lifecycle, including risk management, data, documentation, and governance. The Report notes that, while NIST and other organizations are still developing consensus standards, AI auditors and evaluators should document their work so that it can also be evaluated based on standards developed in the future. The Report also clarifies that independent audits and evaluations are not a substitute for regulatory inspection and oversight.

Disclosure and access

NTIA calls on federal agencies to collaborate with stakeholders to improve standard information disclosures. The Report cites a need for greater transparency about AI systems, including model architecture and training, input and output data, performance, model limitations, and testing. The Report emphasizes the benefits of standardization, plain language, and audience-appropriate disclosures. NTIA suggests that model and system cards or AI “nutrition labels” might be a useful tool for baseline disclosures and urges support for NIST’s efforts to develop standardized baseline disclosures for AI systems.

Liability rules

The Report encourages the federal government to engage stakeholders both on whether existing liability rules and standards should apply to AI systems and on the potential need for new frameworks. Recognizing that courts will progressively clarify AI legal standards as they resolve disputes and that federal and state legislators may enact new “liability contours,” NTIA notes that regulatory agencies can play an important role in shaping the conversation by collecting, studying, and reporting stakeholder and agency input.


Support

People and tools

The Report highlights the need for increased technical capacity within the federal government, in terms of both people and systems, to meet the national demand for independent evaluation of AI systems. NTIA recommends that federal agencies invest in (1) personnel with appropriate sociotechnical expertise to conduct and review AI evaluations and other AI accountability inputs and (2) technical capacity such as datasets, compute, and cloud infrastructure. The Report calls on Congress to appropriate funds for the U.S. AI Safety Institute and to establish the National AI Research Resource to support increased federal capacity and international engagement on standards.

Research

NTIA recommends that federal agencies expand research and development on AI testing and evaluation, as well as on tools that facilitate the research and evaluation of AI systems. Suggested research areas include evaluation methodologies that address model capabilities and limitations, safety, and trustworthiness; watermarking and other provenance methods; and technical tools that facilitate model access while protecting sensitive private and proprietary information. The Report also urges the government to build on investments already underway at the recently established U.S. AI Safety Institute at NIST and at the National Science Foundation.


Regulatory Requirements

Audits and independent evaluations

NTIA recommends that federal agencies require independent audits, regulatory inspections, and potentially certification of AI systems that pose a high risk of harm to rights or safety. These requirements should apply before release or deployment and, in some cases, on an ongoing basis. To facilitate evaluation, federal agencies may also need to mandate documentation and disclosure requirements for AI systems. The Report notes that some federal agencies may lack the legal authority to implement these requirements. NTIA also urges federal regulators to pursue alignment with global allies on inspection regimes and the use of international standards.

Cross-sectoral government capacity

NTIA recommends that the federal government increase its “horizontal capacity” to develop common baseline requirements and best practices across industries, facilitate coordination between sectoral regulators, and address cross-sectoral risks. Potential cross-sectoral initiatives may include a national registry of high-risk AI deployments, adverse incidents, and system audits; certification and oversight of AI systems; and international coordination.

Contracting

NTIA recommends that federal agencies revise procurement practices to promote trustworthy AI by including requirements to adopt sound AI governance and other assurance practices in contract and grant terms. By leveraging its procurement power, the federal government can accelerate the adoption of AI standards and the implementation of recognized risk management practices.


Next Steps

The Report represents another step in efforts to pursue a U.S. regulatory framework for the responsible development and deployment of AI tools. In the absence of congressional action, federal AI policymaking in 2024 will likely continue to be driven by agency-level actions stemming from the AI Executive Order.


Authored by Katy Milner, Mark Brennan, Ryan Thompson, and Ambia Harper.
