OAKMOREL Forensic Intelligence // [email protected]
15 U.S.C. § 278h-1
Standards for artificial intelligence
Status ● ACTIVE
Title 15 — Commerce and Trade
Chapter 7 — NATIONAL INSTITUTE OF STANDARDS AND TECHNOLOGY
Jurisdiction Federal — United States
Primary Source uscode.house.gov ↗
Federation ID OM-USC15-SEC-64FEB0
STATUTORY TEXT primary source · verbatim · uscode.house.gov

United States Code, 2023 Edition
Title 15 - COMMERCE AND TRADE
CHAPTER 7 - NATIONAL INSTITUTE OF STANDARDS AND TECHNOLOGY
Sec. 278h-1 - Standards for artificial intelligence
From the U.S. Government Publishing Office, www.gpo.gov

§278h–1. Standards for artificial intelligence

(a) Mission
The Institute shall—
(1) advance collaborative frameworks, standards, guidelines, and associated methods and techniques for artificial intelligence;
(2) support the development of a risk-mitigation framework for deploying artificial intelligence systems;
(3) support the development of technical standards and guidelines that promote trustworthy artificial intelligence systems; and
(4) support the development of technical standards and guidelines by which to test for bias in artificial intelligence training data and applications.

(b) Supporting activities
The Director of the National Institute of Standards and Technology may—
(1) support measurement research and development of best practices and voluntary standards for trustworthy artificial intelligence systems, which may include—
(A) privacy and security, including for datasets used to train or test artificial intelligence systems and software and hardware used in artificial intelligence systems;
(B) advanced computer chips and hardware designed for artificial intelligence systems;
(C) data management and techniques to increase the usability of data, including strategies to systematically clean, label, and standardize data into forms useful for training artificial intelligence systems and the use of common, open licenses;
(D) safety and robustness of artificial intelligence systems, including assurance, verification, validation, security, control, and the ability for artificial intelligence systems to withstand unexpected inputs and adversarial attacks;
(E) auditing mechanisms and benchmarks for accuracy, transparency, verifiability, and safety assurance for artificial intelligence systems;
(F) applications of machine learning and artificial intelligence systems to improve other scientific fields and engineering;
(G) model documentation, including performance metrics and constraints, measures of fairness, training and testing processes, and results;
(H) system documentation, including connections and dependences within and between systems, and complications that may arise from such connections; and
(I) all other areas deemed by the Director to be critical to the development and deployment of trustworthy artificial intelligence;

(2) produce curated, standardized, representative, high-value, secure, aggregate, and privacy protected data sets for artificial intelligence research, development, and use;
(3) support one or more institutes as described in section 9431(b) of this title for the purpose of advancing measurement science, voluntary consensus standards, and guidelines for trustworthy artificial intelligence systems;
(4) support and strategically engage in the development of voluntary consensus standards, including international standards, through open, transparent, and consensus-based processes; and
(5) enter into and perform such contracts, including cooperative research and development arrangements and grants and cooperative agreements or other transactions, as may be necessary in the conduct of the work of the National Institute of Standards and Technology and on such terms as the Director considers appropriate, in furtherance of the purposes of this division.1

(c) Risk management framework
Not later than 2 years after January 1, 2021, the Director shall work to develop, and periodically update, in collaboration with other public and private sector organizations, including the National Science Foundation and the Department of Energy, a voluntary risk management framework for trustworthy artificial intelligence systems. The framework shall—
(1) identify and provide standards, guidelines, best practices, methodologies, procedures and processes for—
(A) developing trustworthy artificial intelligence systems;
(B) assessing the trustworthiness of artificial intelligence systems; and
(C) mitigating risks from artificial intelligence systems;

(2) establish common definitions and characterizations for aspects of trustworthiness, including explainability, transparency, safety, privacy, security, robustness, fairness, bias, ethics, validation, verification, interpretability, and other properties related to artificial intelligence systems that are common across all sectors;
(3) provide case studies of framework implementation;
(4) align with international standards, as appropriate;
(5) incorporate voluntary consensus standards and industry best practices; and
(6) not prescribe or otherwise require the use of specific information or communications technology products or services.

(d) Participation in standard setting organizations
(1) Requirement
The Institute shall participate in the development of standards and specifications for artificial intelligence.
(2) Purpose
The purpose of this participation shall be to ensure—
(A) that standards promote artificial intelligence systems that are trustworthy; and
(B) that standards relating to artificial intelligence reflect the state of technology and are fit-for-purpose and developed in transparent and consensus-based processes that are open to all stakeholders.

(e) Data sharing best practices
Not later than 1 year after January 1, 2021, the Director shall, in collaboration with other public and private sector organizations, develop guidance to facilitate the creation of voluntary data sharing arrangements between industry, federally funded research centers, and Federal agencies for the purpose of advancing artificial intelligence research and technologies, including options for partnership models between government entities, industry, universities, and nonprofits that incentivize each party to share the data they collected.
(f) Best practices for documentation of data sets
Not later than 1 year after January 1, 2021, the Director shall, in collaboration with other public and private sector organizations, develop best practices for datasets used to train artificial intelligence systems, including—
(1) standards for metadata that describe the properties of datasets, including—
(A) the origins of the data;
(B) the intent behind the creation of the data;
(C) authorized uses of the data;
(D) descriptive characteristics of the data, including what populations are included and excluded from the datasets; and
(E) any other properties as determined by the Director; and

(2) standards for privacy and security of datasets with human characteristics.

(g) Testbeds
In coordination with other Federal agencies as appropriate, the private sector, and institutions of higher education (as such term is defined in section 1001 of title 20), the Director may establish testbeds, including in virtual environments, to support the development of robust and trustworthy artificial intelligence and machine learning systems, including testbeds that examine the vulnerabilities and conditions that may lead to failure in, malfunction of, or attacks on such systems.

(h) Authorization of appropriations
There are authorized to be appropriated to the National Institute of Standards and Technology to carry out this section—
(1) $64,000,000 for fiscal year 2021;
(2) $70,400,000 for fiscal year 2022;
(3) $77,440,000 for fiscal year 2023;
(4) $85,180,000 for fiscal year 2024; and
(5) $93,700,000 for fiscal year 2025.
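As an arithmetic note on subsection (h): the authorized amounts step up by roughly 10 percent per fiscal year (exactly 10 percent for FY2022 and FY2023; the FY2024 and FY2025 figures appear to be rounded). A short Python sketch, not part of the statutory text, checking the year-over-year growth:

```python
# Authorized NIST appropriations under 15 U.S.C. § 278h-1(h),
# fiscal years 2021-2025, in dollars (values taken from the statute).
authorized = {
    2021: 64_000_000,
    2022: 70_400_000,
    2023: 77_440_000,
    2024: 85_180_000,
    2025: 93_700_000,
}

# Each year's authorization is approximately a 10% increase over the
# prior year; the later figures deviate slightly due to rounding.
years = sorted(authorized)
for prior, later in zip(years, years[1:]):
    growth = authorized[later] / authorized[prior] - 1
    print(f"FY{later}: {growth:.3%} over FY{prior}")
```

The schedule totals $390,720,000 in authorizations across the five fiscal years.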

(Mar. 3, 1901, ch. 872, §22A, as added Pub. L. 116–283, div. E, title LIII, §5301, Jan. 1, 2021, 134 Stat. 4536; amended Pub. L. 117–167, div. B, title II, §10232(b), Aug. 9, 2022, 136 Stat. 1484.)

Editorial Notes

References in Text
This division, referred to in subsec. (b)(5), probably means div. E of Pub. L. 116–283, Jan. 1, 2021, 134 Stat. 4523, which is classified principally to chapter 119 of this title.

Amendments
2022—Subsecs. (g), (h). Pub. L. 117–167 added subsec. (g) and redesignated former subsec. (g) as (h).

1 See References in Text note above.

Source: uscode.house.gov — public domain
ROOT-LD ENTITY DATA machine-readable · federation graph · v1.0
Federation ID OM-USC15-SEC-64FEB0
Entity Class STATUTE / FEDERAL-CODE-SECTION
Domain Signature oakmorel.com
Spec Version Root-LD v1.0
Source PRIMARY-SOURCE
Content Hash facf06c03fb1bd78...
Source Verified ✓ TRUE
Semantic Edges PENDING — corpus passes queued
The statutory text of 15 U.S.C. § 278h-1 is reproduced from the official United States Code as published by the Office of the Law Revision Counsel of the U.S. House of Representatives (uscode.house.gov).