A recently introduced Senate bill would, if passed, direct the National Institute of Standards and Technology (NIST) to develop standards for third-party audits of AI.
The bill, sponsored by Sen. John Hickenlooper, D-Colorado, and Sen. Shelley Moore Capito, R-West Virginia, would require the Director of NIST to develop voluntary guidelines and specifications for internal and external assurances of artificial intelligence systems. Those specifications would have to address data privacy protections, mitigations against potential harms to individuals from an AI system, dataset quality, and the governance and communications processes of a developer or deployer throughout an AI system's development lifecycle.
It would also establish a collaborative Advisory Committee to review and recommend criteria for individuals or organizations seeking certification of their ability to conduct internal or external assurance of AI systems. In addition, the bill would require NIST to conduct a study of the AI assurance ecosystem, examining the capabilities and methodologies currently in use, the facilities or resources needed, and the overall market demand for internal and external AI assurance.
“AI is moving faster than any of us thought it would two years ago,” said Sen. Hickenlooper, who serves as the chair of the Senate Subcommittee on Consumer Protection, Product Safety and Data Security. “But we have to move just as fast to get sensible guardrails in place to develop AI responsibly before it’s too late. Otherwise, AI could bring more harm than good to our lives.”
The bill defines AI along the lines of the National Artificial Intelligence Initiative Act of 2020: a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments. Under that definition, AI systems use machine- and human-based inputs to perceive real and virtual environments, abstract those perceptions into models through automated analysis, and use model inference to formulate options for information or action.
This bill is but the latest in a series of actions to encourage regulation and oversight of artificial intelligence, not least of which was the executive order the White House issued at the beginning of this year. Others include the Algorithmic Accountability Act of 2023, the Federal Artificial Intelligence Risk Management Act of 2023, the Artificial Intelligence Environmental Impacts Act of 2024 and the No Robot Bosses Act. Across the Atlantic, the EU has also taken a strong interest in AI regulation, as evidenced by its Artificial Intelligence Act.