‘Get Started Soon’

NTIA’s Davidson Seeks Federal Audits of ‘High-Risk’ AI Systems

The federal government should set auditing standards for evaluating “high-risk” AI systems and impose liability on tech companies that fail to honor their terms of service, NTIA Administrator Alan Davidson told reporters Tuesday in remarks embargoed until Wednesday, when the agency released its AI Accountability Report.


In the report, NTIA recommends “independent audits of high-risk AI systems,” defining high-risk systems as those that affect rights and safety. The government can model AI auditing standards on those used in the financial sector to establish a regulatory framework, Davidson said.

It will take “years” to build a “strong auditing mechanism” for AI, but “we have to get started soon,” Davidson said. Ultimately, private entities should face accountability when their systems fail to work as publicly stated. If companies can prove their systems work as intended, that will help set clear guidelines for innovation while eliminating harms such as bias, he said.

The agency collected more than 1,400 comments on how policymakers can ensure AI systems are trustworthy, including feedback on the development of AI audits, assessments and certifications.

Commenters told the agency it’s difficult to know whether AI systems are operating as intended, Davidson said. Yet the Energy Star ratings administered by the Environmental Protection Agency and the Department of Energy are a good example of the government helping consumers find trustworthy products, he said, and the same can be done with AI technology. The consequences of AI “can’t just be the consumer’s responsibility,” he said. “It can’t just be the end user’s responsibility.” AI developers and deployers need to share that responsibility, he said.

NTIA should be careful about framing regulatory schemes for AI systems that haven’t come to market, Information Technology and Innovation Foundation Vice President Daniel Castro said Wednesday during a Broadband Breakfast panel. He warned against the pitfalls of “pre-emptive” regulation as seen in the EU. Businesses face a growing number of regulatory schemes from U.S. and international enforcers, he said. Each regulator is proposing slightly different guardrails, and as these measures mount, only companies of a certain size can afford the legal teams needed to handle all of them, he said.

Silicon Valley is realizing that the “move fast and break things” mentality, famously voiced by Meta CEO Mark Zuckerberg, might not be the best approach, said Chris Chambers Goodman, a law professor at Pepperdine. “We’re at the point where things are actually broken, and there’s a problem,” she said. Accelerating AI innovation without proper guardrails, training and full knowledge of a system’s capabilities is more “dangerous than innovative,” she said.

Regulatory frameworks are necessary for things like nuclear reactors and at-home energy appliances, and a similar framework is needed for widespread AI foundation models, said OpenAI Policy Planning Head David Robinson during an NTIA event hosted by Yale Law School on Wednesday. AI disclosure standards should be at the forefront of the conversation, he said.

Comments were due Wednesday on NTIA’s exploration of the risks and benefits of open-source development models in AI (see 2402230039). The agency had received 230 comments as of Wednesday afternoon, but none had been posted publicly. It solicited comments in part for guidance on implementing directives in President Joe Biden’s executive order on AI. Joshua Landau, Computer & Communications Industry Association senior counsel-innovation policy, on Wednesday urged the agency to “recognize the risks and benefits of public innovation but not put a thumb on the scale in favor of either closed models or public innovation.” He encouraged NTIA to work with the National Institute of Standards and Technology on using a “risk-based approach.”