
Biden administration seeks input on AI accountability

NTIA request for comment focuses on guardrails for AI

The Biden administration is seeking public comment on the development of “AI audits” for artificial intelligence, amid increasing concern about the role of AI as more sophisticated AI engines become publicly available.

The “AI accountability” request for comment comes through the National Telecommunications and Information Administration and seeks input on what types of safety assessments should be done by companies developing AI, what type of data access would be necessary to complete an audit, how to incentivize responsible and trustworthy AI development, and how that might look different across various industries making use of AI. Comments are due by June 10.

A group of AI experts and industry executives, including Elon Musk and Steve Wozniak, has publicly called for a pause on the development and training of AI systems more advanced than GPT-4, in an open letter published late last month that has gained more than 20,000 signatures. “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the letter says. “This confidence must be well justified and increase with the magnitude of a system’s potential effects.”

The letter also calls for developing a “set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts … [to] ensure that systems adhering to them are safe beyond a reasonable doubt” and urges policy development in order to “dramatically accelerate development of robust AI governance systems.”

The letter said that at a minimum, such AI governance should include “new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.”

“While people are already realizing the benefits of AI, there are a growing number of incidents where AI and algorithmic systems have led to harmful outcomes,” the Biden administration says in the NTIA call for public comment. “There is also growing concern about potential risks to individuals and society that may not yet have manifested, but which could result from increasingly powerful systems. Companies have a responsibility to make sure their AI products are safe before making them available. Businesses and consumers using AI technologies and individuals whose lives and livelihoods are affected by these systems have a right to know that they have been adequately vetted and risks have been appropriately mitigated.”

The NTIA release also says that “Just as food and cars are not released into the market without proper assurance of safety, so too AI systems should provide assurance to the public, government, and businesses that they are fit for purpose.”

“Responsible AI systems could bring enormous benefits, but only if we address their potential consequences and harms. For these systems to reach their full potential, companies and consumers need to be able to trust them,” said Alan Davidson, Assistant Secretary of Commerce for Communications and Information and NTIA administrator. “Our inquiry will inform policies to support AI audits, risk and safety assessments, certifications, and other tools that can create earned trust in AI systems.”

ABOUT AUTHOR

Kelly Hill
Kelly reports on network test and measurement, as well as the use of big data and analytics. She first covered the wireless industry for RCR Wireless News in 2005, focusing on carriers and mobile virtual network operators, then took a few years’ hiatus and returned to RCR Wireless News to write about heterogeneous networks and network infrastructure. Kelly is an Ohio native with a master’s degree in journalism from the University of California, Berkeley, where she focused on science writing and multimedia. She has written for the San Francisco Chronicle, The Oregonian and The Canton Repository. Follow her on Twitter: @khillrcr