The US Department of Commerce’s Bureau of Industry and Security (BIS) plans to introduce mandatory reporting requirements for developers of advanced AI models and cloud computing providers. The proposed rules would require companies to report on development activities, cybersecurity measures, and results from red-teaming tests, which assess risks such as AI systems aiding cyberattacks or enabling non-experts to create chemical, biological, radiological, or nuclear weapons.

“This proposed rule would help us keep pace with new developments in AI technology to bolster our national defense and safeguard our national security,” Gina M. Raimondo, secretary of commerce, said in a statement.

The proposed regulations follow a pilot survey conducted by the BIS earlier this year and come amid global efforts to regulate AI. Following the EU’s landmark AI Act, countries such as Australia have introduced their own proposals to oversee AI development and usage.

For enterprises, the proposed requirements could increase costs and slow down operations. “Enterprises will need to invest in additional resources to meet the new compliance requirements, such as expanding compliance workforces, implementing new reporting systems, and possibly undergoing regular audits,” said Charlie Dai, VP and principal analyst at Forrester. From an operational standpoint, companies may need to modify their processes to gather and report the required data, potentially leading to changes in AI governance, data management practices, cybersecurity measures, and internal reporting protocols, Dai added.
Full report: US proposes requiring reporting for advanced AI, cloud providers.