The US government issued new rules on Thursday requiring federal agencies to exercise more caution and transparency when using artificial intelligence. The rules are intended to protect the public as AI technology advances rapidly, but the policy also includes provisions meant to promote AI innovation within government agencies where the technology can be used for the public good.
The US aims to establish itself as a global leader in government AI with these new regulations. Vice President Kamala Harris stated during a news briefing that the administration intends for these policies to set an example for international action. She emphasized that the US will urge other nations to prioritize the public interest in their use of AI.
The new policy, issued by the White House Office of Management and Budget, will govern the use of AI across the federal government. It mandates increased transparency about how the government uses AI and encourages further development of the technology within federal agencies. The administration is trying to strike a balance between mitigating the risks of deeper AI integration, which are not yet fully understood, and using AI tools to address critical issues like climate change and disease.
This announcement is part of a series of actions taken by the Biden administration to both embrace and regulate AI. In October, President Biden signed an executive order on AI that promotes the expansion of AI technology by the government while also requiring transparency from those creating large AI models for national security reasons.
In November, the US, along with the UK, China, and EU members, signed a declaration acknowledging the risks posed by rapid AI advancements and advocating for international cooperation. Harris also revealed a nonbinding declaration on military AI use, signed by 31 nations, which establishes basic safeguards and calls for the deactivation of AI systems exhibiting unintended behavior.
The new AI policy for US government use announced on Thursday outlines several steps agencies must take to prevent unintended consequences of AI implementation. Agencies must ensure that the AI tools they employ do not endanger Americans. For instance, the Department of Veterans Affairs must confirm that AI used in its hospitals does not produce racially biased diagnoses. Studies have shown that AI systems and algorithms can perpetuate historical discrimination in healthcare.
If an agency cannot guarantee these safeguards, it must discontinue using the AI system or provide justification for its continued use. US agencies have until December 1 to comply with these new requirements.
The policy also mandates increased transparency around government AI systems, requiring agencies to disclose government-owned AI models, data, and code unless such disclosure would pose a risk to the public or to government operations. Agencies must report annually on their AI usage, the potential risks associated with their systems, and how they are addressing those risks.
Additionally, federal agencies are required to enhance their AI expertise by appointing a chief AI officer to oversee AI usage within the agency. This role focuses on promoting AI innovation while also monitoring potential risks.
Officials believe these changes will eliminate barriers to AI utilization in federal agencies, potentially enabling more responsible experimentation with AI. The technology has the potential to assist agencies in assessing damage after natural disasters, predicting extreme weather, tracking disease spread, and managing air traffic.
Countries worldwide are taking steps to regulate AI. The EU passed its AI Act in December, which governs the creation and use of AI technologies and was formally adopted earlier this month. China is also working on comprehensive AI regulation.