Sweeping New AI Rules for Federal Agencies

The Biden Administration finalized sweeping new rules on artificial intelligence (AI) to help federal agencies safely implement the burgeoning technology.

The guidance, from the Office of Management and Budget (OMB), is billed as the first “government-wide policy to mitigate risks of artificial intelligence (AI) and harness its benefits” and delivers on a key part of President Biden’s executive order on AI.

The administration first issued the guidance in draft form in November.

“This policy is a major milestone for President Biden’s landmark AI executive order, and it demonstrates that the federal government is leading by example in its own use of AI,” said OMB Director Shalanda D. Young.

Vice President Kamala Harris told reporters that the guidance is binding.

“When government agencies use AI tools, we will now require them to verify that those tools do not endanger the rights and safety of the American people,” said Vice President Harris.

Compliance Date Pushed to December

The policy pushes the deadline for agencies to comply with AI risk management practices from August 1, 2024, to December 1, 2024.

The guidance states that agencies must comply with those practices or stop using AI technology that impacts a person’s safety or rights.

The White House cited examples such as allowing travelers at airports to opt out of AI facial recognition without losing their place in line and requiring human oversight of AI diagnostic decisions in the federal healthcare system.

“If the Veterans Administration wants to use AI in VA hospitals, to help doctors diagnose patients, they would first have to demonstrate that AI does not produce racially biased diagnoses,” said Vice President Harris.

In addition, OMB is urging agencies to “consult federal employee unions and adopt the Department of Labor’s forthcoming principles on mitigating AI’s potential harms to employees.”

It also advises agencies on how to use AI in the federal procurement process.

Transparency

In addition to compliance, agencies are required to release expanded inventories of AI use cases each year. That data will include cases that impact rights or safety, as well as information on the actions the government is taking to mitigate those risks. Agencies will also have to release government-owned AI code, models, and data unless they fall under certain national security exceptions.

AI Leadership

The guidance requires agencies to designate a Chief Artificial Intelligence Officer (CAIO) to oversee the agency’s use of the technology. Agencies have 60 days from the release of the memo to name that individual.

AI Governance Boards will also be established to coordinate AI policy within each agency.

“This is to make sure that AI is used responsibly, understanding that we must have senior leaders across our government who are specifically tasked with overseeing AI adoption and use,” said Vice President Harris.

