The U.S. government should have a leading role in the development and governance of artificial intelligence, as it has done in the past with technologies including the internet, GPS and the mapping of the human genome, House Republicans and Democrats said in a rare example of bipartisan consensus on Capitol Hill.

What action to take, and how quickly to act, is where the two sides find less agreement, though not necessarily along the traditional lines between government and free markets.

The Biden administration is moving rapidly to regulate AI via an executive order and pending follow-up regulations, a research strategic plan, a risk-management framework, a list of competencies for job seekers and a blueprint for an AI Bill of Rights. Critics say such regulatory actions may stifle the deployment of a promising technology that could transform the world.

At a hearing of the Cybersecurity, Information Technology, and Government Innovation panel of the House Oversight committee on Wednesday, Rep. Gerry Connolly, D-Va., argued that while the government is often labeled risk-averse and bureaucratic, sometimes that narrative is “really skewed.”

“We have to acknowledge that the federal government has done some spectacular things,” he said. “We would not have the internet but for what was called ARPANET for 25 years — a 100% federally funded R&D project. We would not have mapped the human genome without 100% federally funded research projects, which is going to transform medicine. We would not have GPS, which is now universal. We wouldn’t have radar. There’s a whole string.”

As AI expands, agencies have begun identifying ways the technology can make government work more efficient or accurate. Raj Iyer, former chief information officer of the U.S. Army and now head of global public sector at ServiceNow, said in a statement issued after the hearing ended that nearly every new contract signed by the federal government in 2024 will likely have a generative AI component.

Samuel Hammond, a senior economist for the Foundation for American Innovation, testified that progress in AI is accelerating so quickly that, by some current forecasts, an intelligence system that can match or outperform a human could arrive as soon as 2026.

In light of that, regulators and Congress are faced with questions about how to write policy and risk management frameworks aimed at a constantly moving target. Lawmakers from both parties also questioned whether too many rules might scare industry away or allow adversaries to get ahead.

“I just wonder if because of the restraints that we have, the strength of our government itself, and our desire in government to make sure that individual rights are respected in this technological process, whether or not we forfeit too much in terms of allowing China to get very far ahead of us,” said Rep. Stephen Lynch, D-Mass., at the hearing.

“I’m concerned that businesses will just relocate abroad if our regulatory framework becomes overly complex or burdensome,” said Rep. William Timmons, R-S.C.

Though studies have shown bipartisan support for regulating the development of AI, some witnesses voiced concerns that further regulatory efforts may come too late or prove too heavy-handed. Where Republicans and Democrats sometimes differ is on whether to fund AI development federally, according to one poll by Ipsos, a market research firm.

Understanding AI

Another challenge brought up at the hearing was the need to recruit and retain a technologically savvy workforce to implement top-down directives.

“Government cannot govern AI if it does not understand AI,” said Daniel Ho, a professor at Stanford Law School, at the hearing.

President Biden’s Oct. 30 executive order on AI lays out 150 requirements, as tracked by Stanford, meaning there are major workforce demands for implementation. Yet on top of persistent skills gaps in cyber and IT, Ho said that fewer than 1% of AI PhDs pursue a career in public service.

There are also gaps in who will lead this workforce once it’s brought on. The Office of Management and Budget proposed a requirement for agencies to designate chief AI officers in its draft guidance, which closed for public feedback this week.

As the role of AI leadership in government takes form, witnesses said there still may be some variation in the actual structure of these offices, as there is with chief diversity officers.

Ho said it may not always make sense for a chief AI officer to report to the chief information officer, depending on the resources available. Depending on OMB’s final guidance, agencies may also have the option to appoint an existing official, like the data or technology officer.

“There has to be some systematized set of standards and management practices, principles and titles with comparable responsibility,” Connolly said.

Molly Weisner is a staff reporter for Federal Times where she covers labor, policy and contracting pertaining to the government workforce. She made previous stops at USA Today and McClatchy as a digital producer, and worked at The New York Times as a copy editor. Molly majored in journalism at the University of North Carolina at Chapel Hill.
