As artificial intelligence advances, there has been an increased push to incorporate it into defense technology. Recently, the Department of Defense ordered the creation of the Joint Artificial Intelligence Center (JAIC), which will serve as a hub for AI research. The JAIC is not the first attempt to incorporate AI into the DoD, however. Project Maven, which the DoD established to integrate machine learning and big data, will continue as part of the JAIC after Google announced it will end its participation in the program.

Timothy Persons, chief scientist of applied research and methods for the Government Accountability Office, testified June 26 before the House of Representatives Subcommittees on Research and Technology and Energy about AI’s implications for policy and research.

In his testimony, Persons highlighted some suggestions for government policy and research on AI that came from a March 2018 GAO report:

Improve data collection and incentivize data sharing

Data is essential to teaching and improving AI. In his testimony, Persons emphasized the importance of collecting and labeling high-quality data that can improve machine learning.

It is also important that data can be shared safely without compromising sensitive information such as intellectual property or brand information. Persons pointed to an instance when MITRE, a nonprofit that handles federally funded research, credited data-sharing in the aviation industry with lowering the number of accidents.

Data-sharing efforts could include establishing training environments that protect sensitive data, as well as nationwide data standardization projects by agencies such as the Bureau of Justice Statistics.

Address cybersecurity threats

AI systems are both vulnerable to cyberattacks and capable of being used to carry them out. The costs of cyberattacks on networks and information are high and unevenly distributed between manufacturers and users, Persons said. He suggested policy changes that would distribute the costs of cyberattacks, and of protecting against them, more evenly.

Update the approach to regulations

While AI technology is still developing at a rapid pace, Persons encouraged policymakers to avoid prematurely establishing regulations and instead update existing regulatory structures as the technology matures. He pointed to the use of AI by law enforcement and the evolving technology in automated vehicles as two areas that may need regulation.

Another way to update how AI is regulated could be through establishing regulatory sandboxes, which allow regulators to experiment on a small scale.

Better understand AI’s impact on employment

AI will certainly impact employment across industries. However, it is currently hard to determine which industries will see job loss and which will see job growth. Changes across the workforce caused by AI will require a reevaluation of training and education, Persons said.

Explore computational ethics and explainable AI

In the future, AI systems will have to be designed to operate in environments where not all potential events can be anticipated. Persons emphasized the importance of developing ethical processes for AI and big data research. One of the main concerns, he said, is that the ethical standards of those developing AI may not be compatible with those of the rest of society or of the people who use the technology.

Maddy is a senior at George Washington University studying economics.
