The Intelligence Advanced Research Projects Activity is the intelligence community's high-risk, high-payoff science lab. It's here that researchers test and evaluate futuristic technologies that can crowdsource forecasts of future events — including potential cyberattacks — secure the manufacturing of computer parts or, notably, anticipate what traditionally could not be predicted: strategic surprise.
In carrying out the testing and evaluation, some programs are set up as tournaments for research teams to solve a problem — say, identifying exactly where a particular photo was taken, or matching video from a large open-source collection. Other programs focus on forecasting challenges, with researchers competing against each other to forecast outcomes such as political elections, disease outbreaks or whether a treaty will get signed.
In the past few years, IARPA has launched a program targeting the prediction of the unpredictable. Its Office for Anticipating Surprise aims to produce anticipatory intelligence on events and problems that haven't happened yet, particularly those that pose threats to U.S. interests.
"Anticipatory intelligence is part of our national intelligence strategy; it means gaining a lead time on warning against certain kinds of events or conditions that might change in the world," said Jason Matheny, IARPA director. "The kinds of events we're interested in anticipating include things like political instability, disease events, economic crises and cyberattacks. So we've focused on those events in our programs."
In science, one way of testing a theory is to see whether it explains what's happened historically, and then whether it can predict events that haven't yet happened. IARPA's programs take that approach, testing methods by seeing how well their forecasts fare against what actually unfolds in the world.
In one example, a recently completed program known as ACE (Aggregative Contingent Estimation) collected some 2 million forecasts from more than 15,000 people over a four-year period — about elections, treaties, weapons tests, interstate conflict ... hundreds of world events. Each judgment was scored for accuracy, Matheny said.
"We looked at patterns of what sort of expertise seemed important to get something right or wrong, some sort of characteristic correlated with people who were more accurate," Matheny said. "Surprisingly, the most accurate were not necessarily domain experts, but those who scored really well on tests of critical thinking and problem-solving ability. Also, people who revised judgments, were fairly self-critical, people who looked for information that challenged what they predicted and who then changed their judgment based on new facts."
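Scoring each judgment for accuracy, as ACE did, typically uses a proper scoring rule for probabilistic forecasts. The article does not specify IARPA's exact scoring method, but the Brier score is the standard approach for this kind of tournament; the sketch below is illustrative, not a description of ACE's internal rules.

```python
def brier_score(forecast_prob: float, outcome: bool) -> float:
    """Squared error between a probability forecast and what happened.

    0.0 is a perfect forecast; 1.0 is maximally wrong. Lower is better,
    which rewards forecasters who are both confident and correct.
    """
    actual = 1.0 if outcome else 0.0
    return (forecast_prob - actual) ** 2

# A confident, correct forecast (90% on an event that occurred)
# scores better than a hedged 50/50 guess.
confident = brier_score(0.9, True)   # (0.9 - 1)^2 = 0.01
hedged = brier_score(0.5, True)      # (0.5 - 1)^2 = 0.25
assert confident < hedged
```

Scored this way over hundreds of resolved events, forecasters can be ranked objectively, which is what let the program correlate accuracy with traits like critical thinking rather than domain expertise.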
Those are the types of people that IARPA could use — as many as possible, in fact, since another factor in the best forecasts was that they weren't generated by a single individual, but by combining judgments.
"It's the idea of crowd wisdom, that you could do better by combining multiple independent judgments from people with different sources of information and different beliefs about the world," Matheny said.
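The crowd-wisdom effect Matheny describes can be shown with simple averaging: under a squared-error score, a pooled judgment can never do worse than the average individual judgment (a consequence of Jensen's inequality). The forecaster numbers below are made up for illustration; they are not ACE data.

```python
def aggregate(forecasts: list[float]) -> float:
    """Combine independent probability judgments by simple averaging."""
    return sum(forecasts) / len(forecasts)

def brier(p: float, outcome: bool) -> float:
    """Squared error of a probability forecast against the outcome."""
    return (p - (1.0 if outcome else 0.0)) ** 2

# Five forecasters judge an event that ends up happening.
crowd = [0.6, 0.8, 0.55, 0.7, 0.9]
outcome = True

avg_individual_error = sum(brier(p, outcome) for p in crowd) / len(crowd)
crowd_error = brier(aggregate(crowd), outcome)

# The pooled forecast (0.71) beats the average individual error here,
# and by Jensen's inequality it can never be worse under this score.
assert crowd_error <= avg_individual_error
```

Real aggregation methods weight forecasters by track record and recency rather than averaging uniformly, but the basic advantage of combining independent judgments is the same.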
A critical area for tapping that wisdom is cybersecurity. As high-profile cyberattacks become increasingly common, including in government, the prospect of predicting them before they occur is promising.
IARPA so far has a handful of these programs, including the CAUSE program, or Cyber-attack Automated Unconventional Sensor Environment, which seeks to predict coming cyberattacks by studying behavioral data from unconventional sources. The Security and Privacy Assurance Research (SPAR) program targets keeping data encrypted even while users manipulate it, such as by running queries against it, something that has been tough to do in the past.
Programs like CAUSE and SPAR undergo rigorous testing and evaluation at IARPA, but once the recommendations are handed off — including to government agencies that in many cases put the solutions to use — it's on to the next project.
"The IARPA mission begins and ends with research itself," Matheny said. "What the [agencies] then do is really not up to us; it's up to them to decide how to best tailor the research methods that are developed under our programs to their particular need. Cybersecurity is one where the conditions on the networks and in the organizations are different, the actors are different, but the approaches we try to develop are meant to be generalizable or highly tunable to any particular organization that might need it."