Unmanned

DARPA wants commanding robots to work like a video game

In a fake city in Mississippi, DARPA is training robots for war. In December 2019, at a camp southeast of Hattiesburg, hundreds of robots gathered to scout an urban environment, and then convert that scouting data into useful information for humans.

Conducted at Camp Shelby Joint Forces Training Center, the exercise was the third test of DARPA’s OFFensive Swarm-Enabled Tactics (OFFSET) program. OFFSET is explicitly about robots assisting humans in fighting in urban areas, with many robots working together at the behest of a small group of infantry to provide greater situational awareness than a human team could achieve on its own.

The real-time nature of the information is vital to the vision of OFFSET. It is one thing to operate from existing maps, and another entirely to operate from recently mapped space, with continuing situational awareness of possible threats and other movement through the space.

Dating back to at least 2017, OFFSET is in part an iterative process, with contractors competing for and receiving awards for various ‘sprints,’ or narrower short-turnaround developments in coding capabilities. Many of these capabilities involve translating innovations from real-time strategy video games into real life, like dragging and dropping groups of units to give them commands.
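The RTS-style interaction can be pictured as a selection of units receiving a single shared order. The sketch below is purely illustrative; the robot names, coordinates, and `command_group` function are hypothetical, not part of any OFFSET interface.

```python
from dataclasses import dataclass

@dataclass
class Robot:
    name: str
    x: float = 0.0
    y: float = 0.0

def command_group(robots, target):
    """Issue one shared waypoint to every robot in a selected group,
    much as an RTS player drag-selects units and clicks a destination."""
    tx, ty = target
    for robot in robots:
        # In a real swarm each platform would plan its own path to the goal.
        robot.x, robot.y = tx, ty

squad = [Robot("uav-1"), Robot("ugv-2"), Robot("uav-3")]
command_group(squad, (40.0, 25.0))
print([(r.name, r.x, r.y) for r in squad])
```

The point of the metaphor is that the operator addresses the group, not each machine individually; the swarm handles the per-robot details.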

For the exercise at Camp Shelby, the swarms involved both ground and flying robots. These machines were tasked with finding specific items of interest located in buildings at Camp Shelby’s Combined Arms Collective Training Facility. To assist the robots in the field experiment, organizers seeded the environment with AprilTags. These tags, which are similar to QR codes but trade data capacity for simplicity and robustness when read at a distance, were used to mark the sites of interest, as well as hazards to avoid.

In practical use, hazards seldom if ever arrive with barcodes explicitly labeling themselves as hazards, but for training the AprilTags provide a useful scaffolding while the robots coordinate in other ways.
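One way to picture the relay step is a shared map that accumulates tag sightings as robots report them. The tag IDs, labels, and locations below are invented for illustration; the actual registry used in the exercise is not public.

```python
# Hypothetical tag registry mapping AprilTag IDs to their seeded meaning.
TAG_REGISTRY = {
    7: ("item of interest", "building A"),
    12: ("hazard", "courtyard"),
}

def process_sighting(tag_id, swarm_map):
    """Fold one AprilTag detection into the swarm's shared picture."""
    if tag_id not in TAG_REGISTRY:
        return swarm_map  # unknown tag: ignore it
    label, location = TAG_REGISTRY[tag_id]
    swarm_map.setdefault(label, set()).add(location)
    return swarm_map

shared_map = {}
for sighting in [7, 12, 7, 99]:  # duplicate and unknown IDs are handled
    process_sighting(sighting, shared_map)
print(shared_map)
```

Because each tag carries only an ID rather than a rich payload, detection stays cheap and reliable at range, and the meaning lives in a lookup table like the one above.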

“As the swarm relayed information acquired from the tags,” wrote DARPA, “human swarm tacticians adaptively employed various swarm tactics their teams had developed to isolate and secure the building(s) containing the identified items.”

That information is relayed in various ways, from updated live maps on computer screens to floating maps displayed in real time in augmented reality headsets.

As foreshadowed by countless works of cyberpunk fiction, these “human swarm tacticians” interfaced with both the real world and a virtual representation of that world at once. Commanding robots to move in real space by manipulating objects in a virtual environment, itself generated by robots exploring and scouting the real space, blurs the distinction between artificial and real environments. That these moves were guided by gesture and haptic feedback only further underscores how deeply linked commanding robots can be to augmented reality.

The gesture- and haptic-feedback command systems were built under sprint contracts by Charles River Analytics, Inc., Case Western Reserve University, and Northwestern University, with an emphasis on novel interaction for human-swarm teaming.

Another development, which would be as at home in the real-time strategy game series StarCraft as in a DARPA OFFSET exercise, is the operational management of swarm tactics from Carnegie Mellon University and Soar Technology. Their developments allowed the swarm to search and map a building on its own, and to automate resource allocation in the process of accomplishing tasks.
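Automated resource allocation of this kind is often sketched as matching robots to tasks by some cost, such as distance. The greedy nearest-pair approach below is a stand-in for whatever CMU and Soar Technology actually built, which has not been published; all names and positions are hypothetical.

```python
import math

def allocate(robots, tasks):
    """Greedy assignment: repeatedly pair the closest free robot and
    open task until one side runs out. Returns {task: robot}."""
    free_robots = dict(robots)   # name -> (x, y)
    open_tasks = dict(tasks)     # name -> (x, y)
    plan = {}
    while free_robots and open_tasks:
        r, t = min(
            ((r, t) for r in free_robots for t in open_tasks),
            key=lambda pair: math.dist(free_robots[pair[0]], open_tasks[pair[1]]),
        )
        plan[t] = r
        del free_robots[r], open_tasks[t]
    return plan

robots = {"uav-1": (0, 0), "ugv-2": (10, 0)}
tasks = {"scout-roof": (1, 1), "check-door": (9, 2)}
print(allocate(robots, tasks))  # {'scout-roof': 'uav-1', 'check-door': 'ugv-2'}
```

Greedy matching is simple and fast but not optimal in general; real allocators often use auction or Hungarian-style methods to avoid bad global assignments.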

For now, the heart of the swarm is as a scouting organism built to provide information to human operators.
