Even Flying Robots Need Friends
Pilotless aircraft let the military quickly gather intelligence about hot spots without having to put pilots at risk or wait for the next imaging satellite flyover. But many tasks, both military and civilian, can be accomplished better by teams of Unmanned Aerial Vehicles (UAVs) programmed to collaborate. Multiple autonomous UAVs can cover more ground than a single plane, and with their own smarts, they demand less human attention.
“Current UAVs are radio-controlled drones that need to be piloted remotely, one-on-one,” explains mechanical engineering professor Karl Hedrick. “Instead, we’d like humans to interact with them at a higher level. We should be able to say ‘Here’s an area, and I’d like to know what’s in it.’”
To develop this technology, Hedrick co-directs the Center for Collaborative Control of Unmanned Vehicles (C3UV), along with CEE professor Raja Sengupta. Much of their work is funded by the U.S. Navy’s Office of Naval Research, and Hedrick also envisions the technology locating humans in rescue operations, fighting forest fires and patrolling borders.
The main problem in collaborative UAV control isn’t the flying itself; the group’s prototype UAVs rely on a commercially available autopilot, the Cloud Cap Piccolo. The tricky part is determining how the planes should divide their tasks. A typical mission consists of multiple “visit” tasks, each of which means traveling to and capturing data from a particular location or area. Finding the most efficient route through more than a handful of places on a map is a notoriously difficult problem in computer science, the classic traveling salesman problem. For example, with the three cities A, B and C, there are just six possible routes through them all: A-B-C, A-C-B, B-A-C, B-C-A, C-A-B and C-B-A. But with ten cities, the number of possible routes exceeds 3.5 million. For the UAVs, calculating the shortest possible route among millions or billions of possibilities won’t fly; it doesn’t make sense to spend three days of computer time determining the optimal route for a four-hour mission.
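A back-of-the-envelope sketch makes the blowup concrete. The helper names below are purely illustrative, not C3UV code; they just count orderings and show what brute-force enumeration would entail:

```python
import itertools
import math

def route_count(n_sites):
    """Number of distinct orderings of n visit sites: n factorial."""
    return math.factorial(n_sites)

def brute_force_route(sites, start):
    """Try every ordering and return the shortest tour; feasible only for a handful of sites."""
    def length(order):
        points = [start] + list(order)
        return sum(math.dist(points[i], points[i + 1]) for i in range(len(points) - 1))
    return min(itertools.permutations(sites), key=length)

print(route_count(3))   # 6 routes through three sites
print(route_count(10))  # 3,628,800 -- already in the millions
print(route_count(20))  # about 2.4e18 -- hopeless to enumerate on board a UAV
```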
Instead, the UAVs follow an auction-based algorithm that plans a reasonably short, if not the shortest possible, route. The UAVs’ onboard computers divvy up the mission’s tasks based on who can do which ones fastest, while also balancing the workload so they’ll all finish at approximately the same time. “The UAVs decide how to do it based on where they are, how much fuel they each have and things like that,” Hedrick explains. “The negotiation typically takes 20 to 30 seconds.”
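The article doesn’t spell out the mechanics of the auction, but a generic greedy auction captures the flavor: each UAV bids the time at which it could finish a task given its current route, and the lowest bid wins, which also tends to even out workloads. All names and numbers in this sketch are hypothetical:

```python
import math

def greedy_auction(uav_positions, tasks, speed=20.0):
    """Generic greedy auction: for each task, every UAV bids the time at which it
    would finish the task if appended to its current route; the lowest bid wins."""
    routes = {u: [pos] for u, pos in uav_positions.items()}
    finish = {u: 0.0 for u in uav_positions}
    for task in tasks:
        bids = {}
        for u, route in routes.items():
            extra = math.dist(route[-1], task) / speed
            bids[u] = finish[u] + extra   # bid = this UAV's projected finish time
        winner = min(bids, key=bids.get)
        routes[winner].append(task)
        finish[winner] = bids[winner]
    return routes

# Two UAVs splitting four visit points (coordinates in meters, made up for illustration):
print(greedy_auction({"uav1": (0, 0), "uav2": (1000, 0)},
                     [(100, 50), (900, 40), (950, 500), (50, 600)]))
```

Because the bid is the projected finish time rather than just the added distance, a UAV that is already heavily loaded bids high and stops winning tasks, which is what pushes the team toward finishing at roughly the same time.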
During the mission itself, the UAVs communicate via Wi-Fi to update each other on their status. This lets them fill in for each other if needed, just like people would. “If you had five humans, you’d collaborate,” Hedrick says. “If your sensor broke, you’d call me and say, ‘Hey, you need to cover for me.’”
For the human interface, the team has built a desktop application that lets the user draw visit points, lines and areas on Google Earth, then build a flowchart to specify one-time visits and ongoing patrols as sequential, parallel or conditional tasks. “How does a human communicate a mission to a machine?” Hedrick asks. “It shouldn’t be something only a computer scientist can use. A regular guy should be able to run it on his PDA.”
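The article doesn’t describe how such a mission is stored, but a flowchart of one-time visits and ongoing patrols combined sequentially, in parallel or conditionally maps naturally onto a small tree. The structure below is a hypothetical sketch of that idea, not the team’s actual format:

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class Visit:
    name: str
    location: Tuple[float, float]      # latitude, longitude drawn on the map
    patrol: bool = False               # one-time visit vs. ongoing patrol

@dataclass
class TaskNode:
    mode: str                          # "sequence", "parallel" or "conditional"
    children: List["TaskNode"] = field(default_factory=list)
    visit: Optional[Visit] = None      # leaf nodes carry a visit
    condition: Optional[str] = None    # e.g. "target detected at bridge"

mission = TaskNode("sequence", children=[
    TaskNode("parallel", children=[
        TaskNode("sequence", visit=Visit("bridge", (37.87, -122.30))),
        TaskNode("sequence", visit=Visit("coastline", (37.86, -122.32), patrol=True)),
    ]),
    TaskNode("conditional", condition="target detected at bridge",
             visit=Visit("follow-up pass", (37.87, -122.30))),
])
```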
The group’s first experimental UAV teams relied entirely on GPS to determine their locations. One trial, back in 2005, was pure Hollywood: a “human evader” carried a GPS transponder in a backpack while trying to run away from four autonomous UAVs programmed to track him down. As expected from the simulations, escape was impossible. “If you know where he starts and his maximum velocity running, you know you can find him,” says Hedrick, “even though the UAVs have a maximum turn rate and the evader can turn on a dime.”
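The guarantee rests on a simple containment argument: knowing the evader’s starting point and top speed bounds where he can possibly be at any later time, and the UAVs only ever have to search inside that bound. A minimal sketch of the reasoning, with hypothetical numbers rather than the trial’s actual parameters:

```python
import math

def reachable_disk(start, v_max, t):
    """The evader's containment set: starting at `start` with top speed v_max,
    after t seconds he must lie within this radius of the start."""
    return start, v_max * t

def could_be_here(point, start, v_max, t):
    """True if `point` is still a possible evader location at time t."""
    return math.dist(point, start) <= v_max * t

# Illustrative numbers: a runner at 3 m/s, two minutes after last contact.
center, radius = reachable_disk((0.0, 0.0), 3.0, 120.0)
print(radius)                                                # 360 m: the disk the UAVs must cover
print(could_be_here((500.0, 0.0), (0.0, 0.0), 3.0, 120.0))   # False: no need to look there
```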
But relying solely on GPS for real-world systems is risky, because it’s easily jammed and won’t work in urban settings where lines of sight are blocked. To add a second method of determining location, Raja Sengupta has enlisted the UAVs’ onboard cameras to not only take pictures for intelligence gathering, but also recognize features of the terrain using a computer vision system. They started with the basics: distinguishing asphalt from ground for locating runways and recognizing water vs. land to track coastlines. As DARPA’s Grand Challenge contests have sparked rapid progress in driverless cars over the past few years, Sengupta and others have applied their lessons to pilotless aircraft.
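As a toy illustration of what “water vs. land, asphalt vs. ground” classification can look like at its crudest, the sketch below labels pixels by simple color heuristics. Real aerial vision pipelines use far richer features than raw color, so the thresholds here are placeholders, not the group’s method:

```python
import numpy as np

def classify_terrain(rgb_image):
    """Toy per-pixel terrain labeler using crude color heuristics (illustrative only)."""
    img = rgb_image.astype(float) / 255.0
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    labels = np.full(img.shape[:2], "ground", dtype=object)
    labels[(b > r) & (b > g)] = "water"                    # blue-dominant pixels
    gray = (abs(r - g) < 0.05) & (abs(g - b) < 0.05)
    labels[gray & (r > 0.2) & (r < 0.6)] = "asphalt"       # dark, low-saturation pixels
    return labels

# Tiny synthetic frame: one blue pixel, one gray pixel, one green pixel.
frame = np.array([[[30, 60, 200], [90, 90, 95], [40, 160, 50]]], dtype=np.uint8)
print(classify_terrain(frame))   # [['water' 'asphalt' 'ground']]
```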
Now the group is looking to another source of inspiration: animal predators. Hedrick, along with professors Shankar Sastry and Claire Tomlin, is part of a multidisciplinary, multi-university research project called HUNT (Heterogeneous Unmanned Networked Teams), also administered by the Office of Naval Research. Roboticists have already experimented with swarm approaches based on insect, fish and bird behavior for autonomous vehicles, but those behaviors don’t translate well to search-and-localization tasks. The HUNT researchers instead model the ways lions, hyenas, wolves and other sophisticated predators communicate, form coalitions, adapt their behaviors and otherwise work together while hunting.
“They all know what their roles are,” Hedrick sums up. “Can we take advantage of umpteen millions of years of evolution and learn how they do it?”