The Air Force Wants You to Trust Robots–Should You?

A British fighter jet was returning to its base in Kuwait after a mission on the third day of the 2003 Iraq War when a U.S. anti-missile system spotted it, identified it as an enemy missile, and fired. The two men in the plane were both killed. A week and a half later, the same system—the vaunted Patriot—made the same mistake. This time, it was an American plane downed, and an American pilot killed.

The missile battery that targeted the two jets was almost entirely automated. There were people watching over it, but they deferred to the machine’s analysis, and these friendly fire deaths were the result. A decade later, the underlying issue hasn’t gone away. Indeed, it has become more pressing as the military relies more and more on automation and spends huge sums researching and developing artificial intelligence. In the future, that could include what the military calls autonomous weapons systems, one of the most controversial applications of artificial intelligence.

Heather Roff, a professor at the University of Colorado who studies ethics and military technology, says those friendly fire incidents highlight what experts call automation bias. “There’s a pop-up screen that says: if you take no action I will fire,” says Roff. The human operators had only a small window in which to stop the weapon from firing, and they didn’t intervene.
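To make concrete how such a prompt puts the burden of inaction on the operator, here is a minimal, hypothetical Python sketch of a “fire unless vetoed” prompt of the kind Roff describes. The ten-second window, the function names, and the messages are invented for illustration and are not drawn from the Patriot system.

```python
# Hypothetical sketch (not the Patriot's actual logic): the default action is
# to engage, and the operator has only a short window to override the machine.
import threading

VETO_WINDOW_SECONDS = 10  # illustrative value, not from the source

def prompt_operator(track_id: str) -> bool:
    """Return True if the operator vetoes the engagement in time."""
    vetoed = threading.Event()

    def wait_for_input():
        reply = input(f"Track {track_id} classified as hostile. "
                      f"Type 'veto' within {VETO_WINDOW_SECONDS}s to hold fire: ")
        if reply.strip().lower() == "veto":
            vetoed.set()

    t = threading.Thread(target=wait_for_input, daemon=True)
    t.start()
    t.join(timeout=VETO_WINDOW_SECONDS)
    return vetoed.is_set()

def engage_if_not_vetoed(track_id: str) -> None:
    # Automation bias in one line: doing nothing is treated as consent to fire.
    if prompt_operator(track_id):
        print("Engagement held by operator.")
    else:
        print("No action taken; system fires by default.")
```

The design choice the sketch highlights is the default: when silence means “fire,” a distracted or overly trusting operator effectively delegates the decision to the machine.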

Automation goes beyond selecting targets to kill. An Air Force unit called the Human Trust and Interaction Branch, based at Wright-Patterson Air Force Base near Dayton, Ohio, is researching how humans and machines interact in order to develop new reconnaissance and intelligence-gathering tools.

It doesn’t say so in the branch’s name, but the “interaction” the unit studies is—at least partially—between humans and robots. That research is one of the military’s many projects aimed at developing new kinds of automation and pushing robotics into the future. A recently posted government contract pre-solicitation titled “Trust in Autonomy for Human Machine Teaming” gives a glimpse of what that future might look like.

“The Air Force has proposed human-machine teaming as a primary research thrust,” the posting reads. “Yet to achieve this ambitious vision we need research on how to harness the socio-emotional elements of interpersonal team/trust dynamics and inject them into human-robot teams.” The Air Force, which says this research is needed to develop “shared awareness and shared intent between the humans and machine,” estimates the contract will be for $7.5 million.

Daryl Mayer, an Air Force spokesman, tells Vocativ that the unit’s work focuses on how humans use machines. “Our research centers on the trust calibration process, so rather than focus on simply ‘more’ trust we focus on ‘appropriate’ trust, which means we do research to understand when humans use technology appropriately and when they don’t,” says Mayer.

The kind of language used in the contract solicitation—”socio-emotional,” “shared awareness,” “shared intent”—is an unhelpful way to think about how humans interact with machines, argues Noel Sharkey, a professor of artificial intelligence and robotics at the University of Sheffield. “When did machines get intention and awareness – have I been asleep for 100 years or what?” Sharkey asks in an email to Vocativ.

Sharkey says people already trust machines too much, and echoes Roff’s concerns about automation bias: that is, if the machine says something, it must be correct. “People need to trust machines less. One of the biggest problems with military personnel (or anyone) is automation biases of various kinds,” says Sharkey. “So for military purposes we need a lot more research on how the human can stay in deliberative control (particularly of weapons) and not just fall into the trap of trusting machines.”

Automation extends far beyond the battlefield, sometimes with profound implications. Peter Asaro, a philosopher who studies artificial intelligence, says questions need to be asked about how and when humans transfer control to machines in their everyday lives.

Asaro points to self-driving cars as a perfect example of that tension. “Presumably, the car is driving itself so the person behind the wheel can devote attention to something else: email, entertainment, a nap, etc. So how and when should the car inform the person that they are in a critical situation and the car cannot figure out on its own what to do?” he asks. “Should it wake them up from a nap just before an accident, or just try to do its best? And what if someone just woke up from a nap and sees a truck in the oncoming lane that poses no threat but the person’s natural reaction is to swerve violently away? Should the car allow them to override the automatic control when that poses a danger?”
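For a concrete picture of the trade-off Asaro describes, here is a small, hypothetical Python sketch of handoff and override logic. The thresholds, risk scores, and function names are assumptions made for illustration, not a description of any real vehicle’s software.

```python
# Hypothetical sketch of the handoff questions Asaro raises: when should an
# automated car wake an inattentive driver, and should it accept a manual
# override that its own model judges dangerous? All values are invented.
from dataclasses import dataclass

@dataclass
class Situation:
    machine_confidence: float  # 0..1, how sure the car is it can handle this alone
    time_to_conflict_s: float  # seconds until a potential collision
    driver_attentive: bool     # is the person awake and watching the road?

ALERT_CONFIDENCE = 0.6   # below this, consider asking the human for help (assumed)
MIN_TAKEOVER_TIME = 4.0  # seconds a napping driver needs to re-engage (assumed)

def should_alert_driver(s: Situation) -> bool:
    # Hand off only if the human plausibly has time to take over; if the
    # conflict is seconds away, the car must "just try to do its best."
    return (s.machine_confidence < ALERT_CONFIDENCE
            and s.time_to_conflict_s >= MIN_TAKEOVER_TIME)

def accept_manual_override(s: Situation, human_risk: float, machine_risk: float) -> bool:
    # Defer to the human unless the override looks clearly more dangerous than
    # the automated plan, e.g. a startled swerve away from a harmless truck.
    # Trust the input less when the driver has just been woken up.
    skepticism = 1.0 if s.driver_attentive else 2.0
    return human_risk <= machine_risk * 1.5 / skepticism
```

The sketch is only meant to show that someone has to choose these thresholds in advance, which is exactly the kind of transfer-of-control decision Asaro argues deserves scrutiny.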

There’s also evidence that humans sometimes form pet-like attachments to their robots. One study found that some soldiers who used explosive-disposal robots “formed such a strong bond with their explosive-disposal robots that they insist [on] getting the same robot back after it is repaired or become sad if their damaged robot cannot be repaired.” Another reported that some Roomba owners cleaned up before turning the robot on as “a token of their appreciation for the hard cleaning work” Roomba did.

Some experts argue that a robot’s ability to deceive human beings is on the horizon. Asaro isn’t concerned about lying robots, but he does note that robots might be able to get humans to do something they don’t want to do, including, perhaps, things that many people would see as positive, like getting elderly people to take needed medications.

“We are probably a long way from robots that could trick people into doing things that are good for them—they would need much better capabilities for reading social cues, using social cues, as well as deciphering and manipulating human desires and incentives,” Asaro says. But, he notes, “There could, of course, be all sorts of nefarious computers and robots that manipulate people, such as to get their credit card info and so forth.” There could also be what he describes as “subtle manipulations, like product placements or in-app purchases.”

“For instance,” he says, “if you spill wine on your carpet and your house-cleaning robot starts recommending specific products to clean it, is that because it is the best cleaning method or is it due to a commercial agreement between the robot manufacturer and the carpet stain remover manufacturer? In such cases, will we require transparency about such agreements?”

The main issue for Roff, the professor at the University of Colorado, is that people need to remember what distinguishes them from robots. “I think technology is great,” she says. “But we have to be guarded not to be automatons ourselves.”

http://www.scientificamerican.com/article/the-air-force-wants-you-to-trust-robots-should-you/