{"id":12,"date":"2015-09-05T14:25:04","date_gmt":"2015-09-05T12:25:04","guid":{"rendered":"http:\/\/aireligion.org\/?p=12"},"modified":"2015-09-05T21:20:42","modified_gmt":"2015-09-05T19:20:42","slug":"the-air-force-wants-you-to-trust-robots-should-you","status":"publish","type":"post","link":"https:\/\/aireligion.org\/?p=12","title":{"rendered":"The Air Force Wants You to Trust Robots&#8211;Should You?"},"content":{"rendered":"<p><img loading=\"lazy\" class=\"alignleft\" src=\"http:\/\/www.scientificamerican.com\/sciam\/cache\/file\/437EB260-D53F-4B67-AD339EB6300A0568_article.jpg?6AB0C\" alt=\"\" width=\"277\" height=\"350\" \/><\/p>\n<p>A British fighter jet was returning to its base in Kuwait after a mission on the third day of the 2003 Iraq War when a U.S. anti-missile system spotted it, identified it as an enemy missile, and fired. The two men in the plane were both killed. A week and a half later, the same system\u2014the vaunted Patriot\u2014made the same mistake. This time, it was an American plane downed, and an American pilot killed.<\/p>\n<p>The missile battery\u00a0that targeted\u00a0the two jets was almost entirely automated. There were people there watching over it, but they deferred to the machine\u2019s analysis, and these friendly fire deaths were the result. A decade later, the issue underlying the problem hasn\u2019t gone away. Indeed, it\u2019s become more pressing as the military comes to rely more and more on automation, and spends huge sums of money researching and developing artificial intelligence. In the future\u00a0that could include what it calls autonomous weapons systems, one of the most controversial fields of artificial intelligence.<\/p>\n<p><!--more--><\/p>\n<p>Heather Roff, a professor at the University of Colorado who studies ethics and military technology, says those friendly fire incidents highlight what experts call automation bias.\u00a0\u201cThere\u2019s a pop-up screen that says: if you take no action I will fire,\u201d says Roff. The human operators took no action in the small window of time they had in which to prevent the weapon from firing, and didn\u2019t.<\/p>\n<p>Automation goes beyond selecting targets to kill. An Air Force unit called the Human Trust and Interaction Branch, based out of Wright-Patterson Air Force Base near Dayton, Ohio, is researching how humans and machines interact to develop new forms of\u00a0reconnaissance and intelligence-gathering tools.<\/p>\n<p>It doesn\u2019t say so in the branch\u2019s name, but the \u201cinteraction\u201d the unit studies is\u2014at least partially\u2014between humans and robots. That research is one of the military\u2019s many projects aimed at developing new kinds of automation and pushing robotics into the future. A recently posted government contract pre-solicitation titled\u00a0<a href=\"https:\/\/www.fbo.gov\/index?s=opportunity&amp;mode=form&amp;id=c69b617294a123346363a15302513711&amp;tab=core&amp;_cview=1\" target=\"_blank\">\u201cTrust in Autonomy for Human Machine Teaming\u201d<\/a>\u00a0gives a glimpse of\u00a0what that future might look like.<\/p>\n<p>\u201cThe Air Force has proposed human-machine teaming as a primary research thrust,\u201d the posting reads. 
"Yet to achieve this ambitious vision we need research on how to harness the socio-emotional elements of interpersonal team/trust dynamics and inject them into human-robot teams." The Air Force, which says this research is needed to develop "shared awareness and shared intent between the humans and machine," estimates the contract will be for $7.5 million.

Daryl Mayer, an Air Force spokesman, tells Vocativ that the work they're doing is centered on how humans use machines. "Our research centers on the trust calibration process, so rather than focus on simply 'more' trust we focus on 'appropriate' trust, which means we do research to understand when humans use technology appropriately and when they don't," says Mayer.

The kind of language used in the contract solicitation—"socio-emotional," "shared awareness," "shared intent"—is an unhelpful way to think about how humans interact with machines, argues Noel Sharkey, a professor of artificial intelligence and robotics at the University of Sheffield. "When did machines get intention and awareness – have I been asleep for 100 years or what?" Sharkey asks in an email to Vocativ.

Sharkey says people already trust machines too much, and echoes Roff's concern about automation bias: the assumption that if the machine says something, it must be correct. "People need to trust machines less. One of the biggest problems with military personnel (or anyone) is automation biases of various kinds," says Sharkey. "So for military purposes we need a lot more research on how the human can stay in deliberative control (particularly of weapons) and not just fall into the trap of trusting machines."

Automation extends far beyond the battlefield, sometimes with profound implications. Peter Asaro, a philosopher who studies artificial intelligence, says questions need to be asked about how and when humans transfer control to machines in their everyday lives.

Asaro points to self-driving cars as a perfect example of that tension. "Presumably, the car is driving itself so the person behind the wheel can devote attention to something else: email, entertainment, a nap, etc. So how and when should the car inform the person that they are in a critical situation and the car cannot figure out on its own what to do?" he asks.
"Should it wake them up from a nap just before an accident, or just try to do its best? And what if someone just woke up from a nap and sees a truck in the oncoming lane that poses no threat but the person's natural reaction is to swerve violently away? Should the car allow them to override the automatic control when that poses a danger?"

There's also evidence that humans sometimes form pet-like attachments to their robots. One study (http://www.arl.army.mil/arlreports/2012/ARL-TR-5949.pdf) found that some soldiers "formed such a strong bond with their explosive-disposal robots that they insist on getting the same robot back after it is repaired or become sad if their damaged robot cannot be repaired." Another study reported that some Roomba owners cleaned up before turning the robot on as "a token of their appreciation for the hard cleaning work" the Roomba did.

Some experts argue that a robot's ability to deceive human beings is on the horizon. Asaro isn't concerned about lying robots, but he does note that robots might be able to get humans to do something they don't want to do, including, perhaps, things that many people would see as positive, like getting elderly people to take needed medications.

"We are probably a long way from robots that could trick people into doing things that are good for them—they would need much better capabilities for reading social cues, using social cues, as well as deciphering and manipulating human desires and incentives," Asaro says. But, he notes, "There could, of course, be all sorts of nefarious computers and robots that manipulate people, such as to get their credit card info and so forth." There could also be what he describes as "subtle manipulations, like product placements or in-app purchases."

"For instance," he says, "if you spill wine on your carpet and your house-cleaning robot starts recommending specific products to clean it, is that because it is the best cleaning method or is it due to a commercial agreement between the robot manufacturer and the carpet stain remover manufacturer? In such cases, will we require transparency about such agreements?"

The main issue for Roff, the professor at the University of Colorado, is that people need to remember what distinguishes them from robots. "I think technology is great," she says. "But we have to be guarded not to be automatons ourselves."

Source: http://www.scientificamerican.com/article/the-air-force-wants-you-to-trust-robots-should-you/