Dane A. Morey[1], Prerana Walli[1], Kenneth S. Cassidy[1], Priyanka K. Tewani[1], Morgan E. Reynolds[1], Samantha Malone[1], Mohammadreza Jalaeian[1], Michael F. Rayo[1], and Nicolette M. McGeorge[2]
Machines can never replace people, but they can change people’s roles and the kinds of work people do[3]. No matter how capable machines become at taking actions on their own, people will ultimately be held responsible for the actions of machines[4]. At minimum, this requires people to monitor or supervise machines, which often creates additional problems[5]. Machines that attempt to replace people not only fail to do so, they also forgo the benefits of being part of a team. Effective teams enable all members to contribute to the work so that the team accomplishes more than any of its members could alone. Likewise, machines should augment, amplify, or contribute in ways that aid, rather than replace, the work of people[6].
Effective teams are well-coordinated, which requires team members to interact with one another in ways that help the team align its activities. These interactions include making actions and intentions visible to others, monitoring others’ actions and intentions, synchronizing actions, aligning goals, shifting roles, and minimizing the burden these interactions place on other members of the team[7]. Machines that attempt to remain separate from or invisible to people do not eliminate the need for people to coordinate with machines; they only make that coordination difficult or impossible[8]. Instead, machines should expand, rather than reduce, the ways in which they can interact with people to coordinate activities[9].
To be well-coordinated, teams need to have a shared understanding of the overall task and the progress being made towards achieving it[7]. Team members need to clearly communicate their activities, status, and intentions so that others can seamlessly and efficiently coordinate their current and future actions without surprises. For machines, this requires more than simply making actions or computations visible. People must understand what the machine has done, what it is doing, and what it is going to do next[5].
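As a purely illustrative sketch (the class and field names below are hypothetical, not drawn from the cited work), a machine teammate might maintain an explicit, queryable account of its past, current, and intended activity rather than leaving that status implicit in its outputs:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class StatusReport:
    """What the machine has done, is doing, and intends to do next."""
    completed: List[str] = field(default_factory=list)  # actions already taken
    current: str = ""                                    # action in progress
    planned: List[str] = field(default_factory=list)     # intended next actions

    def summary(self) -> str:
        return (f"Done: {'; '.join(self.completed) or 'nothing yet'} | "
                f"Doing: {self.current or 'idle'} | "
                f"Next: {'; '.join(self.planned) or 'nothing queued'}")

# Example: a route-planning agent keeps its activity visible to the team
report = StatusReport(
    completed=["loaded map", "computed candidate routes"],
    current="ranking routes by fuel use",
    planned=["propose top route", "await operator approval"],
)
print(report.summary())
```

The point of the sketch is the structure, not the implementation: past, present, and intended actions are first-class, inspectable parts of the machine’s interface rather than something people must infer.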
Machines are literal-minded, meaning that they will only function in accordance with their programming and resultant models of the world. However, a machine cannot tell if its model of the world is in fact the world it is in[3]. Consequently, machines will often take the right actions (according to their models of the world) in the wrong world. People need to help keep machines aligned to the current situation and make sure they are not operating outside their limits[10]. To help people do this, machines must send signals or clues to people that convey when, how, and why they are operating outside their limits. However, like any team member, machines have limits to how well they can understand their own limits[11]. Therefore, machines will be unreliable at communicating their own limits and need help from people. At minimum, this requires some way for people to simultaneously understand the world, how the machine is “seeing” the world through its model, and how well the machine’s view of the world is aligned to the world itself[12].
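One minimal way to express this, sketched below with hypothetical names and a single scalar for simplicity, is to present side by side the value sensed from the world, the value the machine’s model is assuming, and how far apart the two may drift before the machine is operating outside its limits:

```python
from dataclasses import dataclass

@dataclass
class ModelFitness:
    """Pairs what the sensors report, what the model assumes, and how far apart they are."""
    observed: float   # value measured from the world
    assumed: float    # value the machine's model is currently using
    tolerance: float  # largest gap for which the model remains valid

    @property
    def misalignment(self) -> float:
        return abs(self.observed - self.assumed)

    @property
    def within_limits(self) -> bool:
        return self.misalignment <= self.tolerance

# Example: the model assumes calm winds, but the sensors say otherwise
wind = ModelFitness(observed=28.0, assumed=5.0, tolerance=10.0)
print(f"misalignment={wind.misalignment:.1f}, within_limits={wind.within_limits}")
```

Displaying all three together lets people judge not only what the machine concluded, but whether the model behind that conclusion still fits the world it is in.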
Effective teammates are able to direct the actions of others and be directed by others[7]. However, unlike human teams, in which responsibility is shared, people are always responsible for the outcomes of machine actions[4]. Therefore, people must always be in control. Otherwise, people will find ways to exert their influence indirectly, such as turning the machine off[13], or machines can cause catastrophic accidents, like the two Boeing 737 MAX crashes that killed a total of 346 people[14]. The ability to comply with people’s directions is a compulsory machine design requirement[9].
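A minimal sketch of this requirement, using hypothetical names, is a control loop in which machine proposals never displace a standing human directive:

```python
from enum import Enum, auto

class Source(Enum):
    MACHINE = auto()
    HUMAN = auto()

class Controller:
    """Machine proposals never override a standing human directive."""
    def __init__(self):
        self.active_directive = None
        self.active_source = None

    def propose(self, action: str, source: Source) -> str:
        # Human directives always take effect and displace machine proposals.
        if source is Source.HUMAN:
            self.active_directive, self.active_source = action, source
            return f"executing human directive: {action}"
        # Machine proposals apply only when no human directive is standing.
        if self.active_source is Source.HUMAN:
            return f"deferring to human directive: {self.active_directive}"
        self.active_directive, self.active_source = action, source
        return f"executing machine action: {action}"

ctrl = Controller()
print(ctrl.propose("descend to 10,000 ft", Source.MACHINE))
print(ctrl.propose("hold current altitude", Source.HUMAN))
print(ctrl.propose("descend to 10,000 ft", Source.MACHINE))  # deferred to the person
```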
As situations escalate from normal to exceptional, machines often get in the way more than they help[15]. Machines stick to their set of rules, but successfully responding to these exceptional situations often requires changing procedures, sacrificing some goals, or otherwise breaking the normal rules[16]. Machines must allow (or even help) people break the machine’s rules during exceptional circumstances; otherwise, machines are likely to exacerbate the situation with additional burden[15].
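One way to make this concrete, again with hypothetical names, is a guard that enforces its rule under normal conditions but lets a person explicitly waive it during an exception, with the waiver recorded rather than silently allowed:

```python
class GuardedAction:
    """Normal rule: refuse actions outside the envelope.
    Exceptional rule: a person may explicitly waive the check, and the waiver is logged."""
    def __init__(self, limit: float):
        self.limit = limit
        self.waiver_log: list[str] = []

    def execute(self, value: float, waived_by: str = "") -> str:
        if value <= self.limit:
            return f"executed at {value}"
        if waived_by:
            self.waiver_log.append(f"{waived_by} waived limit {self.limit} for {value}")
            return f"executed at {value} under waiver by {waived_by}"
        return f"refused: {value} exceeds limit {self.limit} (waiver required)"

pump = GuardedAction(limit=100.0)
print(pump.execute(80.0))                             # normal operation
print(pump.execute(130.0))                            # blocked under normal rules
print(pump.execute(130.0, waived_by="charge nurse"))  # sanctioned exception, logged
```

The design choice worth noting is that the exceptional path is supported, visible, and attributable, rather than forcing people to defeat the machine to get the work done.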
Attention is limited. Effective teams send signals to help each other shift and focus their attention on what is important; however, machines often struggle to understand what is important[17]. Machines that frequently send unhelpful alerts can annoy people and even impair their ability to figure out what is important[18]. Therefore, people must be able to determine whether the signals machines send are important without having to fully shift their attention from what they are currently doing[17].
Effective teammates assess whether their message or action is important enough to interrupt what another is doing, which depends upon both the importance of the interruption and the importance of the other’s current actions[7]. However, machines are often poor at gauging interruptibility, which can exacerbate periods of high workload with additional unhelpful disruptions[17]. Machines should alert people to important changes in the situation in the least disruptive manner possible, without loss of information or urgency, so that managing the interruptions themselves does not become an additional burden.
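As an illustration of these two ideas together, the hypothetical function below keeps every alert but scales how forcefully it interrupts based on the alert’s urgency relative to the operator’s current workload (both on made-up 0-10 scales):

```python
def presentation(urgency: int, operator_load: int) -> str:
    """Pick the least disruptive presentation that still conveys the alert.

    urgency and operator_load are hypothetical 0-10 ratings.
    Every alert is retained; only how forcefully it interrupts changes.
    """
    if urgency >= 8:
        return "modal alarm"           # always interrupt for critical events
    if urgency >= operator_load:
        return "highlighted banner"    # important relative to current work
    return "peripheral status change"  # visible without demanding attention

for urgency, load in [(9, 3), (5, 2), (4, 8)]:
    print(f"urgency={urgency}, load={load} -> {presentation(urgency, load)}")
```

Nothing is suppressed; the alert that arrives during high workload is simply moved to the periphery, where the person can notice and triage it without abandoning the current task.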
Though machines can detect, process, and display massive volumes of data, they struggle to reliably make sense of what that data means, especially when something is novel or unexpected. For example, a substantial proportion of AI/ML research continues to focus on image classification[19], something so basic for people that we do not consider it to be a decision. Machines still largely rely upon people to understand the bigger picture, but then data overload becomes a problem. The volume of data machines display can overwhelm people and keep them from seeing the bigger picture, yet machines that reduce this volume risk removing crucial data, which also prevents people from seeing the bigger picture[20]. Effective machines must organize, but not reduce, the data available in a way that helps people see the bigger picture without getting lost in the details and see the details without losing sight of the bigger picture[21].
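A simple sketch of organizing without reducing, using hypothetical data, is to group readings under summaries that give the big picture while retaining every underlying value for drill-down:

```python
from collections import defaultdict

def organize(readings: list[tuple[str, float]]) -> dict[str, dict]:
    """Group readings by subsystem: a summary for the big picture,
    with every underlying value retained for drill-down."""
    groups: dict[str, list[float]] = defaultdict(list)
    for subsystem, value in readings:
        groups[subsystem].append(value)
    return {
        name: {"count": len(vals), "max": max(vals), "values": vals}
        for name, vals in groups.items()
    }

readings = [("hydraulics", 2.1), ("hydraulics", 2.4), ("electrical", 0.9)]
overview = organize(readings)
print({k: (v["count"], v["max"]) for k, v in overview.items()})  # big picture
print(overview["hydraulics"]["values"])                          # full detail preserved
```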
Every perspective is limited, both revealing and hiding certain aspects of the current situation[22]. Effective teams overcome the limits of each perspective by supporting people in seeing and contrasting multiple viewpoints. Machines need to be explicitly designed to support switching, comparing, and combining different viewpoints. Otherwise, switching between viewpoints can be too costly or disruptive[23]. Machines must provide low-cost ways for people to shift perspectives and highlight when it is valuable to do so.
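The sketch below, with hypothetical data, illustrates low-cost perspective shifting: the same underlying records are rendered through interchangeable views rather than being locked into a single display:

```python
# Different "views" over the same underlying data, switchable at low cost.
flights = [
    {"id": "UA101", "fuel": 0.42, "delay_min": 12},
    {"id": "DL202", "fuel": 0.15, "delay_min": 0},
]

views = {
    "by_fuel":  lambda fs: sorted(fs, key=lambda f: f["fuel"]),               # fuel-critical first
    "by_delay": lambda fs: sorted(fs, key=lambda f: f["delay_min"], reverse=True),  # most delayed first
}

def render(view_name: str) -> list[str]:
    """Render the shared data through the named perspective."""
    return [f["id"] for f in views[view_name](flights)]

print(render("by_fuel"))   # perspective 1: which flight is fuel-critical
print(render("by_delay"))  # perspective 2: which flight is most delayed
```

Because every view draws on the same data, shifting or contrasting perspectives costs a single call rather than a disruptive reconfiguration.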
The Ohio State University, Columbus, OH, USA
Charles River Analytics, Cambridge, MA, USA
Woods, D. D., & Hollnagel, E. (2006). Joint Cognitive Systems: Patterns in Cognitive Systems Engineering. CRC Press. https://doi.org/10.1201/9781420005684
Murphy, R., & Woods, D. D. (2009). Beyond Asimov: The Three Laws of Responsible Robotics. IEEE Intelligent Systems, 24(4), 14–20. https://doi.org/10.1109/MIS.2009.69
Sarter, N. B., Woods, D. D., & Billings, C. E. (1997). Automation Surprises. Handbook of Human Factors & Ergonomics, 2. https://doi.org/10.1201/9780849375477.ch587
Woods, D. D. (1985). Cognitive Technologies: The Design of Joint Human-Machine Cognitive Systems. AI Magazine, 6(4), 1–7. https://doi.org/10.1609/aimag.v6i4.511
Klein, G., Feltovich, P. J., Bradshaw, J. M., & Woods, D. D. (2005). Common Ground and Coordination in Joint Activity. In W. B. Rouse & K. R. Boff (Eds.), Organizational Simulation (pp. 139–184). John Wiley & Sons, Inc. https://doi.org/10.1002/0471739448.ch6
Roth, E. M., Bennett, K. B., & Woods, D. D. (1987). Human interaction with an “intelligent” machine. International Journal of Man-Machine Studies, 27(5), 479–525. https://doi.org/10.1016/S0020-7373(87)80012-3
Johnson, M., Vignatti, M., & Duran, D. (2020). Understanding Human-Machine Teaming through Interdependence Analysis. In Contemporary Research. CRC Press.
Hoffman, R. R., Feltovich, P. J., Ford, K. M., & Woods, D. D. (2002). A rose by any other name... would probably be given an acronym [cognitive systems engineering]. IEEE Intelligent Systems, 17(4), 72–80.
Woods, D. D. (2018). The theory of graceful extensibility: Basic rules that govern adaptive systems. Environment Systems and Decisions, 38(4), 433–457. https://doi.org/10.1007/s10669-018-9708-3
Rayo, M. F., Fitzgerald, M. C., Gifford, R. C., Morey, D. A., Reynolds, M. E., D’Annolfo, K., & Jefferies, C. M. (2020). The Need for Machine Fitness Assessment: Enabling Joint Human-Machine Performance in Consumer Health Technologies. Proceedings of the International Symposium on Human Factors and Ergonomics in Health Care, 9, 40–42. https://doi.org/10.1177/2327857920091041
Christoffersen, K., & Woods, D. D. (2002). How to make automated systems team players. In E. Salas (Ed.), Advances in Human Performance and Cognitive Engineering Research (Vol. 2, pp. 1–12). Emerald Group Publishing Limited. https://doi.org/10.1016/S1479-3601(02)02003-9
FAA. (2020). Preliminary Summary of the FAA’s Review of the Boeing 737 MAX (Preliminary Summary version 1). Federal Aviation Administration.
Woods, D. D., & Patterson, E. S. (2000). How Unexpected Events Produce an Escalation of Cognitive and Coordinative Demands. In P. A. Hancock & P. A. Desmond (Eds.), Stress, Workload, and Fatigue (pp. 290–302). CRC Press. https://doi.org/10.1201/b12791-2.3
Chuang, S., Chang, K.-S., Woods, D. D., Chen, H.-C., Reynolds, M. E., & Chien, D.-K. (2019). Beyond surge: Coping with mass burn casualty in the closest hospital to the Formosa Fun Coast Dust Explosion. Burns, 45(4), 964–973. https://doi.org/10.1016/j.burns.2018.12.003
Woods, D. D. (1995a). The alarm problem and directed attention in dynamic fault management. Ergonomics, 38(11), 2371–2393. https://doi.org/10.1080/00140139508925274
Rayo, M. F., & Moffatt-Bruce, S. D. (2015). Alarm system management: Evidence-based guidance encouraging direct measurement of informativeness to improve alarm response. BMJ Quality & Safety, 24(4), 282–286. https://doi.org/10.1136/bmjqs-2014-003373
Arrieta, A. B., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., Garcia, S., Gil-Lopez, S., Molina, D., Benjamins, R., Chatila, R., & Herrera, F. (2020). Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58, 82–115. https://doi.org/10.1016/j.inffus.2019.12.012
Woods, D. D., Patterson, E. S., & Roth, E. M. (2002). Can We Ever Escape from Data Overload? A Cognitive Systems Diagnosis. Cognition, Technology & Work, 4, 22–36. https://doi.org/10.1007/s101110200002
Woods, D. D. (1995b). Toward a Theoretical Base for Representation Design in the Computer Medium: Ecological Perception and Aiding Human Cognition. In Global Perspectives on the Ecology of Human-Machine Systems. CRC Press.
Hoffman, R. R., & Woods, D. D. (2011). Beyond Simon’s Slice: Five Fundamental Trade-Offs that Bound the Performance of Macrocognitive Work Systems. IEEE Intelligent Systems, 26(6), 67–71. https://doi.org/10.1109/MIS.2011.97
Woods, D. D. (1984). Visual momentum: A concept to improve the cognitive coupling of person and computer. International Journal of Man-Machine Studies, 21(3), 229–244. https://doi.org/10.1016/S0020-7373(84)80043-7