Psychology in AI Part 1: Why Humans & Automated Cars are Not (Yet) A Good Team

July 12, 2023

 Psychology in AI designs

        There is a popular idea that artificial intelligence, or AI, can replace humans. Is it true, however, that AI can always outperform humans at everything? And is AI completely trustworthy? To explore these questions, this article discusses the implementation of AI in the design of automated cars and offers some concrete recommendations for improving such systems.

Read also:

Psychology in AI Part 2: Can Artificial Players always Beat Humans in a Complex Language Game?

1. The trend of implementing AI in car design

    The interest in building (semi-)automated cars has increased in the past several years, as they are expected to provide us with safer transportation (Piao et al., 2016). The implementation of AI in automated cars is also expected to increase their inclusiveness, since functionalities such as auto-parking (Tenhundfeld et al., 2019; Endsley, 2017; Piao et al., 2016), adaptive cruise control, and summon (Endsley, 2017) are all intended to let the car operate autonomously with little to no human intervention. In principle, anyone, regardless of age or ability, should be able to drive such a car without significant problems.

2. Concerns regarding their safety level

        Despite all of the promised benefits, a 2018 incident in which a self-driving Uber car killed a pedestrian in Arizona has prompted many academics to investigate the limitations of this technology. Several topics have been discussed for this purpose, ranging from the importance of addressing the macrosystem when designing the technology (Banks, Stanton, & Plant, 2019), to the necessity of better driver training (Casner & Hutchins, 2019; Endsley, 2017), to the need to improve the AI itself so that it promotes driving awareness (Endsley, 2017).

3. Why are AI and humans not (yet) a good team?

a. Overtrusting the system

        From a psychological perspective, one of the most important ingredients of a solid team is trust. However, too much trust can sometimes be toxic for a relationship, and human-machine interaction is no exception. Companies frequently overestimate the reliability of their innovations without giving drivers proper cautions, which can foster a mental model among inexperienced drivers that automation can replace humans (Woods & Hollnagel, 2006).

        This high degree of trust encourages drivers to shift their role from active, attentive driver to something more like a passive passenger (Ebnali, Hulme, Ebnali-Heidari, & Mazloumi, 2019; Endsley, 2017; Piao et al., 2016). As a result, drivers frequently perform less visual monitoring of the situation, even though such monitoring is still needed when driving in automation mode (Hergeth et al., 2016).

        Such behavior is dangerous because the AI is only a set of algorithms and may abruptly switch to manual mode when it encounters a circumstance its programming does not cover (Woods & Hollnagel, 2006). When that happens, the system needs the human to fully take over control.

        However, in most cases the drivers have no idea what action is required, because they overtrusted the system and were fully occupied with other activities while in automation mode, leaving them with inadequate situation awareness (Endsley, 2017). It is therefore critical to instill in drivers from the beginning that the automation system is merely a team player, not a human substitute.

 

b. Lack of predictability and observability

         In psychology, predictability and observability play a crucial role in establishing good cooperation, and human-machine teamwork is no exception. To be a good assistant, the AI in an automated car should meet the following criteria: it should be transition-oriented, future-oriented, and pattern-based (Woods & Hollnagel, 2006).

           However, most systems in automated cars are not yet observable and predictable enough to be considered safe for everybody. Tesla, for example, is not yet transition-oriented, since it gives no early warning before an unexpected transition from automation to manual mode. Its indicators of transition activity are limited to a visual display and a low-tone audio cue that is not loud enough to be heard (Endsley, 2017).

        Without sufficient notice, drivers cannot accurately predict what will happen next, which suggests that the system is not yet future-oriented. Tesla is also not yet pattern-based, since no overview is provided of the factors that trigger a mode switch. As a consequence, drivers are unable to recognize situations that may cause the automation mode to halt.
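        To make these three criteria more concrete, the sketch below (written in Python purely as an illustration; the class, trigger, and message names are hypothetical and do not reflect any real vehicle API) shows what a transition warning could look like if it named the triggering pattern, announced the hand-over early, and told the driver what will happen next:

from dataclasses import dataclass

# Hypothetical catalogue of patterns that can force a switch to manual mode (pattern-based).
KNOWN_TRIGGERS = {
    "faded_lane_markings": "faded lane markings on this road",
    "heavy_rain": "heavy rain on my sensors",
    "sharp_curve": "a curve that is sharper than I can handle",
}

@dataclass
class TransitionWarning:
    trigger: str               # which known pattern caused the hand-over
    seconds_until_manual: int  # warn well before the switch happens (transition-oriented)

    def announce(self) -> str:
        # Future-oriented: say what will happen next and what the driver must do.
        reason = KNOWN_TRIGGERS.get(self.trigger, "a situation I cannot handle")
        return (
            f"Heads up: I detected {reason}. "
            f"I will hand control back to you in {self.seconds_until_manual} seconds. "
            "Please place your hands on the wheel and watch the road."
        )

# Example: a loud, spoken, early warning instead of a silent switch to manual mode.
print(TransitionWarning(trigger="sharp_curve", seconds_until_manual=10).announce())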

c. Lack of transparency

        Most AI systems in automated cars do not give drivers adequate information about software updates. The update information is presented only in-vehicle, with no detailed explanation of the functional changes, and it is not available online (Endsley, 2017). An experienced driver may recognize the new features of a revised version, but what about inexperienced drivers?

 4. Recommendations for further designs

           Considering the factors outlined above, I now present a design recommendation for automated cars that adopts a human-machine interaction strategy similar to Google Voice. Since many incidents are caused by a dual-decision problem, in which drivers have no clue whether they should brake manually or let the system do it because of the system's lack of transparency (Endsley, 2017), I propose that the AI should clearly inform drivers about every system upgrade as well as the limitations of each updated feature.

        This information should be provided in a friendly and natural way through audio, so that drivers do not simply ignore it. With enough objective information, drivers' tendency to overtrust the automation should be reduced, allowing them to stay actively connected with the AI as their team partner. The following lines are an example of how the AI could inform drivers every time they activate the ACC (adaptive cruise control) function:

 

“Hello, the ACC function has been activated. If there's a car in front of you, I'll slow down. However, you still have to be attentive since I cannot automatically stop when a car or anything else approaches us unexpectedly.”

 

        Such a simple audio message may significantly help drivers establish healthy boundaries with their automation system, reducing the chance of overtrusting the AI. In addition, whenever a system upgrade occurs, the car should also notify the driver and explicitly remind them that the AI is not a human substitute:

 

"Hi! My system has been upgraded. There are some modifications. I can now manage the sharp curves. You may relax a little, but remember to keep your hands on the wheel and your foot on the brake. Also, don't forget to stay aware of your environment. Cause once again, I can't manage any sudden situational change and the responsibility remains yours." 

 

        By doing so, the AI can give drivers the impression that their car and its features can reliably help them drive safely, while continuing to underline that the AI is only a partner and that humans cannot completely rely on it.

          Last but not least, to improve observability and inter-predictability between human and machine, the AI should also tell drivers why a switch from automation to manual mode is needed. Drivers will then have directed attention and understand the situation, allowing them to take the required actions faster when the automation reaches its limits. These qualities are needed to make automation a helpful and supportive team player for humans (Woods & Hollnagel, 2006).
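        The following is a minimal sketch (again in Python, purely as an illustration; the function names are hypothetical and do not correspond to any real vehicle API) of a voice-notification layer that speaks up when a feature is activated, when the software is upgraded, and when a take-over is needed, in line with the recommendations above:

def speak(message):
    # Placeholder for the car's text-to-speech output; assumed to be loud and clear.
    print("VOICE:", message)

def on_feature_activated(feature, limitation):
    # Remind the driver of the feature's limits at the moment it is switched on.
    speak(f"{feature} has been activated. Keep in mind: {limitation}")

def on_software_upgraded(changes):
    # Summarize what changed and restate that the system is not a human substitute.
    speak("My system has been upgraded. " + " ".join(changes)
          + " Remember to keep your hands on the wheel; the responsibility remains yours.")

def on_takeover_needed(reason, seconds_left):
    # Explain why control is being handed back, so the driver's attention is directed.
    speak(f"Please take over within {seconds_left} seconds, because I detected {reason}.")

# Example usage
on_feature_activated("Adaptive cruise control",
                     "I cannot stop automatically if something cuts in unexpectedly.")
on_software_upgraded(["I can now manage sharp curves."])
on_takeover_needed("faded lane markings ahead", 10)

        A design like this keeps the messages short and spoken aloud, so the driver receives them without having to look at a display.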

 Disclaimer:

        This article is a revised and improved version of an assignment I completed for the Resilience Engineering course, supervised by Prof. Jan Maarten Schraagen, while pursuing my master's degree in Human Factors and Engineering Psychology at the University of Twente. I received a fairly good grade for it (8.1/10 in the Dutch grading system), so I feel confident enough to post my opinion here (with some revisions).

 

References


Banks, V. A., Stanton, N. A., & Plant, K. L. (2019). Who is responsible for automated driving? A macro-level insight into automated driving in the United Kingdom using the risk management framework and social network analysis. Applied Ergonomics, 81, 102904. DOI: https://doi.org/10.1016/j.apergo.2019.102904.

Casner, S. M., & Hutchins, E. L. (2019). What do we tell the drivers? Toward minimum driver training standards for partially automated cars. Journal of Cognitive Engineering and Decision Making, 13(2), 55-66. DOI: https://doi.org/10.1177/1555343419830901.

Ebnali, M., Hulme, K., Ebnali-Heidari, A., & Mazloumi, A. (2019). How does training effect users’ attitudes and skills needed for highly automated driving? Transportation Research Part F, 66, 184-195. DOI: https://doi.org/10.1016/j.trf.2019.09.001.

Endsley, M. R. (2017). Autonomous driving systems: A preliminary naturalistic study of the Tesla Model S. Journal of Cognitive Engineering and Decision Making, 11(3), 225-238. DOI: https://doi.org/10.1177/1555343417695197.

Feltovich, P. J., & Hoffman, R. R. (2004). Keeping it too simple: How the reductive tendency affects cognitive engineering. IEEE Intelligent Systems, 19(3), 90-94. DOI: https://doi.org/10.1109/MIS.2014.14.

Hergeth, S., Lorenz, L., Vilimek, R., & Krems, J. F. (2016). Keep your scanners peeled: Gaze behavior as a measure of automation trust during highly automated driving. Human Factors, 58(3), 509–519. DOI: https://doi.org/10.1177/0018720815625744.

Piao, J., McDonald, M., Hounsell, N., Graindorge, M., Graindorge, T., & Malhene, N. (2016). Public views towards implementation of automated vehicles in urban areas. Transportation Research Procedia, 2168-2177. DOI: https://doi.org/10.1016/j.trpro.2016.05.232.

Tenhundfeld, N. L., de Visser, E. J., Haring, K. S., Ries, A. J., Finomore, V. S., & Tossell, C. C. (2019). Calibrating trust in automation through familiarity with the autoparking feature of a Tesla Model X. Journal of Cognitive Engineering and Decision Making, 20(10), 1-16. DOI: https://doi.org/10.1177/1555343419869083.

Woods, D. D., & Hollnagel, E. (2006). Joint Cognitive Systems: Patterns in Cognitive Systems Engineering. Boca Raton: Taylor & Francis.



 
