Article Evaluation

Culture of violence theory

  • The article is consistent with the overall theme of the topic. It is concise and does not introduce misleading information.
  • The article maintains a neutral position and does not push the reader to take a side.
  • The article keeps a solid balance and does not overemphasize any single viewpoint.
  • The embedded links are fully functional and the citations support the claims.
  • The sources accurately represent the information and are neutral.
  • All information is up to date.
  • Talk page conversations are within the scope of WikiProject Psychology; no theoretical or topic discrepancies are discussed.
  • The article is part of the WikiProjects Sociology, Psychology, Crime, Biology, and Anthropology.
  • The article is fairly consistent with what was discussed in class.

Article Evaluation

  1. Automation bias - Automation-induced complacency is the process by which an operator expects automation to have perfect reliability and consequently takes a less active role when interacting with the system. If the operator's perception of system reliability exceeds the system's actual level of reliability, errors can result. Trust is an integral component of developing appropriate human-automation interactions, and it is not given enough attention in the automation complacency segment of the article. Specifically, overtrust in automation can lead to automation complacency, while undertrust in automation can lead to unwarranted skepticism about system performance. In addition, systems that convey reliability information can often assist with appropriate trust calibration, reducing the consequences associated with undertrust (e.g., disuse of automation and increased workload) or overtrust (i.e., automation complacency). The article would therefore benefit from including the trust literature in the overall discussion. In addition, the article presents an opportunity to tie in social psychology constructs, such as persuasion, when describing the relationship between automation reliability and its effects on operator interactions.
    • References establish a strong, empirical framework for automation complacency but do not include literature on trust.
    • The links are functional.
    • The article was nominated as a good article but did not meet the good article criteria at the time; the review page lists suggestions for improvement, and the article may be renominated once they are addressed.
    • The article is part of the WikiProject Psychology and WikiProject Philosophy groups.
  2. Social Intuition occurs when individuals search for evidence to support an intuitive belief or position. The article does not frame the discussion in terms of how justifying pre-defined beliefs can benefit or harm social interactions. In addition, the persuasion literature may shed light on how individuals persuade themselves to maintain moral positions in the face of contradictory evidence.
    • The article is steeped in Kohlberg's moral reasoning framework and Haidt's social intuitionist model.
    • The article is part of WikiProject Psychology but is rated as a stub. The article does not have any active conversation about its content.
    • The links are functional.

Persuasion Project

Copied from original source, automation bias, for editing purposes.

The concept of automation bias is viewed as overlapping with automation-induced complacency, also known more simply as automation complacency. Like automation bias, it is a consequence of the misuse of automation and involves problems of attention. While automation bias involves a tendency to trust decision-support systems, automation complacency involves insufficient attention to and monitoring of automation output, usually because that output is viewed as reliable.[1] "Although the concepts of complacency and automation bias have been discussed separately as if they were independent," writes one expert, "they share several commonalities, suggesting they reflect different aspects of the same kind of automation misuse." It has been proposed, indeed, that the concepts of complacency and automation bias be combined into a single "integrative concept" because these two concepts "might represent different manifestations of overlapping automation-induced phenomena" and because "automation-induced complacency and automation bias represent closely linked theoretical concepts that show considerable overlap with respect to the underlying processes."[2]

Automation complacency has been defined as "poorer detection of system malfunctions under automation compared with under manual control." NASA's Aviation Safety Reporting System (ASRS) defines complacency as "self-satisfaction that may result in non-vigilance based on an unjustified assumption of satisfactory system state." Several studies have indicated that it occurs most often when operators are engaged in both manual and automated tasks at the same time. This complacency can be sharply reduced when automation reliability varies over time instead of remaining constant, but is not reduced by experience and practice. Both expert and inexpert participants can exhibit automation bias as well as automation complacency. Neither of these problems can be easily overcome by training.[2]

The term "automation complacency" was first used in connection with aviation accidents or incidents in which pilots, air-traffic controllers, or other workers failed to check systems sufficiently, assuming that everything was fine when, in reality, an accident was about to occur. Operator complacency, whether or not automation-related, has long been recognized as a leading factor in air accidents.[2]

To some degree, user complacency offsets the benefits of automation, and when an automated system's reliability level falls below a certain level, automation will no longer be a net asset. One 2007 study suggested that this crossover occurs when the reliability level falls to approximately 70%. Other studies have found that automation with a reliability level below 70% can still be of use to persons with access to the raw information sources, which can be combined with the automation output to improve performance.[2]
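To make the crossover claim concrete, here is a minimal, purely illustrative sketch of how aided detection performance might be compared against a manual baseline. The detection probabilities and the residual-attention factor are assumptions chosen for illustration, not values taken from the 2007 study.

```python
# Illustrative sketch only: a toy model of when automated monitoring stops
# paying off, loosely inspired by the ~70% reliability figure mentioned above.
# All probabilities below are assumed for illustration, not taken from the study.

def assisted_detection(reliability, residual_attention=0.5, manual_detection=0.8):
    """Probability a malfunction is caught when the operator relies on automation.

    The automation flags the malfunction with probability `reliability`;
    if it misses, a partially complacent operator catches it with a reduced
    probability (manual_detection scaled by residual_attention).
    """
    return reliability + (1 - reliability) * manual_detection * residual_attention

def manual_baseline(manual_detection=0.8):
    """Probability a fully attentive operator catches the malfunction unaided."""
    return manual_detection

if __name__ == "__main__":
    for reliability in [0.5, 0.6, 0.7, 0.8, 0.9]:
        aided = assisted_detection(reliability)
        print(f"reliability={reliability:.1f}  aided={aided:.2f}  "
              f"manual={manual_baseline():.2f}  net benefit={aided - manual_baseline():+.2f}")
```

With these assumed numbers the crossover falls near 67% reliability, close to the figure quoted above, but its exact location depends entirely on the assumed detection and attention parameters.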

Planned contributions

Define automation transparency. Briefly explain how it differs from automation bias and summarize literature.

Discuss why automation complacency is important in the broader context of human-automation interaction.

Discuss components that can influence automation complacency (automation reliability and trust).

Discuss common methods for measuring automation complacency and the need for future research.

Will discuss factors influencing automation complacency, such as the difference between automation bias and automation complacency, automation reliability, the first-failure effect, expertise, scales that measure automation complacency, and trust.

Operational definitions of automation complacency vary. Wiener (1981) suggested that automation complacency changes as a function of skepticism: when an operator is not skeptical of a system's performance, they are more likely to engage in other tasks. Other definitions consider changes in the relationship between the operator and the system. Specifically, when the operator performs a task and then delegates that same task to an automated system, the operator becomes a monitor (Farrell & Lewandowsky, 2000). Taken together, automation complacency results in the failure of an operator to detect or respond to changes in an automated system. This failure is a form of system misuse, in which the operator's attention allocation falls beneath the recommended threshold for detecting changes in a system's performance. For complacency to occur, the operator must first experience high levels of mental workload. Once the operator begins to exhaust mental resources, automation becomes valuable because the operator can leverage it to assist with performance. In turn, the operator's perceptions of the automated system's reliability can influence the way in which the operator interacts with the system. Endsley (2017) describes how high system reliability can lead users to disengage from monitoring, increasing monitoring errors, decreasing situational awareness, and interfering with an operator's ability to re-assume control of the system in the event performance limitations have been exceeded (i.e., the automation conundrum). In contrast, when operators interact with an automated system perceived to have low reliability, they may be skeptical of the system's performance and therefore allocate more attentional resources to monitoring the system in case the automation fails.
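As a rough illustration of the reliability-attention relationship described above, the following sketch assumes that monitoring attention falls as perceived reliability rises and that failure detection scales with attention. The functional forms and numbers are invented for the example and are not drawn from Endsley (2017) or any other source.

```python
# Illustrative sketch only: how perceived reliability might trade monitoring
# attention against concurrent-task demands. The functional forms and numbers
# are assumptions made for illustration, not an established model.

def monitoring_attention(perceived_reliability, floor=0.1):
    """Assumed: operators withdraw attention roughly in proportion to how
    reliable they believe the automation to be, down to a small floor."""
    return max(floor, 1.0 - perceived_reliability)

def failure_detection_probability(attention, ceiling=0.95):
    """Assumed: the chance of catching an automation failure grows with the
    attention devoted to monitoring, up to a ceiling."""
    return ceiling * attention

if __name__ == "__main__":
    for perceived in [0.5, 0.7, 0.9, 0.99]:
        attn = monitoring_attention(perceived)
        print(f"perceived reliability={perceived:.2f}  "
              f"monitoring attention={attn:.2f}  "
              f"P(detect failure)={failure_detection_probability(attn):.2f}")
```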

Perceptions of reliability, in general, can result in a form of automation irony, in which more automation can decrease workload but increase the opportunity for monitoring errors, while low levels of automation can increase workload but decrease the opportunity for monitoring errors (Bainbridge, 1983). Take, for example, a pilot flying through inclement weather, in which continuous thunder interferes with the pilot's ability to understand information transmitted by an air traffic controller (ATC). No matter how much effort is allocated to understanding the information transmitted by ATC, the pilot's performance is limited by the source of information needed for the task. The pilot therefore has to rely on weather pattern and flight path information conveyed by automated gauges in the cockpit. If the pilot perceives the weather information and subsequent flight maneuvers to be highly reliable, the pilot may reduce the effort needed to understand ATC and the flight gauges simultaneously. Moreover, if the flight gauges are perceived to be highly reliable, the pilot may ignore the weather information entirely and devote mental resources to deciphering the information transmitted by ATC. In doing so, the pilot runs the risk of missing critical information from the automated weather gauges, becoming a complacent monitor and thereby increasing the chance of committing monitoring errors in the event the weather gauges are incorrect. If, however, the pilot perceives the weather gauges and flight path information to be unreliable, the pilot will need to exert additional resources to interpret both ATC and the weather gauges. This creates scenarios in which the operator may expend unnecessary resources when the automation is in fact reliable, but it also increases the odds of identifying potential errors in the weather gauges should they occur. To calibrate the pilot's perception of reliability, automation should be designed to maintain workload at appropriate levels while also ensuring that the operator remains engaged with monitoring tasks. It has therefore been recommended that system reliability fluctuate over time to maintain the operator's attention: the operator should be less likely to disengage from monitoring a system whose reliability changes than one with consistent reliability (Parasuraman, 1993).
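The Parasuraman (1993) recommendation can be pictured with a toy simulation in which attention decays while the automation keeps succeeding and recovers when reliability visibly shifts. All decay rates, reliabilities, and schedules below are assumed values, so the output is qualitative only and is not a reconstruction of the original experiment.

```python
# Illustrative sketch only: a toy Monte Carlo comparison of constant versus
# variable automation reliability, echoing the Parasuraman (1993) point above.
# Decay rates, reliabilities, and schedules are assumed values.

import random

def run_session(reliability_schedule, trials=1000, seed=0):
    """Simulate one monitoring session.

    Attention drifts down while the automation keeps succeeding and recovers
    whenever the operator notices a change (a detected failure or a reliability
    shift). Returns the fraction of automation failures the operator detects.
    """
    rng = random.Random(seed)
    attention, detected, failures = 1.0, 0, 0
    previous_reliability = None
    for trial in range(trials):
        reliability = reliability_schedule(trial)
        if previous_reliability is not None and reliability != previous_reliability:
            attention = min(1.0, attention + 0.3)    # a visible shift re-engages the operator
        previous_reliability = reliability
        if rng.random() > reliability:               # automation fails this trial
            failures += 1
            if rng.random() < attention:             # caught only if still monitoring
                detected += 1
                attention = 1.0
        else:
            attention = max(0.2, attention * 0.97)   # complacency creeps in after successes
    return detected / failures if failures else float("nan")

def constant(trial):
    return 0.95                                      # reliability never changes

def variable(trial):
    return 0.95 if (trial // 100) % 2 == 0 else 0.75  # reliability shifts every 100 trials

print(f"constant reliability: detection rate = {run_session(constant):.2f}")
print(f"variable reliability: detection rate = {run_session(variable):.2f}")
```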


An operator's trust in the system can also lead to different interactions with the system, including system use, misuse, disuse, and abuse (Parasuraman & Riley, 1997). Automation, in general, can assist a user with information processing tasks, though the user's trust in the system is an important determinant of how that assistance is used. Information processing occurs in several stages: information acquisition, information analysis, decision making and action selection, and action implementation (Wickens, Hollands, Banbury, & Parasuraman, 2015). For example, information acquisition, the first stage of information processing, is the process by which a user registers input via the senses (Wickens et al., 2015). An automated engine gauge might assist the user with information acquisition through simple interface features, such as highlighting changes in the engine's performance, thereby directing the user's selective attention to changes in the engine when necessary. When faced with turbulence originating from the aircraft, a pilot may tend to overtrust the engine gauge and assume that the engine is the sole cause of the turbulence, losing sight of other possible causes (a form of automation complacency and misuse). If the pilot devotes time to interpreting the engine gauge and manipulating the aircraft accordingly, only to discover that the turbulence has not changed, the pilot may be inclined to ignore future error recommendations conveyed by the engine gauge, a form of undertrust that leads to disuse.
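A simple way to picture the trust dynamics sketched above is a toy update rule in which trust drifts toward the automation's observed performance. The learning rate, the thresholds, and the overtrust/undertrust labels are assumptions made for illustration rather than a model taken from Parasuraman and Riley (1997).

```python
# Illustrative sketch only: a simple trust-updating rule in the spirit of the
# use/misuse/disuse framing above. The update rule, thresholds, and labels are
# assumptions for illustration, not a published model.

def update_trust(trust, automation_was_correct, learning_rate=0.1):
    """Nudge trust toward 1 after a correct recommendation and toward 0 after an error."""
    target = 1.0 if automation_was_correct else 0.0
    return trust + learning_rate * (target - trust)

def reliance_mode(trust, actual_reliability):
    """Label the interaction by comparing trust with the system's true reliability."""
    if trust > actual_reliability + 0.15:
        return "overtrust (risk of misuse / complacency)"
    if trust < actual_reliability - 0.15:
        return "undertrust (risk of disuse)"
    return "calibrated use"

if __name__ == "__main__":
    trust, actual_reliability = 0.95, 0.70   # operator starts out overtrusting
    outcomes = [True, True, False, True, False, False, True, False]
    for step, correct in enumerate(outcomes, start=1):
        trust = update_trust(trust, correct)
        print(f"step {step}: trust={trust:.2f}  {reliance_mode(trust, actual_reliability)}")
```

In this example, trust gradually converges toward the system's actual reliability as the operator accumulates feedback, which is one way of framing the calibration goal discussed in the planned contributions.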



Discuss how user trust can also influence performance when interacting with an automated aid.


Additional Sources:

Hoff, K. A., & Bashir, M. (2015). Trust in automation: Integrating empirical evidence on factors that influence trust. Human Factors, 57(3), 407-434.

Parasuraman, R., & Manzey, D. H. (2010). Complacency and bias in human use of automation: An attentional integration. Human Factors, 52(3), 381-410.

Wickens, C. D., Hollands, J. G., Banbury, S., & Parasuraman, R. (2015). Engineering psychology & human performance. Psychology Press.

Wickens, C. D., Clegg, B. A., Vieane, A. Z., & Sebok, A. L. (2015). Complacency and automation bias in the use of imperfect automation. Human Factors, 57(5), 728-739.



  1. ^ Goddard, Kate; Roudsari, Abdul; Wyatt, Jeremy (June 16, 2011). "Automation bias: a systematic review of frequency, effect mediators, and mitigators". Journal of the American Medical Informatics Association. 19 (1): 121–127. doi:10.1136/amiajnl-2011-000089. PMC 3240751.
  2. ^ a b c d Parasuraman, Raja; Manzey, Dietrich (June 2010). "Complacency and Bias in Human Use of Automation: An Attentional Integration". The Journal of the Human Factors and Ergonomics Society. 52 (3): 381–410. doi:10.1177/0018720810376055. Retrieved 17 January 2017.