As human-AI collaboration becomes increasingly commonplace, both at work and across people’s lives, issues are emerging around how humans use AI. People and organisations are increasingly using artificial intelligence for a wide range of tasks, including decision-making, and this trend is expected not only to continue but to accelerate over the coming years.
Issues in Human-AI Collaboration
Whilst the capabilities of AI are increasing rapidly, issues are emerging around the ability of humans to successfully collaborate with and operate artificial intelligence systems. For example:
- Mismatched expectations. Human users of AI often have unrealistic expectations about what an AI system can do.
- Lack of training. Users often lack the skills and mindset needed to interact with AI, and make assumptions about how AI works based on their experience with other humans (who can make inferences and read intonation, body language, etc.).
- Lack of understanding. Users often have little idea about how AI works and, therefore, do not understand its limitations.
- Lack of trust. A variety of issues contribute to a lack of trust in AI, including:
- Fears about bias
- Ethical issues
- Lack of transparency
- Lack of adaptability
- Privacy concerns
- Inconsistent performance
- Acceptance wariness
- Overreliance on and uncritical acceptance of AI outputs. Users often fail to engage in critical thinking and fact-checking when interacting with AI systems.
In short, human perceptions of AI tend not to match the actual capabilities and workings of AI. It is important, therefore, that human knowledge, understanding, and skills keep pace with the capabilities of AI.
Understanding why humans have perceptions of AI that differ from reality is important, as this will help us understand how to equip them to better collaborate with artificial intelligence systems.
How people are thinking about AI
Given that the issue is a cognitive/perceptual issue (how people are thinking about AI), there are two different types of explanation:
- Cognitive load
- Decision control, or locus of control
Cognitive load and learning
Cognitive load refers to the total amount of mental effort an individual has to exert with any particular task. The level of cognitive load an individual experiences depends on a number of factors, including:
- The amount of material that needs to be processed
- The complexity of the material
- The newness of the material or concepts
- Whether the material or concepts are in line with or challenge current knowledge and perceptions
- The skill and practice the individual has had in thinking about, processing, and working with problems or issues of the nature presented
- Previous knowledge and experience
- Stress levels
Humans have only a limited cognitive load capacity, which is closely linked to their capacity to learn. However, previous studies have found that the common assumption that reducing cognitive load (through learning design) always enhances learning is not correct. Lowering the complexity of information too far has been found to inhibit learning rather than enhance it: there needs to be enough nuance in the information for the individual to start forming connections and arriving at new realisations.
The cognitive load an individual can cope with and meaningfully process differs both from person to person and from context to context. It is quite likely that part of the difficulty many humans face when conceptualising AI, and trying to understand how to use it, lies in their learning or absorptive capacity: the cognitive load the task imposes may exceed their individual cognitive load capacity.
Decision Control
Decision control refers to two things:
- The extent to which an individual feels they can change the decisions of others. High levels of decision control indicate that a person feels able to, and can successfully, change the choices and decisions made by others.
- The level to which an individual can accept or reject the result of others’ attempts to control or provide decisions.
A person with low decision control tends to feel they have no agency or influence over the decisions of others, and tends to assume that they have to accept those decisions.
Where AI is involved in decision-making to the extent that it presents its own decisions and recommendations, an individual’s level of decision control could be a contributory factor in their reactions to, and interactions with, the AI.
A new study
A new series of studies looking at ineffective human-AI interactions, their psychological causes, and potential solutions has been conducted by researchers from the Faculty of Management, Economics and Social Sciences at the University of Cologne and the Institute of Information Systems and Marketing at the Karlsruhe Institute of Technology, both in Germany, and the Faculty of Data and Decision Sciences at the Technion–Israel Institute of Technology in Israel.
Findings
The studies found that:
- Cognitive load and decision control both significantly predict effective human-AI interaction. Greater cognitive capability (in particular, the ability to deal with complexity and to think critically) and greater self-efficacy (decision control) were found to result in better human-AI interactions.
- The explanation of how to use AI needs to be within the bounds of the individual’s comprehension and cognitive capability. The complexity of the explanation (and the cognitive overload it can cause), together with the individual’s critical thinking capability, was found to partly predict the success of human-AI interaction.
- The level of familiarity with AI and its limitations was also found to impact the effectiveness of human interactions with AI.
- The task needs to be within the cognitive capabilities of the individual. Highly complex tasks that require critical thinking to carry out and interpret significantly reduce the success of human-AI interactions.
- The study made three recommendations:
- Match tasks to the individual’s cognitive capability and sense of decision control.
- Match the level of explanation of the task and of the AI’s capabilities to the individual’s cognitive capability and capacity. Too often, instructions are written by people with a technical background who do not take into account others’ baseline understanding.
- Help users learn how to get the AI to reprocess its decisions. This increases critical thinking and trust, and often leads to better AI-based decisions.
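The reprocessing recommendation above can be sketched as a simple interaction loop. This is only an illustrative sketch: `ask_model` is a hypothetical stand-in for whatever query interface a real AI system provides, and the canned responses exist purely to make the example runnable.

```python
def ask_model(prompt: str) -> str:
    # Hypothetical stand-in for a real AI system's query function.
    # Returns canned answers so the loop below is runnable.
    if "reconsider" in prompt.lower():
        return "Revised decision: option B, after correcting the cost estimate."
    return "Initial decision: option A."

def decide_with_review(question: str, concerns: list[str]) -> str:
    """Ask the AI for a decision, then challenge it with each user concern.

    Rather than accepting the first output, the user exercises decision
    control by asking the AI to reprocess its answer against each concern.
    """
    answer = ask_model(question)
    for concern in concerns:
        answer = ask_model(
            f"Reconsider your answer '{answer}' to the question "
            f"'{question}', taking into account: {concern}"
        )
    return answer

result = decide_with_review(
    "Which supplier should we choose?",
    ["updated cost estimates"],
)
```

The point of the pattern is that the user, not the AI, decides when an answer is final: each pass through the loop is a deliberate act of critical review rather than passive acceptance.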
Primary reference
Using Artificial Intelligence to Enhance Coaching – A new study