Generative AI models like ChatGPT have revolutionised how we interact with information. From answering complex questions to generating persuasive content, these AI-driven chatbots have become a trusted source for millions. But what if that trust is misplaced?
Artificial Intelligence
Artificial intelligence (AI) has become hugely popular with users beyond IT and other technical professions since the launch of ChatGPT in November 2022. ChatGPT is built on GPTs (generative pre-trained transformers), a type of large language model that uses deep learning techniques to generate human-like text or speech (and, more recently, images and video).
- GPT models are pre-trained on vast amounts of diverse text data, allowing them to learn patterns and structures of human language without explicit supervision.
- GPT models utilise the transformer architecture, which is designed to process sequential data, such as text, by attending to different parts of the input simultaneously.
- Once trained, GPT models can generate coherent and contextually relevant text based on a given prompt or context (a short code sketch of this generation loop appears after the list of uses below).
- GPT models can be fine-tuned for various natural language processing tasks, such as text completion, question answering, language translation and text summarisation.
- Current uses for GPTs include:
- Customer Support – automating responses to frequently asked questions and handling routine customer inquiries to reduce the workload on human agents.
- Content Creation – assisting in generating written content for blogs, articles, social media posts and marketing copy.
- Education and Learning – providing tutoring or explanations on various topics, helping students with homework and facilitating language learning.
- Programming Help – offering coding assistance, debugging help and explanations of programming concepts.
- Data Analysis and Summary – analysing large volumes of data and summarising research papers or reports.
- Entertainment – creating and narrating stories, jokes, poems and engaging in general conversation for entertainment purposes.
- Personal Assistants – managing schedules, setting reminders and helping with daily tasks similar to a personal assistant.
- Accessibility – enhancing accessibility for those with disabilities by enabling voice-operated technology and providing readable content.
- Healthcare Assistance – providing preliminary medical advice and mental health support in a non-clinical setting.
- Translation Services – translating text or speech between various languages to help overcome language barriers.
- E-commerce Assistance – facilitating online shopping by providing product recommendations, reviews and customer service.
- Travel and Hospitality Services – assisting in travel planning, providing information on destinations and managing bookings and reservations.
- Financial Advice – offering basic guidance on financial planning, investments and budgeting.
- Legal Assistance – providing preliminary legal advice and information on common legal inquiries.
- HR and Recruitment Services – screening resumes, automating initial candidate interviews and answering FAQs about job positions.
- Coaching and Personal Development – offering coaching in areas like career growth, personal development and lifestyle changes.
- Therapeutic Uses – engaging in therapeutic conversations, typically in support of professional therapy, but not replacing it.
- Games and Puzzles – facilitating interactive games, quizzes and puzzles to engage users.
- Research and Information Gathering – assisting researchers by summarising existing research, finding relevant papers and preparing literature reviews.
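To make the generation step above concrete, here is a minimal sketch of the prompt-in, text-out loop using the openly available GPT-2 model via the Hugging Face transformers library. The model choice and sampling parameters are illustrative assumptions, not a recommendation.

```python
# Minimal sketch of the GPT generation loop described above.
# Assumes the Hugging Face `transformers` library (and a backend such
# as PyTorch) is installed; GPT-2 is used purely as a small, openly
# available GPT-style model for illustration.
from transformers import pipeline

# Load a pre-trained GPT-style model for text generation.
generator = pipeline("text-generation", model="gpt2")

prompt = "Large language models are"

# Generate a continuation: the model predicts one token at a time,
# each conditioned on the prompt plus everything generated so far.
result = generator(prompt, max_new_tokens=40, do_sample=True, top_p=0.9)

print(result[0]["generated_text"])
```

The same pre-trained model can then be fine-tuned on task-specific data (question answering, summarisation and so on), which is how the specialised uses listed above are typically built.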
Previous research
Whilst generative AI models are proliferating and new uses for them are being developed all the time, a new area of research into the psychology of human–AI interaction is also developing. For example, recent studies have shown that:
- A 2023 study from Harvard University found that people are turning to chatbots for help with mental health issues, even though the chatbots are not designed for this purpose. The study found that chatbots frequently fail to recognise signs of user distress and mental health crises, and can provide unhelpful, counter-productive or even dangerous responses that exacerbate such issues.
- A 2024 study published in Nature found that:
- Large language models like ChatGPT can effectively generate personalised and highly persuasive messages tailored to individuals’ psychological personality profiles, even when given minimal information about the target individual.
- This AI-driven approach to personalised persuasion allows companies to automate sophisticated, psychologically targeted marketing at scale and very low cost.
- The effectiveness of AI-generated personalised messages persists even when it is disclosed that they were created by AI to appeal to specific personality traits, suggesting that transparency does not curb their persuasive impact.
- Organisations can, and already do, leverage consumer data (e.g. social media activity and browsing patterns) to infer psychological profiles, then use AI to dynamically generate personalised ads, potentially adapting them based on the consumer’s real-time interactions.
- The same techniques can also be used for political manipulation.
- Another 2023 study found that professionals using AI in their work frequently suffer from a new type of psychological distress called creative displacement anxiety: feelings of anxiety, self-doubt and a crisis of professional identity that arise when people feel their creative and professional skills are being overshadowed or replaced by AI. Creative displacement anxiety is likely to have a significant impact on organisations and their culture, including:
- Reduced motivation and productivity due to feelings of inadequacy or a perceived lack of value in their skills, which in turn can lead to a decrease in the quality and quantity of creative output within the organisation.
- Tensions within teams, particularly between those who embrace AI technologies and those who feel threatened by them, which can lead to a breakdown in collaboration and communication, hindering the overall effectiveness of creative projects.
- Negative impacts on employee well-being and job satisfaction from the increased stress and anxiety it causes, resulting in higher turnover rates as employees seek work environments where they feel their creative contributions are valued more.
- A culture of fear and uncertainty surrounding the role of AI, which can lead to a decrease in innovation and risk-taking, as employees become less likely to propose new ideas or experiment with creative or new approaches for fear of being replaced by AI.
- Another 2023 study, published in the Journal of Language and Social Psychology and looking at truth bias (the tendency to believe that presented information is true), found that:
- Both humans and AI have a strong tendency to believe information is true by default, even when it is not; in effect, both display a significant truth bias.
- AI models like ChatGPT and GPT-4 display an even stronger truth bias than humans, judging up to 99% of the information they encounter to be true.
- Prompting AI to be more sceptical and providing information about the base rate of lies versus truths can somewhat reduce AI’s truth bias, but it still remains higher than humans’ truth bias (an illustrative prompt sketch follows this list).
- Being highly capable at many language tasks does not mean that AI systems can reliably detect deception.
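The mitigation described in that study – prompting the model to be sceptical and stating a base rate of lies versus truths – can be approximated with an ordinary system prompt. The sketch below uses the official OpenAI Python client; the model name, the prompt wording and the 40% base rate are illustrative assumptions, not the study’s actual materials.

```python
# Illustrative sketch of "sceptical" prompting, as described above.
# Requires the official OpenAI Python client (pip install openai);
# the model name, prompt wording and base rate are assumptions made
# for illustration, not taken from the study itself.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A system prompt applying the study's two levers: an instruction to
# be sceptical, and an explicit base rate of false statements.
system_prompt = (
    "You are a sceptical fact-checker. Assume roughly 40% of the "
    "statements you are shown are false. For each statement, answer "
    "TRUE or FALSE and briefly justify your judgement."
)

statement = "The Great Wall of China is visible from the Moon with the naked eye."

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": statement},
    ],
)

print(response.choices[0].message.content)
```

As the study notes, this kind of prompting only partially reduces the model’s truth bias, so its verdicts should still be treated with caution.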
A new study
A new study published in the International Journal of Human–Computer Interaction by a team of researchers from the Graduate School of Information at Yonsei University and the Stanford Center at the Incheon Global Campus, both in the Republic of Korea, has examined how using AI chatbots and other generative AI tools such as ChatGPT, Microsoft Copilot and Perplexity affects human perception of, belief in and scepticism towards generative AI outputs.
Findings
The study found that:
- AI-generated language is rich in rhetorical elements that tend to significantly influence users’ perception of truthfulness, even when the information is false.
- The rhetoric used in AI-generated language significantly affects truth discernment. Limited rhetoric hinders truth differentiation, while excessive rhetoric can mislead users. In effect, AI-generated language is the language of persuasion and belief manipulation.
- Three forms of persuasive rhetoric are used by generative AI (a toy sketch for spotting such cues appears after this list of findings):
- a. Ethos – language that signals expertise, trustworthiness and high credibility, such as:
- Citing reputable sources: “According to a study published in the Journal of the American Medical Association”.
- Referring to expert opinions: “Dr. Sarah Johnson, a renowned expert in the field of neuroscience, states that …”.
- Mentioning affiliations with respected institutions: “As a researcher at the Massachusetts Institute of Technology (MIT), they found that …”.
- Using technical jargon or specialised vocabulary: “The cerebral hemispheres have several distinct fissures…”.
- Apparent demonstrations of transparency: “It’s important to note that while this study suggests a correlation between X and Y, further research is needed to establish a causal relationship”.
- Showing pseudo-self-awareness or self-criticism: “Sorry for that. In my previous analysis, I overlooked a crucial factor that may have influenced the results. After revisiting the data and accounting for this factor, these are now my findings”.
- b. Logos – language that suggests logical reasoning, drawing on factual information, statistics and apparent evidence to make rational arguments and detailed justifications. For example:
- “Multiple peer-reviewed studies have confirmed the effectiveness of this treatment…”
- “Statistical analysis showed…”
- “This can be inferred from the study….”
- c. Pathos – appeals to emotion through emotional language, cases and stories; a dominant mode of persuasion in AI-generated text. For example: “We’re all feeling the pressure as we approach the deadline for Project X. The long hours and late nights have been tough on everyone and I deeply appreciate your dedication and hard work…”
- User expectations significantly influence their trust in AI outputs. Users prefer AI to be technical rather than human-like in truth discernment tasks.
- The presence or absence of rhetorical elements had a more significant impact on perceived truth than the actual veracity of the statement. False statements from ChatGPT were often deemed more truthful than accurate statements from Google search.
- When AI communication was laden with rhetorical elements, it amplified the system’s perceived trustworthiness, logic and transparency.
- Contrary to expectations, prior knowledge did not consistently enhance users’ ability to discern truth in AI-generated responses. In some cases, users were more likely to trust AI responses when they aligned with their prior knowledge, even if the information was false.
- Users applied heuristics and biases from human-to-human communication when evaluating AI-generated language, such as associating detailed responses with truthfulness.
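As a toy illustration of the three rhetorical forms above, the sketch below flags common ethos, logos and pathos cues in a passage of text using simple keyword matching. This is not the study’s method, and the cue lists are illustrative assumptions only.

```python
# Toy illustration of spotting the ethos / logos / pathos cues
# described above. Simple keyword matching is NOT the study's method;
# the cue phrases are illustrative assumptions only.
import re

RHETORIC_CUES = {
    "ethos": [r"according to a study", r"renowned expert",
              r"researcher at", r"it's important to note"],
    "logos": [r"statistical analysis", r"\d+(\.\d+)?\s*%",
              r"studies have confirmed", r"this can be inferred"],
    "pathos": [r"we're all feeling", r"deeply appreciate",
               r"tough on everyone", r"dedication and hard work"],
}

def flag_rhetoric(text: str) -> dict:
    """Return the rhetorical cues found in `text`, grouped by form."""
    found = {}
    for form, patterns in RHETORIC_CUES.items():
        hits = [p for p in patterns if re.search(p, text, re.IGNORECASE)]
        if hits:
            found[form] = hits
    return found

sample = ("According to a study in JAMA, statistical analysis showed a "
          "42% improvement. We're all feeling the pressure, and I deeply "
          "appreciate your dedication and hard work.")

print(flag_rhetoric(sample))  # cues from all three forms are flagged
```

A real detector would need far richer linguistic features, but even this toy version shows how mechanically recognisable some of these persuasive patterns are.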
Primary reference
ChatGPT Thinks it is not Ethical! What do you Think?