
AI Moderated Research on Research: AI vs. Human Interviewers’ Effectiveness
By Glaut
- article
- AI
- Artificial Intelligence
- AI Moderated Interviews
- Agile Quantitative Research
- Agile Qualitative Research
- Automated Survey Research
- Audience/Consumer Segmentation
- Behavioural Analytics
- Online Surveys
In the first article of this series, we explored how AI-moderated interviews (AIMIs) produce richer open-ended responses than static online surveys. But a more fundamental question remains: can AI interviewers match human interviewers in one-on-one depth interviews?
To answer this rigorously, researchers at Curtin University conducted a biometric study through our Research on Research programme, comparing AI and human interviewers side-by-side.
The study presented here measured participants’ emotional and physiological responses to each interviewer type.
Understanding the Research Question
For decades, human interviewers have been considered the gold standard for qualitative research. They build rapport, adapt in real time, and create an emotional safety that encourages participants to share sensitive information. Yet this assumption has rarely been tested rigorously against AI alternatives. Patrick Duong and Billy Sung from Curtin University posed a straightforward research question: when all variables are held constant, do participants respond differently to an AI interviewer than to a human interviewer?
To test this, the researchers conducted a randomised controlled trial with 60 English-proficient participants (28 interviewed by humans, 32 by AI). All interviews focused on fast fashion perceptions and justifications – a topic chosen to be realistic and potentially sensitive. Crucially, both groups received identical questions generated by the AIMI platform. The human interviewers simply read the AI-generated questions, ensuring that interview content remained constant across conditions and that the only variable was the interviewer type.
The study measured responses in two ways. First, participants completed self-report surveys assessing their sense of connection, trust, willingness to disclose, and overall interview experience. Second, the researchers captured objective biometric data: facial expressions (analysed for eight emotions, including joy, confusion, and fear), skin conductance (reflecting emotional intensity such as stress), and heart rate (indicating engagement). This combination of subjective and objective measurements provided a comprehensive picture of how participants experienced each interview type.
The Emotional Connection Gap
The results revealed a clear pattern: human interviewers fostered a stronger emotional connection. Participants reported a significantly stronger sense of connection with human interviewers (5.83) compared to AI interviewers (4.64) – approximately 26% higher. They also gave higher overall evaluations to human interviewers (6.45 versus 5.96).
Biometric data confirmed this emotional difference. Participants exhibited significantly greater joy when interviewed by humans: facial expression analysis showed almost three times more joy with human interviewers (18.43%) than with AI (6.24%). Similarly, heart rate measurements revealed 9% higher engagement with human interviewers (81.44 bpm versus 74.80 bpm), reflecting greater emotional involvement and attentiveness during the conversation.
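As a quick sanity check, the headline percentages can be reproduced from the mean values reported above (figures taken directly from the study summary):

```python
# Reported means from the Curtin University study (as cited above).
human_connection, ai_connection = 5.83, 4.64   # self-reported sense of connection
human_joy, ai_joy = 18.43, 6.24                # facial-expression joy, %
human_hr, ai_hr = 81.44, 74.80                 # mean heart rate, bpm

def pct_higher(a, b):
    """Percentage by which a exceeds b."""
    return (a / b - 1) * 100

print(f"Connection: {pct_higher(human_connection, ai_connection):.0f}% higher")  # ~26%
print(f"Joy ratio: {human_joy / ai_joy:.1f}x")                                   # ~3.0x
print(f"Heart rate: {pct_higher(human_hr, ai_hr):.0f}% higher")                  # ~9%
```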
Yet a critical finding emerged: participants showed no significant increase in negative emotions when interviewed by AI. Measures of contempt, confusion, or stress did not differ meaningfully between the two interview modes. This is important because it suggests that whilst AI may feel less relational, it does not come with an emotional “cost” for participants. People do not become more tense, uncomfortable, or defensive when speaking with AI; they simply experience less warmth.
Disclosure, Trust, and Data Quality
Despite the emotional gap, participants showed comparable willingness to disclose personal information to both interviewer types. Self-report measures revealed no significant differences: both AI and human interviews produced similar ratings for trust in the interviewer, positive experience, ability to answer questions effectively, and willingness to share sensitive information.
The researchers identified two key drivers of willingness to disclose: sense of trustworthiness and positive experience. Neither factor differed meaningfully between interviewer types; the analysis therefore concluded that AI interviewers performed comparably to humans in eliciting disclosure.
This finding addresses a central concern among researchers about AIMI: that participants may withhold or provide lower-quality responses when speaking to a machine. The evidence suggests otherwise. Participants were equally willing to share personal information, regardless of whether they spoke to an AI or a human.
Implications and When to Use Each Approach
The research demonstrates that AIMI is a viable alternative to human moderation for most research contexts. Whilst human interviewers continue to excel at building emotional rapport and fostering positive emotional engagement, this advantage does not translate into better data collection or disclosure. When the research priority is gathering rich, honest qualitative responses at scale, AIMI delivers equivalent results without the cost, moderator variability, or logistical constraints of human interviewing.
This suggests a practical framework for researchers. Use AIMI when the priority is scale, consistency, standardisation, and depth of disclosure – the method excels in these areas, enabling thousands of interviews in multiple languages with zero moderator bias. Use human interviewers when the research context specifically demands emotional attunement – for instance, when discussing trauma, sensitive health issues, or topics requiring intensive rapport-building. For the majority of consumer research, market research, and exploratory qualitative studies, AIMI provides a strong, evidence-backed alternative.
The findings also point toward the future: as voice models, emotional modelling, and real-time adaptive interviewing continue to improve, the gap in socio-emotional connection between humans and AI is likely to narrow further. Early adoption of AIMI now positions organisations to build experience and competitive advantage as the technology evolves.
For researchers evaluating AIMIs, the evidence-based insights from our Research on Research programme provide a foundation for making informed decisions – check out the programme for yourself.