Voice Assistants for Research & Insight

What are voice assistants for research?

Before anyone gets carried away, these are very early days. We’re a long way from automated, natural conversations that will pass any kind of Turing test. But let’s step back and look at the wider ‘conversational research interfaces’ space.

There are three tiers to it.

1. Messaging interfaces

These are cropping up everywhere now.

Research tools are starting to mimic the WhatsApp / WeChat / Messenger format, or even embedding directly into these apps.

Question formats are typically structured (pick from a list of options), free-text or media upload.

Machine learning helps to analyse the content of unstructured inputs (words, pictures, videos) – but right now, most of these models struggle with follow-up replies that really make sense.
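Under the hood, a lot of this 'understanding' is simple pattern matching: if a reply doesn't fit one of a handful of expected templates, the bot just takes the input at face value – which is exactly how exchanges like the one below happen. A minimal sketch of that failure mode (the function and patterns are illustrative, not any vendor's actual code):

```python
def extract_name(reply: str) -> str:
    """Naive 'name' extraction: check a few known phrasings,
    then take whatever is left at face value."""
    text = reply.strip()
    for prefix in ("my name is ", "i'm ", "i am ", "it's "):
        if text.lower().startswith(prefix):
            return text[len(prefix):].strip().title()
    # No pattern matched: assume the whole reply is the name.
    return text

print(extract_name("My name is Ada"))  # Ada
print(extract_name("Bite me hard"))    # Bite me hard
```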

You know what I mean.

– Hi I’m Qualbot. Nice to meet you. What’s your name?

– Bite me hard

– Hi Bite me hard! Welcome to our discussion about toothpaste …

The current reality is that most research in a messaging format is a pre-scripted survey or a discussion with lots of branching logic.

I’m not whining – this is just a statement of where we are on the curve right now.
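That pre-scripted, branching format amounts to a decision tree: each answer simply selects the next question. A toy sketch of the idea (the question IDs, wording and structure are made up for illustration):

```python
# Toy branching survey: each node maps answer options to the next node.
SURVEY = {
    "q1": {"text": "Do you brush twice a day?",
           "next": {"yes": "q2", "no": "q3"}},
    "q2": {"text": "Which brand do you use most often?", "next": {}},
    "q3": {"text": "What stops you brushing more often?", "next": {}},
}

def run(answers):
    """Walk the tree with a scripted list of answers; return questions asked."""
    node, asked = "q1", []
    for answer in answers:
        asked.append(SURVEY[node]["text"])
        node = SURVEY[node]["next"].get(answer)
        if node is None:
            break
    return asked

print(run(["yes", "Colgate"]))
# ['Do you brush twice a day?', 'Which brand do you use most often?']
```

No machine learning involved: every path through the conversation is authored in advance, which is why these surveys feel scripted rather than conversational.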

2. Chatbots

The next step up is chatbots – software that can respond to inputs and adapt its responses more intelligently.

Actually, the distinction between levels 1 and 2 is not so clean – most messaging research apps have a mix of both. Where there is more training data – for example in continuous studies on the same topic – the bots are getting cleverer and better able to adapt.

As with most machine learning applications, improvements come gradually then suddenly. Once a tipping point is reached, expect very rapid progress. This is what happened with Google Translate: enough data helped to train better algorithms – and almost overnight it became really rather good.

You can read more about chatbots for research here.

And you can find them in the Insight Platforms directory here.

3. Voice assistants for research

Voice assistants for research are level 3 on this scale.

Developers worry about research participants doing a Joaquin Phoenix and falling head over heels for Samantha Survey.

Kidding, obvs.

These things really are a long way from seducing respondents. Don’t get your hopes up.

Check this out for a little market map of chatbots and voice platforms.

Who’s building voice assistants for research?

A few brave types have started creating applications to explore how voice assistants can play a role in research. But it’s early days – most are in beta right now and their use is quite restricted.

Here’s a collection of innovators in the space.

I’m sure there are others I have missed – please let me know if that’s the case.

Lewers Research / Research by Bot

I recently watched Anne-Marie Moir, Head of Innovation at Australian agency Lewers Research, present her voice research experiment to the ESOMAR APAC conference.

She used Google Voice for a simple qualitative research project with about 50 respondents. It was a proof of concept, to see whether it’s worth exploring alongside traditional methods.

It was just two questions:

  • ‘tell me what you had for dinner last night’
  • ‘why did you have X for dinner last night’.

TL;DR: people give longer answers when speaking than when typing; simple responses work fine, but anything more complex needs a lot of training – and that’s not worth the effort for most ad hoc projects.

Even with a very small sample of fewer than 50, the scale of the challenge becomes clear: training a model to ask a sensible follow-up question based on responses to the initial question will be a mammoth effort.

Jeffrey Henning gives it a much better write-up here:



Quester

The Quester platform already uses machine learning for text-based research applications.

It combines quantitative and qualitative research into a hybrid approach using a ‘linguistically-trained, AI-backed virtual moderator’ to conduct in-depth interviews with hundreds or thousands of consumers.

The team recently launched a pilot with Conagra to deploy their software for voice-based research using Amazon Alexa.

You can find out more here, and watch some of the videos from the pilot interviews:



Rival Technologies

Rival Technologies is one of the leading chat-based research platforms.

Its solution lets respondents feed back using chat, voice and video, and has been used to build communities of customers, fans and employees. ‘Conversational’ surveys are deployed through social media, SMS and messaging apps.

The Rival team has carried out several experiments with voice assistant tech, and will be launching something in beta shortly.

You can find out more about voice tech broadly and Rival’s plans from this article:


Rant & Rave

Rant & Rave is a cross-platform customer engagement and CX feedback solution.

‘Listening Posts’ invite feedback from customers in-the-moment, with messaging as the primary feedback channel through SMS, Facebook Messenger or Snapchat.

Emoji feedback and images are analysed alongside text inputs, and machine learning is used to provide ‘contextual and empathetic’ responses to customers’ feedback.

The team have run experiments using Amazon Alexa to capture voice feedback; you can find more details and watch a video in this post:



SurveyLine

SurveyLine is a voice surveys platform for use with Amazon Alexa and Google Voice.

You can create surveys for free, but I haven’t had time to play around with this.

Maybe someone can have a go and do a write up for the site?

True Reply

True Reply is a voice data collection platform for Amazon Alexa-powered devices and automated telephone interviewing. Its focus is on healthcare research.

Key features include 2-way automated translations with support for 120 languages; response-based skip logic; and incentive management.

Who else have I missed? Let me know so it can be added.
