Why Bad Survey Data Is a Design Problem, Not a Respondent Problem

By inca

When I started my career, I worked almost entirely in qualitative research – I loved it. The focus was on listening, creating the right conditions for people to express themselves, and asking questions in a way that actually enabled them to answer. If a participant struggled, we did not blame them; we looked at ourselves. The moderator was not doing their job, the questions were too hard, or the technique was wrong.

Then I moved into quantitative work, and I found myself in a very different world.

Learn more by watching or listening to Kathy on the Founders and Leaders Series podcast here:


The “Bad Respondent” Myth

The language was different. People talked constantly about bad respondents. The whole industry seemed focused on identifying and removing them. We developed ever more sophisticated methods to kick people out of surveys, catch them speeding through, and flag inconsistent answers. I found this deeply counterintuitive. In qualitative research, we never complained that respondents were bad. So why were we doing it here?

I started to think about what was actually different. And the more I looked at it, the more I came to believe that the problem was not the participants, but the surveys themselves.

Fixing the Experience, Not the Participant

When I started building what became inca, the idea was fairly simple. Instead of trying to fix the respondents, what if we tried to fix the experience we were giving them? What if we took all the things that made qualitative research work – the engagement, conversational flow, projective techniques, and the sense that someone is actually listening – and tried to bring them into quantitative surveys?

The basic premise was that people, when they join a research panel, have things to say. They want to share their opinions and want to help companies build better products. That is why they signed up. The few pence we offer them as an incentive is not really the point. So if that is true, and I think it is, then the question becomes: what kind of experience are we giving them? Are we creating the conditions in which they can actually express themselves, or are we putting them through something so tedious and arbitrary that even the most willing participant eventually gives up?

Bridging the Gap

A traditional survey has a particular feel. You are reading a long list of questions and clicking boxes. There may be a few open-ended questions, but by the time you reach them, you are not in a mindset that encourages genuine reflection. You are just trying to get to the end. The experience is designed around the researcher’s need for structure, not around the participant’s ability to engage.

The way inca surveys work is to turn the whole thing into a conversation. Even closed-ended choice and scale questions are delivered as part of a dialogue. There is a moderator, even if it is an AI moderator, asking the questions. When you do need a moment of genuine depth, a mini in-depth interview is already embedded in the format. The transition between open and closed questions feels natural because the whole experience is framed as a conversation rather than an exam.
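
To make that concrete, here is a minimal sketch of a dialogue loop that delivers closed and open questions in one conversation. It is illustrative only, not inca's implementation: the question script, the `probe_for_depth` rule, and all wording are assumptions made for the example.

```python
# A minimal sketch of a conversational survey loop. Illustrative only:
# the question script and the probe rule are assumptions, not inca's
# actual implementation.

SCRIPT = [
    {"type": "scale", "text": "How likely are you to recommend us, 0-10?"},
    {"type": "choice", "text": "Which option did you try?",
     "options": ["A", "B", "C"]},
    {"type": "open", "text": "What stood out to you about it?"},
]

def probe_for_depth(answer: str) -> str | None:
    """Hypothetical follow-up rule: probe short open answers for detail.

    A real conversational engine would use NLP here; this sketch keys
    off answer length just to show where the probe sits in the flow.
    """
    if len(answer.split()) < 5:
        return "Could you say a little more about why?"
    return None

def run_survey() -> list[dict]:
    responses = []
    for q in SCRIPT:
        # Closed questions are still phrased as turns in the dialogue.
        prompt = q["text"]
        if q["type"] == "choice":
            prompt += " (" + " / ".join(q["options"]) + ")"
        print(prompt)
        answer = input("> ")
        if q["type"] == "open":
            follow_up = probe_for_depth(answer)
            if follow_up:
                print(follow_up)
                answer += " | " + input("> ")
        responses.append({"question": q["text"], "answer": answer})
    return responses

if __name__ == "__main__":
    run_survey()
```

The point of the structure is that closed questions and probes live in the same flow, so the participant never has to switch from exam mode to interview mode.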

I want to be clear about something: the conversational AI element, the AI moderation capability, is important to us. It is a core piece of technology that we have built. But it is just one part of the whole experience. If you take that capability and embed it in a boring survey, it is not going to work any magic. Someone on the other side suddenly seeming to understand what you have said does not make up for having just sat through five batteries of rating-scale statements. The experience has to work as a whole.

The other elements that I think make a real difference are the projective techniques and the tools. Qualitative moderators have always used tools. We would give people a piece of paper with a concept on it and say, underline what you like. Simple things. But they work because they give people something to look at, something to respond to, a way into a more specific and more useful answer. When you have those kinds of tools available in a survey, even a scale question can become something more interesting. The number itself matters less than what the person says afterwards. So a scale can be used as a kind of projective: you are giving people a structure within which to think, not just asking them to produce a rating.
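
As an illustration of a scale used projectively, the sketch below anchors a follow-up probe to whatever rating the person gives. The thresholds and phrasing are assumptions for the sake of the example, not inca's method.

```python
# A sketch of a scale question used as a projective device: the rating
# anchors a follow-up probe, and the open answer is the real signal.
# Thresholds and wording are illustrative assumptions.

def projective_probe(rating: int) -> str:
    """Pick a follow-up question anchored to the rating given."""
    if rating <= 4:
        return "You placed it quite low. What would need to change to move it up?"
    if rating <= 7:
        return "You placed it in the middle. What is holding it back?"
    return "You placed it high. What in particular earned that score?"

rating = int(input("On a scale of 0-10, how well does this fit you? > "))
print(projective_probe(rating))
open_answer = input("> ")  # the depth lives here, not in the number
```

The design point is that the rating is a scaffold for the open answer, not the deliverable itself.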

The Human-Centric Choice

For a long time, not everyone saw participant experience as a priority. I remember a client once saying to me, “This is really interesting, but I only receive my report at the end. I don’t see it!” And I understood the logic. Why would you care about what happens inside the survey if all you are looking at is the output? There was also an investor who said, “Your paying customers are not the participants. Why are you focused on them?”

I think that logic is understandable, but it misses something important. The quality of the data you get out is directly connected to the quality of the experience you give people going in. More and more research on research is now showing this. When participant experience improves, data quality and the depth of the insight improve. The two things are not separate.


Shifting the Industry Narrative

I believe our industry has been quite negative for a long time. The conversation for many years was focused on the problems – bad respondents, panel quality, and fraudulent responses. All real issues, but framing them the way we did led us to focus on filtering and removing rather than on improving. The assumption, often unstated, was that people do not really want to take surveys. That they are just doing it for the money, and not very enthusiastically.

I do not think that is true. I think it is simply wrong to assume that people do not want to participate. What if we believed that people, when they join a panel, genuinely have things to share? What if we took seriously the idea that they want to express their opinions and help companies make better decisions? Then the question is not how we catch the bad ones. The question is how we create the kind of experience that makes it possible for the good ones, which is most of them, to actually do that.

That shift in perspective is what I think the industry needs. Not new technology or methodology, but a different starting assumption about the people we are asking to take part. If we start from the belief that participants have value to offer, and then design experiences that make it possible for them to offer it, the data quality follows naturally.

The surveys that are still designed around 1930s conventions are not going to get us there. Not because the people are wrong, but because the format is.

About the Author

inca is a Conversational AI solution that delivers deep human understanding by blending qualitative capabilities with quantitative scale.