
The Roots of Bad Surveys: Why We’re Still Asking Questions Like It’s 1935
By inca
- article
- Survey Research
- Chat/Messaging Surveys
- Chatbots
- Conversational AI
- Conversational Surveys
- DIY Surveys
- NLP (Natural Language Processing)
- Qual-Quant Hybrid
- Survey Software
- Text Analytics
- Questionnaire Design
There is a conversation I find myself having quite regularly with clients and researchers, and it usually starts with frustration. The frustration is about survey quality: data that cannot be trusted, and participants who seem disengaged – who rush through or give inconsistent answers. And the conclusion people tend to reach is that the problem is the respondents.
I believe the opposite – in most cases, the problem is the surveys themselves. And when you look at where those surveys come from, historically, that starts to make a lot of sense.
Learn more by watching or listening to Kathy on the Founders and Leaders Series podcast here:
Episode 12: Kathy Cheng, Founder & CEO, Nexxt Intelligence – inca
The Legacy of Constraint
The design conventions that define most online surveys today – the long lists of statements to rate, the grids, the multiple-choice questions presented as blocks of text, the linear one-question-at-a-time format – were not invented for the internet. They were developed in the 1930s and 1940s, as a function of the technology and materials available to researchers at the time.
Researchers faced real constraints when surveys were conducted on paper or over the telephone. Questions had to be structured in a particular way because that was the only practical way to collect answers at scale and then process them. The format was not designed to be the most natural or engaging for the person completing it. It was designed around what could realistically be administered and tabulated. Structure was a necessity, not a choice.
When survey research moved online, something interesting happened. We had, for the first time, the technical capability to do something very different. We could create interactive experiences – use images, tools and conversational interfaces. We were no longer constrained by paper forms or telephone scripts. And yet, in most cases, we replicated what we had always done. We took the paper questionnaire and put it on a screen.
I understand why that happened – there was a sense of familiarity. There was an existing methodology built around those formats. There was also, for a long time, little pressure to change because the approach still worked well enough. Panels were full, and response rates were acceptable. The data came back usable.
But the cracks were there from the beginning, and over time they have become much harder to ignore.
The Human Experience
The experience of completing a traditional online survey is, for most people, not a good one. You are presented with a long sequence of questions, asked to rate statements on scales, and given lists of options to click through. There is very little sense that anyone on the other side is actually interested in what you think. It feels transactional, much like an exam. And just as nobody sits an exam and thinks “I really want to give this my best possible effort for the sake of it”, the same is often true here.
Applying Qualitative Principles to Scale
In qualitative research, we approached this completely differently. If a participant could not answer a question, we did not blame them. We said the moderator was not doing their job; the question was too hard; the technique was wrong. We used projective techniques, for example, to help people access and articulate things they might not be able to express if asked directly. We created an environment in which people felt listened to and their contributions felt meaningful.
The fundamental insight behind inca is that those qualitative principles do not have to stay in qualitative research. There is no reason why a quantitative survey cannot be designed to engage participants, ask questions in a way that people can actually answer, and use tools and conversational flow to create an experience that feels more natural than an exam.
A scale question, for example, is usually treated as a data point. The person selects a number and moves on. But a scale can also be used as a projective. You give people a structure to think within, and then you ask them what they want to say about it. The number matters less than the conversation it opens up. That kind of thinking requires a different approach to survey design and researchers with some understanding of qualitative methods, not just quantitative ones.
The broader point is that what we inherited from the 1930s was not a set of best practices. It was a set of workarounds for constraints that no longer exist. The constraints were real at the time, and the solutions were reasonable given what was available. But we are not in the 1930s. We have tools that can deliver questions as part of a conversation. We can use images, interactive exercises, and projective techniques, all within what is still, at its core, a scalable quantitative survey. The question is whether we are willing to use them.
Treating the Cause, Not the Symptom
I think the fact that so many surveys have stayed close to those original conventions for so long is partly due to inertia and partly to the way the industry has thought about the problem. For a long time, when data quality was poor, the response was to work on the supply side. Better panel recruitment, stricter quality controls, and smarter ways to identify bad actors. All of that is understandable, and some of it is genuinely useful. But it treats the symptom rather than the cause.
If the experience we are asking people to go through is fundamentally uninviting, no amount of filtering will fully compensate for that. People who join panels to share their opinions and contribute something useful will eventually stop doing so if every survey they take feels like a chore. The engagement decline we are seeing in panels is not just a supply problem. It is, at least in part, a design problem.
Breaking the Habit
Surveys are still a very important tool. They play a significant role in decision-making for organisations of all kinds. I believe that, but I also believe that if we do not address the design problem, the relevance of surveys as a method is going to keep eroding. The rise of synthetic data is, in my view, partly a symptom of this. People are reaching for alternatives not only because surveys are expensive or slow, but because they have lost confidence in what surveys produce. And that happened, partly, because the experience has not kept pace with what is now possible.
The good news is that the conventions we inherited are not laws but habits. And habits can change. The technology exists to do something very different. Whether the industry chooses to use it is a different question, and one I think will shape much of what happens over the next few years.