How to Design Surveys for Better Open-Ended Responses

By Fathom

  • article
  • AI
  • Text Analytics
  • Customer Experience (CX) Feedback
  • NLP (Natural Language Processing)
  • Verbatim Response Coding
  • Data Analytics
  • Questionnaire Design


Open-ended survey questions can offer researchers the qualitative depth needed to understand the reasons behind respondent behaviour and attitudes, at a quantitative scale. When designed well, they unlock the value of qual-at-scale.

However, poorly designed open-ended questions often yield vague, ambiguous responses that provide limited actionable insight.

Tovah Paglaro, CEO and founder of Fathom, recently shared practical guidance on optimising open-ended question design to generate richer, more actionable feedback. Her recommendations focus on key areas that researchers can implement immediately using existing survey tools and methodologies.

This article is based on the presentation “Qual-at-Scale, Designed for Impact: Turning Open-Ends into Actionable Insights”, offered by Tovah Paglaro of Fathom at the Insights to Action Summit in October 2025. The full video replay is free to watch here:


1. Be Specific and Focused

Open-ended questions are not an excuse for ambiguity. While researchers often approach open-ends with a broad “I just want to know what people think” mindset, research projects have specific business goals, research questions, or hypotheses to test. 

Focused open-ended questions provide a wealth of narrative data in answer to those questions. The goal is to find the sweet spot between questions that are too narrow (better suited to closed-ended formats) and those that are too broad (leading to scattered, difficult-to-analyse responses).

The other challenge with really broad open-ended questions is that they are hard for respondents to answer meaningfully. Focused questions provide respondents with the clarity they need to deliver rich, relevant answers.

Well-focused questions yield higher response rates and more coherent data sets. They also improve the respondent experience by making it clear what information you are seeking.

2. Replace Why With What

Open-ended questions typically aim to understand the “why” behind behaviour or attitudes. However, asking “why” directly tends to produce vague responses like “because I do” or “because it’s good”.

Replacing “why” with “what” questions forces respondents to think about specific, actionable elements of their experience. This approach narrows the breadth of the question while encouraging rich answers aligned with research objectives.

Examples of this reframing:

  • Instead of “Why do you like that brand?” ask “What specifically do you like most about products made by that brand?”.
  • Instead of “Why are you unlikely to recommend us?” ask “What happened that makes it unlikely you would recommend us?”.
  • Instead of “Why do you support the candidate?” ask “If this candidate were elected, what change are you most hopeful or excited about?”.

This reframing reduces the incidence of vague responses and directs respondents toward the specifics that drive their opinions.

3. Avoid Double-Barreled Questions

When researchers want comprehensive insight, the temptation is to ask multiple questions within a single text box. This approach undermines data quality in two significant ways.

First, most respondents only answer whichever part of the question stands out to them, for reasons unknown to the researcher. Second, this practice undermines the goal of representative scale because you no longer have a robust sample of respondents all answering the same question.

Modern survey platforms offer branching logic and probing capabilities that allow researchers to gather comprehensive information through focused, sequential questions rather than cramming everything into one text box.

Questions like “What about their platform do you like and how would it impact you?” should be separated into distinct questions or strategically condensed into a single, focused query.

4. Start Broad and Mind the Prime

In mixed methodology surveys, earlier questions inevitably prime the responses that come later. While this priming effect cannot be eliminated, it should be managed strategically.

Place the broadest version of your open-ended questions early in the survey to capture top-of-mind associations before any priming occurs. Then be intentional about question ordering to ensure subsequent open-ends benefit from, rather than suffer from, the priming effect.

One effective approach is the open-ended funnel. First, ask respondents to list all associations, reasons, or factors related to a topic. Then follow up with a second open-ended question asking for the most important one. This ensures respondents have thought through the full landscape before identifying what matters most, rather than simply reporting the first thing that comes to mind.

5. Use Branch Logic Strategically

Branch logic allows researchers to tailor follow-up questions based on previous responses, particularly after scores or multiple-choice selections. This technique is especially valuable after rating scales or categorical choices.

For instance, after an NPS score, different open-ended questions can be posed to promoters, passives, and detractors. Promoters might be asked what they value most, while detractors are asked what would need to change to improve their experience.

This approach delivers focused, specific responses aligned with each segment. It also improves the respondent experience, which in turn yields more robust responses. Combined with advanced thematic coding or text analytics, this enables highly actionable insights to drive business or product strategy.
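The NPS branching described above can be sketched in a few lines. This is a minimal illustration, not tied to any particular survey platform; the function name and follow-up wording are assumptions for the example.

```python
def nps_followup(score: int) -> str:
    """Return a segment-specific open-ended follow-up for an NPS score (0-10).

    Promoters (9-10), passives (7-8), and detractors (0-6) each get a
    focused question suited to their segment, as described in the article.
    """
    if score >= 9:  # promoters: ask what they value most
        return "What do you value most about your experience with us?"
    if score >= 7:  # passives: ask what would move them higher
        return "What would need to change for you to rate us higher?"
    # detractors: ask what happened, not "why"
    return "What happened that makes it unlikely you would recommend us?"
```

In practice this routing lives in the survey platform's branch-logic settings rather than in code, but the structure is the same: one score, three tailored open-ends.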

6. Deploy Smart Probes

Probes are follow-up questions that go deeper into an open-ended response. When used effectively, probes can significantly enrich data quality. However, they must be deployed with discipline.

Best practices for probes:

  • Maintain a consistent line of inquiry. The probe should ask respondents to go deeper on the same topic, not pivot to something entirely different.
  • Limit probes to one per open-ended question. Diminishing returns set in quickly beyond the first probe, and the survey experience becomes tedious.
  • Combine the initial response and probe response when analysing. Since they form a conversational unit, they can be coded together as a single, richer response.
  • Use branches for “why” questions after scores or choices. Use probes to go deeper on open-ended responses.
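The third bullet above, treating an initial answer and its probe as one conversational unit, can be sketched as a small pre-coding step. This is an illustrative helper under assumed data shapes, not part of any specific analytics tool.

```python
def combine_for_coding(initial: str, probe: str = "") -> str:
    """Merge an open-ended answer and its probe response into one verbatim.

    Empty or whitespace-only parts are dropped so unanswered probes do not
    pollute the combined text passed to thematic coding.
    """
    parts = [p.strip() for p in (initial, probe) if p and p.strip()]
    return " ".join(parts)
```

A response like `combine_for_coding("Great app.", "Mostly the speed.")` would then be coded as the single, richer verbatim "Great app. Mostly the speed.".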

7. Eliminate Generic Questions

Questions like “Anything else?” rarely produce valuable data unless there is a specific, compelling reason to include them. These questions lack the focus needed to generate robust, comparable responses across the sample.

If such a question is not essential to your research objectives, remove it. Your respondents and your analysis will benefit from the tighter focus.

Enabling Advanced Analysis

These design principles create the foundation for sophisticated analysis approaches. Well-crafted open-ended questions are the starting point for high-quality thematic coding and text analytics.

When questions are specific and focused, responses are rich and comparable, giving researchers the depth needed to capture nuance and detail. This creates a strong foundation for solutions like Fathom’s human-in-the-loop approach, where analysts and AI work together to identify patterns, uncover themes, and surface insights from open-ended data sets of any size, in any language. 

Teams can leverage advanced thematic coding and open-ended analytics workflows that allow researchers to compare themes across demographic groups, behavioural segments, or attitudinal clusters, as well as longitudinal analysis to confidently understand changes in themes over time. Themes can even be integrated with other data sources for predictive modelling.
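Once responses are coded into themes, comparing them across segments reduces to a simple tabulation. The sketch below uses only the standard library and assumes coded responses arrive as (segment, theme) pairs; the data shape is an assumption for illustration.

```python
from collections import Counter

def theme_counts_by_segment(coded):
    """Tally theme mentions within each segment.

    `coded` is an iterable of (segment, theme) pairs, e.g. the output of a
    thematic coding step. Returns {segment: Counter({theme: count, ...})},
    ready for cross-segment comparison.
    """
    by_segment = {}
    for segment, theme in coded:
        by_segment.setdefault(segment, Counter())[theme] += 1
    return by_segment
```

The same tabulation run per survey wave gives the longitudinal view of how theme prevalence shifts over time.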

However, all of these advanced techniques and AI efficiencies require a foundation of well-designed questions that generate focused, specific, rich responses from the entire sample.


Practical Implementation

The principles outlined here can be implemented immediately with existing survey platforms and tools. No major technology investments or process overhauls are required.

The key is shifting from a mindset of “I want to know what people think” to “I want to know specifically X about people’s experience with Y”. This specificity guides question design while still preserving the qualitative depth that makes open-ended questions valuable.

For researchers working with modern text analytics solutions, these design improvements amplify the value delivered by AI-powered coding and analysis. Better questions generate better data, which in turn enables better insights and more confident decision-making.



Author


Fathom
Fathom provides AI-driven software and services to analyse open-ended text responses accurately, cutting verbatim analysis time by 75%.
