
Five Qualitative Frameworks You Can Apply in AI Today: Plus Prompts to Get You Started
By DoReveal
- article
- AI
- Artificial Intelligence
- Qualitative Data Analysis
- Qualitative Research
I want to start with something that took me a while to fully appreciate: current large language models already know these frameworks. Not in a superficial way. If you go to ChatGPT, Claude, or Gemini and ask whether they’re familiar with Kelly’s personal construct psychology, or means-end chain analysis, or Alan Cooper’s goal-directed design methodology, they will give you a substantive, accurate answer. They’ve been trained on this material, meaning you are not starting from zero.
That’s the foundation for everything that follows. Because it means that when you bring your qualitative data to these models and ask them to apply a framework, you don’t need to teach them what the framework is. You just need to tell them to use it and be specific about what you want back.
I’ve been doing this across a range of studies – including a set of COVID-19 interviews, about ten to twelve transcripts – and what I want to share here are five concrete frameworks, what each one brings into focus, and how to get started applying them using AI. You can use all of these prompts with whatever tool you prefer. They’re yours to take and adapt.
I’ve also published a prompt library, which is public and free to use in your own projects; you can find it here.
1. Journey Mapping: The Time Dimension
Journey mapping is one of the most commonly used frameworks in both UX and market research, but it’s often applied quite superficially – a list of stages with some touchpoints attached. What makes it genuinely powerful is the time element. It’s not a static observation about a person’s state. It’s a relative observation: something happened, and then something else happened, and the relationship between those events matters.
When you apply journey mapping to qualitative data, you’re asking: what triggered this person’s experience? What did they do and feel at each stage? Where did things shift emotionally? Where did friction build up or dissipate?
The kind of prompt to use: ask the AI to create a journey map based on your interview data, specify the stages you want (or ask it to identify them), and explicitly ask for emotional tone and mindset at each stage, not just actions. For example: “Create a journey map based on these transcripts. For each stage, identify the key actions, the touchpoints, the emotional state, and any friction or moments of relief expressed by participants.”
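If you’d rather run this kind of prompt programmatically than paste transcripts into a chat window, the pattern is simple: load the transcripts, prepend the framework prompt, and send the whole thing as one message. The sketch below is a minimal illustration using the OpenAI Python SDK – the folder layout, helper name, and model choice are assumptions, and the same structure works with Claude or Gemini through their own SDKs.

```python
# Minimal sketch: send interview transcripts plus a framework prompt to an LLM.
# Assumes plain-text transcripts in ./transcripts, the `openai` package installed,
# and OPENAI_API_KEY set in the environment; the model name is illustrative.
from pathlib import Path

from openai import OpenAI

JOURNEY_MAP_PROMPT = (
    "Create a journey map based on these transcripts. For each stage, identify "
    "the key actions, the touchpoints, the emotional state, and any friction or "
    "moments of relief expressed by participants."
)


def load_transcripts(folder: str = "transcripts") -> str:
    """Concatenate every transcript file into one labelled block of text."""
    parts = []
    for path in sorted(Path(folder).glob("*.txt")):
        parts.append(f"--- {path.name} ---\n{path.read_text()}")
    return "\n\n".join(parts)


client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whichever model you have access to
    messages=[
        {"role": "system", "content": "You are assisting with qualitative data analysis."},
        {"role": "user", "content": f"{JOURNEY_MAP_PROMPT}\n\n{load_transcripts()}"},
    ],
)

print(response.choices[0].message.content)
```

The same scaffolding carries every other prompt in this article: swap the prompt string and keep the rest.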
What you often discover is an emotional arc that none of your participants would have described if you’d asked them directly. In the food delivery example I use in training, the arc runs from exhaustion, through overwhelm, to relief, and finally to lingering guilt about the cost. That sequence is analytically interesting in ways that isolated observations about convenience or price sensitivity simply aren’t.
2. Jobs to Be Done: Functional and Emotional in Parallel
Jobs to be done has become one of the most widely used frameworks in product research and strategy, but – and I say this having seen many applications of it – it’s also one of the most commonly misapplied. The insight behind it is simple and powerful: people don’t buy products for their features, they hire them to do a job. But the framework only shows its full value when you distinguish between functional jobs and emotional jobs.
The functional job is what the person is trying to accomplish at a practical level. In our food delivery example, it’s straightforward: get access to dinner without having to expend the energy to cook. But the emotional job is what they’re really trying to feel. And that often turns out to be something like: I want to feel taken care of; I want the feeling of restoration at the end of a draining day. That’s a different brief for a product team than “make ordering faster”.
The kind of prompt to use: “Analyse these transcripts through a jobs to be done lens. For each participant or theme, identify both the functional job – what they’re trying to accomplish – and the emotional job – how they’re trying to feel. Pay particular attention to where the two diverge or interact.”
The distinction between functional and emotional jobs often reveals something that a straight thematic summary would have flattened out.
3. Kelly’s Personal Construct Psychology: The Ladder of Meaning
George Kelly developed personal construct psychology in the 1950s as a way to understand how individuals make meaning of their experiences. The core idea is that people navigate the world through personal constructs – mental templates they use to interpret, anticipate, and respond to events. Laddering, the primary analytical technique derived from this work, is a method for climbing from concrete observations up through consequences to underlying values.
In practice, it looks like this: a participant in my COVID-19 study talked about stockpiling masks and building backup staffing plans. At the surface level, that’s a behavioural observation. But by laddering upward – what does this behaviour serve? – you arrive at something like: they want to get ahead of the curve, to maintain situational awareness, which ultimately connects to protecting staff and patients and their own sense of professional competence.
That chain from stockpiling masks to professional identity is not something you’d extract from a thematic summary. It requires the deliberate act of asking why at every level.
The kind of prompt to use: “Apply personal construct psychology principles to analyse these transcripts. Generate a set of ladders that move from the concrete attributes or behaviours participants describe, through to the consequences they express, and finally to the underlying values or self-perceptions those consequences connect to.”
You can then go back and forth with the AI on individual ladders – “take the first ladder and explore it further, what other values or whys are present?” – and the conversation starts to feel like doing analysis with a colleague rather than running a query.
4. Means-End Chain Analysis: Attributes, Consequences, Values
Means-end chain analysis is closely related to laddering but has a more formalised structure, making it especially useful in consumer and market research contexts. Developed in the 1980s to understand how consumers link product attributes to personal values, it maps explicitly across three levels: attributes (what the product or experience has), consequences (what that means for the person in practical terms), and values (what it connects to at the level of personal beliefs and identity).
What makes it valuable alongside Kelly’s approach is that it tends to surface a slightly different slice of the same data. Where Kelly’s laddering might take you to professional identity or psychological safety, means-end chain analysis might surface something like fairness or trust, because it’s more explicitly structured around what the experience delivers rather than what the person constructs from it.
In the COVID-19 study, when I applied a means-end chain analysis to conversations about mask shortages and improvisation, what emerged was a chain linking those concrete behaviours to the consequence of keeping everyone safe and being prepared, and from there to values around fairness and institutional trust. That’s a genuinely different insight from the laddering output – not better or worse, just differently revealing.
The kind of prompt to use: “Apply a means-end chain analysis to these transcripts. For each major theme, identify the concrete attributes or behaviours that participants describe, the functional and psychosocial consequences they associate with those, and the terminal values those consequences connect to.”
5. Persona Development: Behaviour, Goals, and the Value of Trying Both
Personas are familiar to most researchers and product teams, but the way they’re usually built – primarily around demographics – tends to produce types that are descriptively interesting but strategically thin. The more powerful approaches cluster by behaviour or by goals, and what I’d encourage you to try is running both in parallel and then comparing.
Behavioural personas group people by what they actually do: how they engage with a product or service, what their usage patterns look like, and how they respond to different situations. The COVID-19 study identified types such as the System Sentinel and the Pragmatic Fixer, each with a distinct behavioural signature.
Goal-directed personas, developed along the lines of Alan Cooper’s methodology, cluster by what people are fundamentally trying to achieve. The same data produced different types: a Preparedness Coordinator and an Improvising Responder. And the interesting thing is where those two sets overlap. If the Preparedness Coordinator appears in both the behavioural clustering and the goal-directed clustering, that’s a stronger signal than either alone.
The kind of prompt to use: “Create two to four behavioural personas based on these interview transcripts, clustering participants by their observable patterns of behaviour and response. Then run a second analysis creating goal-directed personas based on Alan Cooper’s methodology, clustering by the fundamental goals and motivations that participants express. Note where the same archetype appears in both.”
Combining Frameworks: Where the Real Richness Emerges
The most powerful application of all this isn’t using a single framework at a time. It’s stacking them. Do a thematic analysis first to understand the broad landscape of what’s in the data. Then ask the AI to identify the emotional tones associated with each theme. Then ask it to cluster participants based on those thematic and emotional profiles and generate personas from that clustering. Then, for a specific persona, generate a journey map that traces their particular experience through the research context.
Each step builds on the last. Each one uses the output of the previous analysis as the foundation for the next. And the result – a persona-specific journey map that’s grounded in thematic and emotional analysis of real interview data – is a level of analytical richness that would have taken days to produce manually. With AI, it takes a conversation.
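As a rough sketch of what that stacking looks like when scripted, the loop below threads one conversation through the four steps, feeding each answer back in as context for the next prompt. It reuses the client and load_transcripts helper from the earlier sketch, and the prompt wording and model name are assumptions to adapt to your own study.

```python
# Rough sketch of stacking frameworks in a single threaded conversation.
# Reuses `client` and `load_transcripts()` from the earlier sketch; the prompt
# wording and model name are illustrative assumptions.
STACKED_PROMPTS = [
    "Run a thematic analysis of these transcripts and list the major themes.",
    "For each theme, describe the emotional tones participants express around it.",
    "Cluster participants by their thematic and emotional profiles and create two to four personas.",
    "For the first persona, create a journey map tracing their experience through the research context.",
]

messages = [
    {"role": "system", "content": "You are assisting with qualitative data analysis."},
    {"role": "user", "content": load_transcripts()},
]

for prompt in STACKED_PROMPTS:
    messages.append({"role": "user", "content": prompt})
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    answer = response.choices[0].message.content
    # Append the answer so each step builds on the previous analysis.
    messages.append({"role": "assistant", "content": answer})
    print(f"\n=== {prompt} ===\n{answer}")
```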
The frameworks and capabilities are there. The main thing left is to start using them.