
How AI is Changing the Skills Insight Teams Actually Need
By Verve
The capabilities that made someone an effective researcher five years ago are not the same capabilities that will make them effective in five years’ time. That is not a comfortable thing to say, but I think it is an honest one. AI is changing what research work involves, and the industry needs to be clear-eyed about what that means for the people doing it.
Learn more by watching or listening to Andrew on the Founders and Leaders Series podcast here:
Episode 9: Andrew Cooper, Founder & CEO, Verve
The Donkey Work Problem
A significant proportion of what research agencies and in-house teams have traditionally employed people to do is analytical process work – the kind of work that requires methodological knowledge and attention to detail, but does not necessarily require high-level judgement: coding responses, structuring data, drafting standard sections of reports, and managing survey logic. AI is becoming increasingly capable of handling these tasks, and the trend will continue.
This creates a genuine challenge for organisations that have built their staffing models around junior and mid-level researchers whose primary value has been in reliably executing these processes. That work will not disappear entirely, but its volume will reduce, and the number of people required to do it will decrease. Organisations that do not plan for this will find themselves holding on to skills that are declining in value while lacking the skills that are growing in importance.
What Is Growing in Importance
The capabilities that matter most in an AI-assisted research environment are, broadly, consultative skills and critical evaluation. These are related but distinct.
Consultative skills mean the ability to understand a client’s real decision-making context, to ask the right questions before any research is designed, and to interpret findings in terms of what they mean for the business rather than for the dataset. This has always been important in research, but acceptable work was still possible without it if the analytical processes were sound. That is no longer the case. If AI is handling the analytical execution, the human contribution has to add value at a higher level – in understanding the problem, designing the approach, and translating the output into something that a senior decision-maker can act on.
Critical evaluation means the ability to interrogate AI outputs – to ask not just “what does this say?” but “is this right, and how would I know?”. This is a specific skill that needs to be developed deliberately. AI outputs, including simulation outputs, can be plausible without being accurate. The tendency to accept fluent, well-structured output at face value is a real risk, and it is not one that the technology itself will address. It requires human researchers who are trained to challenge, triangulate, and maintain appropriate scepticism.
The Attitude Question
Beyond specific skills, there is an attitudinal dimension that I think is equally important and harder to train for. Working effectively with AI requires a particular stance – neither resistant nor uncritical. People who resist engaging with AI tools because they distrust them, or because their identity is tied to traditional ways of working, will find themselves at a disadvantage. But people who accept AI outputs without question will produce unreliable work.
What is needed is an AI-native mindset: an approach to work that treats AI as a capable tool with specific strengths and limitations, and that involves using it actively while maintaining independent judgement about its outputs. I think anyone who wants to build a career in research or insights over the next three to five years needs to develop this mindset. It is not optional.
What This Looks Like in Practice
To make this more concrete, I can describe how we think about it within our own work. We have built a set of internal AI tools – we call them Oracles – that are trained on specific areas of our research workflow. We have sixty of them, covering areas ranging from advanced quantitative methods to specific aspects of survey design.
What these tools do is change the nature of the work our researchers do. In the past, a researcher designing a conjoint study would have spent significant time on the foundational work – structuring the design, checking the logic, building in the right competitor sets for the relevant market. That work is now handled by the tool. The researcher’s job becomes one of reviewing, refining and applying judgement – checking that the design makes sense, that the factors are correctly specified, and that the competitive frame is accurate for the geography in question.
This is genuinely a higher-level task than what it replaced. But it requires the researcher to be comfortable working with AI output, to know enough about the underlying methodology to evaluate it critically, and to bring their own expertise to bear in a more focused way. The Oracle can act like a knowledgeable graduate getting the groundwork done. It can also, at a more advanced level, suggest approaches the researcher had not considered, acting more like a senior peer. Both modes require the human to be engaged and critical, not passive.
The Implication for Hiring and Training
For insight leaders managing teams, this has practical implications. Hiring decisions need to weigh consultative ability and critical thinking more heavily, relative to technical process execution, than they have historically done. That is a different profile from what many research teams have recruited for.
Training needs to address AI literacy as a core competency. That means helping researchers understand how AI tools work at a sufficient level to evaluate their outputs, building familiarity with the specific tools in use, and creating environments where it is normal to question and test AI outputs rather than accept them.
It also means being honest about the fact that some existing roles will change substantially, and that some skill sets will become less central to the work. That is a difficult conversation to have with teams, but it is better addressed directly than avoided until it becomes a crisis.
The research industry’s core purpose has not changed. We are still trying to understand human behaviour and bring that understanding into organisations to improve decision-making. What is changing is the means by which we do that and the skills required to do it well.