From Production Work to Strategic Impact – The Rise of the Insight Advocate

By Harris QuestDIY



As I shared the findings from our recent study of market researchers across the US, one theme kept emerging that both surprised and excited me: we’re witnessing the birth of the Insight Advocate. This isn’t just another buzzword; it’s a fundamental shift in how researchers will create value in an AI-driven world.

When we surveyed full-time employees working in market research and insights functions – split roughly fifty-fifty between client-side and vendor-side researchers – 72% reported using AI at work at least once a day. Compare that with the 23% of consumers overall who use AI at work, and you can see we’re well ahead of the curve on adoption.

However, what really caught my attention was that 88% of researchers expressed either somewhat or very positive views about AI use, and 89% reported that it had made their work life either somewhat or significantly better. This isn’t grudging acceptance; it’s genuine enthusiasm for tools that are fundamentally changing how we work.

What I cover in this article is discussed in more detail in the webinar below, which you can watch on demand.


The Production Problem We’ve Been Ignoring

For too long, so much of our time has been consumed with the production of market research, and not nearly enough time has been spent on using the outcomes within our businesses. AI is finally giving us a chance to flip that equation.

The data tells this story clearly. When we asked what researchers are primarily using AI for, the top applications were all in the analysis space: analysing multiple datasets, working with both structured and unstructured data, and automating insight report production. About 41% are using it for survey design, 37% for programming surveys, and around 30% for proposal creation.

The time savings are real and substantial. The vast majority of our respondents are saving somewhere between one and ten hours per week. Only 5% said working with AI took more time than working without it – and interestingly, that figure jumps to 8% on the vendor-side versus just 1% on the client-side, highlighting some of the workflow challenges agencies are facing as they integrate these tools.

What Happens When the Grunt Work Disappears?

Here’s the crucial question: as AI takes over more of the production work, what do we do with that reclaimed time? This is where the concept of the Insight Advocate becomes critical.

When we asked researchers about their ideal future state, the responses were telling:

  • 31% said human-led research with significant AI support;
  • 25% preferred AI-led with significant human oversight;
  • 26% wanted mainly human work with AI assistance.

That’s 82% of researchers seeing the optimal future as a combination of humans and AI. Only a tiny minority envisioned either pure human work or pure AI automation.

This suggests that researchers instinctively understand something important: the true value of market research lies not in its production, but in the better business outcomes it drives. Delivering those outcomes requires uniquely human skills that AI cannot replicate.

The Skills That Will Define Success

As I see it, the Insight Advocate role is built on a foundation that AI cannot replicate: understanding your business, your stakeholders, and how decisions are actually made within your organisation.

Allow me to be blunt – our businesses are messy. There are different points of view, findings can be interpreted in multiple ways, and there’s often groupthink around what works and what doesn’t. You need someone who can navigate that complexity and advocate for research and data-driven decision making even when it’s inconvenient or challenging.

The skills required are fundamentally human: communication, engagement, storytelling, and business acumen. These aren’t nice-to-have additions; they’re becoming the core competencies that separate successful researchers from those who get left behind.

The Need for Supervision: Why Human Judgment Remains Essential

While professionals are embracing AI, our study shows they are clear-eyed about its limits: 29% of respondents reported that AI was increasing uncertainty about their job security, and 26% expressed concern about reduced human input where judgment is required. As I’ve said, “AI is great, but it needs a supervisor. It needs a human involved”.

This need for supervision is reinforced by the very accuracy gains researchers are seeing: AI assists human analysts with tasks like identifying bots, checking outputs, and analysing multiple datasets at scale, but it does so most effectively when supervised by researchers who understand methodology and context.

Our study highlighted several key concerns: 72% of researchers worry about accuracy, noting that AI tools can give confident yet inaccurate answers, and 30% expressed concerns about transparency, wanting to understand what underlying data AI is drawing from. These concerns were significantly higher on the vendor-side, where there’s constant pressure for automation and cost management.

Beyond accuracy, bias is a significant concern. AI can unintentionally reinforce existing biases within datasets, with large language models often exhibiting Western and English-language biases. Privacy concerns around handling personally identifiable information (PII) and intellectual property also emerged as critical considerations.

These findings highlight a core truth: despite AI’s transformative potential, human oversight remains essential to ensure quality, accuracy, and ethical use in research. This is why at Harris QuestDIY, we launched a “research constitution” that defines the rules for how organisations should conduct research while leveraging AI tools.

Looking Ahead: The 2030 Vision

When we asked researchers what they want AI to do in the future, projecting out to around 2030, the responses painted a picture of true partnership rather than replacement.

The top request was for advanced decision support tools – essentially a console that sits on top of research, allowing researchers to test hypotheses, model outcomes, and optimise their studies. This is AI as a sophisticated research assistant, not a replacement.

Other priorities included generative AI capabilities across a range of functions (generating stimuli, drafting reports, creating different ways to communicate findings…), AI data generation and amplification, automation of routine tasks, predictive analytics, and yes, deeper cognitive insights.

But notice what’s not on this list: “replace the researcher”. Even when we inquired about more advanced cognitive insight capabilities, only 43% of respondents considered this achievable or desirable by 2030. This reflects researchers’ realistic assessment of where AI can add value versus where human judgment remains essential.

The Future Research Team Structure

As I envision the future research team, it’s still fundamentally human-centric, but organised around two primary purposes. First, researchers as supervisors of AI and the production process. Second, researchers as Insight Advocates.

As agentic AI continues to evolve, we’ll have an army of AI agents supporting us in production work, extending into new ways of engaging with our audiences – whether that’s creating podcasts, videos, or other formats to communicate with our organisations.

But our business is primarily about humans. We need to get humans to behave differently, to make different decisions, and it will be incumbent on us to make that happen. No amount of AI sophistication changes that fundamental requirement.

The Skills Transformation for New Researchers

This shift has implications for how we develop talent in our industry. The roles we’re talking about automating are traditionally the tasks that graduate or junior researchers come into the industry doing. We still need those individuals: we don’t want our research teams becoming disconnected from the audiences we’re trying to understand, and we need to develop the senior researchers of the future.

However, the skills they need will change. They still need to understand research methodology – the science of research isn’t going away. But rather than focusing purely on production skills, we need to emphasise business skills, understanding business challenges, managing stakeholders, and engaging with organisations to ensure research drives better outcomes.

I find this shift exciting. Instead of spending so much time on the mechanical aspects of research production, we can focus on the application and impact of our work.

The Reality Check: Challenges Ahead

It’s important to be realistic about the challenges we face. Our research identified several barriers to faster AI adoption, including: privacy, accuracy, bias and security concerns; a lack of training and knowledge; difficulty in integrating AI into existing workflows; and internal company policies and restrictions.

That last point particularly concerns me. As big organisations try to manage AI risks, we’re seeing more constraints and additional hoops to jump through for AI approval. There’s a real possibility that risk management could slow down the very innovation that could give research teams competitive advantages.

In general, as we become more reliant on these tools, managing these challenges and concerns becomes increasingly critical.


Making the Transition

For researchers reading this and wondering how to make this transition, my advice is to start thinking beyond the production of research to its application. Begin developing those business skills now, understand how decisions get made in your organisation, and practice storytelling and stakeholder management.

Don’t wait for AI to force this change; embrace it. The researchers who thrive in the next decade will be those who see AI as enabling them to have greater business impact, not those who view it as a threat to traditional research skills.

The Insight Advocate role isn’t just about surviving the AI revolution; it’s about leveraging it to finally achieve what many of us got into research to do in the first place: drive better business outcomes through a better understanding of customers and markets.

Everything covered in this article is discussed in more detail in the webinar below, which you can watch on demand.

