
How Leading Brands Use AI for Insights
By Insight Platforms
This article is based on a webinar presented to AURA, the UK’s networking forum for Corporate Researchers. You can watch the video of that webinar here:
AI in the Real World: How Insights Teams are Actually Using It
The noise around artificial intelligence in market research is bonkers. Every vendor claims to have the solution, and AI panels dominate every conference agenda. But who’s actually using AI for insights, and what value are they getting?
This article examines real-world case studies from enterprise insights teams, drawing on presentations hosted here on Insight Platforms, conference papers from recent ESOMAR Congress events and work featured in the Corporate Researcher’s Guide to AI. The evidence shows that AI is delivering tangible value, but not always in the ways the marketing materials suggest.
The Corporate Researcher’s Guide to Artificial Intelligence
Understanding AI’s Role in Research
Before examining specific implementations, it’s worth establishing a framework for how AI shows up in research teams. The term “artificial intelligence” has become almost meaningless, covering everything from basic automation to advanced generative models. What matters is understanding the distinction between deterministic and probabilistic AI models.
Deterministic models deliver fixed outputs based on specific inputs. They’re predictable, traceable, and consistent. They work well for classification, topic modelling, and segmentation, where you need repeatable results.
Probabilistic models, particularly large language models, generate likelihoods rather than certainties. They handle complexity and ambiguity well, but produce varying outputs even with identical inputs.
This distinction matters because expecting deterministic behaviour from probabilistic models is where many AI initiatives fail. The key is matching the right type of AI to the specific research challenge.
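To make the distinction concrete, here is a toy sketch in Python. Everything in it is invented for illustration (the keyword rules, the candidate outputs, the weights); real systems use trained models, but the behavioural contrast is the same.

```python
import random

# Deterministic model: a fixed rule always maps the same input to the
# same output. (Rules are invented for this example.)
def classify_topic(comment: str) -> str:
    rules = {"price": "Value", "delivery": "Logistics", "staff": "Service"}
    for keyword, topic in rules.items():
        if keyword in comment.lower():
            return topic
    return "Other"

# Probabilistic model: the output is sampled from a likelihood
# distribution, so identical inputs can produce different outputs
# on different runs.
def generate_sentiment(rng: random.Random) -> str:
    candidates = ["positive", "mixed", "negative"]
    weights = [0.6, 0.3, 0.1]  # likelihoods, not certainties
    return rng.choices(candidates, weights=weights, k=1)[0]

# The deterministic model is perfectly repeatable:
runs = [classify_topic("Delivery was two days late") for _ in range(5)]

# The probabilistic model varies unless you control the randomness:
samples = [generate_sentiment(random.Random()) for _ in range(20)]
```

Running the deterministic classifier five times gives five identical answers; sampling the probabilistic generator twenty times typically does not. That asymmetry is exactly why a probabilistic model makes a poor drop-in replacement for a task that demands repeatable coding.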
AI in research operates at three levels:
- individual tools for point solutions
- workflows that connect multiple capabilities
- emerging organisational structures built around AI.
Today, most activity sits in the first category, with perhaps 10-20% in workflows, and less than 1% in truly AI-native organisational models. The examples below mostly fall into the tools category, although some clearly feel more like workflows.
Google: Testing the Boundaries of Digital Twins
Google’s research team worked with brox.ai to pilot digital twins for concept testing. The approach involved creating shadow panels based on primary research conducted through video interviews. Each human respondent was modelled as a digital twin that could be queried for subsequent research.
The team tested a subscription service for online music, running the digital twin study alongside primary research with real humans. In many cases, the feedback aligned closely. But the conclusion was clear: digital twins excel at screening and optimising ideas, not at validation. They complement primary research rather than replacing it. This is a critical finding given the binary views many hold of synthetic data. You don’t have to be for it or against it. You can be both or neither.
IKEA: Blending Ethnography with Computer Vision
IKEA’s Life at Home Insights programme demonstrates how AI can scale qualitative research without losing rigour. Working with Ipsos, the team conducted in-person ethnographies across different countries to understand how people approach storage in their homes. These ethnographies established the analytical framework.
The team then collected 3,000 photographs and used a computer vision model to classify the images. The crucial step was training the AI using insights from the ethnographic work. This ensured the automated analysis reflected genuine cultural and behavioural understanding rather than superficial pattern-matching.
The approach shows the value of using traditional research methods to build the foundations for AI deployment, rather than jumping straight to automated analysis.
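The underlying pattern (ethnography supplies the coding frame and the seed examples, the model then scales it) can be sketched in a few lines. This is a minimal illustration, not IKEA's or Ipsos's actual pipeline: the labels are invented, and the 2-D "embeddings" stand in for real image features.

```python
import math

# Ethnography first: fieldwork yields a coding frame (the labels) and a
# handful of researcher-coded example photos (the seed training data).
# The 2-D vectors below are hypothetical image embeddings.
CODED_EXAMPLES = {
    "display": [(0.9, 0.1), (0.8, 0.2)],  # possessions shown openly
    "conceal": [(0.1, 0.9), (0.2, 0.8)],  # possessions stored out of sight
}

def centroid(points):
    """Average the coded examples for one label into a single prototype."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

CENTROIDS = {label: centroid(pts) for label, pts in CODED_EXAMPLES.items()}

def classify(embedding):
    """Assign a new, un-coded photo to the nearest ethnographic code."""
    return min(CENTROIDS, key=lambda label: math.dist(embedding, CENTROIDS[label]))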
You can read a more in-depth article on this case study here:
How We Do It at IKEA: Combining Deep Ethnography with Cutting-Edge AI
Unilever: Combining Stated and Observed Data
Unilever’s food division in Benelux used AI-moderated interviews to gather feedback from 600 consumers during actual meal preparation. The interviews took place in people’s kitchens while they were cooking, with the AI moderator adapting questions based on what respondents said about specific recipes and ingredients.
The breakthrough came from combining transcript analysis with visual analysis of the video footage. This multimodal approach revealed significant gaps between stated and observed behaviour. For certain brands, the number of verbal mentions was substantially lower than the number of times the brand appeared in the video footage. The platform, provided by Conveo, helped the team quantify these differences and extract insights that would have been missed with traditional qualitative analysis.
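The core comparison is simple to express, even if the detection itself is not. The sketch below uses invented transcript text and frame detections to show the stated-versus-observed gap the team quantified: how often a brand is said aloud versus how often it appears on screen.

```python
# Invented example data: one respondent's transcript and the brand
# detections a vision model might return across their video frames.
transcript = "I usually just grab whatever stock cube is in the cupboard, maybe Knorr."
frame_detections = ["Knorr", "Knorr", "Hellmann's", "Knorr", "Hellmann's"]

def mention_gap(brand: str) -> dict:
    """Compare verbal mentions with on-screen appearances for one brand."""
    said = transcript.lower().count(brand.lower())
    seen = frame_detections.count(brand)
    return {"brand": brand, "said": said, "seen": seen, "gap": seen - said}
```

In this toy data, Hellmann's is never mentioned but appears twice on screen: a gap that transcript-only analysis would miss entirely.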
Watch a presentation that explains Conveo’s multi-modal AI and includes a reference to the Unilever case study here:
Seeing What Words Miss: How Multimodal AI Transforms Video Research
Kia Europe: Transforming Customer Experience Analytics
Kia Europe deployed an AI-driven platform from Caplena to analyse customer experience feedback across multiple channels. The system processes nearly 250,000 open-ended verbatim comments from NPS surveys, online reviews, and other feedback mechanisms.
The impact on both speed and insight quality was significant. Previously, answering questions from the executive team took weeks using legacy tools. The AI platform reduced that to 24 hours. More importantly, the fine-grained topic modelling revealed specific drivers of customer behaviour that weren’t visible in aggregate data.
The team reported a six-point improvement in NPS within a relatively short timeframe, attributed to the ability to respond more effectively to customer feedback across different touchpoints.
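The mechanics of verbatim analysis at this scale reduce to two steps: tag each comment with one or more fine-grained topics, then aggregate. The sketch below illustrates the shape of that pipeline with invented topics and keyword rules; Caplena's actual platform uses trained models rather than keyword lists, but the aggregation step is the same idea.

```python
from collections import Counter

# Hypothetical fine-grained topics and their trigger keywords.
TOPIC_KEYWORDS = {
    "charging": ["charge", "charging", "charger"],
    "dealer experience": ["dealer", "showroom", "salesperson"],
    "infotainment": ["screen", "carplay", "navigation"],
}

def tag(verbatim: str) -> list:
    """Return every topic whose keywords appear in the comment."""
    text = verbatim.lower()
    return [topic for topic, kws in TOPIC_KEYWORDS.items()
            if any(k in text for k in kws)]

verbatims = [
    "Charging at home is effortless",
    "The dealer was slow to respond to my emails",
    "The navigation screen froze twice before the dealer fixed it",
]

# Aggregate tags across all comments to see which topics drive volume.
topic_counts = Counter(t for v in verbatims for t in tag(v))
```

Note that one comment can carry several tags (the third verbatim hits both "infotainment" and "dealer experience"), which is what makes the fine-grained view richer than a single-label classification.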
More on that case study here:
Case Study: Putting Feedback to Work at Kia with Caplena
JBL Speakers: End-to-End AI Research
JBL uses an end-to-end research platform from Knit that integrates qualitative and quantitative data collection with AI-driven analysis and reporting. The approach allows the team to test campaign creative with hundreds of consumers and turn around results in three to four days.
The platform combines video-based open-ended questions with structured survey data, giving the team both numerical data and rich qualitative context. AI handles the synthesis of video clips, the analysis of responses, and the generation of PowerPoint reports that researchers can then refine. This allows the team to provide feedback to their agency quickly enough to influence campaign direction before launch.
Read more about this case study on the Knit website.
Novartis: Knowledge Management at Scale
Novartis deployed Market Logic Software’s DeepSights platform as a generative AI layer across all its research and data sources. The system, which they named Sherlock, connects primary research, secondary data, internal reports, CRM data, and external subscriptions into a single queryable interface.
The platform integrates with Microsoft Teams, allowing anyone across the organisation to ask questions and retrieve relevant data during meetings. With approximately 10,000 weekly users, adoption has been strong. The business impact extends beyond convenience: the company reported significant reductions in primary research spend by making existing knowledge more accessible and usable.
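A queryable knowledge layer of this kind boils down to retrieval: score stored study summaries against a question and surface the best-matching source. The sketch below is illustrative only, not DeepSights itself; the titles and summaries are invented, and production systems use semantic (embedding-based) retrieval rather than the word-overlap scoring shown here.

```python
import string

# Hypothetical indexed research sources: title -> summary text.
KNOWLEDGE_BASE = {
    "2023 brand tracker": "awareness rose among younger consumers this year",
    "Patient journey study": "patients report long delays between referral and diagnosis",
    "Pricing research wave 2": "willingness to pay falls sharply above the premium tier",
}

def _words(text: str) -> set:
    """Lowercase, strip punctuation, and split into a set of words."""
    cleaned = text.lower().translate(str.maketrans("", "", string.punctuation))
    return set(cleaned.split())

def best_source(question: str) -> str:
    """Return the title of the source whose summary best matches the question."""
    q = _words(question)
    return max(KNOWLEDGE_BASE,
               key=lambda title: len(q & _words(KNOWLEDGE_BASE[title])))
```

The value described in the case study comes less from the retrieval itself than from where it surfaces: inside Teams, during meetings, for thousands of non-researchers.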
Read the full case study on the Market Logic website.
Reckitt: Strategic AI Deployment
Reckitt’s approach, led by Elaine Rodrigo, Global Head of Data Analytics & Insights, represents perhaps the most comprehensive AI implementation in enterprise research. The team focused on innovation as the primary use case and established clear frameworks for deciding whether to build, buy, or partner for specific capabilities.
The decision framework centres on data ownership and internal expertise. Where Reckitt owns substantial internal data, they build solutions using tools like OpenAI’s APIs. Where data or expertise sits externally, they partner or buy. This pragmatic approach has delivered both internally built tools for insights generation and concept creation, and partnerships for synthetic screening and AI-moderated research.
The results are striking: 70% efficiency gains in time and money spent, alongside double the quality scores for concepts entering their testing framework. The AI tools generate insights from historical data, automatically create concepts, screen them synthetically, and then feed higher-quality ideas into primary research with real humans.
Practical Considerations for AI Adoption
The case studies reveal several patterns.
First, general-purpose AI tools like ChatGPT have value for individual brainstorming and quick tasks, but purpose-built research platforms offer critical advantages for team-based work: repeatability, consistency, proper citation and sourcing, integration with existing research infrastructure, and appropriate governance controls.
Second, AI doesn’t replace traditional research methods. The most successful implementations blend AI with established approaches. Digital twins complement primary research. Computer vision scales insights from ethnography. AI moderators handle volume while researchers provide interpretation.
Third, organisational change matters more than technology. The BCG framework suggests that only 10% of AI value comes from algorithms, 20% from technology, and 70% from people and processes. Insights leaders who proactively own AI strategy, like Reckitt’s approach, position themselves at the centre of decision-making rather than on the receiving end of others’ technology choices.
The future of research isn’t purely AI or purely human. It’s a blend of human-to-human research, AI-assisted researchers working with human participants, human researchers querying AI systems, and autonomous AI processes generating insights continuously in the background.
The evidence from these leading organisations suggests AI is moving beyond pilot projects into genuine workflow transformation. The question for insights teams is no longer whether to adopt AI, but how to deploy it strategically in ways that enhance rather than replace research rigour.
Watch the presentation on which this article is based here:
