
The Double-Edged Sword: How AI is Simultaneously Saving and Threatening Market Research Data Quality
By Cint
I’ve spent over eight years fighting fraud in market research, and I can tell you with absolute certainty: we’ve never faced anything quite like the AI revolution we’re living through right now.
In a recent webinar with my colleague Marcus Carter, we debated the pros and cons of AI in our industry, and honestly, it felt a bit like arguing with myself. Because here’s the truth: AI is both our greatest weapon and our most formidable threat when it comes to data quality.
Let me explain what I mean.
This article covers part of the webinar “The Data Quality Truth: How AI is Both Saving and Threatening Market Research”. Rewatch the entire webinar here:
When AI is the Hero
Every day, I watch AI-powered systems analyse millions of data points across the Cint Exchange, Cint’s research marketplace, identifying fraud patterns that would take human teams weeks or months to detect. Trust Score, our proprietary machine learning solution, is constantly learning and evolving to predict fraud before it even enters our clients’ surveys. It’s analysing device metrics, respondent behaviour, regional trends, and countless other signals in real time.
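To make that concrete, here is a deliberately simplified sketch of signal-based risk scoring. To be clear, this is not Trust Score: the signals, weights, and numbers below are invented for illustration, and a production system learns its weighting from labelled data rather than hard-coding it.

```python
# Hypothetical illustration only: a toy risk score built from the kinds of
# signals described above. All feature names and weights are invented.
from dataclasses import dataclass

@dataclass
class RespondentSignals:
    device_reuse_count: int       # panellist accounts sharing this device fingerprint
    median_answer_seconds: float  # median time spent per question
    geo_mismatch: bool            # IP geolocation disagrees with claimed region
    open_end_similarity: float    # 0-1 similarity of open-ends to known templates

def risk_score(s: RespondentSignals) -> float:
    """Return a 0-1 fraud risk estimate from hand-weighted signals (illustrative)."""
    score = min(s.device_reuse_count, 10) / 10 * 0.35         # shared devices: strong signal
    score += 0.25 if s.median_answer_seconds < 2.0 else 0.0   # speeding through questions
    score += 0.20 if s.geo_mismatch else 0.0                  # location inconsistency
    score += s.open_end_similarity * 0.20                     # templated open-ended answers
    return min(score, 1.0)

# A fast, device-sharing respondent with templated answers scores high.
suspect = RespondentSignals(device_reuse_count=6, median_answer_seconds=1.4,
                            geo_mismatch=True, open_end_similarity=0.8)
print(f"risk = {risk_score(suspect):.2f}")  # -> risk = 0.82
```

The value of the real thing is precisely that it is not hand-coded: a trained model re-weights these signals continuously as fraud patterns shift.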
The speed is breathtaking and the scale is unprecedented. Frankly, it’s one of the reasons we’re able to keep pace with the high-velocity attacks that have become routine in our industry.
AI’s benefits extend far beyond fraud detection. Synthetic data – one of the most debated topics in our field right now – has genuine potential to solve real problems. When researchers struggle to fill quotas in niche markets or hard-to-reach populations, synthetic data can step in. When a study looking for CEOs in a specific industry becomes a fraud magnet (and trust me, such studies often do), a synthetic sample offers a safer alternative.
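As a toy illustration of that quota-filling idea, the sketch below samples synthetic answers from the empirical distribution of verified real responses in the same segment, and tags every synthetic record so it can never be confused with a real one. The segment, question, and sampler are all hypothetical; real synthetic-data products rely on far richer generative models.

```python
# Illustrative sketch only: topping up a hard-to-fill CEO quota with synthetic
# respondents sampled from verified real answers. Data and names are invented.
import random

real_ceo_answers = [  # verified responses to a single question, "q1"
    {"q1": "agree"}, {"q1": "agree"}, {"q1": "neutral"},
    {"q1": "disagree"}, {"q1": "agree"},
]

def synthesize(needed: int, source: list, seed: int = 42) -> list:
    """Sample synthetic answers from the empirical distribution of real ones,
    tagging each record so buyers can always tell real from synthetic."""
    rng = random.Random(seed)
    options = [r["q1"] for r in source]
    return [{"q1": rng.choice(options), "synthetic": True} for _ in range(needed)]

quota, collected = 20, len(real_ceo_answers)
fill = synthesize(quota - collected, real_ceo_answers)
print(f"{collected} real + {len(fill)} synthetic = {quota} records toward quota")
```

Notice the explicit synthetic flag: as I argue later, transparency about what is and isn’t a real respondent is non-negotiable.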
The operational efficiency gains are equally compelling. AI helps my team scale up our fraud-fighting operations by handling repetitive analytical tasks, freeing us to focus on what humans do best: having personalised conversations with concerned clients, strategising new approaches, and maintaining the empathetic relationships that no algorithm can replicate.
When AI is the Villain
Here’s where I put on my sceptic’s hat.
AI-assisted fraud has become a nightmare. Free generative AI tools have democratised sophisticated fraud techniques, putting powerful automation capabilities into the hands of anyone with an internet connection. Fraudsters are using AI to read surveys, answer open-ended questions, and even drive on-screen interactions in ways that increasingly mimic human behaviour.
The arms race is real, and it’s exhausting.
My concerns about AI go deeper than just its misuse by fraudsters. Synthetic data, for all its promise, is built on human responses – including responses from fraudsters we may not have caught yet. It’s only as good as its source data. And here’s the uncomfortable question: if we’re using AI to predict human behaviour based on past data, are we actually understanding humans anymore, or are we creating an increasingly detached simulation of reality?
Our industry exists to understand genuine human thoughts and behaviour. When we start fabricating data – however sophisticated – where does that leave us? Inaccurate insights damage businesses financially and reputationally. Can we really predict how humans will respond to current events based on historical patterns when human opinions are constantly evolving?
Then there’s the over-reliance risk. We’re already seeing this happen: companies deploy AI solutions and then gradually reduce their human oversight. They trust the algorithm implicitly. But AI still produces false positives. It still misses things. It hallucinates. It has biases. It still needs human intelligence to validate its decision-making, interpret outputs, catch edge cases, and maintain ethical guardrails.
The Environmental Elephant in the Room
Let’s talk about something that doesn’t get enough attention in data quality discussions: the environmental cost of AI. The data centres required to train and support these systems consume massive amounts of power and require extensive cooling. As someone who fights fraud every day, I genuinely believe AI is worth it for identifying and preventing attacks that would otherwise take weeks to find manually. That said, I care deeply about preserving our environment, and AI’s full impact on it is still being understood. So we need to hold ourselves accountable and actively work to measure and reduce AI’s environmental footprint.
Finding the Balance
So where does this leave us? After years in the trenches of trust and safety, here’s what I believe:
AI is a tool, not a replacement. At Cint, we live by the mantra “deploy your humans”. Our AI systems are incredibly powerful first lines of defence, but they’re exactly that: first lines. We maintain continuous human oversight to ensure our systems catch real fraudsters without creating roadblocks for genuine respondents or exhibiting unfair biases.
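Here’s a minimal sketch of what “deploy your humans” can look like in practice, with invented thresholds: the AI score routes traffic, but ambiguous cases land in a human review queue instead of being auto-decided.

```python
# Hypothetical triage logic: AI is the first line of defence, humans decide
# the ambiguous middle. Thresholds are invented for illustration.
def triage(risk: float) -> str:
    """Route a respondent based on an AI fraud-risk score in [0, 1]."""
    if risk >= 0.9:
        return "block"         # only near-certain fraud is blocked automatically
    if risk >= 0.5:
        return "human_review"  # ambiguous cases get human eyes, not an auto-decision
    return "pass"              # genuine respondents flow through without friction

for risk in (0.95, 0.62, 0.12):
    print(f"{risk:.2f} -> {triage(risk)}")
```

Moving those thresholds trades missed fraud against friction for genuine respondents, and that trade-off is precisely where human judgement belongs.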
Synthetic data needs guardrails. If we’re going to use synthetic data to complement studies, we need transparency with research buyers, clear consent from panellists whose data trains these systems, and honest conversations about the limitations and appropriate use cases.
The cost-benefit analysis is ongoing. Yes, AI requires substantial investment, not just financial but also environmental. We need teams of humans to build, maintain, and oversee these systems. The ROI calculation isn’t as simple as “AI replaces humans, therefore it’s cheaper”.
Human connection remains irreplaceable. I cannot send an AI agent to discuss quality concerns with a worried client. Technology can’t replicate the understanding that comes from years of experience, the intuition that flags something unusual, or the strategic thinking required to stay ahead of evolving threats.
Looking Forward
The AI revolution in market research is still being written. We’re all figuring this out together, and that’s exactly why conversations like these matter. The jury may still be out on whether AI is ultimately an angel or a demon, but my experience tells me it’s neither. It’s a powerful tool that requires wisdom, caution, and constant vigilance to wield effectively.
What I know for certain is this: the companies that succeed in maintaining data quality won’t be the ones who blindly embrace AI or stubbornly resist it. They’ll be the ones who thoughtfully integrate AI capabilities while preserving the human expertise, ethical oversight, and collaborative relationships that have always been the foundation of good research.
Because at the end of the day, market research is about understanding people. And while AI can help us do that more efficiently and at a greater scale, it can never replace the human understanding at the heart of what we do.