
5 Practical Ways to Manage AI Anxiety in Research
By Insight Platforms
It’s easy to get freaked out about artificial intelligence.
Most of us in the insights industry wonder if we’ll have a meaningful job in future… But it’s also easy to go down the existential risk rabbit hole.
This post summarises a webinar presentation we gave at the July 2025 Demo Days event.
You can watch that webinar replay here:
How To Stop Freaking Out About AI
Or just read the rest of this article to get the main points.
Why We’re All a Bit Freaked Out
The headlines alone are enough to cause concern. Companies like Fiverr have sent alarming messages to staff about AI coming for their jobs. Numerous startups are actively working to automate white-collar work, and recent statistics show a reduction in entry-level graduate jobs, partially attributed to AI adoption.
Beyond job concerns, there are broader societal implications. As Mo Gawdat (former Google X executive) discusses on Tom Bilyeu’s Impact Theory podcast, the pace of technological change is accelerating dramatically. What seemed impossible a few years ago is becoming routine, and AI development is increasingly entwined with geopolitical tensions between superpowers.
For those wanting to dive deeper, websites like AI 2027 offer thought-provoking explorations of what might happen if AI is allowed to program ever more sophisticated AI systems—a concept known as recursive self-improvement. The risks of misaligned AI, illustrated in Nick Bostrom’s book “Superintelligence” with the “Paperclip Maximizer” thought experiment, highlight how AI with the wrong objectives could lead to unintended consequences.
AI’s Impact on Research and Insights
Within the insights industry specifically, several concerning trends have emerged:
- Data integrity challenges: AI makes it easier than ever to generate fake data, making existing issues with survey fraud and low-quality responses even worse.
- Synthetic content creation: As demonstrated by tools like Google’s Veo 3 model, AI can generate realistic video testimonials that appear authentic but are 100% fabricated. This poses serious questions about the trustworthiness of qualitative research.
- Data pollution: AI-generated content is increasingly difficult to distinguish from human-created content, creating a cycle where new models are trained on AI outputs, further blurring the line between authentic and synthetic data.
- Overwhelming output volume: The ability to generate 10,000-word research reports, synthetic data sets, and presentations in seconds creates information overload.
- Tool proliferation: The rapid expansion of AI-based research tools creates confusion with new terminology and competing vendor claims.
- Budget pressures: Finance departments may expect cost savings from AI implementation before the technology has proven its value, putting pressure on insights teams.
A Historical Perspective on Technology Anxiety
Our suspicion toward technology and automation has deep historical roots. From the Luddites of the 19th century who sabotaged looms threatening their weaving jobs, to films like Metropolis and The Terminator, our collective psyche harbours deep-seated concerns about technology.
Even ancient Greek mythology reflects this anxiety. The god Hephaestus created Talos, a bronze automaton designed to protect Crete by hurling boulders at invaders. However, Talos lacked intuition and contextual understanding, failing to distinguish allies from enemies—much like today’s AI struggles with nuance.
Yet, history also shows that fear isn’t always warranted. Despite decades of predictions about automation eliminating jobs, humans remain essential in most fields. Geoffrey Hinton, the “godfather of modern AI”, once suggested we should stop training radiologists because AI would replace them. Nearly a decade later, radiologists remain crucial: they are augmented by AI, not replaced by it.
Throughout history, technology has typically augmented human capabilities rather than replaced them entirely, for example:
- Despite sophisticated autopilot systems, airline pilots remain essential for decision-making and handling unpredictable conditions.
- Laser eye surgery technology enhanced ophthalmologists’ capabilities rather than replacing them, and actually increased demand for their expertise.
The Case of Self-Driving Cars: Why AI Isn’t Taking Over Everything
Elon Musk’s unfulfilled 2019 prediction that Tesla would have one million fully autonomous robo-taxis on the road by 2020 illustrates three key limitations that apply to AI in research:
- The accountability problem: When inevitable mistakes occur, someone must be responsible. In research, practitioners and clients remain accountable for ensuring validity—AI can’t and won’t shoulder this responsibility.
- The last mile problem: While AI handles predictable scenarios well, it struggles with real-world complexity. Similarly, in research, AI might collect and summarise data effectively, but it struggles with the “last mile” of insight—interpreting findings and framing recommendations with emotional subtlety and cultural nuance.
- The trolley problem: Ethical dilemmas have no single correct answer, and different researchers reasonably reach different conclusions from the same data. AI struggles to determine what truly matters without human judgement.
The Positive Impact of AI on Research
Despite all this doom and gloom, AI is also making research better in significant ways. Tools like Claude (used by Audience Strategies for their State of Electronic Music report) can dramatically reduce time spent on repetitive tasks. AI also helps researchers find lateral connections across different data sources and formats. Conversational AI enables more human-like interactions at scale, improving on traditional online surveys. And rather than static reports, AI enables interactive, “living” artefacts that stakeholders can interrogate and explore.
Five Ways to Manage AI Anxiety
1. Get Hands-On with AI Tools
Experimentation is key to understanding AI’s capabilities and limitations. Try a range of tools beyond ChatGPT, including:
- Research tools like Perplexity for deep research.
- Media generation tools like Suno (music), Google Veo 3 (video), and HeyGen (avatars).
- Visualization tools like Napkin, Gamma, and Beautiful.ai.
- Voice tools like ElevenLabs for synthesising speech.
- Productivity enhancers like Flow for dictation.
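If you want to go a step beyond chat interfaces, a few lines of code are enough to experiment with a model programmatically. Here is a minimal sketch of a typical research micro-task (critiquing a draft survey question). It assumes the OpenAI Python SDK and the gpt-4o-mini model purely as examples—this article doesn’t prescribe any particular provider, so swap in whichever you use:

```python
# A minimal sketch: ask a model to critique a draft survey question.
# Assumes the OpenAI Python SDK (`pip install openai`) and an
# OPENAI_API_KEY environment variable; any provider works equally well.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

question = "Don't you agree that our new app is easier to use?"

prompt = (
    "Critique this survey question for bias, leading language and "
    f"clarity, then suggest a neutral rewrite:\n\n{question}"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name; substitute your own
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```

Running small experiments like this on your own materials is the quickest way to see where these tools genuinely help and where they fall short.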
2. Stay Informed
Dedicate time (around 5% of your working week) to learning and growing. Follow thought leaders like Ray Poynter, Yogesh Chavda, Ethan Mollick, and Stuart Winter-Tear on LinkedIn.
Consider reading books like “The Coming Wave”, “Co-Intelligence”, “The Exponential Age”, or “The Singularity is Nearer” to understand different perspectives.
Here is a brief summary of some of these books:
AI & The Future of Life, The Universe & Everything: A Reading List
3. Explore Your “Jagged Frontier”
The concept of a “jagged frontier”, introduced by Ethan Mollick, represents the boundary between what AI can and cannot do. Some research tasks fall inside this frontier and can be fully automated (like creating themes from transcripts), while others remain outside it and require human supervision or complete human involvement (like prioritising insights that will resonate culturally).
This frontier is subjective and depends on your business, proposition, and individual skills. It’s influenced by factors like budget, time constraints, the need for nuance, and brief complexity. Since this boundary constantly shifts as AI improves, regularly reassessing your personal frontier is essential.
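One lightweight way to make that reassessment a habit is to keep your own task audit in a simple, reviewable form. The tasks and judgements in this sketch are illustrative assumptions, not recommendations from the webinar:

```python
# A minimal sketch of a personal "jagged frontier" audit.
# The tasks and judgements below are illustrative placeholders;
# replace them with your own and revisit them as models improve.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    inside_frontier: bool  # can AI handle this reliably today?
    notes: str

tasks = [
    Task("Theming open-ended transcript responses", True,
         "Automatable, but spot-check a sample by hand"),
    Task("Drafting a topic guide from a brief", True,
         "Good first draft; needs researcher review"),
    Task("Prioritising insights for cultural resonance", False,
         "Requires human judgement and client context"),
]

for t in tasks:
    status = "inside" if t.inside_frontier else "outside"
    print(f"{t.name}: {status} the frontier ({t.notes})")
```

Revisiting a list like this every few months makes the shifting boundary visible, rather than something you only notice when a tool surprises you.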
4. Build New Skills
Researchers can extend their existing capabilities into five emerging areas:
- Product management: Leading the transition from professional services to technology products
- Data asset management: Understanding how data connects, ensuring ethics, compliance, and privacy
- Business translation: Translating insights into business impact and building credible ideas for innovation
- Coaching and enablement: Supporting others as research becomes more democratised
- Storytelling and curation: Finding meaningful narratives within mountains of AI-generated content
Learn more in this on-demand presentation:
Next Generation Roles: 5 Career Paths for Researchers in the Gen AI Era
5. Make a Hero of the Human
No matter how powerful AI becomes, it will benefit from an accountable human expert who can curate strategically impactful outputs. The more human guardians of quality and validity are valued, the more secure our roles will be.
Research requires discernment and judgement—the ability to read between lines, prioritise what matters most, and expertly curate a strategically valuable interpretation of data. These human capabilities remain essential even as AI tools evolve.
Different research briefs require different approaches. For straightforward, lower-risk projects (like simple naming projects or basic UX studies), AI might play a more central role. For nuanced, complex work involving emotional drivers or cultural subtleties, humans will still need to take the lead in interpretation.
Finding Your Balance
The narrative around AI swings between extremes: either we’re headed for total disruption or this is all just boosterish hype from people with shiny stuff to sell.
Whichever it is, AI’s role in research is going to keep growing. But let’s remember that the human elements (intuition, empathy, cultural understanding, strategic judgement, commercial context) should remain valuable assets in the future.
Want to explore this topic further? Watch the full webinar replay of “How to Stop Freaking Out About AI” featuring Mike Stevens and Tom Woodnutt for more insights, examples, and practical strategies for navigating the AI revolution in market research.