
What are AI Agents, Agent Skills and MCP Connectors? A Guide for Researchers

By Insight Platforms

  • article
  • AI
  • Artificial Intelligence
  • AI Agents
  • AI Moderated Interviews
  • Conversational AI
  • AI Personas


AI agents and agentic workflows are the bingo buzzwords of research for 2026.

Like any new category, there’s a lot of confusion, a lot of people misusing terminology, and a lot of people with vested interests passing things off as agents when they’re really just re-badged automations.

So what’s it all about? This article is an attempt to pin down some of the definitions as simply and accessibly as we can.

What’s the Difference Between a Chatbot, an AI Workflow, and an Agent?

Think of this as a three-layered cake.

Chatbots are the base layer. Then you move up to workflows, and at the top you’ve got agents. I’m using Claude to bring this to life because he’s my friend. But you could just as easily pick DeepSeek, Gemini or ChatGPT. The same principles apply.

Level 1: Claude as a Chatbot

You open claude.ai, type a prompt, and get a response.

That’s genuinely useful, but it has limits. The AI can draft text, answer questions, summarise things. What it can’t do is go off and do things on your behalf.

You could ask it to draft a research debrief, summarise a set of verbatims, or write a discussion guide. It does the thinking and the writing, and then it stops. You copy the output and carry on with your work. The AI has no access to your files, your systems, or anything else happening outside that conversation window.

Level 2: Claude in a Workflow

You’re working in Microsoft Word and you have Claude for Word set up as an integration.

You highlight a section of your report, ask Claude to rewrite it, and it appears directly in the document. Or you’re using a platform like Zapier, where Claude is one step in a chain: a qual transcript comes in, Claude summarises and codes it, and the result gets written to a spreadsheet. Claude is doing the same kind of thinking as before, but it’s been given a specific context and connected to tools that let it interact with the systems around it. It’s not just generating text.
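To make the contrast with agents concrete, a workflow in this sense is just a fixed chain of steps wired up in advance. This is a toy sketch, not a real integration; the functions are stand-ins for steps you might configure in a tool like Zapier:

```python
# A workflow is a fixed chain: each step is wired in advance.
# These functions are stand-ins for real integrations (e.g. Zapier steps).

def summarise_transcript(transcript):
    # Stand-in for an AI summarisation step.
    return f"summary of {len(transcript.split())} words"

def write_to_sheet(summary):
    # Stand-in for a spreadsheet step; returns the row it would write.
    return {"row": summary}

def workflow(transcript):
    # The order of steps never changes; there is no planning.
    return write_to_sheet(summarise_transcript(transcript))

print(workflow("respondent said the checkout flow was confusing"))
# {'row': 'summary of 7 words'}
```

The key point: the chain is hard-coded. Swap the input and the same steps run in the same order, every time.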

Level 3: Claude as an Agent

Claude has a goal, a set of tools it can call on, and the ability to reason through how to achieve that goal step by step.

You could ask it to review all the open-ended responses from a survey, identify the three most prominent themes, draft a summary slide for each theme, and save them into a PowerPoint file. It plans the steps, uses the right tools at each stage, checks its own work, and delivers the finished output.

This webinar with Rival Technologies brings this to life with a simple definition of an agent (a reasoning LLM + context + tools – see below) and some good examples. Watch the replay here to learn more:

The Three Things Every Agent Needs

A Reasoning LLM

Not all AI models are designed to plan and reason through complex problems. Some are better at following instructions; others are built to think step by step before acting. Agents need the latter.

Context

This is everything the agent “knows” beyond its base training: your data, your research objectives, your methodology preferences, your document repository. The more relevant context you give it, the more useful it becomes. A general-purpose model given specific research context becomes something much more focused. This is a big part of what separates purpose-built research agents from general AI tools.

Tools

Tools are what allow the agent to take action: running statistical tests, generating a chart, querying a database, searching the web, sending output to PowerPoint. Without tools, the agent can think but can’t do anything with its thinking.
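The three ingredients above can be sketched in miniature as a loop: a reasoning step produces a plan, and tools carry it out. Everything here is a hard-coded stand-in — the `plan` list is where a real agent would call a reasoning LLM, and the tools are stubs:

```python
# A toy sketch of the agent pattern: reasoning LLM + context + tools.

def summarise(verbatims):
    # Stand-in for a real analysis tool.
    return f"{len(verbatims)} responses summarised"

def save_slide(text):
    # Stand-in for a PowerPoint export tool.
    return f"slide saved: {text}"

TOOLS = {"summarise": summarise, "save_slide": save_slide}

def run_agent(goal, context):
    """Plan steps towards the goal, call tools, carry results forward."""
    plan = ["summarise", "save_slide"]  # a reasoning LLM would produce this plan
    result = context["verbatims"]
    for step in plan:
        result = TOOLS[step](result)
    return result

output = run_agent(
    "summarise survey open-ends into a slide",
    {"verbatims": ["too expensive", "love it", "hard to use"]},
)
print(output)  # slide saved: 3 responses summarised
```

Unlike the workflow, a real agent would generate the plan itself, choose tools at each stage, and revise the plan if a step failed.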

What Can Agents Actually Do in a Research Context?

Quite a lot already, with a lot more to come.

Automating repetitive and time-consuming work

Processing, analysing, visualising and reporting data can be time-consuming. Historically, there have been lots of mechanical steps in the process that eat up researcher time without adding much value.

Displayr’s Research Agent tries to address this. You upload your survey data, answer a few questions about your research goals, and it works through the whole analysis pipeline: running crosstabs, handling weighting, writing up key findings, generating charts, and producing a branded deck or dashboard.

Similarly, Forsta has built a set of purpose-built agents into its Research HX platform. There’s a metadata agent that cleans and standardises data for analysis, a reporting agent that turns data into PowerPoint charts, and a research agent that picks out key findings from completed reports. Each one handles a specific job.

Going deeper on expert tasks

quantilope’s quinn provides support at each stage of the research process: generating survey inputs for complex methods like MaxDiff or implicit association tests, analysing open-ended responses to surface themes and verbatims, and producing charts and dashboard summaries on demand. The underlying philosophy is that quinn is trained on validated research methodologies, so the assistance it provides is grounded in research best practice rather than just general AI capability.

Synthesising insights from multiple sources

Voxpopme Compass sits across an organisation’s entire research repository so findings are searchable in plain language.

If a stakeholder asks a question, Compass searches studies, transcripts, and reports to find evidence and build a synthesised output from it. If it finds that existing research doesn’t cover the gap, it can also recommend and initiate new studies to fill it.

Market Logic Software’s DeepSights Persona Agents tackle a related problem but deliver the output in a different way.

Instead of surfacing findings as a report or summary, they synthesise a wide variety of existing insights and research sources (segmentation data, research reports, social media) into AI-powered personas you can have a conversation with. Product and marketing teams can ask questions directly, test concepts, and explore reactions in natural language.

This can now be taken a step further: an AI interviewer can hold in-depth conversations with personas for rapid concept validation and iteration before going into live research.

Automating video analysis and reporting

Rival Technologies’ Insight Reels agent automates the analysis and reporting of video-based qualitative feedback.

It runs thematic analysis on a collection of responses (text or video), identifies the most relevant clips, stitches them together, and produces a narrated video highlight reel that summarises the key findings. The researcher can adjust the output, but no longer needs to watch hours of footage.

Generating concepts grounded in insights

Another agent from Market Logic Software (DeepSights Innovate) takes research findings and turns them into tangible concepts that can then be tested.

It’s actually a team of specialist agents, designed to work with human reviewers at each stage. A Desk Researcher Agent pulls together evidence from an insights knowledge base. Whitespace Identification Agents analyse unmet needs and market gaps to surface opportunity areas. Ideation Agents transform whitespace themes into early-stage ideas. Then Concept Creation Agents take the surviving ideas and structure them into test-ready formats.

Zappi’s Innovation System covers similar ground but comes at it from a different angle: its AI Concept Creation Agents are fuelled primarily by the brand’s own accumulated concept testing data, so ideas are informed by what’s been tested before, how consumers responded, and what’s worked in their category.

There’s also an AI Concept Optimizer that takes a concept through iterative refinement based on live consumer feedback, and an AI Quick Reports layer that handles analysis and reporting. The underlying philosophy is similar to Market Logic’s: keep the human in the loop to guide and converge on the strongest ideas, while letting the agents handle the divergent, time-consuming work of generating and structuring options.

What Are Agent Skills?

Agent skills are a way of packaging up detailed instructions for an agent and saving them as a reusable file, typically a Markdown document (.md). Instead of writing a long, complex prompt every time, you create a skill file once and the agent can call it up whenever it recognises it needs those instructions.

Think of it like a standard operating procedure. If you’ve worked out exactly how you want qualitative data coded, or how a particular type of research debrief should be structured, you can write that out as a skill. The agent applies it consistently, across projects, without you having to repeat yourself.
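As a concrete illustration, here’s what a hypothetical skill file for qualitative coding might look like. The exact layout and headings are an invented example, not a prescribed format; the point is that it’s a plain Markdown document the agent can load on demand:

```markdown
# Skill: Code Open-Ended Responses

## When to use
When the user asks for qualitative coding of survey verbatims.

## Instructions
1. Read every response before assigning any codes.
2. Use a maximum of 8 top-level themes.
3. Tag each verbatim with one primary and up to two secondary codes.
4. Flag any response mentioning safety or legal issues for human review.

## Output format
A table with columns: Response ID, Verbatim, Primary Code, Secondary Codes.
```

Once saved, the same instructions apply consistently every time the agent recognises a coding task, instead of being re-typed into each prompt.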

There are two reasons for using agent skills for research:

Context window limitations

Every AI model has a limit on how much it can hold in active memory during a task. For complex research work, where you might be providing a lot of data and instructions, that window can fill up. Agent skills are stored separately and referenced on demand, rather than being stuffed into the conversation thread, so the agent can handle more complex and specialised work.

Sharing what works

If someone on your team has figured out a really effective way to prompt an agent for a particular research task, skills let you share that with colleagues. Over time, you build up a library of reusable research templates that make the whole team more consistent and productive.

Many teams are using skills as back-office enhancements for their own workflows. Some research tools are making skills publicly available, like OriginalVoices:


What Are MCP Connectors?

MCP stands for Model Context Protocol. It’s a standardised way for different software tools to talk with AI models and transfer data between them.

It means that software tools don’t need to build their own separate integrations each time; they can connect with any MCP-compatible agent. Build one MCP connector, and any agent that speaks MCP can use it.

For the research industry, it means that a survey platform, a data repository, a qualitative analysis tool, or an insight management system can each build one connector, and then any agent operating in that ecosystem can draw on those tools as part of a workflow.

For example: an agent that pulls data from a survey platform, runs it through an analysis tool, cross-references findings from a past study in your research repository, and outputs a summary into your reporting system, all without anyone manually wiring those steps together.
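Under the hood, MCP messages are JSON-RPC. Roughly, when an agent wants a connector to do something, it sends a `tools/call` request like the one below. The tool name and arguments here (`run_crosstab`, `survey_id`, `banner`) are invented for illustration — each platform defines its own:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "run_crosstab",
    "arguments": { "survey_id": "S-123", "banner": "age_group" }
  }
}
```

Because the message shape is standardised, any MCP-compatible agent can discover a connector’s tools and call them, without a bespoke integration for each pairing.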

We’re in the early stages of this. But MCP connectors are one of the infrastructure layers we’re starting to track in the Insight Platforms directory, because the platforms that build them now will be better positioned to plug into agentic workflows as they become more common.

Here are a handful of the MCP connectors now available for research tools:


What Does This Mean for Research Teams?

A few things worth taking away.

Agents work better when they’re built for a specific job.

The temptation is to find one agent that can do everything. The more effective approach is to have agents designed for specific tasks that can be combined where needed.

The researcher stays in the loop.

Most research agents are positioned as working alongside humans, with outputs that are editable, reviewable, and traceable. Agents are handling the heavy lifting, but the human is still making the judgement calls.

Hallucinations are still a thing.

AI models can still get things wrong, produce plausible-sounding but inaccurate outputs, or miss nuance. That’s true for research agents too. Build in review steps, assume there will be errors, and don’t let anything go to a client or stakeholder without a human having checked it.

There are accessible ways to experiment, even without a dedicated research agent.

It’s less technical than you might think to get hands-on. Tools like MindPal, n8n, and Zapier let you start building simple agent-style workflows without writing code, and YouTube has plenty of tutorials. Playing around is a good way to get comfortable. Obviously, don’t do this with live, sensitive client data.

Where to Explore Further

We’re building out new sections of the Insight Platforms directory to cover this space, including AI agents, agent skills, and MCP connectors. You can submit a free product listing here for your solution.


Author

Scroll to Top