I recently completed some work for a regulated utility supplier whose customer base - for various reasons - could only be interviewed face-to-face.
This is not the norm for me these days. In fact, it's not the norm in most mature markets, where online research is now the default.
It made me appreciate how far we've come over the last 20 years; but it also highlighted how far we still have to travel. A look at the evolution of online research shows where progress has been made and highlights those areas where we can expect much more innovation over the next 5 years.
In my first job, we used a lot of research - mostly done face to face and over the phone.
The typical project process looked something like this:
Most projects took 2-3 months to plan and execute. Fieldwork was costly; preparation was painstaking; and there was a high 'value hurdle' to research. Only big questions justified the cost and effort of gathering data to answer them.
Online Research 1.0 (2000-2015)
Then, in the late nineties, this thing called the information superhighway came along. Some enterprising types began building online survey panels.
This revolutionised the research world, and the new workflow looked like this:
Oh, wait ... it didn't change.
Research agencies gradually started doing online surveys.
But everything else stayed exactly the same.
OK, there were some cost savings, initially helping to flatter agency margins - but then quickly bagged by clients once procurement teams got wise.
And maybe fieldwork periods shortened by a day or two.
But essentially we just replicated the survey designs, workflows and outputs that had been in place for the previous thirty years.
Online Research 2.0 (2016-2020)
Then along came the agile disruptors, and this happened:
Today, the front half of the process can be condensed from weeks to hours, and unit costs have fallen at a similarly dramatic rate.
It's easy to take this for granted, but it actually needed lots of related innovations.
Software tools that weren't ugly and didn't need a PhD to operate.
Question libraries to guide non-specialists, standardise data and avoid reinventing the wheel each time.
Automated analysis and visualisation with real-time updating.
And so on.
And the innovation hasn't stopped. Here's a partial market map of automated research platforms - all of which can now turn round completed surveys within a few hours.
This is all great, but what about the green half of the bar?
All these agile platforms help us to generate data quicker; but how do they help us use that data better or more efficiently?
Online Research 3.0 (2020 onwards)
I don't know what we'll call the next evolution, but much of it will be about condensing the green bit: enabling more real-time decision support; making customer feedback an integral part of different workflows; shortening the time lag between insight and action.
It will be about integration, automation and embedding.
Integration: much tighter connection between software platforms with the use of APIs and marketplace infrastructure.
Automation: getting it all done by machines rather than people.
Embedding: putting insight tools into the workflows of different departments.
This is all starting to sound like bullshit marketing jargon. Sorry.
Maybe some examples will help.
These are all current applications where data about people (users / consumers / customers) is more tightly connected to action and workflows than most survey research.
This is kind of old hat these days.
At its most basic, A/B testing involves giving different user groups a different experience on a website or in an app. Two versions of the same ad; a red button and a blue one; with / without a call to action.
At any one time, large digital properties will be running dozens of live experiments with different parts of their site.
Many A/B testing solutions are now part of fully automated conversion optimisation platforms: as soon as one variant shows a statistically robust advantage (more clicks, longer dwell time, higher purchase rate), the inferior version is automatically switched off and everyone is shown the higher performing version.
This is the equivalent of condensing the green part of the process - data is 'used' almost instantaneously once enough of it has been generated.
A/B testing is a great example of insight that is embedded (in the digital teams that need it) and automated.
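A minimal sketch of that automated switch-off logic, using a two-proportion z-test at roughly 95% confidence. The variant names, counts and threshold here are illustrative assumptions, not any particular platform's implementation:

```python
import math

def z_two_proportions(conv_a, n_a, conv_b, n_b):
    """Two-proportion z statistic for conversion rates.
    Positive when variant B outperforms variant A."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

def pick_winner(conv_a, n_a, conv_b, n_b, z_crit=1.96):
    """Keep both variants live until one shows a statistically
    robust advantage (~95% confidence), then declare a winner."""
    z = z_two_proportions(conv_a, n_a, conv_b, n_b)
    if z > z_crit:
        return "B"   # switch all traffic to B
    if z < -z_crit:
        return "A"   # switch all traffic to A
    return None      # not significant yet: keep experimenting

# Example: B converts 120/1000 visitors vs A's 90/1000
print(pick_winner(90, 1000, 120, 1000))  # prints "B"
```

Production platforms layer sequential-testing corrections and traffic allocation on top of this, but the core loop is the same: collect, test, switch.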
Session replay and product analytics tools are another example: their core is screen recording, session replay and analytics to help improve the user experience.
Again, these tools capture data and insight on users in real time; and they are fully embedded into product teams' workflows.
Another example of workflow embedded insight platforms is user testing.
Tools like Loop11, Lookback and Userzoom can capture video and audio of users as they test a website or app. In the old world, a designer would brief a UX researcher to do this work, wait for their report and then decide what changes to make.
Now it's possible for designers to get near-immediate feedback from users, draw their own conclusions about what works or doesn't, and make changes much more quickly.
Customer Experience feedback and Voice-of-Customer programmes have evolved out of all recognition in the last ten years.
Satisfaction surveys were once owned by the research department; reported quarterly or annually; and occasionally used in management incentive plans.
How times change.
Today, most larger organisations have dedicated CX teams; feedback is captured continuously across every touchpoint; verbatim answers are analysed for topic and sentiment on-the-fly; and red flag cases are routed to service recovery teams in real time.
And this doesn't even need huge budgets and industrial strength platforms like Medallia, Qualtrics or Clarabridge. NPS tools like Wootric, Hello Customer and Promoter.io can do this on a smaller scale across digital channels; and analytics tools like Chattermill can be used to make sense of feedback for frontline teams.
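To make the red-flag routing idea concrete, here is a toy sketch. A keyword lexicon stands in for the real topic-and-sentiment models these platforms use, and the NPS threshold and queue names are invented for illustration:

```python
# Toy stand-ins for a trained sentiment model: real platforms use NLP,
# not keyword lists. Lexicons and thresholds are illustrative only.
NEGATIVE = {"terrible", "cancel", "broken", "refund", "rude"}
POSITIVE = {"great", "love", "helpful", "easy"}

def score_sentiment(verbatim: str) -> int:
    """Positive words minus negative words; below zero = negative."""
    words = verbatim.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def route(feedback: dict) -> str:
    """Detractor score plus a negative verbatim triggers
    real-time escalation to a service recovery team."""
    if feedback["nps"] <= 6 and score_sentiment(feedback["verbatim"]) < 0:
        return "service-recovery-queue"
    return "analytics-store"

# Hypothetical record: a detractor with a negative verbatim
print(route({"nps": 3, "verbatim": "terrible service i want a refund"}))
```

The point is the architecture, not the model: every piece of feedback is scored and routed the moment it arrives, rather than waiting for a quarterly report.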
AI and machine learning will transform the research process over the next few years.
Platforms like Attest have brought agility to the front end of projects; their next evolution will be to change the back end, to close the gap between insight and action by embedding, integrating and automating research outputs.
There are some useful lessons in the digital analytics, CX and UX research platforms above; and here are five specific ways I think online research platforms will change to become more actionable over the next few years.
1. More control
Democratising research throughout an organisation is great in principle - but can also be pretty risky. Untrained users can ask terrible questions; some use leading language to get the answer they want; and others pick and choose the data points to mark their own homework. More widespread use of templates, e-learning modules and tiered user management will be essential.
2. More prescriptive outputs
Research platforms will increasingly use natural language generation to tell users what their data means; they will use better visualisations to show them the answer; and they will incorporate more predictive analytics to guide them to the right action.
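At its simplest, "telling users what their data means" is template-based natural language generation. The sketch below is a hedged illustration; the metric, wave comparison and 2% flatness threshold are all invented assumptions, and real platforms use far richer models:

```python
def describe_result(metric: str, current: float, previous: float) -> str:
    """Turn a wave-on-wave metric comparison into a plain-English
    takeaway. Thresholds are arbitrary illustrations."""
    change = current - previous
    pct = change / previous * 100
    if abs(pct) < 2:  # treat small movements as noise
        return f"{metric} is broadly flat ({current:.1f} vs {previous:.1f})."
    direction = "up" if change > 0 else "down"
    return (f"{metric} is {direction} {abs(pct):.0f}% on the previous wave "
            f"({previous:.1f} -> {current:.1f}).")

print(describe_result("Brand awareness", 46.0, 40.0))
# -> Brand awareness is up 15% on the previous wave (40.0 -> 46.0).
```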
3. Deeper integration
APIs are a force multiplier for software and data - but even today's most agile research platforms still largely sit on their own. Combining research with other data sources and outputs will be key to making it more actionable.
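As a sketch of what that integration might look like at its most basic, the snippet below packages a finished survey result as a JSON payload that a webhook or partner API could push straight into a BI tool or CRM. The field names and `source` convention are hypothetical, not any real platform's schema:

```python
import json

def to_webhook_payload(project_id: str, results: dict) -> str:
    """Serialise survey results into a JSON payload suitable for
    posting to a downstream system's webhook endpoint."""
    payload = {
        "project_id": project_id,
        "source": "survey-platform",  # hypothetical identifier
        "metrics": results,
    }
    return json.dumps(payload, sort_keys=True)

print(to_webhook_payload("acme-q3-tracker", {"nps": 41, "awareness": 0.46}))
```

Trivial as it looks, agreeing on payloads like this is most of the work of integration: once survey metrics flow through the same pipes as sales or support data, they can sit in the same dashboards and trigger the same alerts.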
4. Better UX
If you think this is shallow, just go and look at some of the truly awful research interfaces out there. Easier to use, more attractive UX drives user adoption and engagement. Simple, but still surprisingly rare in the research world.
5. More meta analysis
The research world has a good pedigree of using norms - metrics from questions asked the same way in multiple projects - but a patchy record of building cross-project knowledge capital. With the wider adoption of data lakes and AI-based analytics, survey research will be able to add more value in aggregate than the sum of individual projects.