Consumer research is integral to a successful brand – but it is a tricky business when human decision-making is so complex and depends on so many changeable factors. As a result, the data produced by consumer research will also be complex. But what does complex data look like?
A few key characteristics distinguish complex data. Big data, for example, is typically complex data: large volumes of data pose a key challenge and require significant resources to process and sort. Other characteristics include:
- The structure and type of data – if there are multiple datasets from different sources (as there should be across consumer research channels), there is likely to be a lot of data that refers to the same thing but in different formats. Reconciling all sets and formats takes some skill, so that no data is duplicated and the analysis remains meaningful.
- The language in which the data is recorded – English, Chinese, Russian and so on. Datasets will be recorded in the language of the consumers, and as such will need translation before analysis. Each language has its own idioms, colloquialisms and sentiments that are hard to capture fully in another language.
- The speed at which the subject of the data is evolving – this determines how much data you’ll need to collect and analyse on a continuous basis to understand the subject itself. Over that time, new data sources will emerge and need to be accounted for in the consumer research project.
Analysing and simplifying complex consumer data across all of these data types can be tricky even for great researchers who have run many research projects and have a working understanding of which analytics techniques best simplify consumer data in a given scenario. This is because every dataset we come across is unique and will need specific methods applied to draw out the insights needed.
Methods for Analysing Complex Data
Complex data can look intimidating in its rawest form: masses of unorganised datasets teeming with insights to uncover, if only the right analytical methods are applied. So, what are some of the most popular and impactful analysis methods that insight teams have at their disposal right now?
Popular Quantitative Analysis Methods
There are many analysis methods, tools and techniques to sift through, but an experienced insight team will already have a solid understanding of which ones are good for which purpose. Whether it’s to measure differences between datasets, establish patterns and relationships between variables, or to test hypotheses, some of the most popular quantitative analysis techniques range from foundational statistical methods to more complex analyses.
Descriptive statistics we typically learn in school, such as the mean, median and mode, are a useful starting point for identifying basic patterns and foundational insights that researchers can then build on. Standard deviation indicates how dispersed a range of numbers is, while skewness – another core part of a researcher’s analytical arsenal – suggests how symmetrical or lopsided a distribution is. Analysis of Variance, or ANOVA, then compares the means of three or more groups – not just two.
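The foundational statistics above can be sketched in a few lines of Python with SciPy. This is a minimal illustration on invented brand-rating data – the segments, ratings and variable names are all hypothetical:

```python
# Foundational statistics on synthetic brand-rating data (all numbers invented).
from statistics import mean, median, mode
from scipy import stats

ratings_a = [7, 8, 6, 9, 7, 8, 7]   # hypothetical ratings from segment A
ratings_b = [5, 6, 5, 7, 6, 5, 6]   # segment B
ratings_c = [8, 9, 9, 7, 8, 9, 8]   # segment C

print(mean(ratings_a), median(ratings_a), mode(ratings_a))
print(stats.tstd(ratings_a))   # sample standard deviation: how dispersed the ratings are
print(stats.skew(ratings_a))   # skewness: how symmetrical the distribution is

# One-way ANOVA: do the three segments share the same mean rating?
f_stat, p_value = stats.f_oneway(ratings_a, ratings_b, ratings_c)
print(f_stat, p_value)   # a small p-value suggests the segment means differ
```

A low p-value from the ANOVA would prompt follow-up work to find out *which* segments differ, which is where the more advanced techniques below come in.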
Correlation analysis is a step up from the foundational statistical tests and is great for identifying relationships in numerical datasets and determining how strong those relationships are. There are three correlation coefficients that insight experts can choose between, depending on the datasets and what they indicate: Pearson’s coefficient, Spearman’s rank coefficient and Kendall’s tau.
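All three coefficients are available in SciPy. A minimal sketch, using invented spend-versus-visits figures purely for illustration:

```python
# The three correlation coefficients named above, via SciPy.
# The spend/visits figures are invented purely for illustration.
from scipy import stats

monthly_spend = [120, 150, 90, 200, 170, 60, 140]   # hypothetical monthly spend
store_visits  = [4, 5, 3, 7, 6, 2, 5]               # hypothetical store visits

pearson_r, p1 = stats.pearsonr(monthly_spend, store_visits)    # linear relationship
spearman_r, p2 = stats.spearmanr(monthly_spend, store_visits)  # monotonic, rank-based
kendall_t, p3 = stats.kendalltau(monthly_spend, store_visits)  # rank concordance

print(pearson_r, spearman_r, kendall_t)   # values near +1 mean a strong positive link
```

Pearson assumes a roughly linear relationship, while Spearman and Kendall work on ranks and so tolerate outliers and non-linear (but monotonic) patterns better – which is why the choice depends on the datasets.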
Regression analysis takes correlation analysis a few steps further, modelling how one variable changes as others change – though cause and effect should still be inferred with care. Assessing the strength of the current relationship between variables also helps to anticipate future relationships between them. Variations include linear regression, multiple linear regression and nonlinear regression, the last of which is commonly used for more complex datasets.
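A simple linear regression can be fitted with NumPy alone. The ad-spend and sales figures below are hypothetical illustration data, not real results:

```python
# Minimal linear-regression sketch: fitting hypothetical ad spend (x)
# against hypothetical sales (y) with NumPy's least-squares polyfit.
import numpy as np

ad_spend = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # e.g. thousands per month
sales    = np.array([2.1, 3.9, 6.2, 7.8, 10.1])  # e.g. thousands in revenue

slope, intercept = np.polyfit(ad_spend, sales, deg=1)

# Use the fitted line to anticipate sales at a spend level not yet observed.
predicted = slope * 6.0 + intercept
print(round(slope, 2), round(intercept, 2), round(predicted, 1))
```

For multiple linear or nonlinear regression, libraries such as statsmodels or scikit-learn extend the same idea to several predictors and curved relationships.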
Factor analysis is another analytical tool, useful specifically for condensing variables and uncovering clusters of responses. It can uncover trends and can be used in conjunction with segmentation studies to simplify larger studies and bring the overall results and patterns into sharper focus.
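The idea of condensing many survey questions down to a few underlying factors can be sketched with scikit-learn. Here six synthetic questions are deliberately generated from two hidden drivers – the "price sensitivity" and "brand loyalty" labels are invented for illustration:

```python
# Factor analysis sketch: six hypothetical survey questions that are
# really driven by two latent factors (labels invented for illustration).
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n = 200
price_sensitivity = rng.normal(size=n)   # latent factor 1
brand_loyalty = rng.normal(size=n)       # latent factor 2

# Each question loads mainly on one factor, plus a little noise.
answers = np.column_stack([
    price_sensitivity + 0.1 * rng.normal(size=n),
    price_sensitivity + 0.1 * rng.normal(size=n),
    price_sensitivity + 0.1 * rng.normal(size=n),
    brand_loyalty + 0.1 * rng.normal(size=n),
    brand_loyalty + 0.1 * rng.normal(size=n),
    brand_loyalty + 0.1 * rng.normal(size=n),
])

fa = FactorAnalysis(n_components=2, random_state=0)
scores = fa.fit_transform(answers)
print(scores.shape)          # 200 respondents condensed to 2 factor scores
print(fa.components_.shape)  # loadings of each of the 6 questions on each factor
```

The loadings show which questions group together, which is exactly the "clusters of responses" that can then feed a segmentation study.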
Every statistical analysis method has its own assumptions and limitations, so researchers should be aware of these and mitigate them to gain the clearest picture. Which method to use will depend on the type of data and the hypotheses the research project is testing.
Impactful Qualitative Analysis Methods
Qualitative data is mostly made up of words – open-text responses in which respondents pour their hearts out – but it can also be images, audio and video, which are arguably more impactful than traditional text data. Images and videos hold background context and unconscious tells that add detail to the text and audio data, allowing researchers to generate well-rounded, fully informed insights when the right analytic techniques are applied. A few impactful methods are worth noting:
- Content analysis – this is the analysis of text data through the process of coding and identifying themes and patterns.
- Narrative analysis – used to understand how research participants construct stories and narratives from their personal experiences. There’s a dual layer of interpretation here – the respondent’s interpretation and the researcher’s interpretation.
- Discourse analysis – researchers analyse what a respondent says in text and video to understand underlying meanings, generating insights through interpretations based on the details of the research subject and relevant contextual knowledge.
- Thematic analysis – similar to content analysis, but focused solely on identifying common and standout themes across various datasets.
- Sentiment analysis – researchers look to identify the prevalent emotions and sentiments expressed across datasets. This should be used in conjunction with other analyses to see the full picture and understand how emotions can impact insights.
- Semiotic analysis – use this to analyse and understand how respondents communicate through their signs and sign systems to identify all interpretations possible. This can be used on text, audio, image and video data.
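To make sentiment analysis concrete, here is a deliberately toy sketch of keyword-based sentiment coding. The lexicon and responses are invented, and a real project would use a trained model or human coders rather than a word list:

```python
# Toy keyword-based sentiment coder (illustration only; real projects
# would use a proper sentiment model or trained human coders).
POSITIVE = {"love", "great", "excellent", "happy"}
NEGATIVE = {"hate", "poor", "disappointing", "slow"}

def sentiment(text: str) -> str:
    """Label a response positive/negative/neutral by counting lexicon hits."""
    words = set(text.lower().replace(",", "").replace(".", "").split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

responses = [
    "I love the new packaging, it looks great.",
    "Delivery was slow and the support was disappointing.",
    "It arrived on Tuesday.",
]
print([sentiment(r) for r in responses])  # → ['positive', 'negative', 'neutral']
```

Even this crude version shows why sentiment analysis should sit alongside other methods: the neutral third response still carries information that only content or thematic analysis would surface.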
From both qualitative and quantitative analyses, insight professionals can understand most of what the data communicates. Using segmentation analysis alongside these can unlock additional insights that can then be used to target the specific consumer groups stakeholders want to focus on. Similarly, applying these insights to consumer behaviour models will enable brands to enhance their customer experiences at every turn, with greater insight into the consumer decision-making process and journey. Any type of consumer behaviour analysis aims to understand how people make purchase decisions, answering questions such as how consumers feel about the brand and its competitors, how they behave when faced with certain commercial situations, and how effective marketing, advertising and other business strategies are at influencing consumer behaviour.
There are tools that can help automate the data analysis process to save time – but there is nothing like the human mind for interpreting datasets and generating quality insights. Any tool will be able to find significant patterns and trends in single datasets, and perhaps even across multiple datasets; the trouble is that they don’t just find the patterns that matter, they find all of them. It takes a human to pick out the ones that matter and to make sure the rest are not passed on to stakeholders who might not understand why they don’t matter.
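A small sketch shows why automated pattern-finding needs that human filter: scanning many pairs of pure-noise variables still surfaces "statistically significant" correlations by chance alone (the multiple-comparisons problem). All data here is random by construction:

```python
# Why automated pattern detection needs a human filter: 20 unrelated
# random variables still yield "significant" correlations by chance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
data = rng.normal(size=(100, 20))   # 20 columns of pure noise, no real patterns

flagged = []
for i in range(20):
    for j in range(i + 1, 20):
        r, p = stats.pearsonr(data[:, i], data[:, j])
        if p < 0.05:                # the usual significance cut-off
            flagged.append((i, j, round(r, 2)))

# 190 pairs tested at p < 0.05 => roughly 9-10 false positives expected.
print(len(flagged), "spurious 'significant' pairs found in pure noise")
```

Every flagged pair here is meaningless by construction, yet a naive automated report would surface them all – exactly the patterns a human analyst should catch and discard.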
What Does it Mean to Simplify Complex Data?
Once the data has been analysed and insights generated, it’s up to the insight team to simplify the insights at the reporting stage. Simplifying complex data means communicating it in a way that the intended stakeholders – and anyone else who stands to benefit from the insights – will understand immediately.
Natural language generation technology can make it easier and quicker for organisations to read and understand large amounts of data by converting complex information into simple, summarised text – but these technologies can be expensive for insight teams already on a restricted budget. Fortunately, there are a few ways insight experts can simplify data themselves.
The first is to convert complex data into comprehensible stories. People respond well to stories – whether written down, communicated aloud, or delivered in another interactive and engaging way. Most people unfamiliar with raw or even sorted data will struggle to understand it, so insight teams need to simplify the data and present it in a way that aids universal comprehension, and there are a few ways this can happen.
One is sorting complex data into categories based on consumer or customer personas. Consumer data will reflect a segment of the brand’s target audience or current customer base (or both), so this data can feed into each consumer persona, better informing it and creating a fuller picture from the data and insights collected. Stakeholders can then take these personas, use them to better understand each segment, and make decisions based on the insights each persona contains.
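One common way to sort consumers into persona-like groups is clustering. A minimal sketch with k-means in scikit-learn, where the basket-value and frequency figures are invented:

```python
# Sorting hypothetical consumer data into persona-like groups with k-means.
import numpy as np
from sklearn.cluster import KMeans

# Each row: [average basket value, purchases per month] (invented figures)
customers = np.array([
    [12, 1], [15, 2], [10, 1],      # occasional, low-spend shoppers
    [80, 8], [95, 9], [85, 10],     # frequent, high-spend shoppers
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(customers)
print(kmeans.labels_)   # each customer assigned to one of two persona groups
```

The cluster labels alone are not personas – the insight team still has to interpret each group and enrich it with qualitative detail – but they give the personas a data-driven starting point.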
Data visualisation is a more standard way of communicating data and insights – the visualisations stakeholders are used to are charts, tables and graphs. But there can be more creative and interactive ways of visualising data, such as dashboards or augmented-reality simulated models.
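The familiar chart form can be produced in a few lines with Matplotlib. A sketch using invented sentiment tallies:

```python
# A basic data-visualisation sketch: a bar chart of invented sentiment
# counts, the kind of simple view stakeholders tend to expect.
import matplotlib
matplotlib.use("Agg")            # render off-screen (no display needed)
import matplotlib.pyplot as plt

sentiments = ["Positive", "Neutral", "Negative"]
counts = [120, 45, 35]           # hypothetical survey tallies

fig, ax = plt.subplots()
ax.bar(sentiments, counts)
ax.set_title("Sentiment of open-text responses")
ax.set_ylabel("Number of responses")
fig.savefig("sentiment.png")     # embed in a report, or serve via a dashboard
```

The same handful of lines scales from a static report image to the interactive dashboards mentioned above when paired with tools that render Matplotlib (or equivalent) figures live.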