Learn surveys with Meena Das - Part 3: Survey Analysis
P.S: We encourage our readers to go through the entire blog post, but should your reading time today be even more limited than you expected, please scroll down to the summary section at the bottom. Every minute you have for reading should empower your perspective for the minutes you are not reading.
Welcome to the third and final part of the mini-series Learning Surveys with Meena Das.
With the first two parts focused on designing a survey and the post-design steps of launching and monitoring it, this part takes us into analyzing the results of those surveys.
In my experience, a PowerPoint-based analysis report is really helpful, especially when you have a team waiting to see the results of the survey. So, how do you start?
First and foremost, you need to begin with some key metrics:
1. Response rate: how many people responded to your survey.
2. Bounced email count: the number of emails where your survey could not be delivered. This reflects the quality of your contact data in general.
3. Email read rate: how many people read your email but did not click through to the survey.
4. Count of incompletes: how many opened the survey link but did not complete the whole survey.
You could look at more metrics if your survey tool offers them. I generally look at these to understand the quality of my results at a high level. We will cover what these metrics mean and how they are calculated in another post. Once you have these metrics, you can move on to perform one or both of the analyses described below.
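As a rough sketch of how these high-level metrics fit together, here is a small Python example. All of the counts (invited, bounced, read, started, completed) are hypothetical figures you would pull from your survey tool's export, and the function name is my own:

```python
def survey_metrics(invited, bounced, read, started, completed):
    """Compute high-level quality metrics from raw survey counts (illustrative)."""
    delivered = invited - bounced  # emails that actually reached an inbox
    return {
        "response_rate": completed / delivered,        # completed surveys per delivered email
        "bounce_rate": bounced / invited,              # undeliverable emails (data-quality signal)
        "read_no_click": (read - started) / delivered, # read the email but never opened the survey
        "incomplete_count": started - completed,       # opened the link but did not finish
    }

# Hypothetical numbers for illustration only:
metrics = survey_metrics(invited=500, bounced=25, read=300, started=150, completed=100)
print(metrics)
```

With these illustrative numbers, 100 completes out of 475 delivered emails gives a response rate of about 21%, and 50 people started but did not finish.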
Now, before we look at the types of analysis, remember that every question should be analyzed on the basis of the total completed responses. Let’s see an example. Assume there are three questions in a survey - A, B and C. The survey received 100 completed responses in total. However, A received 50 responses, B received 20 responses and C received 70 responses. Let’s carry this example through the two types of analysis below.
Types of analysis:
Question-wise analysis: This means that every question has its own slide, where the analysis depends on the question type. For a checkbox question, simple counts can be enough. However, for a radio button, drop-down or any Likert-scale type question, you can calculate a weighted average to compare the options of the question against each other. From the example above, when you analyze A, B and C individually, you will not look at the counts for each option of A, B or C on the basis of the whole 100 responses. Instead, you will analyze A on the basis of its 50 responses, BUT you would make a note that 50 responses are blank. Similarly, for B, you would look at the option counts on the basis of its 20 responses BUT you will note that 80 responses are missing, and so on.
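To make the weighted-average idea concrete, here is a small sketch for a Likert-scale question, assuming a 5-point scale. The response counts for question A are invented for illustration, using the 50-of-100 split from the example above:

```python
# Assumed 5-point Likert scale mapping options to weights.
scale = {"Strongly disagree": 1, "Disagree": 2, "Neutral": 3,
         "Agree": 4, "Strongly agree": 5}

# Hypothetical counts for question A: 50 answers out of 100 completed surveys.
counts_a = {"Strongly disagree": 5, "Disagree": 10, "Neutral": 10,
            "Agree": 15, "Strongly agree": 10}

answered = sum(counts_a.values())  # 50 people answered A
weighted_avg = sum(scale[opt] * n for opt, n in counts_a.items()) / answered
blanks = 100 - answered            # note these blanks in your report

print(f"Weighted average: {weighted_avg:.2f} (blank responses: {blanks})")
# -> Weighted average: 3.30 (blank responses: 50)
```

Dividing by the 50 actual answers (not all 100 completes) is exactly the point made above: the average reflects only the people who answered, while the 50 blanks are reported alongside it.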
Cross-tab analysis: Once you have completed a question-wise analysis, it’s time to go to the next level. Comparing the results of two or more questions against each other is called cross-tab analysis. When should you use it? When you have a hypothesis (i.e. a reason you suspect lies behind the way the responses to a question turned out), you can perform such cross-tabs and dig deeper.
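A minimal cross-tab can be built by counting pairs of answers. The sketch below pairs a hypothetical demographic question ("role") with a hypothetical satisfaction question, using only respondents who answered both; the data is invented for illustration:

```python
from collections import Counter

# Hypothetical respondents who answered both questions.
responses = [
    {"role": "Staff", "satisfied": "Yes"},
    {"role": "Staff", "satisfied": "No"},
    {"role": "Board", "satisfied": "Yes"},
    {"role": "Board", "satisfied": "Yes"},
    {"role": "Staff", "satisfied": "Yes"},
]

# Count each (role, satisfied) combination - the cells of the cross-tab.
crosstab = Counter((r["role"], r["satisfied"]) for r in responses)

for (role, satisfied), n in sorted(crosstab.items()):
    print(f"{role:5} x {satisfied:3}: {n}")
```

Each cell then lets you test a hypothesis such as "board members are more satisfied than staff" by comparing proportions within each role rather than raw counts.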
Summary:
1. Determine the format of your report.
2. Make a note of all critical metrics like response rate, incompletes, email read rate and bounced emails in your report.
3. Perform question-wise analysis, cross-tabs, or both.
4. Ensure that the objective of your survey is answered after the analysis.