Leapsome offers various views and filtering options to help you make the most of your survey results. You can find out more about filter options and views here.
Please note that analytics are based on the 'snapshot' taken when a response was given. Any changes made to a user's profile settings during the survey round will therefore not be reflected in the analytics. If you forgot to update your users' demographic data before running the survey, our Customer Support Team can refresh the demographic data to its current state once you have updated all data under Users & Teams.
Below, we have collected some best practices for interpreting survey results in the analytics section:
Insights in the list view
In our survey module, you will see results showing the average on the 10-point scale and the change since the last round. If you are using our best-practice questions / question categories, you will also see a benchmark. The score displayed is always the most recent statistically valid score from your survey for that topic or question.
You can deep-dive into any category or question by clicking on the topic name or the corresponding question. This provides additional insights, including any comments linked to the topic or question. The timeline graph will also show previous scores, but only if they have passed the thresholds for both statistical significance and anonymity. The grey line within the timeline graph represents the average score over time. If you hover over any of the dots, you will see the exact scores for the specific question or category.
The bar chart provides more granular detail by displaying the distribution of scores. It uses green for scores of 9 or 10, grey for scores of 7 or 8, and red for 6 and below. The same logic applies to questions on a 1-5 scale, where green represents a score of 4 or 5, grey a 3, and red a 1 or 2.
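The bucketing logic above can be sketched as a small helper. The function names are illustrative, and the boundaries follow the text rather than Leapsome's internal implementation:

```python
def bucket_10pt(score: int) -> str:
    """Classify a 10-point-scale answer into the bar chart's color bucket."""
    if score >= 9:
        return "green"   # scores of 9 or 10
    if score >= 7:
        return "grey"    # scores of 7 or 8
    return "red"         # scores of 6 and below

def bucket_5pt(score: int) -> str:
    """Classify a 5-point-scale answer into the bar chart's color bucket."""
    if score >= 4:
        return "green"   # 4 or 5
    if score == 3:
        return "grey"
    return "red"         # 1 or 2
```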
The 'NPS equivalent within demographic' overview shows NPS scores broken down by demographics; by default, results are grouped by teams. The net promoter score (NPS) equivalent is calculated by subtracting the share of detractors (answering 0-6) from the share of promoters (answering 9 or 10). We calculate the NPS equivalent for each question; please use the results in the right context.
This way, you can link changes in employees' answers to certain questions over time, e.g. to measures you took to improve them or to external events. If, for example, you launched a new initiative to allow for more flexible working hours, you can track how it influenced the evolution of the survey results.
Please note: List view takes the current team assignments into account when providing results.
In the List view, you will also see the trend of the question. With the default filter of the most recent round, the trend will show the comparison to the round this question was last used. If you add filters from two different rounds, you will see the trend between the two rounds. When you hover over the trend icon, you will see the rounds you compare. If you filter by a round where this question was not asked, no trend will be shown.
Insights in the Heatmap
Next to the timeline graph, the heatmap gives you great insight into your survey results and a helpful hint on which areas need the most attention. It shows segmented results in a visual way that is easy to understand and to make strategic decisions from.
We now use cross-tabulation analysis in the heatmap view, also known as contingency table analysis. It lets you choose the variables in rows and columns to uncover relationships between seemingly unrelated items. This analytical technique allows for a comprehensive examination of how two or more variables intersect and influence each other. By breaking down the survey responses based on various criteria, survey admins can pinpoint specific demographics, behaviors, or preferences that are interconnected, thus gaining deeper insights into employees' behavior.
You can, for example, compare engagement scores across different departments to highlight which teams might need more attention in terms of improving their work environment or communication. By comparing engagement scores between male and female employees, you can uncover any gender-related differences in perceptions of work satisfaction and involvement. You can also figure out if certain problems are happening more in some job levels or office locations.
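As a rough illustration of cross-tabulation, here is a minimal sketch that averages scores across two demographic attributes. The field names and data shape are assumptions for the example, not Leapsome's actual data model:

```python
from collections import defaultdict

def cross_tab(responses, row_key, col_key):
    """Average the 'score' field of each response, grouped by two attributes.

    `responses` is a list of dicts; `row_key` and `col_key` name the
    demographic attributes to cross (illustrative field names).
    """
    sums = defaultdict(float)
    counts = defaultdict(int)
    for r in responses:
        cell = (r[row_key], r[col_key])
        sums[cell] += r["score"]
        counts[cell] += 1
    return {cell: sums[cell] / counts[cell] for cell in sums}

# Example: engagement score crossed by department and gender (made-up data)
responses = [
    {"department": "Sales", "gender": "f", "score": 8},
    {"department": "Sales", "gender": "m", "score": 6},
    {"department": "Sales", "gender": "f", "score": 10},
    {"department": "Eng",   "gender": "m", "score": 7},
]
table = cross_tab(responses, "department", "gender")
# table[("Sales", "f")] -> 9.0
```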
Using cross-tabulation helps us understand things better by looking at information from different angles and finding patterns. It's like connecting the dots to see the bigger picture.
1.) Heatmap team segments reflect team memberships at the time the answer was submitted.
2.) If you have chosen to "Hide numbers and only show labels", either within the admin settings or the cycle-specific scale, this will be applied unless the aggregation of data does not permit it. The Leapsome platform will not arbitrarily generate an intermediary label between two of the designated labels, as this could lead to misinterpretation. For example, if the label "Good" corresponds to 3, the label "Great" corresponds to 4, and data aggregation on the heatmap results in a 3.5, the number will be displayed instead of a label to provide more clarity.
The heatmap gives you a quick overview of the scores by using a different color code for each score range, so you can see the general trend of the average scores per question. The color code logic for the 10-point and 5-point scales, respectively, is as follows:
10-Point Scale:
9-10: strong green
< 9: green
< 8: light green
< 7: light red
< 5.5: strong red
5-Point Scale:
4-5: strong green
< 4: green
< 3: light green
< 2.5: light red
< 1.25: strong red
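Read as half-open bands, these thresholds can be sketched as follows; exact boundary handling in Leapsome may differ:

```python
def heatmap_color_10pt(avg: float) -> str:
    """Map a 10-point-scale average onto the heatmap color bands above."""
    if avg >= 9:
        return "strong green"
    if avg >= 8:
        return "green"
    if avg >= 7:
        return "light green"
    if avg >= 5.5:
        return "light red"
    return "strong red"

def heatmap_color_5pt(avg: float) -> str:
    """Map a 5-point-scale average onto the heatmap color bands above."""
    if avg >= 4:
        return "strong green"
    if avg >= 3:
        return "green"
    if avg >= 2.5:
        return "light green"
    if avg >= 1.25:
        return "light red"
    return "strong red"
```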
You can find more on the segmentation and filtering options and logic you have here.
If a survey question uses different or custom scales in different rounds, the received scores might be normalized. To understand how scales are normalized in Leapsome analytics, please find more details here.
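For illustration only, one common normalization approach is linear rescaling from one range onto another; Leapsome's actual rules are described in the linked article and may differ:

```python
def rescale(score: float, old_min: float, old_max: float,
            new_min: float = 1.0, new_max: float = 10.0) -> float:
    """Linearly map a score from [old_min, old_max] onto [new_min, new_max].

    Illustrative only -- the actual normalization logic Leapsome applies
    is documented in the article linked above.
    """
    fraction = (score - old_min) / (old_max - old_min)
    return new_min + fraction * (new_max - new_min)

# A 4 on a 1-5 scale maps to 7.75 on a 1-10 scale:
# rescale(4, 1, 5) -> 7.75
```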
Understanding the Standard Deviation
The standard deviation is a statistical measure of how spread out numbers are. A low standard deviation indicates that the values tend to be close to the mean, i.e. everyone who answered the question is of a similar opinion. If you, for example, have a standard deviation of 4 and a mean of 6, this tells us about the variability and average response of the employees. The mean (in this case, 6) represents the average score given by the employees. The standard deviation (in this case, 4) means that the responses are relatively spread out, indicating a greater diversity of opinions among the employees.
If you want to dive deeper into data analysis, you can refer to the bell curve, also known as a normal distribution. In a normal distribution, approximately 68% of the data falls within one standard deviation of the mean. In this case, since the standard deviation is 4, we can expect that around 68% of the employee responses will fall within a range of 4 points above or below the mean (i.e., between 2 and 10 on a scale of 1 to 10).
Therefore, finding out which questions had a high standard deviation shows you which topics polarize your employees most. If a question has a very high standard deviation, meaning people are not unanimous about it, you could, for example, check whether the question received any written comments that help you understand why opinions diverge.
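Here is a quick sketch of how mean and standard deviation capture polarization, using Python's standard library; the sample scores are made up:

```python
from statistics import mean, pstdev

def polarization(scores):
    """Return (mean, population standard deviation) for a question's scores."""
    return mean(scores), pstdev(scores)

# Near-unanimous answers -> low standard deviation
unanimous = [6, 6, 7, 6, 5, 6]
# Polarized answers around the same mean -> high standard deviation
polarized = [1, 10, 2, 10, 1, 10, 9, 5]

m1, s1 = polarization(unanimous)   # mean 6, small spread
m2, s2 = polarization(polarized)   # mean 6, large spread
# Same average opinion, very different levels of agreement
```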
Understanding the eNPS
The NPS, or in Leapsome's case the Employee NPS, is an established metric for measuring satisfaction with a product, service, employer, etc. As with any other metric, you need an understanding of the basic characteristics of the metric (e.g. range) and, if necessary, corresponding comparison groups to know what a certain value means. The NPS cannot easily be converted into other metrics that are more familiar and easier to understand (e.g., %). Therefore, it's useful to know the basic properties:
- How it's calculated: an NPS score is calculated by subtracting the percentage of 'detractors' (people giving a rating of 0-6) from 'promoters' (people giving a rating of 9-10). People who give a rating of 7 or 8 are not counted, as they are considered neutral. For example, if 50% of your employees rate a 9 or 10 on a question, 10% rate a 7 or 8, and 40% rate 0-6, you would subtract 40% (the detractors) from 50% (the promoters), giving an NPS score of 10.
- Range: The NPS can lie between -100 (if every individual gave a negative answer) and +100 (if every individual gave a positive answer).
- Interpretation: Most companies consider a value >0 to be okay because this means that there are more promoters than critics/detractors. From 50 on, the score can already be considered a very good result.
- For an even more precise interpretation, compare the score with relevant benchmarks for your company size.
Please note: NPS can only be calculated when the questions are on a scale of 1-10.
Example: If you achieve an NPS score of 60 for the question "I would recommend [my company] as a great place to work", this means that there are many more promoters than critics/detractors among your employees and that your employees on average believe that your company is a very attractive employer.
(Please keep in mind that Leapsome calculates the NPS score for each question or topic automatically, but the score always needs to be interpreted in relation to the content of the question - the NPS score is not meaningful in every case.)
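The calculation described above can be sketched as follows (the function name is illustrative), reproducing the worked example of 50% promoters, 10% passives, and 40% detractors:

```python
def enps(ratings):
    """Compute the NPS equivalent from a list of 0-10 ratings."""
    n = len(ratings)
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    # 7s and 8s are passives: counted in n, but in neither group
    return round(100 * (promoters - detractors) / n)

# 5 promoters (50%), 1 passive (10%), 4 detractors (40%) out of 10 answers
ratings = [9] * 5 + [7] * 1 + [5] * 4
# enps(ratings) -> 10
```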
The comment section shows all the (anonymous) comments made on specific questions. If you want to learn more about what a specific user meant, you can start an anonymous conversation with them. Leapsome's machine-learning-based sentiment score automatically identifies whether a comment was positive, neutral, or negative, and a bubble chart shows you which topics got the most comments and what their sentiment was.
This way, you will easily understand how your employees feel about certain topics. If one topic received a lot of comments, it is a topic your employees feel strongly about. The graph then shows you whether most comments are positive, neutral, or negative; if a large bubble tends toward the negative side of the spectrum, this would be worth investigating.
You can also filter and segment comments if you want to deep-dive into what, for example, a specific team said.
Just a note: When users are part of multiple teams, the team attribution visible in comments is based on the first team the user has been assigned to in Users & Teams that meets the anonymity threshold.