Table of Contents
Who can see review analytics?
Filters
Demographics and Attribution
Aggregation Levels
Normalization of results
Working with the Heatmap
Working with the Box Grid
Working with the Distribution Chart
Working with the Timeline
Working with Radar Chart
Analytics for Multiple-Choice Answers
Who can see review analytics?
Please watch this video to learn how to interpret the visibility settings of a review.
Super admins have full access to review analytics across all review cycles, unless the visibility of the cycle has been restricted in the visibility settings (see the video above).
Admins will, by default, have access only to the results of the cycles for which they are designated cycle owners. If the option 'Admins can access all reviews' is enabled in the admin settings by the super admin, admins will be able to access all analytics.
- If they are not the owner of a cycle and the above super-admin setting is not enabled, they will not have access to cycle analytics. This is true even if the 'Visibility for admins' setting allows admin access, as this only impacts the cycle owners (see the video linked above to learn more about the 'Visibility for admins' settings).
- If the super-admin setting is active, the 'Visibility for admins' settings of a specific cycle or template will still apply to all admins. This means if you change the visibility to 'No visibility' for each assessment, admins will not have access to any analytics for that cycle.
Please note: Users who are admins and participants in a Review cycle may have different access to Review Analytics and the assessment itself.
- In the assessment, if a user is both Admin and participant (e.g. Reviewee or Manager), Leapsome applies whichever role gives them more access, according to the visibility settings for the specific cycle.
- In Review Analytics, a user's Admin role always trumps their participant role.
Managers will, by default, have access to the analytics of their direct reports (as well as indirect reports if cascading access in the super-admin settings is enabled). You can change what managers see in the 'Visibility for the Manager' settings of the review.
Employees will have access to their results depending on the 'Visibility for the Reviewee' settings.
Filters
By default, you will see an aggregate of all results across all review cycles. Any teams/groups/custom attributes you create for your organization will immediately be available to use as a filter under analytics. On most views, you can use (and combine) the following filters. These are especially useful during calibration meetings:
- Quick filters - Latest manager reviews: Filter to retrieve the latest review for each employee, regardless of the date or review cycle. It is useful if you have decentralized reviews and want to understand the most recent data on employee performance, rather than an aggregate of all past reviews.
- Teams: Filter to see the results of members of the filtered teams
- Managers: Filter to see the results of the direct and indirect reports of the filtered manager
- Employees: Filter to see the results of specific employees
- Perspectives: Filter to see the results of particular assessment types (e.g., only see the analytics for self or manager assessments)
- Contributor type: Filter to see the results of managers or individual contributors.
- Review cycles: Filter to see the results from specific cycles
- Timeframe: Filter to see analytics from any reviews completed in the specified timeframe
- Competencies and categories: Filter to focus on particular competencies, questions, or categories
- Custom attributes: Filter to see the results for employees that have a particular custom attribute (defined in the 'Employees' section by the super admin)
Please note: Filters remain in place if you switch from one view to another.
In the heatmap, the filtering logic works as follows: if you select two or more values of the same filter type (e.g. two teams), Leapsome applies OR logic, meaning it will show people who are either in Team 1 or in Team 2. If you combine different filter types, e.g. a team and an office location, it applies AND logic. This means you would only see answers from people who are, for example, in Team 1 and work in the London office.
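If it helps to think about this combination logic in code, here is a minimal, purely illustrative Python sketch; the employee records and field names (team, location) are made up and are not part of Leapsome:

```python
# Minimal sketch of the heatmap filter logic described above.
# The employee records and field names are hypothetical examples.
employees = [
    {"name": "Ada",   "team": "Team 1", "location": "London"},
    {"name": "Ben",   "team": "Team 2", "location": "Berlin"},
    {"name": "Chloe", "team": "Team 3", "location": "London"},
]

# Each filter type holds a set of selected values.
filters = {
    "team": {"Team 1", "Team 2"},   # OR within the same filter type
    "location": {"London"},         # AND across different filter types
}

def matches(employee, filters):
    # An employee matches if, for every filter type, their value is
    # one of the selected values (OR inside each filter, AND across filters).
    return all(employee[field] in selected for field, selected in filters.items())

print([e["name"] for e in employees if matches(e, filters)])
# -> ['Ada']  (in Team 1 OR Team 2, AND working in London)
```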
If you want to make it easy to find specific data in the Leapsome platform later, you can save the filter via the save button and share it with anyone in the organization.
Filter review form by contributors
In case a review cycle includes assessments from many individuals, filtering the review form by contributors enables you to focus on feedback from specific individuals and/or specific perspectives (manager, peers, etc.). If you want to focus on peer feedback, for example, and explicitly have only their results shown, you can simply deselect the manager and reviewee perspectives in the filter dropdown.
Demographics and Attribution
Attribution is based on the current team membership / demographic data at the time of submitting the review. For example, if a member of Team 1 starts the review and then moves to Team 2 before submitting it, their results will be attributed to Team 2.
Aggregation levels ('Group results by')
In the heatmap and timeline, scores can be aggregated at the following levels:
- Total score: This generates an unweighted average score across all questions
- Category: An unweighted average across all competencies in a category
- Question / Competency: As the most granular view, this allows you to compare competency scores one by one
- Perspective: Gain a holistic understanding of performance rather than just the manager's feedback. You can easily spot discrepancies between different perspectives.
Please note that (total) scores presented in analytics may differ slightly from the average calculated on the cycle dashboard when peer assessments are included. The cycle dashboard average score considers the average of peers' total ratings, whereas in analytics, you will see the average of all individual questions answered by peers.
This is most relevant if you have made peer scores optional in a review, in which case peers that answer all questions have a ‘higher impact’ on the total scoring in analytics.
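As a hedged illustration of this difference, consider a made-up example with two peers and optional questions; the numbers below are not taken from Leapsome itself:

```python
# Hypothetical example: peer answers with optional questions skipped.
peer_answers = {
    "Peer A": [4, 4],  # answered both questions
    "Peer B": [2],     # answered only one question
}

# Cycle dashboard: average of each peer's total rating.
peer_totals = [sum(answers) / len(answers) for answers in peer_answers.values()]
dashboard_average = sum(peer_totals) / len(peer_totals)   # (4.0 + 2.0) / 2 = 3.0

# Analytics: average over all individual peer answers.
all_answers = [score for answers in peer_answers.values() for score in answers]
analytics_average = sum(all_answers) / len(all_answers)   # (4 + 4 + 2) / 3 ≈ 3.33

print(dashboard_average, round(analytics_average, 2))
# Peer A answered more questions, so they have a higher impact on the analytics average.
```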
Normalization of results
If you have chosen the same scale throughout all review cycles, you will also see that scale in the review analytics. If you use different scales across reviews, you will still be able to see the review analytics. However, Leapsome will normalize all results to a 0-100 scale.
Here is a video on how different point scales are normalized to a 0-100 scale.
Generally, 0 and 100 correspond to the minimum and maximum of the scale used. Here is how different point scales get normalized in Leapsome's Analytics view:
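For illustration only (assuming the linear normalization formula shown in the 'Normalized scores' section below), a 5-point and a 4-point scale would map as follows:
- 5-point scale: 1 → 0, 2 → 25, 3 → 50, 4 → 75, 5 → 100
- 4-point scale: 1 → 0, 2 → 33.3, 3 → 66.7, 4 → 100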
Working with the heatmap
If you want to compare the competency, question, or aggregated category scores given by managers to different teams or employees, the heatmap is an ideal option.
Checking 'Show custom questions and goals' (only visible when grouping by category) will display any custom questions you have added to the cycle alongside the competency ratings. To see these question scores, you will need to group the results by category.
Segmenting the results changes the groupings you see on the left-hand side (the default is to show individuals and their corresponding scores). You can segment the results by:
- Team: Compare results across teams.
- Reviewee: This allows you to compare employees (this is the default).
- Manager of the reviewee: Compare the aggregated scores of the direct reports of managers. This is particularly useful to see if there are differences in scoring behavior between managers.
- Perspective: Compare the average scores from self, peer, direct report, and manager assessments.
- Gender: Compare differences between genders. This is particularly useful to see if there are structural differences in feedback given to different gender groups.
- Location: Compare the difference between locations. This is particularly useful to see if there are structural differences in feedback given in different locations.
- Competency: Compare different competencies across the whole company or within specific teams using the filter. This unlocks diverse use cases, such as finding out which competencies the company (or a subset of it, based on the filter selection) is doing better at and which competencies need more focus.
- Category: Compare different categories across the whole company or within specific teams using the filter. Similar to competencies, this helps you determine which areas in the company need more focus.
- Level: Get a sense of the distribution of scores across the different levels to notice discrepancies and further connect the competency framework with the review module.
- Tenure: A good-to-have feature to see how people are rated based on tenure.
- Custom attributes: Comparison depends on the attributes defined by the super admin in the 'Employees' section (read more here about custom attributes).
Many companies use the heatmap for calibration meetings: After running a review round, leadership and HR teams typically compare whether any teams / employees were evaluated exceptionally high or low to ensure a fair standard across all participants. To find the calibration option, head to 'Actions' > 'Calibrate'.
Normalized scores
If you use more than one scale within an assessment (e.g. a 4-point and a 5-point scale) or use Review analytics to compare multiple cycles that use different scales, the displayed scores will be normalized to a 0-100 scale. This video explains normalization in more detail. In this way, you can easily compare scores, even if different scales were used.
Normalized scores are calculated using the following equation: normalized score = (score - scale minimum) / (scale maximum - scale minimum) * 100
Example:
A participant receives a score of 4 out of 5 for one question. The normalized score would be calculated as: (4-1)/(5-1)*100 = 75 out of 100.
This normalization is then repeated for all scores and their respective scales within the assessment, to compute any total average scores.
Let's say the same participant in the same review cycle as above had a scale of 4 points for a second question, and the participant received 3. The normalized score for this second question would be calculated as:
(3-1)/(4-1)*100 = 66.67 out of 100
Now in the review analytics, if you look at the aggregate score for this user for this review cycle, you will see their total average score as (75 + 66.67)/2 = 70.8 out of 100.
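If you want to reproduce this calculation yourself, here is a short Python sketch of the formula and the worked example above; the function and variable names are purely illustrative and not part of Leapsome:

```python
def normalize(score, scale_min, scale_max):
    """Map a raw score onto a 0-100 scale, as described above."""
    return (score - scale_min) / (scale_max - scale_min) * 100

# The two answers from the example above, each with its own scale.
answers = [
    (4, 1, 5),  # 4 out of 5 -> 75.0
    (3, 1, 4),  # 3 out of 4 -> 66.67
]

normalized = [normalize(score, lo, hi) for score, lo, hi in answers]
total_average = sum(normalized) / len(normalized)

print([round(n, 2) for n in normalized], round(total_average, 1))
# [75.0, 66.67] 70.8
```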
Color logic
The color logic of the heatmap can be explained as follows:
- Lowest results = white
- Middle results = light green
- Highest results = dark green.
The color logic serves as a guide and remains flexible, i.e. it varies depending on the result range. Therefore, it is possible that the color white includes results up to 3 or higher in one heatmap and only results up to 2.5 or lower in another.
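Leapsome does not publish exact color thresholds, but as a rough mental model you can picture the colors being spread across the range of results currently displayed. The Python sketch below is purely illustrative and does not reflect the actual implementation:

```python
# Purely illustrative sketch of a relative color scale: the thresholds are
# derived from the range of results currently shown, not from fixed values.
def heatmap_color(score, displayed_scores):
    lo, hi = min(displayed_scores), max(displayed_scores)
    position = (score - lo) / (hi - lo) if hi > lo else 1.0
    if position < 1 / 3:
        return "white"        # lowest results
    elif position < 2 / 3:
        return "light green"  # middle results
    return "dark green"       # highest results

# The same score can get a different color depending on the visible range.
print(heatmap_color(3.0, [2.5, 3.0, 4.5]))  # white: 3.0 is near the bottom of this range
print(heatmap_color(3.0, [1.0, 3.0, 3.2]))  # dark green: 3.0 is near the top of this range
```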
Please note: In Heatmap analytics, as long as a review for a specific reviewee is still considered in progress, changes to the relevant-goals filter in the review settings will be applied to that reviewee, and existing ratings on (removed) goals will be discarded. Once the review for that reviewee is no longer considered in progress, the goal selection is fixed, and subsequent changes to the goal selection filter will no longer apply to them.
Exporting Heatmap analytics
You can export heatmap analytics by navigating to 'Analytics' > 'Reviews' > 'Heatmap' > 'Actions' > 'Export to Excel'. The export will include all of the user-specific attributes (including custom attributes); you will find these user-specific and custom attributes by navigating to 'Settings' > 'Employees' > 'Sections & Attributes'. The export respects the 'Read access' settings in the employee attribute settings. Therefore, if the person who is exporting doesn't have access to a specific attribute, it is excluded from the exported file.
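As a rough sketch of that behaviour, you can picture the export dropping every attribute column the exporting user has no read access to; the attribute names and permissions below are hypothetical:

```python
# Hypothetical sketch: exclude attribute columns the exporter cannot read.
rows = [
    {"Name": "Ada", "Team": "Engineering", "Salary band": "C2"},
    {"Name": "Ben", "Team": "Marketing",   "Salary band": "B1"},
]

# Attributes the exporting user is allowed to read (per the attribute settings).
readable_attributes = {"Name", "Team"}

export_rows = [
    {key: value for key, value in row.items() if key in readable_attributes}
    for row in rows
]
print(export_rows)
# [{'Name': 'Ada', 'Team': 'Engineering'}, {'Name': 'Ben', 'Team': 'Marketing'}]
```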
Participation
When running review cycles, you will want to get a deeper understanding of review participation across your organization. Having access to this data enables you to take action when driving active participation and addressing the areas that face issues with participation rates. This is captured in the 'Participation' tab of the Review Analytics.
By default, aggregate numbers of all cycles are shown. Using the 'Filter' drop-down, you can customize and narrow down the view to capture wanted segments of employees.
Working with the Box grid
The box view is beneficial for understanding which employees are outstanding at a specific competency or combination of competencies. You can also filter using the same filters as described above.
You can determine the X and Y axes in the top right corner. Once you've determined which combination you want to look at (or if you want to see everyone's total score in a line), the box will then display each individual (after you've filtered) as a dot on the chart. Hover over each dot to see the name of the employee and their score. You can also view the chart as a schematic box, which groups the dots into their respective box, hiding their exact placement on the chart.
Only 80 employees will be visible in each box; however, the total number of employees who fall into each box is displayed in the upper left-hand corner of the box.
Example use cases
This view can also help you find the best people for a lead role in a certain project requiring a specific set of strengths. For example, you may need a project lead for a new project that requires strong listening and presentation competencies. You can set the X-axis to 'Listens actively' and the Y-axis to 'Providing structure' (or similar competencies). You would then choose from the people at the top-right-most corner, as they have the highest rating in both competencies simultaneously.
Maybe you want to know which managers receive the best ratings from their direct reports to understand how they lead their teams. In this case, you would filter to see Perspectives = 'Direct report assessments.' You would then want to set the X and Y-axes to 'Total average.'
Some companies also plot employees on a 'potential v. performance' matrix (explained in the graphic below). To achieve this, you would want to add two custom questions to your reviews that require a rating of these two aspects. You would then set 'potential' as the Y-axis and 'performance' as the X-axis. The grid then places each employee into one of the boxes. This little video also explains this in further detail.
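As a small illustration of the 'potential v. performance' idea, the sketch below buckets hypothetical 0-100 scores into a 3x3 grid, similar to what the Box grid does when you use those two custom questions as axes; the thresholds and data are made up:

```python
# Hypothetical sketch of a 3x3 "potential vs. performance" grid on 0-100 scores.
employees = {
    "Ada":   {"performance": 90, "potential": 85},
    "Ben":   {"performance": 40, "potential": 75},
    "Chloe": {"performance": 70, "potential": 30},
}

def bucket(score):
    # Split the 0-100 range into low / medium / high thirds (illustrative thresholds).
    if score < 34:
        return "low"
    elif score < 67:
        return "medium"
    return "high"

for name, scores in employees.items():
    box = (bucket(scores["performance"]), bucket(scores["potential"]))
    print(name, box)
# Ada ('high', 'high') -> top-right box: a strong candidate for a lead role
```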
Working with the distribution chart
If you aim to achieve a particular score distribution during a calibration meeting, the distribution chart will be helpful.
Please note: By default, the view will show aggregate scores. You'll want to filter for a specific question, a specific review cycle, and 'Perspectives' = Manager Assessment in many cases.
The chart will show absolute counts of aggregate ratings (as opposed to their share).
Working with the timeline
The timeline view helps you see how your company, teams, or employees change in ratings over time.
Each line shows the aggregate score for a specific competency or category (depending on which grouping you've chosen in the 'Group results by' drop-down) or displays the total score. You can group results by: Total (unweighted), Category, Question / Competency, and Perspective.
Each dot or moment in the timeline represents when a review was submitted. For example, if you're looking at the timeline for the entire company and everyone went through a review in September and February, you would see a dot for those two months on the graph. You can filter by 'Timeframe' to specify a time window to narrow in on results.
Example use cases
You may decide to use this view to see how much an employee is improving over time. If they are consistently improving, this displays a high amount of potential and ability to learn. You may then mark them for a promotion.
Conversely, if you notice a certain competency rating (for an individual or the whole company) is going down over time, you may want to invest in training for that particular area.
Working with the radar chart
In the radar chart, you can compare competencies between employees, teams, managers, and/or the entire company, as well as see the specific strengths or areas for growth of a certain group.
For example, you could check how your People and Culture team is performing compared to the rest of the company and evaluate any strengths or weaknesses for the team based on this graphic.
To do so, you will want to click 'Enable comparison group.' In the example above, you would filter for 'Teams' = People and Culture in the first filter section. In the second filter section (the comparison group), you would filter to see all teams (which you do by selecting all teams apart from the People and Culture team).
There will then be two different colored lines representing the first group (People and Culture) and the comparison group (everyone). You can hover over a point on the chart to see which color represents which group, and the scores for each competency / category. The first group will be referred to as 'All assessments', and the second group as 'Comparison: All assessments'.
Please note: Competencies/questions are sorted by company/team competency, and by category within those groups.
Analytics for multiple-choice answers
If you run review processes based on multiple-choice questions, you can find data on the answers to these questions in the 'Multiple-Choice' tab. You will see the answers listed and, next to each answer, the number of participants that chose it. For each answer, you will always see the image of the Reviewee(s) who received this answer next to it.
For example, you may want to analyze active/past review cycles to understand which competencies are being selected across the company and build training processes based on this.
Unfortunately, at the moment it is not possible to export multiple-choice questions from Analytics. The only way to export them is to go into each cycle individually, then select 'Actions', and finally choose 'Export Answers'.