Finding metrics that matter to teams is not an easy task. One common metric in agile literature measures team health or team happiness, but in my experience, creating a measurement the team actually values has not been a straight line to success.
We’ve learned several lessons by trying different methods, having discussions with teams and individuals, and observing their behaviors. In short, the metric becomes useful when accepted, not imposed; when the results are distributed only with the team’s permission; and when the primary goal is helping teams learn and improve.
Any metric that gauges how a team feels about its relationships, practices, and place in the organization provides valuable discussion topics. The goal of this metric, as defined and refined over our journey, is to measure satisfaction in a way that helps the team become more effective and shows the organization where it can help by removing obstacles.
Over the past two years, we have taken a journey to create a metric focusing on team health that the teams accept as easy to collect and valuable to them. When this series of experiments started, the reason for trying to collect this information was different, and the primary audience of the metric changed over time.
We have explored, in different depths, three ways to gather this information, called alternately Team Happiness, Team Health, or Team Morale. As you refine and reflect on what you are doing through plan, do, check, adjust cycles, you can customize what works best for your teams or organization.
Ultimately, it’s having clarity on the reason you’re measuring these metrics and who benefits from it that gives you real value.
The Key Is the Why
When we started our journey of developing a team health metric, the primary consumer was as much the agile manager and the ScrumMaster as it was the teams. I believe that providing this information to supervisors can be a secondary goal when it’s used to help teams, not judge them.
The initial idea was to determine how the teams were thinking and use the information in one-on-ones to look for ways to help them. While the intent of helping the team is a positive goal, the idea of getting them to help themselves, with the team being the primary consumer, only came after discussion and reflection.
Your goal may be different, though, and understanding your goal is key to building the method of measuring team health or happiness. Collecting metrics to prompt discussion for discovering impediments and improvement opportunities turned the corner for our teams.
Questions frequently came up about who would see the information and how it would be used. We agreed to limit access to the collected information to the team, allowing only selected sharing of more general information, with the team's permission, with an agile manager they trusted.
When disseminating information like this, the key is trust. Without giving the team control over who sees the information and why it is gathered, you will not get the truth as the team sees it.
Experiment 1: Team Morale
First, we needed to decide what the requirements of the metric process were. We determined that the tool or method had to be free, online, simple to use, and targeted at team health.
Initially, I believed that using an online, anonymous collection method would provide higher fidelity. We had previously used an online survey to collect information on our development pairing practice, but I thought it had yielded spotty results.
With some searching, I found a paper that changed how I thought about collecting this metric. The author made a convincing argument that asking a single question, such as "Are you happy working within your team and your work?" was not sufficient. His tool, based on psychological studies and research on teams, uses a collection of questions to derive "team morale," not happiness.
There is an associated website that is free to use and allows you to compare your data to other teams at http://teammetrics.apphb.com/.
We used the Team Morale tool for over a year. Here is what I observed:
- Using an anonymous survey allowed team members to opt out. Participation across multiple teams ranged from 55 to 100 percent, with a median of about 80 percent
- The data correlated to my observations and my one-on-one discussions with the teams
- Sometimes participants used drastic, exaggerated responses to make a point
- Several team members opposed participating in the survey as frequently as every sprint; a spacing of two months or so was universally accepted
- There were several complaints about using the website. Some team members found the number of questions and having to go to a website to enter data inconvenient. Some said they did not have time to complete the survey under the pressure of delivering the sprint goal. While I found the website very easy to use and discussed this with the teams, that did not matter; only the team's input drove my actions
- Based on one-on-one conversations, I learned that a minority of team members did not find the survey valuable to them. In contrast, the agile manager whom the team trusted found it very useful for taking the pulse of the team
Experiment 2: Team Happiness
With feedback about questioning the value of the survey, the lack of simplicity in entering data, and the time to complete the survey, I asked the teams to try a different approach. We used paper forms, such as stickies or a poster, and collected data at each sprint retrospective. We tried both anonymous and public scoring.
Two methods were used for data collection. The first used a scale from 1 to 5:
5: Extremely happy about working with the team and your work
4: Happy about working with the team and your work
3: OK with working with the team and your work
2: Sad about working with the team and your work
1: Extremely sad about working with the team and your work
The second method used happy, neutral, and sad faces.
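A ScrumMaster tallying these paper ballots is doing simple arithmetic, but a short sketch may make the aggregation concrete. This is purely illustrative; the sample votes and summary fields below are hypothetical, not data from our teams.

```python
from collections import Counter
from statistics import mean, median

def summarize(votes):
    """Summarize one sprint's happiness votes (1-5 scale) for the retro poster."""
    return {
        "count": len(votes),                               # ballots collected
        "mean": round(mean(votes), 1),                     # average happiness
        "median": median(votes),                           # typical vote
        "distribution": dict(sorted(Counter(votes).items())),  # votes per score
    }

# Hypothetical ballots from one retrospective.
print(summarize([5, 4, 4, 3, 3, 2, 4]))
```

Tracking the distribution, not just the average, matters here: a team of all 3s and a team split between 5s and 1s have the same mean but very different conversations to have.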
This method of gauging Team Happiness was used for four months. Here is what I observed:
- The team was happier with the simpler data collection
- The single-team metric did not necessarily correlate with what I heard at retrospectives. For example, if the survey showed some people were unhappy during the sprint, the root cause of their dissatisfaction was not necessarily discussed
- Public voting made some people uncomfortable. For example, if someone said they were neutral, some team members wanted to know why. While we agreed that giving a reason for a vote is up to the individual and that questions about a vote should not be asked, people told me in one-on-ones that they could not be truthful and avoided discomfort by not being forthcoming
We transitioned to putting votes in a box, with the ScrumMaster revealing the results by writing them on a poster to eliminate the possibility of deducing who wrote what through handwriting analysis.
Experiment 3: Net Promoter Score
Since conversations at retrospectives did not consistently match the Team Happiness metric collected in Experiment 2, I continued my search for a better way. I found an article that changed our direction.
The Net Promoter Score (NPS) method described in this paper also collects the reasons people voted like they did. I believe that adding the reason—when it accurately reflects the voter’s thinking—was the missing piece of the puzzle, so we started collecting this data every other sprint.
We are currently using an NPS-style questionnaire.
The process requires the team to fill out the paper form and put it in a collection box at each retrospective. We have been discussing the value of being specific with the reason each person gave their rating. The teams, and only the teams, are sent all the information—their agile manager sees the NPS score and a one-line summary of the reasons that has been approved by the team.
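The article does not spell out the scoring arithmetic, but the standard NPS computation, which we assume the tool follows, buckets 0 to 10 ratings into promoters, passives, and detractors. A minimal sketch:

```python
def net_promoter_score(ratings):
    """Compute a standard Net Promoter Score from 0-10 ratings.

    Promoters rate 9-10, detractors 0-6; passives (7-8) count only in the
    denominator. The score is the percentage of promoters minus the
    percentage of detractors, so it ranges from -100 to +100.
    """
    if not ratings:
        raise ValueError("no ratings collected")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return round(100 * (promoters - detractors) / len(ratings))

# Hypothetical ballot box from one retrospective.
print(net_promoter_score([10, 9, 8, 7, 6, 9, 3]))
```

One consequence of this formula worth noting: because passives drag the percentages down without counting for either side, a team of all 8s scores 0, so a low score does not necessarily mean active unhappiness. That is exactly why the accompanying "reason text" carries so much of the value.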
Here are my observations:
- Over 90 percent of the members of the two teams find value in the survey
- For the first time, I was asked when the next survey was
- There is no pushback on doing this survey every month
- Some people typed the reason, to be sure no one could do handwriting analysis. The majority of team members took the simpler approach of writing their answers by hand
- We were missing votes, and I needed to reinforce that this information is for their use. When given the opportunity to put their votes into the box afterward, the absent voters all participated
- The score went up every time we split into smaller teams and down when we combined them. The written reasons clearly showed this correlation
- Cross-checking the scores with the descriptions for consistency helped us improve the fidelity of the information
We have used the NPS four times, and we’ve found this approach to be the most accepted and effective measuring method to date. Still, we saw room for improvement, so we’ve done some experiments to try to get better data.
Coaching the team to be specific in the descriptions behind their ratings, and to think about what it would take for them to give a score of 10, yielded more information. And instead of providing the team with all the reasons for the scores, I created a summary paragraph in my own words that tried to tie the reasons together. This was the suggestion of a team member who believed that some on the team could not state the true reason for a more negative score, out of concern that others would try to determine whose comment it was.
The key lessons learned in this journey toward gauging team happiness are simple. First, the mechanism for collecting information must be easy to understand and short. Including the survey as part of a retrospective works well. Second, establishing trust and determining the intent of the survey results with the team are paramount. The metric is for team discussion and should belong to them. After all, the goals are to identify organizational impediments and to improve collaboration and self-organization.
User Comments
Thanks for sharing your experiences about team health scoring. You could try using feedback frames as another way to gather the responses in a low-tech way that is also anonymous. http://feedbackframes.com/
Thank you, Mara, for introducing me to Feedback Frames. I envision uses for this product with specific exercises. For what we did, the most powerful feedback is the "reason text" that accompanies the NPS score. I have learned you need to coach the team on how to write the reasons to get better information. Feedback Frames can't do this directly, though with the right questions it may draw out some of the same information. Thank you for reading my article and providing feedback!