When analyzing and reviewing employee feedback data, it’s easy to get caught up in the technical aspects of statistical methods. However, it’s important not to lose sight of the real goal: improving the experience of your people.
At Culture Amp, we’re often asked if our survey questions and templates have been validated. It’s a good question, but it’s usually not the most important one. Often, the better question is whether something needs to be validated to be meaningful.
Another question we’re often asked is whether a result is significant. In the world of statistics, significance can mean the difference between taking a finding seriously or dismissing it, but in the world of people, things are not so simple.
Here’s a simple introduction to the two concepts, in the specific context of employee feedback.
Three reasons why “validation” is not always useful
In the typical use of the word, a validated question is one that has been shown, through a research study, to measure what it claims to measure. Someone has studied that specific question and demonstrated that it actually captures the outcome in question.
There are three reasons why it’s not always sufficient or even useful to ask whether a question has been validated.
First, there’s often no easy or practical way to pre-validate new questions and processes, which limits the range of questions you can draw from. Frequently there is no validated form of the specific question you want to ask. What matters most is that you ask the questions that are most meaningful to your organization.
Second, you can also validate a question within your own organization. If we ask a question in our own organization and find that it’s genuinely predictive of something, that in itself is validation. People often forget that we’re conducting live research in our own organizations. So if your research lets you predict when people will leave, or how positive someone is about various aspects of your workplace, there’s your validation.
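As an illustrative sketch of what in-house validation can look like (all numbers below are invented, and this is not a description of any Culture Amp method), you could check whether responses to a survey item correlate with subsequent attrition:

```python
# Hypothetical in-house validation: does a survey item predict who leaves?
# The data here is made up purely for illustration.
from statistics import mean

scores = [2, 1, 4, 5, 3, 1, 5, 4, 2, 5]  # 1-5 agreement with a survey item
left   = [1, 1, 0, 0, 1, 1, 0, 0, 1, 0]  # 1 = person left within a year

def point_biserial(x, y):
    """Correlation between a numeric score and a binary outcome."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# A strong negative correlation: lower scores go with higher attrition,
# which is evidence the item measures something real in this organization.
r = point_biserial(scores, left)
```

In this toy data the correlation is strongly negative, which is exactly the kind of evidence the paragraph above describes: the question predicts a real outcome, so it is validated for this organization.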
Third, just because a question has been validated doesn’t guarantee that asking it will give you anything meaningful for your organization. The validation may have been done decades ago, or in a specific context that is no longer relevant. For example, a well-known question that has apparently been validated in the past asks people whether they have a best friend in the workplace. The “validation” means that this question accurately measures whether a respondent has a best friend in the workplace. That may be true, but for most organizations it is completely useless. Validated questions are not necessarily actionable or meaningful.
At Culture Amp, we work with organizations to develop the right questions for their needs. A good people scientist will be able to tell you what types of questions tend to work really well and will be able to steer you towards high quality questions (or question formats). But there’s nothing to say that you won’t come up with something that’s unique to your organization and is even more meaningful.
You have the freedom to use a mix of questions that have been tried and tested by thousands of Culture Amp customers, and that we have great data on, while also being empowered to create new questions unique to your organization. These can then be validated within your organization over time.
Is “significance” really significant?
The second question I’m often asked is whether a result is significant. When asking about significance, what most people mean is: how likely is it that this result occurred simply by chance?
A significance test asks a very specific and abstract question, which roughly equates to ‘is this result likely to reflect a meaningful trend in some much broader population?’. Let’s say we’re studying a company and we find that eight out of 10 randomly selected people are unhappy with the CEO. A statistical significance test asks whether another randomly selected group of 10 people would be likely to produce a similar result. The answer depends on the result (eight out of 10), the size of the sample (10), and the size of the overall population (the organization). In this context the question makes sense: we want to know how likely it is that these 10 people are representative of the overall organization.
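To make the eight-out-of-10 example concrete, here is a minimal sketch of the kind of calculation a significance test performs (our own illustration, not anything Culture Amp computes; the 50% null hypothesis is an assumption we chose for the example):

```python
# Exact binomial test: if the true rate of unhappiness in the whole
# organization were really 50%, how likely is a sample this extreme?
from math import comb

def binomial_p_value(successes, n, p0=0.5):
    """One-sided upper-tail probability of seeing `successes` or more
    out of `n` if the true proportion were p0."""
    return sum(comb(n, k) * p0**k * (1 - p0)**(n - k)
               for k in range(successes, n + 1))

# 8 of 10 randomly selected people unhappy with the CEO
p = binomial_p_value(8, 10)
print(round(p, 3))  # → 0.055
```

The p-value of about 0.055 says a result this lopsided would happen by chance roughly one time in 18 even if opinion were evenly split, which is why sample size and the extremity of the result both matter.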
However, if we ask a different question, say whether Team A (a specific group of 10 people) is unhappy with its manager, significance becomes much less relevant. It’s a fairly abstract question because it can’t be replicated in the field: you couldn’t pluck another 10 people out and put them in the team tomorrow to see if the results were the same, and the people we’re interested in are the actual people on the team.
Even if we could replicate the responses, that doesn’t take into account the most important thing: these are real people with real feelings about their workplace. If some people on a team are unhappy with their manager or the CEO, you don’t need a statistical significance test to tell you whether you should care. It’s more important to focus on the people who are feeling unhappy than on whether their feelings are representative of the broader population.
At the end of the day, what we’re doing in employee feedback is studying real people to see how they feel about their working environment. If we discover something that’s meaningful to the organization we work in, it shouldn’t matter whether the methods are validated or the results are statistically significant. The most important question is always whether the question, or the results, are meaningful to your organization.