
5 Essential Questions to Consider When Using Assessments

Evaluating assessment quality is key to using assessments well.

Key points

  • Evaluating the quality of an assessment tool is an essential step in deciding whether to use it.
  • Quality can relate to what the tool measures, how it measures it, who designed it, and how your users will perceive it.
  • Do some research, and if you still need guidance, email an expert who can help cut through the haze.

The talent/HR market today is awash in assessment tools and methodologies that promise to solve every conceivable hiring and development challenge. There are tools that claim to measure everything from cognitive ability, personality, and emotional intelligence, to complex technical skills, social media savvy, and even what kind of dog you are (Shiba Inu, apparently?).

Because of this, the person tasked with evaluating or buying assessments for their organization and/or consulting work could be forgiven for succumbing to a sense of deep overwhelm and mild confusion. The options, and the questions that each option raises, can feel innumerable.

Keeping a few key questions in mind can help you evaluate whether a given tool is both high in quality and a good fit for your use case.

Does this tool/method measure what it’s supposed to measure?

Assessments and tests estimate “constructs,” or abstract ideas about behavior or groups of behaviors that can’t really be seen or touched, so they have to be measured indirectly. For example, “agreeableness” is a construct that is representative of a constellation of traits such as cooperativeness, modesty, altruism, etc., that might be measured with a personality test to predict how agreeable or disagreeable a person is likely to be in actual interactions with others.

A few sub-questions that can be helpful to consider here are: 1) Does this construct actually exist; 2) if so, does this test measure it; and 3) is this construct related to the situation in which I’m using it?

For example, is a person’s “dog type” 1) representative of a legitimate constellation of behaviors that 2) can be measured using a five-question Buzzfeed quiz that 3) helps me understand how they might perform in a job? The answer may require a little bit of research in an effort to determine whether the construct you’re considering using in hiring or development will provide information that has been shown to be useful in such situations.

Will it measure the same thing consistently?

This question is closely connected to the prior one: assuming you are measuring something that is indeed measurable in a person, does this particular tool do so in a way that allows you to measure it reliably over time? If I score as highly agreeable on an agreeableness measure, will I tend to score high each time I take it?

Questions of test-retest reliability inherently involve questions about measurement error, faking, and testing conditions, all of which should be considered when evaluating the potential efficacy of an assessment method.

Can I draw reasonable conclusions using these results?

The answer to this question is quite important because it speaks to the “so what?” element of an assessment. OK, so I’m highly agreeable… so what? Does agreeableness predict performance on the job? Does it help me better understand how someone may improve their leadership effectiveness or how they may respond to coaching?

Drawing conclusions from test results is the whole point of using tests because it allows us to understand whether those results predict something in the real world that can help us better make decisions or navigate complex interactions. This question is also a good reminder to consider whether the tool you’re considering using has been designed for the context and population for which you’re considering using it. For example, some assessments are developed with convenience samples of students and may provide less relevant norm comparisons for senior executives.

Who is selling me this assessment?

One of my advisors in graduate school used to ask a version of this question every time someone would begin extolling the unimpeachable virtue or evidence of one assessment or another—who is selling me this? Where is this information coming from? Does this person have a vested interest in me believing their assessment will predict every imaginable outcome I can think of?

Unfortunately, this mixed motive is sometimes unavoidable, especially if you’re interested in trying new assessments and technology. But the question is also a good reminder to consider who developed the test, what kind of research went into its development, and what kind (and quality) of evidence is being provided to support its validity and reliability.

What will my user/client think about this?

Participant perceptions of an assessment have no bearing on its validity or reliability, but it can still be useful to consider the participant experience when thinking about the assessment tool or method holistically as a product or service. For example, a game-based cognitive test may measure cognitive ability quite well but may be a bad fit in a senior executive assessment battery. I suspect managing the client experience in assessments will only become more important over time, which will require more active consideration of what we’re asking of participants and what they get out of the process.

In general, when considering whether to buy or use an assessment, make sure to do a little bit of research. Ask for a technical report about the tool from the provider. Look for any academic research that may have been done with that tool, and make sure to get a sense of who made up the samples. If the assessment provider isn’t forthcoming with technical information, factor that into your evaluation of the tool and whether it makes sense to use it in a given situation.

If you still have questions or want an outside opinion, email an expert who can provide some guidance on what tools are out there that have strong, evidence-based foundations. They may not be able to answer all the questions you have, but they can generally help point you in the right direction.

This is Part 4 of a multi-part series I’m writing on executive assessment for leaders and practitioners. For an in-depth look at the subject, check out my book Assessing CEOs and Senior Leaders.
