
Are Cognitive Ability Tests Still Relevant?

SHL’s point of view on recent research suggesting cognitive ability tests are not as predictive of job performance as previously established.

The problem with cognitive ability tests

For many years, cognitive tests have been used to assess a candidate’s suitability for a job role, as research indicated that performance on such tests was directly correlated with future job performance. However, recent meta-analytic research has indicated that cognitive ability tests are not as highly correlated with overall job performance as originally established.

There have also been concerns that cognitive tests produce adverse impact for certain racial/ethnic groups. From an inclusion perspective, the question therefore arises as to whether they should still be used for selection.

So, in light of these recent findings and concerns, does this mean that cognitive tests are no longer useful? Should we abandon them altogether?

Of course not!

Why cognitive ability tests are still relevant

Cognitive tests, when developed fairly, validly, and reliably, and then used correctly, are great tools for talent assessment: they provide an objective measure of a candidate’s ability, remain good predictors of overall job performance, and are excellent predictors of performance on cognitively oriented tasks.

If job analysis identifies cognitively oriented competencies as critical for a role (e.g., the ability to learn quickly), these can be measured through a combination of cognitive ability tests and behavior-based competency assessments. Taking a holistic approach by using cognitive tests in conjunction with assessment content that measures other, non-cognitive competencies (which do not show adverse impact on their own) helps to mitigate the potential for adverse impact in the selection process while also increasing its overall predictive validity.
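
To make that combination concrete, here is a minimal sketch of a weighted composite of one cognitive score and one behavior-based competency score per candidate. The weights, scores, and function name are hypothetical and would in practice come from job analysis and validation evidence; this is not SHL’s scoring method.

```python
import numpy as np

def composite_score(cognitive, behavioral, w_cognitive=0.4, w_behavioral=0.6):
    """Combine standardized cognitive and behavioral scores into one composite.

    Illustrative only: the 0.4/0.6 weights are hypothetical and would normally
    come from a job analysis and local validation evidence.
    """
    cognitive = np.asarray(cognitive, dtype=float)
    behavioral = np.asarray(behavioral, dtype=float)

    # Standardize each predictor so neither dominates purely through its scale.
    z_cog = (cognitive - cognitive.mean()) / cognitive.std(ddof=1)
    z_beh = (behavioral - behavioral.mean()) / behavioral.std(ddof=1)

    return w_cognitive * z_cog + w_behavioral * z_beh


# Hypothetical example: five candidates scored on two different scales.
cog_scores = [24, 31, 18, 27, 22]       # e.g., number of items correct
beh_scores = [3.8, 4.2, 4.6, 3.1, 4.0]  # e.g., competency rating on a 1-5 scale
print(composite_score(cog_scores, beh_scores))
```

Standardizing before weighting keeps the holistic combination from being dominated by whichever instrument happens to use the larger scale.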

Here at SHL, we have a robust test development process that ensures our tests are subject to rigorous reviews, extensive trialing, and comprehensive analyses, so that we only publish high-quality, fair content. In fact, we recently undertook research to explore differences in performance at the individual question level to determine whether any question content was biased against people in particular groups. We looked at differences across age, race, gender, and disability. Our analyses showed that, thanks to our thoughtful question development process, very few items were identified as showing bias. We then determined whether removing these items had an effect on differences at the overall test level and concluded that the individual question-level differences did not lead to further adverse impact at the test level.
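
The specific statistics behind that item-level analysis are not described here, so the sketch below is purely illustrative: it implements one widely used approach to screening items for bias, the Mantel-Haenszel differential item functioning (DIF) check, on hypothetical data. It shows the general technique, not SHL’s process.

```python
import numpy as np

def mantel_haenszel_dif(item_correct, group, total_score):
    """Mantel-Haenszel DIF check for a single item (illustrative sketch).

    item_correct : 1 if the candidate answered the item correctly, else 0
    group        : "ref" (reference group) or "foc" (focal group)
    total_score  : candidate's total test score, used to match on ability

    Returns the MH odds ratio and the ETS delta statistic; by convention,
    |delta| >= 1.5 is flagged as large DIF and reviewed.
    """
    item_correct = np.asarray(item_correct)
    group = np.asarray(group)
    total_score = np.asarray(total_score)

    num, den = 0.0, 0.0
    for s in np.unique(total_score):            # stratify by matched ability
        stratum = total_score == s
        ref = stratum & (group == "ref")
        foc = stratum & (group == "foc")
        a = np.sum(item_correct[ref] == 1)      # reference group, correct
        b = np.sum(item_correct[ref] == 0)      # reference group, incorrect
        c = np.sum(item_correct[foc] == 1)      # focal group, correct
        d = np.sum(item_correct[foc] == 0)      # focal group, incorrect
        n = a + b + c + d
        if n == 0:
            continue
        num += a * d / n
        den += b * c / n

    if den == 0:
        return float("nan"), float("nan")
    odds_ratio = num / den
    delta = -2.35 * np.log(odds_ratio)          # ETS delta metric
    return odds_ratio, delta


# Tiny hypothetical example: eight candidates answering one item.
item = [1, 0, 1, 1, 0, 1, 1, 0]
grp = ["ref", "ref", "foc", "foc", "ref", "foc", "ref", "foc"]
total = [20, 20, 20, 20, 25, 25, 25, 25]
print(mantel_haenszel_dif(item, grp, total))
```

Items flagged by a check like this would still go through qualitative review before any removal decision, in line with the review-based process described above.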

When exploring group differences, we found that in some cases our items, taken together, showed a difference in performance between groups. When we isolated the items driving that difference, we found that removing them did not fundamentally alter the adverse impact; from a fairness perspective, however, those items were removed anyway.

Best practice recommendations

Without a doubt, following robust scientific design principles, and ensuring diversity, equity, and inclusion in that design, is one part of what keeps cognitive ability tests relevant. The other is ensuring that they are used responsibly and appropriately. To support our clients’ responsible use of our assessments, we have prepared some “do’s” and “don’ts” on how to make the best use of our cognitive tests whilst minimizing any bias.

Do:

  • Consider the relevance of the particular cognitive assessment to the actual job; is cognitive ability critical to your job role? If not, consider using other tools like behavioral assessments, structured interviews, or job samples to measure key competencies for your role instead.
  • Take a holistic approach to your selection process by incorporating any cognitive tests as part of a job-focused assessment; what skills are critical to the role?
  • Set a minimum benchmark for initial sift decisions rather than simply taking the top performers; minimize adverse impact by not losing more than one-third of the candidate pool at the job-focused assessment stage (see the sketch after this list).
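
As a concrete illustration of that benchmark-based sift, the sketch below applies a hypothetical cut score to a small candidate pool, checks that no more than one-third of the pool is screened out, and computes the widely used four-fifths (80%) adverse impact ratio between focal and reference group pass rates. The data, cut score, group labels, and function name are hypothetical; only the one-third retention point comes from the guidance above.

```python
import numpy as np

def sift_check(scores, group, cut_score, reference="ref", focal="foc"):
    """Evaluate a minimum-benchmark sift decision (illustrative sketch).

    Checks (1) that no more than one-third of the candidate pool is screened
    out at this stage and (2) the four-fifths (80%) adverse impact ratio
    between the focal and reference groups' pass rates.
    """
    scores = np.asarray(scores, dtype=float)
    group = np.asarray(group)
    passed = scores >= cut_score

    retained = passed.mean()                      # share of the pool kept
    ref_rate = passed[group == reference].mean()  # reference-group pass rate
    foc_rate = passed[group == focal].mean()      # focal-group pass rate
    impact_ratio = foc_rate / ref_rate if ref_rate > 0 else float("nan")

    return {
        "retained": retained,
        "keeps_two_thirds": retained >= 2 / 3,
        "impact_ratio": impact_ratio,
        "meets_four_fifths": impact_ratio >= 0.8,
    }


# Hypothetical pool of ten candidates and an arbitrary cut score.
scores = [55, 62, 48, 71, 66, 59, 44, 68, 52, 60]
groups = ["ref", "ref", "foc", "ref", "foc", "foc", "foc", "ref", "ref", "foc"]
print(sift_check(scores, groups, cut_score=50))
```

If either check fails, the cut score (or the mix of assessments feeding the sift) would be revisited rather than simply taking the top scorers.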

Don’t:

  • Use cognitive tests as a blunt sifting tool for high-volume roles, especially entry-level roles. 
  • Use cognitive ability tests in the absence of a comprehensive job analysis or if there is no strong evidence that cognitive ability is critical to the role.
  • Adopt a one-size-fits-all approach to your organization’s talent assessment programs.

As a leader in the talent assessment space, SHL has a vast library of tools to support your talent assessment decisions.

How can you find out if cognitive testing is right for your organization? Contact us so we can help you figure it out and learn how to apply best practices when using cognitive assessments to select the best candidates fairly.


Author

Sabia Akram

Sabia Akram is a Senior Scientist on SHL’s Science team, with over 15 years of experience in the assessment industry. She started her career in the education sector where she developed her expertise in Psychometrics and has transferred those skills into the Occupational Psychology space. Sabia works on the development and maintenance of SHL’s flagship ability assessments (Verify Interactive). Sabia holds a BSc (Hons) in Mathematics from Brunel University and is a Chartered Statistician.
