Most professionally developed pre-employment tests are also well-validated. But what does it mean to say a testing program has validity? Many customers who are new to pre-employment testing imagine that "valid" tests have received some sort of seal of approval verifying that they passed certain standardized qualifications for validity. This is not the case. Instead, establishing a test's validity – the process of test validation – involves gathering different pieces of evidence to provide a scientific basis for interpreting the test scores in a particular way. A pre-employment test has predictive validity if there is a demonstrable relationship between test results and job performance.
Types of Validity Measures
Several types of validity evidence are used to validate pre-employment tests. The most important are construct validity, content validity, and criterion validity.
A pre-employment test has construct validity if it measures what it is supposed to measure. In other words, construct validity refers to the extent to which a test corresponds to a theoretical scientific construct such as general intelligence, mechanical aptitude, or extraversion. For example, a cognitive aptitude test is expected to measure cognitive aptitude, or general intelligence. If it fails to measure intelligence accurately, it is ineffective. The process of determining construct validity for an aptitude test might involve comparing the test with other leading measures of cognitive aptitude to see whether the two measures are highly correlated.
Content validity measures how well the subject matter of a test relates to the capabilities and skills required by a certain job. Establishing a test's content validity involves demonstrating that test items reflect the knowledge or skills required for a particular position. Ensuring that a pre-employment test has content validity is necessary for affirming that the test follows the rule of job-relatedness. According to the Uniform Guidelines on Employee Selection Procedures (UGESP) and the Equal Employment Opportunity Commission (EEOC), pre-employment tests administered to job applicants must be related to the position for which they are administered. For example, administering a sales personality test to a computer programmer does not qualify as job-related if the position does not involve interacting with or selling to potential customers. Establishing content validity helps protect companies in the unlikely event of a lawsuit by ensuring that tests are used in a legally compliant way.
Ultimately, the most powerful way that a company can demonstrate the validity of its testing program is to establish criterion validity. Criterion validity (also called concrete validity because it refers to a test's correlation with concrete outcomes) refers to the relationship between two variables, in this case between test scores and a desired business metric. Typically, the business metric would be a measure of employee performance (e.g., supervisor's performance ratings or average sales per hour) or organization-wide business outcomes (such as employee retention rates). The relationship between test performance and job performance can be quantified by a correlation coefficient (ranging from -1.0 to +1.0) which serves as a measure of the extent to which test scores predict future job performance. Criterion validity is more difficult to measure than other types of validity because it requires large sample sizes for each position.
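To make the arithmetic behind a criterion validity study concrete, here is a minimal sketch in Python that computes the correlation coefficient (Pearson's r) between test scores and supervisor performance ratings. The six employees and their numbers are entirely hypothetical, invented for illustration; a real validity study would require far larger samples, as noted above.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences.

    Returns a value between -1.0 and +1.0: the extent to which
    one variable (test scores) tracks the other (job performance).
    """
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    dx = [x - mean_x for x in xs]              # deviations from mean score
    dy = [y - mean_y for y in ys]              # deviations from mean rating
    cov = sum(a * b for a, b in zip(dx, dy))   # co-variation of the two
    return cov / math.sqrt(sum(a * a for a in dx) * sum(b * b for b in dy))

# Hypothetical data: aptitude test scores and supervisor ratings (1-5 scale)
# for six employees. Real studies need much larger samples per position.
test_scores = [72, 85, 60, 90, 78, 65]
ratings = [3.1, 4.2, 2.8, 4.5, 3.6, 3.0]

r = pearson_r(test_scores, ratings)
print(f"validity coefficient r = {r:.2f}")
```

A coefficient near +1.0 would indicate that higher test scores go with higher performance ratings; a value near zero would indicate the test has little predictive value for that criterion.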
Organizational psychologists speak of two types of criterion validity: concurrent validity and predictive validity. Concurrent validity is determined by comparing test scores of current employees to a measure of their job performance. For example, a company could administer a cognitive aptitude test to its employees to see if there is an overall correlation between their test scores and a measure of their productivity.
Predictive validity, by contrast, is determined by measuring how well applicants' test scores predict their future job performance. If an employer's selection testing program is truly job-related, it follows that the results of its selection tests should accurately predict job performance. In other words, there should be a positive correlation between test scores and job performance.
A substantial body of research has concluded that for certain types of tests, especially tests of cognitive aptitude, validity evidence can be generalized across a broad range of jobs. Using meta-analysis and other statistical techniques, industrial-organizational psychologists have concluded that in many cases, the validity of tests as predictive tools "can be generalized from one employment setting to another without need for a local validation study" because the "validity of cognitive ability testing is not situation-specific…because cognitive ability is universally relevant to and useful in predicting job performance".* Because of this "validity generalization," employers may also choose to rely on "transportable validity" – the evidence a testing provider has collected by using a test for various jobs at different companies – rather than produce a local validity study of their own. Smaller companies that lack the sample sizes required for a local validity study may therefore prefer to rely on the transportable validity that a testing provider can offer, given its large-scale samples.
*Testimony of expert witness Kenneth Willner at EEOC meeting of May 16, 2007.