In psychometrics, predictive validity is the extent to which a score on a scale or test predicts scores on some criterion measure.[1][2]

For example, the validity of a cognitive test for job performance is the correlation between test scores and a criterion measure such as supervisor performance ratings. Such a cognitive test would have predictive validity if the observed correlation were statistically significant.
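A minimal sketch of how such a validity coefficient might be computed, assuming hypothetical test scores and supervisor ratings (all data and variable names below are invented for illustration):

```python
# Illustrative only: hypothetical scores for eight employees.
from scipy.stats import pearsonr

test_scores        = [72, 85, 90, 66, 78, 88, 95, 70]            # cognitive test scores
supervisor_ratings = [3.1, 4.0, 4.5, 2.8, 3.6, 4.2, 4.8, 3.0]    # performance ratings

# The validity coefficient is simply the Pearson correlation between
# the test and the criterion measure.
r, p = pearsonr(test_scores, supervisor_ratings)
print(f"validity coefficient r = {r:.2f}, p = {p:.4f}")
```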

Predictive validity shares similarities with concurrent validity in that both are generally measured as correlations between a test and some criterion measure. In a study of concurrent validity the test is administered at the same time as the criterion is collected. This is a common method of developing validity evidence for employment tests: a test is administered to incumbent employees, and a rating of those employees' job performance is obtained independently of the test (often, as noted above, in the form of a supervisor rating). Note the possibility of restriction of range in both test scores and performance scores: the incumbent employees are likely to be a more homogeneous and higher-performing group than the applicant pool at large.
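Because range restriction among incumbents attenuates the observed correlation, validity studies sometimes apply Thorndike's Case II correction to estimate what the correlation would be in the unrestricted applicant pool. A minimal sketch, with invented numbers:

```python
import math

def correct_for_range_restriction(r_restricted, sd_unrestricted, sd_restricted):
    """Thorndike Case II correction: estimate the validity in the applicant
    pool from the validity observed in a range-restricted incumbent group."""
    u = sd_unrestricted / sd_restricted
    return (r_restricted * u) / math.sqrt(1 + r_restricted**2 * (u**2 - 1))

# Hypothetical numbers: r = .25 among incumbents, applicant SD twice as large.
print(correct_for_range_restriction(0.25, sd_unrestricted=10.0, sd_restricted=5.0))
# ~0.46: the restricted sample understates the test's validity for applicants.
```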

In a strict study of predictive validity, the test scores are collected first. Then, at some later time, the criterion measure is collected. Thus, for predictive validity, the employment test example is slightly different: tests are administered, perhaps to job applicants, and then after those individuals have worked in the job for a year, their test scores are correlated with their first-year job performance scores. Another relevant example is SAT scores: these are validated by collecting the scores during the examinee's senior year of high school and then waiting a year (or more) to correlate the scores with their first-year college grade point average. Thus predictive validity provides somewhat more useful evidence about test validity because it has greater fidelity to the real situation in which the test will be used. After all, most tests are administered to find out something about future behavior.
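A sketch of this time-lagged design, assuming hypothetical applicant IDs, hire-time test scores, and first-year performance ratings joined by ID (all names and values below are illustrative):

```python
import pandas as pd

# Hypothetical data: test scores collected at hire (time 1) and
# job performance ratings collected a year later (time 2).
applicants = pd.DataFrame({
    "id":    [1, 2, 3, 4, 5, 6],
    "score": [55, 80, 62, 90, 70, 75],
})
performance = pd.DataFrame({
    "id":     [1, 2, 3, 4, 5, 6],
    "rating": [2.9, 4.1, 3.2, 4.6, 3.5, 3.8],
})

# Match each person's earlier test score to their later criterion score,
# then compute the predictive validity coefficient.
merged = applicants.merge(performance, on="id")
print(merged["score"].corr(merged["rating"]))
```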

As with many aspects of social science, the magnitude of the correlations obtained from predictive validity studies is usually not high.[3] A typical predictive validity study for an employment test might obtain a correlation in the neighborhood of r = .35. Higher values are occasionally seen and lower values are very common. Nonetheless, the utility (that is, the benefit obtained by making decisions using the test) provided by a test with a correlation of .35 can be quite substantial. More information, including an explanation of the relationship between variance and predictive validity, is available elsewhere.[4]
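To illustrate why a validity of .35 can still be useful, one can contrast the proportion of criterion variance explained (r²) with the expected gain from top-down selection. Under the Brogden model, the expected standardized criterion gain equals r times the mean standardized test score of those selected. A sketch assuming a 20% selection ratio (the numbers are illustrative, not from the sources cited here):

```python
from scipy.stats import norm

r = 0.35                 # predictive validity of the test
selection_ratio = 0.20   # hiring the top 20% of applicants by test score

# r**2 is the proportion of criterion variance the test accounts for.
print(f"variance explained: {r**2:.3f}")     # about 12%

# Brogden's result: expected standardized gain in job performance from
# top-down selection = r * mean test z-score of those selected.
cutoff = norm.ppf(1 - selection_ratio)
mean_z_selected = norm.pdf(cutoff) / selection_ratio
print(f"expected criterion gain: {r * mean_z_selected:.2f} SD units")  # ~0.49 SD
```

Even though the test explains only about 12% of the variance in performance, selecting the top fifth of applicants by test score is expected to yield new hires who perform roughly half a standard deviation better than applicants chosen at random, which is where the practical utility comes from.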

Predictive validity in modern validity theory

The latest Standards for Educational and Psychological Testing[5] reflect Samuel Messick's model of validity[6] and do not use the term "predictive validity." Rather, the Standards describe validity-supporting "Evidence Based on Relationships [between the test scores and] Other Variables."

Predictive validity involves testing a group of subjects on a measure of a certain construct and then comparing their scores with criterion results obtained at some point in the future.

References

  1. ^ Cronbach, L. J., & Meehl, P. E. (1955). Construct validity in psychological tests. Psychological Bulletin, 52, 281–302.
  2. ^ The Marketing Accountability Standards Board (MASB) endorses this definition as part of its ongoing Common Language in Marketing Project.
  3. ^ "Where Predictive Validity May Fail To Make The Grade".
  4. ^ "Do Psychometric Tests Work?".
  5. ^ American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (1999). Standards for educational and psychological testing. Washington, DC: American Educational Research Association.
  6. ^ Messick, S. (1995). Validity of psychological assessment: Validation of inferences from persons’ responses and performances as scientific inquiry into score meaning. American Psychologist, 50, 741–749.