Kaiser–Meyer–Olkin test

The Kaiser–Meyer–Olkin (KMO) test is a statistical measure to determine how suited data is for factor analysis. The test measures sampling adequacy for each variable in the model and for the complete model. The statistic is a measure of the proportion of variance among variables that might be common variance. The higher the proportion, the higher the KMO value and the more suited the data is to factor analysis.[1]

History

Henry Kaiser introduced a Measure of Sampling Adequacy (MSA) of factor analytic data matrices in 1970.[2] Kaiser and Rice then modified it in 1974.[3]

Measure of sampling adequacy

The measure of sampling adequacy is calculated for each indicator j as

\mathrm{MSA}_j = \frac{\sum_{k \neq j} r_{jk}^2}{\sum_{k \neq j} r_{jk}^2 + \sum_{k \neq j} p_{jk}^2},

where r_{jk} is the correlation between indicators j and k and p_{jk} is their partial correlation. It indicates to what extent an indicator is suitable for a factor analysis.
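
Because the partial correlations can be obtained from the inverse of the correlation matrix, the MSA can be computed directly. The following R code is a minimal sketch, not the psych package's implementation; the function name msa_per_item and the data frame X are assumptions for the example.

msa_per_item <- function(X) {
  # Per-indicator MSA, assuming X is a data frame of complete numeric indicators
  R <- cor(X)                                   # zero-order correlations r_jk
  S <- solve(R)                                 # inverse of the correlation matrix
  P <- -S / sqrt(outer(diag(S), diag(S)))       # partial correlations p_jk
  diag(R) <- 0                                  # drop the j = k terms
  diag(P) <- 0
  colSums(R^2) / (colSums(R^2) + colSums(P^2))  # MSA_j for each indicator
}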

Kaiser–Meyer–Olkin criterion

The Kaiser–Meyer–Olkin criterion is calculated over all variables and returns values between 0 and 1:

\mathrm{KMO} = \frac{\sum_{j \neq k} r_{jk}^2}{\sum_{j \neq k} r_{jk}^2 + \sum_{j \neq k} p_{jk}^2}.

Here r_{jk} is the correlation between the variable in question and another, and p_{jk} is the partial correlation.

This is a function of the squared elements of the 'image' matrix compared to the squares of the original correlations. The overall MSA as well as estimates for each item are found. The index is known as the Kaiser–Meyer–Olkin (KMO) index.[4]
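
Analogously to the per-indicator measure, the overall index pools the squared correlations and squared partial correlations over all pairs of variables. A minimal R sketch under the same assumptions as above (the function name kmo_overall is hypothetical):

kmo_overall <- function(X) {
  R <- cor(X)                                # zero-order correlations
  S <- solve(R)                              # inverse correlation matrix
  P <- -S / sqrt(outer(diag(S), diag(S)))    # partial correlations
  diag(R) <- 0                               # exclude the diagonal (j = k)
  diag(P) <- 0
  sum(R^2) / (sum(R^2) + sum(P^2))           # overall KMO, between 0 and 1
}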

Interpretation of result

In flamboyant fashion, Kaiser proposed that a KMO above 0.9 was marvelous, in the 0.80s meritorious, in the 0.70s middling, in the 0.60s mediocre, in the 0.50s miserable, and below 0.5 unacceptable.[3] In general, KMO values between 0.8 and 1 indicate that the sampling is adequate. KMO values less than 0.6 indicate that the sampling is not adequate and that remedial action should be taken; in contrast, others set this cutoff value at 0.5.[5] A KMO value close to zero means that there are large partial correlations compared to the sum of correlations. In other words, there are widespread correlations, which would be a large problem for factor analysis.[1]
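
These verbal labels amount to simple cutpoints and can be written as a small helper; the function name kmo_label is hypothetical, and the breaks merely restate the thresholds quoted above.

kmo_label <- function(kmo) {
  # Map a KMO value to Kaiser's labels using the cutpoints listed above
  cut(kmo,
      breaks = c(-Inf, 0.5, 0.6, 0.7, 0.8, 0.9, Inf),
      labels = c("unacceptable", "miserable", "mediocre",
                 "middling", "meritorious", "marvelous"),
      right = FALSE)
}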

An alternative measure of whether a matrix is factorable is the Bartlett test of sphericity, which tests the degree to which the matrix deviates from an identity matrix.[1]
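
In R, this test is provided by the psych package as cortest.bartlett(); the short sketch below uses illustrative data (the data frame X is an assumption for the example).

library(psych)
X <- data.frame(replicate(4, rnorm(200)))  # illustrative data; replace with your own
cortest.bartlett(cor(X), n = nrow(X))      # chi-squared test against an identity matrix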

Example in R

If the following is run in R using the psych package:

library(psych)                  # provides the KMO() function
set.seed(5L)                    # make the random draws reproducible
# Five mutually independent standard normal variables, i.e. no common variance
five.samples <- data.frame("A"=rnorm(100), "B"=rnorm(100), "C"=rnorm(100),
                           "D"=rnorm(100), "E"=rnorm(100))
cor(five.samples)               # inspect the zero-order correlations
KMO(five.samples)               # overall and per-item measures of sampling adequacy

The following is produced:

Kaiser-Meyer-Olkin factor adequacy
Call: KMO(r = five.samples)
Overall MSA =  0.53
MSA for each item = 
   A    B    C    D    E 
0.52 0.56 0.52 0.48 0.54

This shows that the data is not well suited to factor analysis.[6]

References

  1. ^ a b c "KMO and Bartlett's Test". IBM. Retrieved 15 February 2022.
  2. ^ Kaiser, Henry F. (1970). "A second generation little jiffy". Psychometrika. 35 (4): 401–415. doi:10.1007/BF02291817. S2CID 121850294.
  3. ^ a b Kaiser, Henry F.; Rice, John (1974). "Little Jiffy, Mark IV". Educational and Psychological Measurement. 34: 111–117. doi:10.1177/001316447403400115. S2CID 144844099.
  4. ^ Cureton, Edward E.; d'Agostino, Ralph B. (2013). Factor Analysis. doi:10.4324/9781315799476. ISBN 9781315799476.
  5. ^ Dziuban, Charles D.; Shirkey, Edwin C. (1974). "When is a correlation matrix appropriate for factor analysis? Some decision rules". Psychological Bulletin. 81 (6): 358–361. doi:10.1037/h0036316.
  6. ^ "KMO function - RDocumentation". Retrieved 14 May 2021.