Psychology 524/624
Lab Lecture #2
Single-Factor Scale Reliability Analysis
Objectives:
Use AMOS to test a single-factor CFA model
Learn how to compute Cronbach’s reliability estimates using SPSS
We’ll be working with the humor.sav dataset. As a reminder, it contains 10 items. The odd-numbered items represent humor demeaning to others (Don Rickles items), while even-numbered items reflect self-deprecating humor (Woody Allen items). The humor.sav dataset contains responses on the measure from 100 college students. Students responded to each item on a 5-point scale with 1 indicating “disagree” and 5 indicating “agree.” The items are as follows:
Item01 I like to make fun of others.
Item02 I make people laugh by making fun of myself.
Item03 People find me funny when I make jokes about others.
Item04 I talk about my problems to make people laugh.
Item05 I frequently make others the target of my jokes.
Item06 People find me funny when I tell them about my failings.
Item07 I love to get people to laugh by using sarcasm.
Item08 I am funniest when I talk about my own weaknesses.
Item09 I make people laugh by exposing other people’s stupidities.
Item10 I am funny when I tell others about the dumb things I have done.
We know there are two kinds of questions here (Rickles versus Allen). So, theoretically, we could create two scale scores, one for the Rickles items and one for the Allen items. In this lab, we’ll focus on the Rickles items. In particular, we will be interested in the reliability (precision, consistency, dependability, and stability) of the Rickles scale. The items that go into the Rickles scale are items 1, 3, 5, 7, and 9.
Single-Factor Model and Omega Using AMOS
We will begin by analyzing the Spearman single-factor model of the Rickles Scale. To do so, we will go into AMOS and run a confirmatory factor analysis. Open AMOS Graphics.
Drawing Your Model in AMOS
At the top of the third column of icons, click on the icon that displays a single latent factor (i.e., a circle) that has three observed variables (i.e., boxes) and an error term associated with each observed variable (i.e., a small circle attached to each box).
In the large open box, move your cursor near the center. Left click and hold, then drag your mouse to create a latent factor (i.e., a circle). Now release the left click. If you left click your mouse again, AMOS will add an observed variable, along with an error term for that observed variable (i.e., it will add a box and a small circle attached to the box with an arrow). Each time you left click, AMOS will add another observed variable and an associated error term. We have five items for the Rickles scale, so we need to have a total of 5 observed variables.
Now let’s label our latent and observed variables. AMOS interfaces with SPSS, and you need to tell AMOS which SPSS dataset you want to use.
In the middle of the first column of icons, you’ll see one entitled “Select Data File.” Click on this icon. In the window that pops up, select File Name. Select the humor.sav file. AMOS gives you a brief description of the dataset you selected, but if you want to be sure you have the correct dataset, you can select View Data and SPSS will open the selected dataset. Click OK.
Now we need to name our variables and associate error terms. AMOS is quite picky regarding variable names! They need to exactly match the variable names that appear in the SPSS dataset that AMOS is getting its information from. To make your life easier, you can pull variable names directly out of the dataset, rather than type them in yourself.
Click View → Variables in Dataset.
AMOS will open a window that lists all the variables in your dataset. You can drag these variables into the appropriate locations. Click item01 and drag it over to the first observed variable (i.e., the first box). Repeat this for items 3, 5, 7, 9.
We also need to name our latent factor (i.e., circle). We’ll call this latent factor Rickles Scale.
Finally, we need to name each of our error terms (i.e., small circles attached to boxes). Right click (or try double-clicking) the error term for item01 and select Object Properties. In the Text tab, you’ll see a space for the Variable Name and Variable Label. In the Variable Name box, type e1. Click the X to close the window. Now right click the error term for item03, select Object Properties, and name this error term e3. Repeat this process for the 3 remaining error terms, naming them e5, e7, and e9.
Note that the latent variable (factor) leads to (arrows point toward) the five indicators. In order to identify (allow for estimation of) the model, AMOS has, by default, constrained one of the regression weights (indicator paths) to 1. [Todd will discuss the idea of model identification in class.] For our purposes, we want AMOS to estimate that path, so we will instead constrain the variance of the latent variable to 1, as is McDonald’s preference for this model type. To do this: right click the path (arrow) with the 1 on it. Select Object Properties. Under the Parameters tab, remove the 1 from the Regression weight box. Click the X to close the window. Now right click the latent factor, select Object Properties, and under the Parameters tab put a 1 in the Variance box. Close the window.
Select Analysis Properties → Output. Check: Standardized estimates, Sample moments, and Residual moments. Exit Analysis Properties.
Click the Calculate Estimates icon (you can find this icon in the third column, toward the middle). In SPSS we’re used to seeing an output box pop up once our analysis is complete. This does not happen in AMOS. Instead, we can tell that the analysis has run by looking at the column of information to the right of the icons. In the fifth box down, we should see some information (usually a chi-square value, etc.), and the word Finished should be highlighted. We can now view both the unstandardized and standardized factor loadings for our model, as well as the output.
To view the factor loadings: at the top of the column of information to the right of the icons, you’ll see two rectangles. If you click on the rectangle with the red arrow, you can view the factor loadings. In the fourth box down (below the rectangles) you’ll see “Unstandardized” and “Standardized.” If you want to view the unstandardized factor loadings, click on Unstandardized. If you want to view the standardized factor loadings, click on Standardized. AMOS defaults to the unstandardized factor loadings.
Evaluating How Well the Model Fits the Data
Chi-Square Test
AMOS gives us a chi-square test for our model. Click on the View Text icon (in the second column, toward the middle). An output window will appear. In the output window, click on Notes for Model.
Result (Default model)
Minimum was achieved
Chi-square = 3.576
Degrees of freedom = 5
Probability level = .612
The null hypothesis associated with this chi-square test is that the model fits the data well. So, we DON’T want to reject this null hypothesis. What we’d like to see in the chi-square test is a p-value GREATER THAN .05. In other words, we WANT a nonsignificant chi-square. Be aware, though, that we often have large sample sizes when conducting CFAs, and as sample size increases, the likelihood of obtaining p-values less than .05 increases; even a reasonable model can produce a significant chi-square in a large sample. In our case we obtained a nonsignificant chi-square (p = .612), which is good.
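As a quick check on the degrees of freedom reported above, here is a small sketch (not AMOS output) of the standard CFA counting rule: df equals the number of unique observed variances and covariances minus the number of freely estimated parameters.

```python
# Degrees-of-freedom check for a single-factor CFA with the factor variance
# fixed to 1 (so all 5 loadings and all 5 error variances are free).
p = 5                          # number of Rickles items
moments = p * (p + 1) // 2     # unique variances and covariances: 15
free_params = 5 + 5            # 5 loadings + 5 error variances
df = moments - free_params
print(df)                      # 5, matching the AMOS output
```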
Model Fit Indices
Because of the known problem with the chi-square test, several alternative indices of model fit have been developed. We’ll focus on just a couple. In the Output window, click on Model Fit.
RMR, GFI

Model                  RMR     GFI    AGFI    PGFI
Default model         .023    .987    .960    .329
Saturated model       .000   1.000
Independence model    .191    .694    .541    .463
We focus on the DEFAULT model, which is the model we drew. The saturated model is the model that fits the data perfectly. The independence model assumes that everything in the model is unrelated. Again, we focus on the DEFAULT model.
GFI: A value of 1.00 here indicates perfect fit, .95 is a good fit, and .90 is acceptable fit. We obtained a value of .987, indicating that a single-factor model fits the data well.
RMR: A value of 0 here indicates perfect fit. We have a value of .023, which again indicates that a single-factor model is a good fit to the data.
Residual Covariance Matrix
We can also look at the standardized residual covariance matrix in determining the fit of our single-factor model. In the output window, select Estimates from the outline on the left. In this matrix, we’d like values to be as close to zero as possible. We use 2 as a rule of thumb; that is, if we see values greater than +2 or less than -2, we might be concerned. All of the values in our matrix are less than one, so we’re happy.
Standardized Residual Covariances (Group number 1 - Default model)

          item09   item07   item05   item03   item01
item09      .000
item07      .555     .000
item05      .361     .500     .000
item03      .229     .061     .055     .000
item01      .689     .705     .062     .003     .000
Overall: Based on information from the chi-square test, the GFI, and the RMR, it appears that a single-factor model is a good fit to the data. Since we now feel safe in assuming that all five Rickles items seem to be tapping the same latent construct, we’ll calculate reliability estimates for this scale in SPSS.
Reliability of the Rickles Scale
Calculating Omega
We can use the information in the covariance matrix along with the factor loadings to calculate the reliability coefficient, omega.
In the output window, click on Sample Moments in the left column to view the covariance matrix:
Sample Covariances (Group number 1)

          ITEM09   ITEM07   ITEM05   ITEM03   ITEM01
ITEM09      .604
ITEM07      .187     .768
ITEM05      .228     .221     .722
ITEM03      .224     .315     .429     .730
ITEM01      .043     .159     .147     .179     .607
Omega is the ratio of true-score variance to total scale variance:

ω = (Σλ)² / σY²

Summing the factor loadings we get:

Σλ = .25 + .71 + .60 + .44 + .34 = 2.34

(Σλ)² = (2.34)² = 5.4756 (true-score variance of the total test score)

Summing the covariance matrix:

Off-diagonal sum = .179 + .147 + .159 + .043 + .224 + .315 + .429 + .221 + .228 + .187 = 2.132
Each covariance appears twice in the full matrix, so the off-diagonal total = 2 × 2.132 = 4.264
Diagonal sum = .607 + .730 + .722 + .768 + .604 = 3.431

Total scale variance: σY² = 4.264 + 3.431 = 7.695

ω = 5.4756 / 7.695 = .712
So the omega reliability coefficient for this scale is .712, which is right about the cutoff for “acceptable” reliability. Typically we hope for reliability coefficients around .80 or better.
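The hand calculation above can also be sketched in a few lines of code. This is a minimal illustration (not anything AMOS produces), with the unstandardized loadings and the sample covariances transcribed from the AMOS output above:

```python
import numpy as np

# Unstandardized factor loadings from the AMOS output (order as reported).
loadings = np.array([0.25, 0.71, 0.60, 0.44, 0.34])

# Sample covariance matrix (items 9, 7, 5, 3, 1), transcribed from AMOS.
S = np.array([
    [.604, .187, .228, .224, .043],
    [.187, .768, .221, .315, .159],
    [.228, .221, .722, .429, .147],
    [.224, .315, .429, .730, .179],
    [.043, .159, .147, .179, .607],
])

true_var = loadings.sum() ** 2    # (2.34)^2 = 5.4756, true-score variance
total_var = S.sum()               # 7.695, total scale variance
omega = true_var / total_var
print(round(omega, 3))            # 0.712
```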
The Standard Error of Measurement
We can also use omega and our scale variance to calculate the standard error of measurement:

SEM = σY √(1 − ω)

The standard deviation of the test is the square root of the variance:

σY = √7.695 = 2.77

SEM = 2.77 × √(1 − .712) = 2.77 × .537 = 1.487
Thus, of the total variability in Rickles scale scores (i.e., SD = 2.77), a sizable portion (SEM = 1.487) is due to measurement error.
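The same arithmetic as a short sketch, using the values from the omega calculation above:

```python
import math

# SEM = SD * sqrt(1 - reliability), here using omega as the reliability estimate.
scale_var = 7.695                     # total Rickles scale variance
omega = 5.4756 / scale_var            # about .712
sd = math.sqrt(scale_var)             # about 2.77
sem = sd * math.sqrt(1 - omega)
print(round(sd, 2), round(sem, 2))    # 2.77 1.49
```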
Calculating Cronbach’s Alpha (by hand)
Remember that the coefficient alpha is also a measure of reliability. Alpha assumes true-score equivalence (tau-equivalence) in addition to the homogeneity assumption for omega. Coefficient alpha and omega will be equivalent when the assumption of tau-equivalence holds.
The computation of Cronbach’s alpha depends on the covariance matrix of the items in the scale. The covariance matrix for the 5 items in the Rickles scale (as computed by SPSS; the values differ slightly from the AMOS matrix above) is as follows:

          item01   item03   item05   item07   item09
item01      .614
item03      .180     .737
item05      .148     .433     .729
item07      .161     .318     .224     .776
item09      .043     .226     .230     .189     .610

First, inspecting the item intercorrelations (reported alongside the covariances in the SPSS output), you see that each item correlates positively with each of the other items. The magnitude of these correlations ranges from .071 to .591.
To compute Cronbach’s alpha we use the formula

α = (k / (k − 1)) × (1 − Σσi² / σY²)

where k is the number of items, Σσi² is the sum of the item variances (i.e., the diagonal elements of the covariance matrix), and σY² is the variance of the total test score (which equals the sum of all elements in the covariance matrix). Plugging in the numbers:

Σσi² = .614 + .737 + .729 + .776 + .610 = 3.466

σY² = .614 + .180 + .148 + .161 + .043 + .180 + .737 + .433 + .318 + .226 + .148 + .433 + .729 + .224 + .230 + .161 + .318 + .224 + .776 + .189 + .043 + .226 + .230 + .189 + .610 = 7.77

α = (5/4) × (1 − 3.466/7.77) = .692
So the alpha reliability coefficient for this scale is .692, which is right about the cutoff for “acceptable” reliability. Typically we hope for reliability coefficients around .80 or better.
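For readers who prefer code, the alpha computation above can be reproduced from the covariance matrix alone. This is a sketch, with the SPSS covariance values transcribed by hand (item order: item01, item03, item05, item07, item09):

```python
import numpy as np

# SPSS item covariance matrix for the Rickles items.
C = np.array([
    [.614, .180, .148, .161, .043],
    [.180, .737, .433, .318, .226],
    [.148, .433, .729, .224, .230],
    [.161, .318, .224, .776, .189],
    [.043, .226, .230, .189, .610],
])

k = C.shape[0]
# alpha = (k/(k-1)) * (1 - sum of item variances / total scale variance)
alpha = (k / (k - 1)) * (1 - np.trace(C) / C.sum())
print(round(alpha, 3))    # 0.692
```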
Unfortunately, SPSS will not compute the standard error of measurement for us. We’ll need to do this by hand.
The Standard Error of Measurement
The standard error of measurement for the Rickles scale is as follows:

SEM = σY √(1 − α)

Plugging in our numbers, with σY = √7.77 = 2.79, the standard error of measurement is

SEM = 2.79 × √(1 − .692) = 2.79 × √.308 = 2.79 × .555 = 1.548

Thus, of the total variability in Rickles scale scores (i.e., SD = 2.79), a sizable portion (SEM = 1.548) is due to measurement error.
Calculating Cronbach’s Alpha Using SPSS
We now focus on using SPSS to compute coefficient alpha for the Rickles Scale.
In SPSS, under Analyze → Scale → Reliability Analysis, select the five items of the Rickles scale (i.e., items 1, 3, 5, 7, and 9). Use the default model of Alpha (which is Cronbach’s alpha). Click “OK.”
Reliability
The alpha reliability estimate is .692.
Note that Cronbach’s alpha is slightly smaller than the above omega coefficient. This is due to the added assumption of true-score equivalence.
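To see why alpha can fall below omega: when the loadings are unequal (tau-equivalence violated), alpha computed from the model-implied covariance matrix is at most omega. A small numerical illustration, using the loadings from our model and made-up error variances:

```python
import numpy as np

# Model-implied covariance matrix for a congeneric (single-factor) model:
# Sigma = lambda * lambda' + Theta. The loadings are from our model; the
# error variances here are made up purely for illustration.
lam = np.array([0.25, 0.71, 0.60, 0.44, 0.34])
theta = np.full(5, 0.5)
Sigma = np.outer(lam, lam) + np.diag(theta)

omega = lam.sum() ** 2 / Sigma.sum()
k = len(lam)
alpha = (k / (k - 1)) * (1 - np.trace(Sigma) / Sigma.sum())
print(alpha <= omega)    # True: alpha is a lower bound here
```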
In summary, the scale based on the five Rickles items appears to have adequate reliability, and our single-factor model fits the data rather well.
Other Useful Output from SPSS Reliability
Now we will look at some additional output that is useful when conducting reliability analyses.
In SPSS, under Analyze → Scale → Reliability Analysis, select the five items of the Rickles scale (i.e., items 1, 3, 5, 7, and 9). Use the default model of Alpha (which is Cronbach’s alpha). Click the “Statistics” box. Under “Descriptives for” select “Scale if item deleted.” Click “Continue” and then click “OK.”
Reliability
Again, the alpha reliability estimate is .692.
This output is useful for identifying how important each item is to the scale’s reliability. The last column shows what alpha would be if that particular item were dropped from the scale (i.e., if the scale were based on the other 4 items). Note that alpha improves to .709 if we don’t include the first item, but that is not much of an improvement; overall, alpha doesn’t improve much if we drop any item, so by this criterion we’ll keep them all. In contrast, alpha drops to .562 if we don’t include item 3 (the second item in the table). This is a big drop, so we definitely want to keep that item; reliability suffers a lot without it.
Another useful statistic in this table is the corrected item-total correlation. An item-total correlation is the correlation between each item and the total scale score, and ideally we want it to be large for every item. However, each item contributes to the total scale score, which inflates the correlation artificially. The corrected item-total correlation is the correlation between each item and the total score based on the other items (i.e., the other 4 items), so it is not artificially inflated. Inspecting these corrected correlations, we see that the lowest one belongs to the first item (.275), which suggests that this item is not strongly related to the other items we aggregated. Note that this was also the item whose removal improved alpha. There are no firm guidelines for dropping items based on the corrected item-total correlation, but it gives us further information about how our items relate to one another and to the scale score.
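Both statistics in this table can be derived from the item covariance matrix alone. A sketch (covariance values transcribed from the alpha calculation above; item order item01, item03, item05, item07, item09):

```python
import numpy as np

# SPSS item covariance matrix for the Rickles items.
C = np.array([
    [.614, .180, .148, .161, .043],
    [.180, .737, .433, .318, .226],
    [.148, .433, .729, .224, .230],
    [.161, .318, .224, .776, .189],
    [.043, .226, .230, .189, .610],
])

def cronbach_alpha(cov):
    k = cov.shape[0]
    return (k / (k - 1)) * (1 - np.trace(cov) / cov.sum())

alpha_if_deleted = []
item_total_corrected = []
for i in range(C.shape[0]):
    # Covariance matrix of the remaining 4 items.
    rest = np.delete(np.delete(C, i, axis=0), i, axis=1)
    alpha_if_deleted.append(cronbach_alpha(rest))
    # Corrected item-total r: cov(item i, sum of the other items)
    # divided by the product of their standard deviations.
    cov_i_rest = C[i].sum() - C[i, i]
    item_total_corrected.append(cov_i_rest / np.sqrt(C[i, i] * rest.sum()))

print([round(a, 3) for a in alpha_if_deleted])      # item01 entry: 0.709
print([round(r, 3) for r in item_total_corrected])  # item01 entry: 0.275
```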
