StatWiki: Confirmatory Factor Analysis (revision of 2016-09-19 by 128.187.116.4)
<div>Confirmatory Factor Analysis (CFA) is the next step after exploratory factor analysis (EFA) in determining the factor structure of your dataset. In the EFA we ''explore'' the factor structure (how the variables relate and group based on inter-variable correlations); in the CFA we ''confirm'' the factor structure we extracted in the EFA. <br />
*[[File:books.jpg]] '''''LESSON:''''' [http://www.kolobkreations.com/CFA.pptx '''Confirmatory Factor Analysis'''] <br />
*[[File:YouTube.png]]'''''VIDEO TUTORIAL:''''' [http://youtu.be/MCYmyzRZnIY '''CFA part 1''']<br />
*[[File:YouTube.png]]'''''VIDEO TUTORIAL:''''' [http://youtu.be/hqV9RUSwwdA '''CFA part 2''']<br />
----<br />
[[File:citation.png]]''Do you know of some citations that could be used to support the topics and procedures discussed in this section? Please [mailto:james.eric.gaskin@gmail.com email them to me] with the name of the section, procedure, or subsection that they support. '''Thanks!'''''<br />
----<br />
== Model Fit ==<br />
*[[File:YouTube.png]]'''''VIDEO TUTORIAL:''''' [http://www.youtube.com/watch?v=JkZGWUUjdLg '''Handling Model Fit''']<br />
Model fit refers to how well our proposed model (in this case, the model of the factor structure) accounts for the correlations between variables in the dataset. If we are accounting for all the major correlations inherent in the dataset (with regard to the variables in our model), then we will have good fit; if not, then there is a significant "discrepancy" between the correlations proposed and the correlations observed, and thus we have poor model fit. Our proposed model does not "fit" the observed or "estimated" model (i.e., the correlations in the dataset). Refer to the CFA video tutorial for specifics on how to perform a model fit analysis during the CFA. <br />
<br />
=== Metrics ===<br />
There are specific measures that can be calculated to determine goodness of fit. The metrics that ought to be reported are listed below, along with their acceptable thresholds. Goodness of fit is inversely related to sample size and the number of variables in the model. Thus, the thresholds below are simply a guideline. For more contextualized thresholds, see Table 12-4 in Hair et al. 2010 on page 654. The thresholds listed in the table below are from Hu and Bentler (1999).<br />
<br />
[[File:GOFMetrics1.png]]<br />
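If you want to sanity-check the indices AMOS reports, two of the common ones can be computed directly from the model and null-model chi-square statistics. Below is a rough Python sketch; all the numbers in it are made up purely for illustration:

```python
import math

def rmsea(chi2, df, n):
    """Steiger's RMSEA from the model chi-square, its degrees of
    freedom, and the sample size (good fit: below about 0.06)."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

def cfi(chi2, df, chi2_null, df_null):
    """Comparative Fit Index against the independence (null) model
    (good fit: above about 0.95)."""
    d_model = max(chi2 - df, 0.0)
    d_null = max(chi2_null - df_null, d_model)
    return 1.0 - d_model / d_null

# Hypothetical output: model chi-square 85.4 on 48 df, N = 250,
# null-model chi-square 900 on 66 df.
print(round(rmsea(85.4, 48, 250), 3))       # 0.056
print(round(cfi(85.4, 48, 900.0, 66), 3))   # 0.955
```

The ~0.06 and ~0.95 cutoffs shown in the comments are the Hu and Bentler (1999) guidelines referenced above.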
<br />
=== Modification indices ===<br />
Modification indices offer suggested remedies to discrepancies between the proposed and estimated model. In a CFA, there is not much we can do by way of adding regression lines to fix model fit, as all regression lines between latent and observed variables are already in place. Therefore, in a CFA, we look to the modification indices for the covariances. Generally, we should not covary error terms with observed or latent variables, or with other error terms that are not part of the same factor. Thus, the most appropriate modification available to us is to covary error terms that are part of the same factor. The figure below illustrates this guideline, though there are exceptions. In general, you want to address the largest modification indices before addressing more minor ones. For more information on when it is okay to covary error terms (because there are other appropriate reasons), refer to David Kenny's thoughts on the matter: [http://davidakenny.net/cm/respec.htm David's website]<br />
<br />
[[File:modelfit.png]]<br />
<br />
=== Standardized Residual Covariances ===<br />
Standardized Residual Covariances (SRCs) are much like modification indices; they point out where the discrepancies are between the proposed and estimated models. However, they also indicate whether or not those discrepancies are significant. A significant ''standardized'' residual covariance is one with an absolute value greater than 2.58. Significant residual covariances significantly decrease your model fit. Fixing model fit per the residuals matrix is similar to fixing model fit per the modification indices; the same rules apply. For a more specific run-down of how to calculate and locate residuals, refer to the CFA video tutorial. It should be noted, however, that in practice I never address SRCs unless I cannot achieve adequate fit via modification indices, because addressing the SRCs requires the removal of items.<br />
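As a quick illustration of scanning the residuals matrix, the sketch below flags every pair of items whose standardized residual covariance exceeds the 2.58 cutoff. The matrix values and item names are hypothetical:

```python
import numpy as np

# Hypothetical standardized residual covariance matrix for four items
# (in AMOS, check the residual moments box on the Output tab).
src = np.array([
    [0.00,  0.41, -0.12,  2.71],
    [0.41,  0.00,  0.33, -0.25],
    [-0.12, 0.33,  0.00,  3.05],
    [2.71, -0.25,  3.05,  0.00],
])
items = ["q1", "q2", "q3", "q4"]

# Flag significant discrepancies: |SRC| > 2.58 (upper triangle only,
# since the matrix is symmetric).
rows, cols = np.where(np.triu(np.abs(src) > 2.58, k=1))
flagged = [(items[r], items[c], src[r, c]) for r, c in zip(rows, cols)]
print(flagged)  # q4's pairings with q1 and q3 exceed the cutoff
```

Here q4 is the item involved in both significant residuals, so it would be the first candidate for removal.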
<br />
== Validity and Reliability ==<br />
*[[File:YouTube.png]]'''''VIDEO TUTORIAL:''''' [http://www.youtube.com/watch?v=yk6DVC7Wg7g '''Testing Validity and Reliability in a CFA''']<br />
<br />
It is absolutely necessary to establish convergent and discriminant validity, as well as reliability, when doing a CFA. If your factors do not demonstrate adequate validity and reliability, moving on to test a causal model will be useless: garbage in, garbage out! There are a few measures that are useful for establishing validity and reliability: Composite Reliability (CR), Average Variance Extracted (AVE), Maximum Shared Variance (MSV), and Average Shared Variance (ASV). The video tutorial will show you how to calculate these values. The thresholds for these values are as follows:<br />
<br />
'''Reliability'''<br />
*CR > 0.7<br />
<br />
'''Convergent Validity'''<br />
*AVE > 0.5<br />
<br />
'''Discriminant Validity'''<br />
*MSV < AVE<br />
*ASV < AVE<br />
*Square root of AVE greater than inter-construct correlations<br />
<br />
If you have convergent validity issues, then your variables do not correlate well with each other within their parent factor; i.e., the latent factor is not well explained by its observed variables. If you have discriminant validity issues, then your variables correlate more highly with variables outside their parent factor than with the variables within their parent factor; i.e., the latent factor is better explained by some other variables (from a different factor) than by its own observed variables.<br />
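These indices are simple functions of the standardized factor loadings and the inter-factor correlations that AMOS reports (the Stats Tools Package automates this). Here is a rough sketch of the computations, using hypothetical loadings and correlations:

```python
import numpy as np

def cr(loadings):
    """Composite Reliability from standardized loadings (want > 0.7)."""
    lam = np.asarray(loadings, dtype=float)
    return lam.sum() ** 2 / (lam.sum() ** 2 + (1 - lam ** 2).sum())

def ave(loadings):
    """Average Variance Extracted (want > 0.5)."""
    lam = np.asarray(loadings, dtype=float)
    return (lam ** 2).mean()

def msv_asv(corrs_with_other_factors):
    """Maximum and Average Shared Variance for one factor
    (want both < AVE)."""
    shared = np.asarray(corrs_with_other_factors, dtype=float) ** 2
    return shared.max(), shared.mean()

# Hypothetical three-item factor, correlated 0.3 and 0.5 with
# the two other factors in the model.
loadings = [0.7, 0.8, 0.9]
msv, asv = msv_asv([0.3, 0.5])
print(round(cr(loadings), 3), round(ave(loadings), 3), msv, asv)
```

For this hypothetical factor, the square root of the AVE (about 0.80) also exceeds its correlations with the other constructs (0.3 and 0.5), satisfying the last discriminant validity criterion above.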
<br />
If you need to cite these suggested thresholds, please use the following:<br />
<br />
*Hair, J., Black, W., Babin, B., and Anderson, R. (2010). ''Multivariate data analysis'' (7th ed.). Upper Saddle River, NJ: Prentice-Hall.<br />
<br />
AVE is a strict measure of convergent validity. Malhotra and Dash (2011) note that "AVE is a more conservative measure than CR. On the basis of CR alone, the researcher may conclude that the convergent validity of the construct is adequate, even though more than 50% of the variance is due to error" (Malhotra and Dash, 2011, p. 702).<br />
<br />
*Malhotra, N. K., and Dash, S. (2011). ''Marketing Research: An Applied Orientation'' (Paperback). London: Pearson Publishing.<br />
<br />
Here is an updated video that uses the most recent Stats Tools Package, which includes a more accurate measure of AVE and CR. <br />
*[[File:YouTube.png]]'''''VIDEO TUTORIAL:''''' [https://youtu.be/CFBUECZgUuo?list=PLnMJlbz3sefJaVv8rBL2_G85HoUko5I-- '''SEM Series (2016) 5. Confirmatory Factor Analysis Part 2''']<br />
<br />
== Common Method Bias (CMB) ==<br />
*[[File:YouTube.png]]'''''VIDEO TUTORIAL:''''' [https://youtu.be/CFBUECZgUuo?list=PLnMJlbz3sefJaVv8rBL2_G85HoUko5I-- '''Zero-constraint approach to CMB''']<br />
*'''REF:''' Podsakoff, P.M., MacKenzie, S.B., Lee, J.Y., and Podsakoff, N.P. "Common method biases in behavioral research: a critical review of the literature and recommended remedies," ''Journal of Applied Psychology'' (88:5) 2003, p 879.<br />
<br />
Common method bias refers to bias in your dataset due to something external to the measures themselves; something external to the question may have influenced the responses given. For example, collecting data using a single (common) method, such as an online survey, may introduce systematic response bias that inflates or deflates responses. A study has significant common method bias when a majority of the variance can be explained by a single factor. To test for common method bias you can conduct a few different tests, each described below. For a step-by-step guide, refer to the video tutorials.<br />
<br />
=== Harman’s single factor test ===<br />
*''It should be noted that the Harman's single factor test is no longer widely accepted and is considered an outdated and inferior approach.''<br />
Harman's single factor test checks whether the majority of the variance can be explained by a single factor. To do this, constrain the number of factors extracted in your EFA to be just one (rather than extracting via eigenvalues). Then examine the unrotated solution. If CMB is an issue, a single factor will account for the majority of the variance in the model (as in the figure below).<br />
<br />
[[File:Harman.png]]<br />
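In practice you would run the one-factor EFA in SPSS, but the share of variance captured by the first unrotated factor can be approximated outside SPSS with the largest eigenvalue of the item correlation matrix. A sketch on random (hypothetical) data:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical survey responses: 200 respondents by 8 items.
data = rng.normal(size=(200, 8))

# Proportion of total variance captured by the first unrotated factor,
# approximated by the largest eigenvalue of the correlation matrix
# (the eigenvalues of a correlation matrix sum to the number of items).
eigvals = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))
first_factor_share = eigvals[-1] / eigvals.sum()
print(first_factor_share < 0.5)  # True here: no single dominant factor
```

A `first_factor_share` above 0.50 would indicate that a single factor accounts for the majority of the variance, i.e., a CMB concern; for these independent random items it stays well below that.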
<br />
=== Common Latent Factor ===<br />
This method uses a common latent factor (CLF) to capture the common variance among all observed variables in the model. To do this, simply add a latent factor to your AMOS CFA model (as in the figure below), and then connect it to '''all''' observed items in the model. Then compare the standardized regression weights from this model to those of a model without the CLF. If there are large differences (greater than 0.200), then you will want to retain the CLF as you impute composites from factor scores or move on to the structural model. The CLF video tutorial demonstrates how to do this.<br />
<br />
[[File:CMF.png]]<br />
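The comparison itself is simple arithmetic on the two sets of standardized regression weights; a sketch with hypothetical values read off the two AMOS outputs:

```python
# Standardized regression weights from the two runs (hypothetical).
without_clf = {"q1": 0.82, "q2": 0.78, "q3": 0.85, "q4": 0.71}
with_clf    = {"q1": 0.80, "q2": 0.55, "q3": 0.83, "q4": 0.69}

# Items whose loading drops by more than 0.200 once the CLF absorbs
# common variance suggest the CLF should be retained.
affected = {item: round(without_clf[item] - with_clf[item], 3)
            for item in without_clf
            if without_clf[item] - with_clf[item] > 0.200}
print(affected)  # {'q2': 0.23}
```

In this hypothetical case only q2 shifts by more than 0.200, which would already be reason to retain the CLF.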
<br />
=== Marker Variable ===<br />
This method is simply an extended, more accurate version of the common latent factor method. For this method, just add another latent factor to the model (as in the figure below), ''but make sure it is something that you would not expect to correlate with the other latent factors in the model'' (i.e., the observed variables for this new factor should have low, or no, correlation with the observed variables from the other factors). Then add the common latent factor. This method teases out truer common variance than the basic common latent factor method because it finds the common variance between unrelated latent factors; any common variance is thus likely due to common method bias rather than natural correlations. This method is demonstrated in the common method bias video tutorial.<br />
<br />
[[File:Marker.png]]<br />
<br />
=== Zero-constrained ===<br />
The most current and best approach is the zero-constrained test where you include the CLF and a Marker if available, and then conduct a chi-square difference test between the unconstrained model and a model where all paths from the CLF are constrained to zero. This approach tests whether the amount of shared variance across all variables is significantly different from zero. If it is, then we conclude that method bias does exist in our measures. The video listed at the top of this subsection demonstrates this approach.<br />
<br />
== Measurement Model Invariance ==<br />
*[[File:YouTube.png]]'''''VIDEO TUTORIAL:''''' [http://www.youtube.com/watch?v=6j4_ZrkCxTc '''Measurement Model Invariance''']<br />
Before creating composite variables for a path analysis, configural and metric invariance should be tested during the CFA to validate that the factor structure and loadings are sufficiently equivalent across groups, otherwise your composite variables will not be very useful (because they are not actually measuring the same underlying latent construct for both groups). <br />
<br />
=== Configural ===<br />
Configural invariance tests whether the factor structure represented in your CFA achieves adequate fit when both groups are tested together and freely (i.e., without any cross-group path constraints). To do this, simply build your measurement model as usual, create two groups in AMOS (e.g., male and female), and then split the data along gender. Next, attend to model fit as usual (here’s a reminder: [[Confirmatory Factor Analysis#Model Fit|Model Fit]]). If the resultant model achieves good fit, then you have configural invariance. If you don’t pass the configural invariance test, then you may need to look at the modification indices to improve your model fit or to see how to restructure your CFA.<br />
<br />
=== Metric ===<br />
If we pass the test of configural invariance, then we need to test for metric invariance. To test for metric invariance, simply perform a chi-square difference test on the two groups just as you would for a structural model. The evaluation is the same as in the structural model invariance test: if you have a significant p-value for the chi-square difference test, then you have evidence of differences between groups, otherwise, they are invariant and you may proceed to make your composites from this measurement model (but make sure you use the whole dataset when you create composites, instead of using the split dataset).<br />
<br />
An even simpler and less time-consuming approach to metric invariance is to conduct a multigroup moderation test using critical ratios for differences in AMOS. Below is a video to explain how to do this. The video is about a lot of things in the CFA, but the link below will start you at the time point for testing metric invariance with critical ratios.<br />
*[[File:YouTube.png]]'''''VIDEO TUTORIAL:''''' [http://youtu.be/MCYmyzRZnIY '''Metric Invariance''']<br />
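The critical ratio for a difference is just a z-test on the two groups' unstandardized loadings and their standard errors; a sketch with hypothetical numbers:

```python
import math

def critical_ratio_for_difference(b1, se1, b2, se2):
    """z-statistic for the difference between the unstandardized
    loadings of the same item in two groups; |z| > 1.96 means the
    item loads differently across groups (p < .05)."""
    return (b1 - b2) / math.sqrt(se1 ** 2 + se2 ** 2)

# Hypothetical: item2 loads 0.88 (SE 0.10) in one group and
# 0.34 (SE 0.12) in the other.
z = critical_ratio_for_difference(0.88, 0.10, 0.34, 0.12)
print(round(z, 2))  # 3.46
```

Since |z| exceeds 1.96 here, this hypothetical item would be flagged as non-invariant and considered for removal (see the contingency plans below).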
<br />
=== Contingency Plans ===<br />
<br />
If you do not achieve invariant models, here are some appropriate approaches in the order I would attempt them.<br />
*1. '''Modification indices:''' Fit the model for each group using the unconstrained measurement model. You can toggle between groups when looking at modification indices. So, for example, for males there might be a high MI for the covariance between e1 and e2, but for females this might not be the case. Go ahead and add those covariances as appropriate; when you add a covariance, AMOS adds it for both groups, even if you only needed it for one of them. If fitting the model this way does not solve your invariance issues, then you will need to look at differences in regression weights. <br />
*2. '''Regression weights:''' You need to figure out which item or items are causing the trouble (i.e., which ones do not measure the same across groups). The cause of the lack of invariance is most likely due to one of two things: the strength of the loading for one or more items differs significantly across groups, or, an item or two load better on a factor other than their own for one or more groups. To address the first issue, just look at the standardized regression weights for each group to see if there are any major differences (just eyeball it). If you find a regression weight that is exceptionally different (for example, item2 on Factor 3 has a loading of 0.34 for males and 0.88 for females), then you may need to remove that item if possible. Retest and see if invariance issues are solved. If not, try addressing the second issue (explained next).<br />
*3. '''Standardized Residual Covariances:''' To address the second issue, you need to analyze the standardized residual covariances (check the residual moments box in the output tab). I talk about this a little bit in my video called “[http://www.youtube.com/watch?v=JkZGWUUjdLg Model fit during a Confirmatory Factor Analysis (CFA) in AMOS]” around the 8:35 mark. This matrix can also be toggled between groups. Here is a small example for CSRs and BCRs. We observe that for the BCR group rd3 and q5 have high standardized residual covariances with sw1. So, we could remove sw1 and see if that fixes things, but SW only has three items right now, so another option is to remove rd3 or q5 and see if that fixes things, and if not, then return to this matrix after rerunning things, and see if there are any other issues. Remove items sparingly, and only one at a time, trying your best to leave at least three items with each factor, although two items will also sometimes work if necessary (two just becomes unstable). If you still have issues, then your groups are exceptionally different… This may be due to small sample size for one of the groups. If such is the case, then you may have to list that as a limitation and just move on.<br />
<br />
== 2nd Order Factors ==<br />
*[[File:YouTube.png]]'''''VIDEO TUTORIAL:''''' [http://www.youtube.com/watch?v=HBQPqj63Y7s '''Handling 2nd Order Factors''']<br />
Handling 2nd order factors in AMOS is not difficult, but it is tricky; if you don't get it right, the model won't run. The pictures below offer a simple example of how you would model a 2nd order factor in a measurement model and in a structural model. The YouTube video tutorial above demonstrates how to handle 2nd order factors, and explains how to report them.<br />
<br />
[[File:2ndOrderCFA.png]]<br />
[[File:2ndOrderSEM.png]]<br />
<br />
== Common CFA Problems ==<br />
1. CFA that reaches iteration limit.<br />
*[[File:YouTube.png]]'''''VIDEO TUTORIAL:''''' [https://youtu.be/B7YOv7hSohY '''Iteration limit reached in AMOS'''] <br />
2. CFA that shows CMB = 0 (sometimes happens when paths from CLF are constrained to be equal)<br />
*The best approach to CMB is just to not constrain them to be equal. Instead, it is best to do a chi-square difference test between the unconstrained model (with CLF and marker if available) and the same model but with all paths from the CLF constrained to zero. This tells us whether the common variance is different from zero.<br />
3. CFA with negative error variances<br />
*This shouldn’t happen if all data screening and EFA worked out well, but it still happens… In such cases, it is permitted to constrain the error variance to a small positive number (e.g., 0.001)<br />
4. CFA with negative error covariances (sometimes shows up as “not positive definite”)<br />
*In such cases, there is usually a deeper measurement issue (such as skewness or kurtosis, too much missing data, or a nominal variable). If it cannot be fixed by addressing these deeper issues, then you might be able to correct it by moving the latent variable path constraint (usually 1) to another path. This issue usually accompanies a negative error variance, so we can usually resolve it by fixing the negative error variance first. <br />
5. CFA with Heywood cases<br />
*This often happens when we have only two items for a latent variable, and one of them is very dominant. First try moving the latent variable path constraint to a different path. If this doesn’t work then, move the path constraint up to the latent variable variance constraint AND constrain the paths to be equal (by naming them both the same thing, like “aaa”).<br />
<br />
*[[File:YouTube.png]]'''''VIDEO TUTORIAL:''''' [https://youtu.be/Vx24KFf-rAo '''AMOS Heywood Case'''] <br />
6. CFA with discriminant validity issues<br />
*This shouldn’t happen if the EFA solution was satisfactory. However, it still happens sometimes when two latent factors are strongly correlated. This strong correlation is a sign of overlapping traits. For example, confidence and self-efficacy. These two traits are too similar. Either one could be dropped, or you could create a 2nd order factor out of them: <br />
<br />
*[[File:YouTube.png]]'''''VIDEO TUTORIAL:''''' [http://www.youtube.com/watch?v=HBQPqj63Y7s '''Handling 2nd Order Factors''']<br />
7. CFA with “missing constraint” error<br />
*Sometimes the CFA will say you need to impose 1 additional constraint (sometimes it says more than this). This is usually caused by drawing the model incorrectly. Check to see if all latent variables have a single path constrained to 1 (or the latent variable variance constrained to 1).</div>128.187.116.4http://statwiki.kolobkreations.com/index.php?title=Confirmatory_Factor_Analysis&diff=762106Confirmatory Factor Analysis2016-09-19T19:36:29Z<p>128.187.116.4: /* Validity and Reliability */</p>
<hr />
<div>Confirmatory Factor Analysis (CFA) is the next step after exploratory factor analysis to determine the factor structure of your dataset. In the EFA we ''explore'' the factor structure (how the variables relate and group based on inter-variable correlations); in the CFA we ''confirm'' the factor structure we extracted in the EFA. <br />
*[[File:books.jpg]] '''''LESSON:''''' [http://www.kolobkreations.com/CFA.pptx '''Confirmatory Factor Analysis'''] <br />
*[[File:YouTube.png]]'''''VIDEO TUTORIAL:''''' [http://youtu.be/MCYmyzRZnIY '''CFA part 1''']<br />
*[[File:YouTube.png]]'''''VIDEO TUTORIAL:''''' [http://youtu.be/hqV9RUSwwdA '''CFA part 2''']<br />
----<br />
[[File:citation.png]]''Do you know of some citations that could be used to support the topics and procedures discussed in this section? Please [mailto:james.eric.gaskin@gmail.com email them to me] with the name of the section, procedure, or subsection that they support. '''Thanks!'''''<br />
----<br />
== Model Fit ==<br />
*[[File:YouTube.png]]'''''VIDEO TUTORIAL:''''' [http://www.youtube.com/watch?v=JkZGWUUjdLg '''Handling Model Fit''']<br />
Model fit refers to how well our proposed model (in this case, the model of the factor structure) accounts for the correlations between variables in the dataset. If we are accounting for all the major correlations inherent in the dataset (with regards to the variables in our model), then we will have good fit; if not, then there is a significant "discrepancy" between the correlations proposed and the correlations observed, and thus we have poor model fit. Our proposed model does not "fit" the observed or "estimated" model (i.e., the correlations in the dataset). Refer to the CFA video tutorial for specifics on how to go about performing a model fit analysis during the CFA. <br />
<br />
=== Metrics ===<br />
There are specific measures that can be calculated to determine goodness of fit. The metrics that ought to be reported are listed below, along with their acceptable thresholds. Goodness of fit is inversely related to sample size and the number of variables in the model. Thus, the thresholds below are simply a guideline. For more contextualized thresholds, see Table 12-4 in Hair et al. 2010 on page 654. The thresholds listed in the table below are from Hu and Bentler (1999).<br />
<br />
[[File:GOFMetrics1.png]]<br />
<br />
=== Modification indices ===<br />
Modification indices offer suggested remedies to discrepancies between the proposed and estimated model. In a CFA, there is not much we can do by way of adding regression lines to fix model fit, as all regression lines between latent and observed variables are already in place. Therefore, in a CFA, we look to the modification indices for the covariances. Generally, we should not covary error terms with observed or latent variables, or with other error terms that are not part of the same factor. Thus, the most appropriate modification available to us is to covary error terms that are part of the same factor. The figure below illustrates this guideline - however, there are exceptions. In general, you want to address the largest modification indices before addressing more minor ones. For more information on when it is okay to covary error terms (because there are other appropriate reasons), refer to David Kenny's thoughts on the matter: [http://davidakenny.net/cm/respec.htm David's website]<br />
<br />
[[File:modelfit.png]]<br />
<br />
=== Standardized Residual Covariances ===<br />
Standardized Residual Covariances (SRCs) are much like modification indices; they point out where the discrepancies are between the proposed and estimated models. However, they also indicate whether or not those discrepancies are significant. A significant ''standardized'' residual covariance is one with an absolute value greater than 2.58. Significant residual covariances significantly decrease your model fit. Fixing model fit per the residuals matrix is similar to fixing model fit per the modification indices. The same rules apply. For a more specific run-down of how to calculate and locate residuals, refer to the CFA video tutorial. It should be noted however, that in practice, I never address SRCs unless I cannot achieve adequate fit via modification indices, because addressing the SRCs requires the removal of items.<br />
<br />
== Validity and Reliability ==<br />
*[[File:YouTube.png]]'''''VIDEO TUTORIAL:''''' [http://www.youtube.com/watch?v=yk6DVC7Wg7g '''Testing Validity and Reliability in a CFA''']<br />
<br />
It is absolutely necessary to establish convergent and discriminant validity, as well as reliability, when doing a CFA. If your factors do not demonstrate adequate validity and reliability, moving on to test a causal model will be useless - garbage in, garbage out! There are a few measures that are useful for establishing validity and reliability: Composite Reliability (CR), Average Variance Extracted (AVE), Maximum Shared Variance (MSV), and Average Shared Variance (ASV). The video tutorial will show you how to calculate these values. The thresholds for these values are as follows:<br />
<br />
'''Reliability'''<br />
*CR > 0.7<br />
<br />
'''Convergent Validity'''<br />
*AVE > 0.5<br />
<br />
'''Discriminant Validity'''<br />
*MSV < AVE<br />
*ASV < AVE<br />
*Square root of AVE greater than inter-construct correlations<br />
<br />
If you have convergent validity issues, then your variables do not correlate well with each other within their parent factor; i.e, the latent factor is not well explained by its observed variables. If you have discriminant validity issues, then your variables correlate more highly with variables outside their parent factor than with the variables within their parent factor; i.e., the latent factor is better explained by some other variables (from a different factor), than by its own observed variables.<br />
<br />
If you need to cite these suggested thresholds, please use the following:<br />
<br />
Hair, J., Black, W., Babin, B., and Anderson, R. (2010). ''Multivariate data analysis'' (7th ed.): Prentice-Hall, Inc. Upper Saddle River, NJ, USA.<br />
<br />
AVE is a strict measure of convergent validity. Malhotra and Dash (2011) note that "AVE is a more conservative measure than CR. On the basis of CR alone, the researcher may conclude that the convergent validity of the construct is adequate, even though more than 50% of the variance is due to error.” (Malhotra and Dash, 2011, p.702).<br />
<br />
Malhotra N. K., Dash S. (2011). Marketing Research an Applied Orientation (Paperback). London: Pearson Publishing.<br />
<br />
Here is an updated video that uses the most recent Stats Tools Package, which includes a more accurate measure of AVE and CR. <br />
*[[File:YouTube.png]]'''''VIDEO TUTORIAL:''''' [https://youtu.be/CFBUECZgUuo?list=PLnMJlbz3sefJaVv8rBL2_G85HoUko5I-- '''SEM Series (2016) 5. Confirmatory Factor Analysis Part 2''']<br />
<br />
== Common Method Bias (CMB) ==<br />
*[[File:YouTube.png]]'''''VIDEO TUTORIAL:''''' [https://youtu.be/CFBUECZgUuo?list=PLnMJlbz3sefJaVv8rBL2_G85HoUko5I-- '''Zero-constraint approach to CMB''']<br />
*'''REF:''' Podsakoff, P.M., MacKenzie, S.B., Lee, J.Y., and Podsakoff, N.P. "Common method biases in behavioral research: a critical review of the literature and recommended remedies," ''Journal of Applied Psychology'' (88:5) 2003, p 879.<br />
<br />
Common method bias refers to a bias in your dataset due to something external to the measures. Something external to the question may have influenced the response given. For example, collecting data using a single (common) method, such as an online survey, may introduce systematic response bias that will either inflate or deflate responses. A study that has significant common method bias is one in which a majority of the variance can be explained by a single factor. To test for a common method bias you can do a few different tests. Each will be described below. For a step by step guide, refer to the video tutorials.<br />
<br />
=== Harman’s single factor test ===<br />
*''It should be noted that the Harman's single factor test is no longer widely accepted and is considered an outdated and inferior approach.''<br />
A Harman's single factor test tests to see if the majority of the variance can be explained by a single factor. To do this, constrain the number of factors extracted in your EFA to be just one (rather than extracting via eigenvalues). Then examine the unrotated solution. If CMB is an issue, a single factor will account for the majority of the variance in the model (as in the figure below).<br />
<br />
[[File:Harman.png]]<br />
<br />
=== Common Latent Factor ===<br />
This method uses a common latent factor (CLF) to capture the common variance among all observed variables in the model. To do this, simply add a latent factor to your AMOS CFA model (as in the figure below), and then connect it to '''all''' observed items in the model. Then compare the standardised regression weights from this model to the standardized regression weights of a model without the CLF. If there are large differences (like greater than 0.200) then you will want to retain the CLF as you either impute composites from factor scores, or as you move in to the structural model. The CLF video tutorial demonstrates how to do this.<br />
<br />
[[File:CMF.png]]<br />
<br />
=== Marker Variable ===<br />
This method is simply an extended, and more accurate way to do the common latent factor method. For this method, just add another latent factor to the model (as in the figure below), ''but make sure it is something that you would not expect to correlate with the other latent factors in the model'' (i.e., the observed variables for this new factor should have low, or no, correlation with the observed variables from the other factors). Then add the common latent factor. This method teases out truer common variance than the basic common latent factor method because it is finding the common variance between unrelated latent factors. Thus, any common variance is likely due to a common method bias, rather than natural correlations. This method is demonstrated in the common method bias video tutorial.<br />
<br />
[[File:Marker.png]]<br />
<br />
=== Zero-constrained ===<br />
The most current and best approach is the zero-constrained test where you include the CLF and a Marker if available, and then conduct a chi-square difference test between the unconstrained model and a model where all paths from the CLF are constrained to zero. This approach tests whether the amount of shared variance across all variables is significantly different from zero. If it is, then we conclude that method bias does exist in our measures. The video listed at the top of this subsection demonstrates this approach.<br />
<br />
== Measurement Model Invariance ==<br />
*[[File:YouTube.png]]'''''VIDEO TUTORIAL:''''' [http://www.youtube.com/watch?v=6j4_ZrkCxTc '''Measurement Model Invariance''']<br />
Before creating composite variables for a path analysis, configural and metric invariance should be tested during the CFA to validate that the factor structure and loadings are sufficiently equivalent across groups, otherwise your composite variables will not be very useful (because they are not actually measuring the same underlying latent construct for both groups). <br />
<br />
=== Configural ===<br />
Configural invariance tests whether the factor structure represented in your CFA achieves adequate fit when both groups are tested together and freely (i.e., without any cross-group path constraints). To do this, simply build your measurement model as usual, create two groups in AMOS (e.g., male and female), and then split the data along gender. Next, attend to model fit as usual (here’s a reminder: [[Confirmatory Factor Analysis#Model Fit|Model Fit]]). If the resultant model achieves good fit, then you have configural invariance. If you don’t pass the configural invariance test, then you may need to look at the modification indices to improve your model fit or to see how to restructure your CFA.<br />
<br />
=== Metric ===<br />
If we pass the test of configural invariance, then we need to test for metric invariance. To test for metric invariance, simply perform a chi-square difference test on the two groups just as you would for a structural model. The evaluation is the same as in the structural model invariance test: if you have a significant p-value for the chi-square difference test, then you have evidence of differences between groups; otherwise, the groups are invariant and you may proceed to make your composites from this measurement model (but make sure you use the whole dataset when you create composites, instead of the split dataset).<br />
<br />
An even simpler and less time-consuming approach to metric invariance is to conduct a multigroup moderation test using critical ratios for differences in AMOS. Below is a video to explain how to do this. The video is about a lot of things in the CFA, but the link below will start you at the time point for testing metric invariance with critical ratios.<br />
*[[File:YouTube.png]]'''''VIDEO TUTORIAL:''''' [http://youtu.be/MCYmyzRZnIY '''Metric Invariance''']<br />
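The critical ratio for differences that AMOS reports for a pair of parameters is essentially a z-statistic computed from the two groups' unstandardized estimates and standard errors. The sketch below shows that computation; the loading and standard-error values are hypothetical (e.g., item2 on Factor 3 for males vs. females).

```python
# Critical ratio for the difference between two groups' loadings:
# z = (b1 - b2) / sqrt(se1^2 + se2^2).
# Estimates and standard errors are hypothetical values of the kind
# found in each group's unstandardized regression weights table.
import math
from scipy.stats import norm

def critical_ratio(b1, se1, b2, se2):
    """Z-statistic and two-tailed p-value for the difference b1 - b2."""
    z = (b1 - b2) / math.sqrt(se1**2 + se2**2)
    p = 2 * norm.sf(abs(z))
    return z, p

# Hypothetical: item2 on Factor 3, males (0.34) vs. females (0.88)
z, p = critical_ratio(0.34, 0.08, 0.88, 0.07)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A |z| greater than 1.96 (p &lt; .05) indicates the loading differs significantly across groups, which would flag that item as a source of non-invariance.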
<br />
=== Contingency Plans ===<br />
<br />
If you do not achieve invariant models, here are some appropriate approaches in the order I would attempt them.<br />
*1. '''Modification indices:''' Fit the model for each group using the unconstrained measurement model. You can toggle between groups when looking at modification indices. For example, for males there might be a high MI for the covariance between e1 and e2, but for females there might not. Add those covariances as needed; note that AMOS adds them for both groups, even if only one group needed them. If fitting the model this way does not solve your invariance issues, then you will need to look at differences in regression weights. <br />
*2. '''Regression weights:''' You need to figure out which item or items are causing the trouble (i.e., which ones do not measure the same across groups). The lack of invariance is most likely due to one of two things: the strength of the loading for one or more items differs significantly across groups, or an item or two load better on a factor other than their own for one or more groups. To address the first issue, look at the standardized regression weights for each group to see if there are any major differences (just eyeball it). If you find a regression weight that is exceptionally different (for example, item2 on Factor 3 has a loading of 0.34 for males and 0.88 for females), then you may need to remove that item if possible. Retest and see if the invariance issues are solved. If not, try addressing the second issue (explained next).<br />
*3. '''Standardized Residual Covariances:''' To address the second issue, analyze the standardized residual covariances (check the residual moments box in the output tab). I talk about this a little bit in my video called “[http://www.youtube.com/watch?v=JkZGWUUjdLg Model fit during a Confirmatory Factor Analysis (CFA) in AMOS]” around the 8:35 mark. This matrix can also be toggled between groups. Here is a small example for CSRs and BCRs. We observe that for the BCR group, rd3 and q5 have high standardized residual covariances with sw1. We could remove sw1, but SW only has three items right now, so another option is to remove rd3 or q5 instead; if that does not fix things, return to this matrix after rerunning the model and look for other issues. Remove items sparingly, and only one at a time, trying your best to leave at least three items on each factor (two items will also sometimes work if necessary, but two becomes unstable). If you still have issues, then your groups are exceptionally different. This may be due to a small sample size for one of the groups. If such is the case, then you may have to list that as a limitation and move on.<br />
<br />
== 2nd Order Factors ==<br />
*[[File:YouTube.png]]'''''VIDEO TUTORIAL:''''' [http://www.youtube.com/watch?v=HBQPqj63Y7s '''Handling 2nd Order Factors''']<br />
Handling 2nd order factors in AMOS is not difficult, but it is tricky. And, if you don't get it right, it won't run. The pictures below offer a simple example of how you would model a 2nd order factor in a measurement model and in a structural model. The YouTube video tutorial above demonstrates how to handle 2nd order factors, and explains how to report them.<br />
<br />
[[File:2ndOrderCFA.png]]<br />
[[File:2ndOrderSEM.png]]<br />
<br />
== Common CFA Problems ==<br />
1. CFA that reaches iteration limit.<br />
*[[File:YouTube.png]]'''''VIDEO TUTORIAL:''''' [https://youtu.be/B7YOv7hSohY '''Iteration limit reached in AMOS'''] <br />
2. CFA that shows CMB = 0 (sometimes happens when paths from CLF are constrained to be equal)<br />
*The best approach to CMB is simply not to constrain the CLF paths to be equal. Instead, conduct a chi-square difference test between the unconstrained model (with CLF, and marker if available) and the same model with all paths from the CLF constrained to zero. This tells us whether the common variance is different from zero.<br />
3. CFA with negative error variances<br />
*This shouldn’t happen if all data screening and EFA worked out well, but it still happens… In such cases, it is permitted to constrain the error variance to a small positive number (e.g., 0.001).<br />
4. CFA with negative error covariances (sometimes shows up as “not positive definite”)<br />
*In such cases, there is usually a deeper measurement issue (such as skewness or kurtosis, too much missing data, or a nominal variable). If addressing these deeper issues does not fix it, you might be able to correct it by moving the latent variable path constraint (usually 1) to another path. This issue usually accompanies negative error variance, so fixing the negative error variance first usually resolves it. <br />
5. CFA with Heywood cases<br />
*This often happens when we have only two items for a latent variable and one of them is very dominant. First try moving the latent variable path constraint to a different path. If this doesn’t work, then move the constraint up to the latent variable variance AND constrain the two paths to be equal (by naming them both the same thing, like “aaa”).<br />
<br />
*[[File:YouTube.png]]'''''VIDEO TUTORIAL:''''' [https://youtu.be/Vx24KFf-rAo '''AMOS Heywood Case'''] <br />
6. CFA with discriminant validity issues<br />
*This shouldn’t happen if the EFA solution was satisfactory. However, it still happens sometimes when two latent factors are strongly correlated. A strong correlation is a sign of overlapping traits; for example, confidence and self-efficacy are too similar. Either one could be dropped, or you could create a 2nd order factor out of them: <br />
<br />
*[[File:YouTube.png]]'''''VIDEO TUTORIAL:''''' [http://www.youtube.com/watch?v=HBQPqj63Y7s '''Handling 2nd Order Factors''']<br />
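Discriminant validity is commonly evaluated with the Fornell-Larcker criterion: the square root of each factor's AVE should exceed that factor's correlation with every other factor. The sketch below applies that check to the confidence/self-efficacy example; all loadings and the factor correlation are hypothetical numbers chosen to illustrate a failing case.

```python
# Fornell-Larcker discriminant validity check:
# sqrt(AVE) of each factor should exceed its correlation with other factors.
# Standardized loadings and factor correlation are hypothetical.
import math

def ave(loadings):
    """Average variance extracted from standardized loadings."""
    return sum(l**2 for l in loadings) / len(loadings)

confidence_loadings    = [0.81, 0.77, 0.84]
self_efficacy_loadings = [0.79, 0.83, 0.75]
factor_correlation = 0.92  # strong correlation: overlapping traits

for name, loads in [("confidence", confidence_loadings),
                    ("self-efficacy", self_efficacy_loadings)]:
    root_ave = math.sqrt(ave(loads))
    verdict = "passes" if root_ave > factor_correlation else "fails"
    print(f"{name}: sqrt(AVE) = {root_ave:.3f}, "
          f"{verdict} vs r = {factor_correlation}")
```

Here both factors fail (sqrt(AVE) is below the 0.92 inter-factor correlation), which is exactly the situation where dropping one factor or combining them under a 2nd order factor becomes appropriate.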
7. CFA with “missing constraint” error<br />
*Sometimes the CFA will say you need to impose 1 additional constraint (sometimes it says more than this). This is usually caused by drawing the model incorrectly. Check that every latent variable has a single path constrained to 1 (or its variance constrained to 1).</div>