# Structural Equation Modeling

“Structural equation modeling (SEM) grows out of and serves purposes similar to multiple regression, but in a more powerful way which takes into account the modeling of interactions, nonlinearities, correlated independents, measurement error, correlated error terms, multiple latent independents each measured by multiple indicators, and one or more latent dependents also each with multiple indicators. SEM may be used as a more powerful alternative to multiple regression, path analysis, factor analysis, time series analysis, and analysis of covariance. That is, these procedures may be seen as special cases of SEM, or, to put it another way, SEM is an extension of the general linear model (GLM) of which multiple regression is a part.” (http://www.pire.org/)

SEM is an umbrella concept for analyses such as mediation and moderation. This wiki page provides general instruction and guidance regarding how to write hypotheses for different types of SEMs, what to do with control variables, mediation, interaction, multi-group analyses, and model fit for structural models. Videos and slide presentations are provided in the subsections.

Do you know of some citations that could be used to support the topics and procedures discussed in this section? Please email them to me with the name of the section, procedure, or subsection that they support. Thanks!

## Hypotheses

Hypotheses are a keystone to causal theory. However, wording hypotheses is clearly a struggle for many researchers (just select at random any article from a good academic journal, and count the wording issues!). In this section I offer examples of how you might word different types of hypotheses. These examples are not exhaustive, but they are safe.

### Direct effects

"Diet has a positive effect on weight loss"

"An increase in hours spent watching television will negatively effect weight loss"

### Mediated effects

For mediated effects, be sure to indicate the direction of the mediation (positive or negative), the degree of the mediation (partial, full, or simply indirect), and the direction of the mediated relationship (positive or negative).

"Exercise positively and partially mediates the positive relationship between diet and weight loss"

"Television time positively and fully mediates the positive relationship between diet and weight loss"

"Diet affects weight loss positively and indirectly through exercise"

### Interaction effects

"Exercise positively moderates the positive relationship between diet and weight loss"

"Exercise amplifies the positive relationship between diet and weight loss"

"TV time negatively moderates (dampens) the positive relationship between diet and weight loss"

### Multi-group effects

"Body Mass Index (BMI) moderates the relationship between exercise and weight loss, such that for those with a low BMI, the effect is negative (i.e., you gain weight - muscle mass), and for those with a high BMI, the effect is positive (i.e., exercising leads to weight loss)"

"Age moderates the relationship between exercise and weight loss, such that for age < 40, the positive effect is stronger than for age > 40"

"Diet moderates the relationship between exercise and weight loss, such that for western diets the effect is positive and weak, for eastern (asia) diets, the effect is positive and strong"

### Mediated Moderation

An example of a mediated moderation hypothesis would be something like:

“Ethical concerns strengthen the negative indirect effect (through burnout) between customer rejection and job satisfaction.”

In this case, the IV is customer rejection, the DV is job satisfaction, burnout is the mediator, and the moderator is ethical concerns. The moderation is conducted through an interaction. However, if you have a categorical moderator, it would be something more like this (using gender as the moderator):

“The negative indirect effect between customer rejection and job satisfaction (through burnout) is stronger for men than for women.”
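In regression terms, hypotheses like these are often checked by computing the conditional indirect effect at different values of the moderator. Below is a minimal pure-Python sketch; the path coefficients (`a1`, `a3`, `b`) are made-up illustration values, not estimates from any real study:

```python
def conditional_indirect(a1, a3, b, w):
    """Conditional indirect effect of X on Y through M, where the
    X -> M path is moderated by W:
        M = a1*X + a3*(X*W) + ...
        Y = b*M + ...
    The indirect effect at moderator value w is (a1 + a3*w) * b."""
    return (a1 + a3 * w) * b

# Hypothetical path estimates: rejection -> burnout (moderated by
# ethical concerns), burnout -> satisfaction is negative.
a1, a3, b = 0.30, 0.20, -0.50

low = conditional_indirect(a1, a3, b, w=0.0)   # at low ethical concerns
high = conditional_indirect(a1, a3, b, w=1.0)  # at high ethical concerns
print(low, high)  # the negative indirect effect is stronger (more negative) at high W
```

If the indirect effect is meaningfully more negative at high values of the moderator than at low values, the moderated mediation hypothesis is supported.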

### Handling controls

When including controls in hypotheses (yes, you should include them), simply add at the end of any hypothesis: "when controlling for...[list control variables here]." For example:

"Exercise positively moderates the positive relationship between diet and weight loss when controlling for TV time and diet"

"Diet has a positive effect on weight loss when controlling for TV time and diet"

Another approach is to state somewhere above your hypotheses (while you're setting up your theory) that all your hypotheses take into account the effects of the following controls: A, B, and C. And then make sure to explain why.

### Supporting Hypotheses

Getting the wording right is only part of the battle, and is mostly useless if you cannot support your reasoning for WHY you think the relationships proposed in the hypotheses should exist. Simply saying X has a positive effect on Y is not sufficient to make a causal statement. You must then go on to explain the various reasons behind your hypothesized relationship. Take diet and weight loss, for example. The hypothesis is, "Diet has a positive effect on weight loss". The supporting logic would then be something like:

• Weight is gained as we consume calories. Diet reduces the number of calories consumed. Therefore, the more we diet, the more weight we should lose (or the less weight we should gain).

## Controls

Controls are potentially confounding variables that we need to account for, but that don’t drive our theory. For example, in Dietz and Gortmaker (1985), the theory was that TV time had a negative effect on school performance. But there are many things that could affect school performance, possibly even more than the amount of time spent in front of the TV. So, in order to account for these other potentially confounding variables, the authors control for them. They are basically saying that regardless of IQ, time spent reading for pleasure, hours spent doing homework, or the amount of time parents spend reading to their child, an increase in TV time still significantly decreases school performance. These relationships are shown in the figure below.

As a cautionary note, you should nearly always include some controls; however, these control variables still count against your sample size calculations, so the more controls you have, the larger your sample needs to be. Each added control also increases your R-square, but with increasingly smaller gains. Sometimes you may even find that adding a control “drowns out” all the effects of the IVs; in such a case you may need to run your tests without that control variable (but then you can only say that your IVs, though significant, account for only a small amount of the variance in the DV). With that in mind, you can’t and shouldn’t control for everything, and, as always, your decision to include or exclude controls should be based on theory.
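To see numerically what "accounting for" a confound does, here is a small pure-Python sketch with made-up data: the naive slope of Y on X is inflated by the confounder Z, while partialling Z out of both variables (the residualization that underlies multiple regression) recovers the true effect.

```python
def slope(x, y):
    """OLS slope of y on x (simple regression)."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

def residuals(x, y):
    """Residuals of y after regressing it on x."""
    b = slope(x, y)
    a = sum(y) / len(y) - b * (sum(x) / len(x))
    return [yi - (a + b * xi) for xi, yi in zip(x, y)]

# Synthetic data: Z is a confounder correlated with X, and Y = 2*X + 3*Z exactly.
z = [1, 2, 3, 4, 5, 6]
x = [1.5, 1.7, 3.2, 3.6, 5.1, 5.9]
y = [2 * xi + 3 * zi for xi, zi in zip(x, z)]

naive = slope(x, y)  # inflated: picks up Z's effect on Y as well
controlled = slope(residuals(z, x), residuals(z, y))  # Z partialled out of both
print(naive, controlled)  # the controlled slope recovers the true value, 2
```

With Z ignored, the X slope absorbs part of Z's effect; with Z controlled, the true coefficient of 2 comes back, which is exactly the sense in which Dietz and Gortmaker's controls let them isolate the TV-time effect.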

Handling controls in AMOS is easy, but messy (see the figure below). You simply treat them like the other exogenous variables (the ones that don’t have arrows going into them), and draw paths from them to whichever endogenous variables they may logically affect. In this case, I have valShort, a potentially confounding variable, as a control with regard to valLong. And I have LoyRepeat as a control on LoyLong. I’ve also covaried the controls with each other and with the other exogenous variables. When using controls in a moderated mediation analysis, put the controls in at the very beginning. Covarying control variables with the other exogenous variables can be done based on theory, rather than by default.

When reporting the model, you do need to include the controls in all your tests and output, but you should consolidate them at the bottom where they can be out of the way. Also, just so you don’t get any crazy ideas, you would not test for any mediation between a control and a dependent variable. However, you may report how the control affects a dependent variable differently based on a moderating variable. For example, valShort may have a stronger effect on valLong for males than for females. This is something that should be reported, but not necessarily focused on, as it is not likely a key part of your theory. Lastly, even if effects from controls are not significant, you do not need to trim them from your model (although there are other schools of thought on this issue).

## Mediation

### Concept

Mediation models are used to describe chains of causation. Mediation is often used to provide a more accurate explanation for the causal effect the antecedent has on the dependent variable. The mediator is usually the variable that is the missing link in a chain of causation. For example, intelligence leads to increased performance - but not in all cases, as not all intelligent people are high performers. Thus, some other variable is needed to explain the inconsistent relationship between IV and DV. This other variable is called a mediator. In this example, work effectiveness may be a good mediator. We would say that work effectiveness fully and positively mediates the relationship between intelligence and performance. Thus, the direct relationship between intelligence and performance is better explained through the mediator of work effectiveness. The logic is: even if you are intelligent, if you don't work smarter, then you won't perform well. However, intelligent people tend to work smarter (but not always). Thus, when intelligence leads to working smarter, we observe greater performance.

### Types

There are three main types of simple mediation: 1) partial, 2) full, and 3) indirect. Partial mediation means that both the direct and indirect effects from the IV to DV are significant. Full means that the direct effect drops out of significance when the mediator is present, and that the indirect effect is significant. Indirect means that the direct effect never was significant, but that the indirect effect is. The figure below illustrates these types of mediation. Please refer to the step-by-step guide listed above for determining the significance of the mediation.

There is one less common form of mediation called "competitive" mediation. In this case, the direct effect between IV and DV is “neutralized” when the mediator is absent. When the mediator is added, the direct effect becomes significant (often to the researchers’ surprise) and is usually in the opposite direction theorized, while the indirect path is observed to be significant and in the theorized direction. In such cases, the IV has dual effects on the DV which can only be separated when the mediator, acting somewhat like a prism, bifurcates the competing (and neutralizing) effects. Zhao et al. (2010, "Reconsidering Baron and Kenny") discuss this.
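The building blocks of a simple mediation test can be sketched in a few lines of pure Python with synthetic data (not from any real study): estimate the X -> M path (a), the M -> Y path controlling for X (b), and the direct X -> Y path controlling for M (c'); the indirect effect is a*b. In practice the significance of the indirect effect is usually assessed by bootstrapping, as Zhao et al. recommend.

```python
def slope(x, y):
    """OLS slope of y on x (simple regression)."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

def residuals(x, y):
    """Residuals of y after regressing it on x."""
    b = slope(x, y)
    a = sum(y) / len(y) - b * (sum(x) / len(x))
    return [yi - (a + b * xi) for xi, yi in zip(x, y)]

# Synthetic chain X -> M -> Y: M = 0.5*X + e, Y = 0.4*X + 0.7*M exactly.
x = [1, 2, 3, 4, 5, 6]
e = [0.2, -0.1, 0.3, -0.2, 0.1, -0.3]
m = [0.5 * xi + ei for xi, ei in zip(x, e)]
y = [0.4 * xi + 0.7 * mi for xi, mi in zip(x, m)]

a = slope(x, m)                                    # path X -> M
b = slope(residuals(x, m), residuals(x, y))        # path M -> Y, controlling X
c_prime = slope(residuals(m, x), residuals(m, y))  # direct X -> Y, controlling M
indirect = a * b
print(a, b, c_prime, indirect)
```

Here both the direct path (c') and the indirect path (a*b) are nonzero, which corresponds to the partial-mediation pattern described above.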

## Interaction

### Concept

In factorial designs, interaction effects are the joint effects of two predictor variables in addition to the individual main effects. This is another form of moderation (along with multi-grouping) – i.e., the X to Y relationship changes form (gets stronger, weaker, or changes sign) depending on the value of another explanatory variable (the moderator). So, for example:

• you lose 1 pound of weight for every hour you exercise
• you lose 1 pound of weight for every 500 calories you cut back from your regular diet
• but when you exercise while dieting, you lose 2 pounds for every 500 calories you cut back from your regular diet, in addition to the 1 pound you lose for exercising one hour; thus, in total, you lose three pounds

So, the multiplicative effect of exercising while dieting is greater than the additive effects of doing one or the other. Here is another simple example:

• Chocolate is yummy
• Cheese is yummy
• but combining chocolate and cheese is yucky!

The following figure is an example of a simple interaction model.
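The weight-loss bullets above translate directly into a regression-style equation with an interaction (product) term. The coefficients below simply encode that illustrative example (1 lb per exercise hour, 1 lb per 500-calorie cut, plus a 1 lb interaction bonus); they are not estimates from any real data.

```python
def weight_loss(hours_exercise, diet_units):
    """Pounds lost, with an interaction between exercise and diet.
    diet_units = number of 500-calorie cuts from the regular diet.
    Coefficients encode the bullet example above."""
    b_exercise, b_diet, b_interaction = 1.0, 1.0, 1.0
    return (b_exercise * hours_exercise
            + b_diet * diet_units
            + b_interaction * hours_exercise * diet_units)

print(weight_loss(1, 0))  # exercise only -> 1.0 pound
print(weight_loss(0, 1))  # diet only     -> 1.0 pound
print(weight_loss(1, 1))  # both          -> 3.0 pounds, more than 1 + 1
```

The product term is what makes the joint effect (3 pounds) exceed the sum of the separate main effects (2 pounds).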

### Types

Interactions enable more precise explanation of causal effects by providing a method for explaining not only how X affects Y, but also under what circumstances the effect of X changes depending on the moderating variable Z. Interpreting interactions is somewhat tricky. Interactions should be plotted (as demonstrated in the tutorial video). Once plotted, the interpretation can be made using the following four examples (in the figures below) as a guide. My most recent Stats Tools Package provides these interpretations automatically.

## Model fit again

You already assessed model fit in your CFA, but you need to do it again for your structural model in order to demonstrate sufficient exploration of alternative models. The method is the same: look at modification indices, residuals, and standard fit measures like CFI, RMSEA, etc. The one thing that should be noted here in particular, however, is the logic that should determine how you apply the modification indices to error terms.

• If the correlated variables are not logically causally correlated, but merely statistically correlated, then you may covary the error terms in order to account for the systematic statistical correlations without implying a causal relationship.
• e.g., burnout from customers is highly correlated with burnout from management
• We expect these to have similar values (residuals) because they are logically similar and have similar wording in our survey, but they do not necessarily have any causal ties.
• If the correlated variables are logically causally correlated, then simply add a regression line.
• e.g., burnout from customers is highly correlated with satisfaction with customers
• We expect burnC to predict satC, so not accounting for it is negligent.

Lastly, remember, you don't need to create the BEST fit, just good fit. If a BEST-fit model (i.e., one in which all modification indices are addressed) isn't logical, or does not fit with your theory, you may need to simply settle for a model that has worse (yet sufficient) fit, and then explain why you did not choose the better-fitting model. For more information on when it is okay to covary error terms (because there are other appropriate reasons), refer to David Kenny's thoughts on the matter on his website.

## Multi-group

Multi-group moderation is a special form of moderation in which a dataset is split along values of a categorical variable (such as gender), and then a given model is tested with each set of data. Using the gender example, the model is tested for males and females separately. Multi-group moderation is used to determine whether relationships hypothesized in a model differ based on the value of the moderator (e.g., gender). Take the diet and weight loss hypothesis, for example. A multi-group moderation model would answer the question: does dieting affect weight loss differently for males than for females? In the videos above, you will learn how to set up a multigroup model in AMOS and test it using chi-square differences and critical ratios. Using critical ratios takes about one minute after the model is set up and leaves little room for human error, whereas the chi-square method can take upwards of 30 minutes and leaves a lot of room for human error. So, I recommend the easy method!
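Conceptually, the critical-ratio approach boils down to estimating the same path in each group and dividing the difference by its pooled standard error; |z| > 1.96 suggests the path differs across groups. Here is a pure-Python sketch with made-up data that mimics that logic (it is not AMOS's exact computation, which works on the full structural model):

```python
import math

def slope_and_se(x, y):
    """OLS slope of y on x and its standard error."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    a = my - b * mx
    sse = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    return b, math.sqrt(sse / (n - 2) / sxx)

# Made-up data: diet -> weight loss, with the sample split by gender.
x = [1, 2, 3, 4, 5]
y_male = [2.1, 3.9, 6.2, 7.8, 10.1]    # steep slope (about 2)
y_female = [0.6, 0.9, 1.6, 2.1, 2.4]   # shallow slope (about 0.5)

b_m, se_m = slope_and_se(x, y_male)
b_f, se_f = slope_and_se(x, y_female)
z = (b_m - b_f) / math.sqrt(se_m ** 2 + se_f ** 2)  # critical ratio
print(b_m, b_f, z)  # |z| > 1.96 -> the path differs between groups
```

The same path is estimated twice (once per group), and the critical ratio turns the slope difference into a z-statistic, which is what makes the test so quick once the multigroup model is set up.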

## From Measurement Model to Structural Model

Many of the examples in the videos so far have taught concepts using a set of composite variables (instead of latent factors with observed items). Many will want to utilize the full power of SEM by building true structural models (with latent factors). This is not difficult. Simply remove the covariance arrows from your measurement model (after the CFA), then draw single-headed arrows from IVs to DVs. Make sure you put error terms on the DVs, then run it. It's that easy. Refer to the video for a demonstration.

## Creating Composites from Latent Factors

If you would like to create composite variables (as used in many of the videos) from latent factors, it is an easy thing to do. However, you must remember two very important caveats:

• You are not allowed to have any missing values in the data used. These will need to be imputed beforehand in SPSS or Excel (I have two tools for this in my Stats Tools Package - one for imputing, and one for simply removing the entire row that has missing data).
• Latent factor names must not have any spaces or hard returns in them. They must be single continuous strings ("FactorOne" or "Factor_One" instead of "Factor One").

After those two caveats are addressed, you can simply go to the Analyze menu and select Data Imputation. Select Regression Imputation, and then click the Impute button. This will create a new SPSS dataset with the same name as the current dataset, except followed by "_C". It can be found in the same folder as your current dataset.
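AMOS's regression imputation produces model-based factor scores; a simpler alternative that many researchers use is a mean composite of each factor's items. Here is a minimal sketch with hypothetical item data, which also enforces the no-missing-values caveat from above:

```python
def mean_composite(rows, item_indices):
    """Average the given item columns into one composite score per
    respondent. Raises if any needed value is missing (None),
    mirroring the caveat that missing data must be handled first."""
    scores = []
    for i, row in enumerate(rows):
        items = [row[j] for j in item_indices]
        if any(v is None for v in items):
            raise ValueError(f"missing value in row {i}; impute first")
        scores.append(sum(items) / len(items))
    return scores

# Hypothetical survey responses: columns 0-2 are one factor's items.
data = [[4, 5, 3], [2, 2, 2], [5, 4, 4]]
print(mean_composite(data, [0, 1, 2]))  # one composite score per respondent
```

Mean composites weight all items equally, whereas regression imputation weights items by their factor loadings; which is appropriate is a judgment call you should justify.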