Guidelines


On this wiki page I share my ten steps for building a good quantitative variance model that can be tested with a well-designed survey, as well as some general guidelines for structuring a quantitative model building/testing paper. These come off the top of my head rather than from any published work, but I have found them useful and hope you do as well.


Example Analysis

I've created an example of some quantitative analyses. The most useful part of this example is probably the wording: it is often difficult to decide how to phrase your findings, how much space to devote to them, or which measures to report and how to report them. This offers just one example of how you might do it.

How to start any (EVERY) Research Project

In a page or less, using only bullet points, answer these questions (or fill out this outline). Then share it with a trusted advisor (not me, unless I am actually your advisor) to get early feedback. This way you don't waste your time on a bad or half-baked idea. You might also consider reviewing Arun Rai's MISQ editorial, "Avoiding Type III Errors: Formulating Research Problems that Matter." It is written for the information systems field but generalizes to all fields.

  1. What is the problem you are seeking to address? (If there is no problem, then there is usually no research required.)
  2. Why is this an important (not just interesting) contemporary or upcoming problem? (i.e., old problems don't need to be readdressed if they are not still a problem)
  3. Who else has addressed this problem? (Very rarely is the answer to this: "nobody". Be creative. Someone has studied something related to this problem, even if it isn't the exact same problem. This requires a lit review.)
  4. In what way are the prior efforts of others incomplete? (i.e., if others have already addressed the problem, what is left to study - what are the "gaps"?)
  5. How will you go about filling these gaps in prior research? (i.e., study design)
    1. Why is this an appropriate approach?
  6. (If applicable) Who is your target population for studying this problem? (Where are you going to get your data?)
    1. How are you going to get the data you want? (quantity and quality)

Developing Your Quantitative Model

Ten Steps for Formulating a Decent Quantitative Model

  1. Identify and define your dependent variables. These should be the outcome(s) of the phenomenon you are interested in better understanding. They should be the affected thing(s) in your research questions.
  2. Figure out why explaining and predicting these DVs is important.
    1. Why should we care?
    2. For whom will it make a difference?
    3. What can we possibly contribute to knowledge that is not already known?
    4. If these are all answerable and suggest continuing the study, then go to #3; otherwise, return to #1 and try different DVs.
  3. Form one or two research questions around explaining and predicting these DVs.
    1. Scoping your research questions may also require you to identify your population.
  4. Is there some existing theory that would help explore these research questions?
    1. If so, how can we adapt it to explore these specific research questions?
    2. Does that theory also suggest other variables we are not considering?
  5. What do you think (and what has research said) impacts the DVs we have chosen?
    1. These become IVs.
  6. What is it about these IVs that is causing the effect on the DVs?
    1. These become Mediators.
  7. Do these relationships depend on other factors, such as age, gender, race, religion, industry, organization size and performance, etc.?
    1. These become Moderators.
  8. What variables could potentially explain and predict the DVs, but are not directly related to our interests?
    1. These become control variables. They are often some of the same moderators, like age and gender, or variables identified in extant literature.
  9. Identify your population.
    1. Do you have access to this population?
    2. Why is this population appropriate to sample in order to answer the research questions?
  10. Based on all of the above, but particularly #4, develop an initial conceptual model involving the IVs, DVs, Mediators, Moderators, and Controls.
    1. If tested, how will this model contribute to research (make us think differently) and practice (make us act differently)?

From Model Development to Model Testing

Video explanation of this section (YouTube)

Critical tasks that happen between model development and model testing

  1. Develop a decent quantitative model
    1. see previous section
  2. Find existing scales and develop your own if necessary
    1. You need to find ways to measure the constructs you want to include in your model. Usually this is done through reflective latent measures on a Likert scale. It is conventional and encouraged to leverage existing scales that have already been either proposed or, better yet, validated in extant literature. If you can’t find existing scales that match your construct, then you might need to develop your own.
    2. Find existing scales
      1. I've made a video tutorial (YouTube) about finding existing scales. The easy way is to go to http://inn.theorizeit.org/ and search their database. You can also search Google Scholar for scale-development work on your construct. Make sure to note the source of the items, as you will need to report this in your manuscript.
      2. Once you've found the measures you need, you'll most likely need to adapt them to your context. For example, say you're studying the construct of Enjoyment in the context of Virtual Reality. If an existing item reads "I enjoy using the website," you'll want to change it to "I enjoyed the Virtual Reality experience" (or something like that). The key consideration is to retain the "spirit," or intent, of the item and construct. If you do adapt the measures, be sure to report your adaptations in the appendix of any paper that uses them.
      3. Along the same lines, you can also trim the scale as needed. Many established scales are far too long, consisting of more than 10 items. A reflective construct never requires more than 4 or 5 items; simply pick the 4-5 items that best capture the construct of interest. If the scale is multidimensional, it is likely formative. In that case, you can do one of the following:
        1. Keep the entire scale (this can greatly inflate your survey, but it allows you to use a latent structure)
        2. Keep only one dimension (just pick the one that best reflects the construct you are interested in)
        3. Keep one item from each dimension (this allows you to create an aggregate score; i.e., sum, average, or weighted average)
    3. Develop new scales
      1. Developing new scales is a bit trickier, but is perhaps less daunting than many make it out to be. The first thing you must do before developing your own scales is to precisely define your construct. You cannot develop new measures for a construct if you do not know precisely what it is you are hoping to measure.
      2. Once you have defined your construct, I strongly recommend developing reflective scales where applicable. These are far easier to handle statistically, and are more amenable to conventional SEM approaches. Formative measures can also be used, but they involve several caveats and considerations during the data analysis stage.
        1. For reflective measures, simply create 5 interchangeable statements that can be measured on a 5-point Likert scale of agreement, frequency, or intensity. We develop 5 items so that we have some flexibility in dropping 1 or 2 during the EFA if needed. If the measures are truly reflective, using more than 5 items would be unnecessarily redundant. If we were to create a scale for Enjoyment (defined in our study as the extent to which a user receives joy from interacting with the VR), we might have the following items that the user can answer from strongly disagree to strongly agree:
          1. I enjoyed using the VR
          2. Interacting with the VR was fun
          3. I was happy while using the VR
          4. Using the VR was boring (reverse coded)
          5. Using the VR was pleasurable
  3. If developing your own scales, do pretesting (talk-aloud, Q-sort)
    1. To ensure the newly developed scales make sense to others and will hopefully measure the construct you think they should measure, you need to do some pretesting. Two very common pretesting exercises are ‘talk-aloud’ and ‘Q-sort’.
      1. Talk-aloud exercises involve sitting down with five to eight individuals who are within, or close to, your target population. For example, if you plan on surveying nurses, then you should do talk-alouds with nurses. If you are surveying a harder-to-access population, such as CEOs, you can probably get away with doing talk-alouds with upper-level management instead. The purpose of the talk-aloud is to see whether the newly developed items make sense to others. Invite the participant (just one at a time) to read each item out loud and respond to it. If they struggle to read it, it is worded poorly. If they have to think very long about how to answer, it needs to be more direct. If they are unsure how to answer, it needs to be clarified. If they say "well, it depends," it needs to be simplified or made more contextually specific. You get the idea. After the first talk-aloud, revise your items accordingly, then do the second. Repeat until you stop getting meaningful corrections.
      2. Q-sort is an exercise where the participant (ideally from the target population, but not strictly required) receives a card (physical or digital) for each item in your survey, including existing scales. They then sort these cards into piles based on which construct they think each item is measuring. To do this, you'll need to tell them your constructs and the construct definitions. This should be done for formative and reflective constructs, but not for non-latent variables (e.g., gender, industry, education). Here is a video I've made for Q-sorting: Q-sorting in Qualtrics (YouTube). You should have at least 8 people participate in the Q-sort. If you arrive at consensus (>70% agreement among participants) after the first Q-sort, then move on. If not, identify the items that did not achieve adequate consensus, and try to reword them to be more conceptually distinct from the construct they mis-loaded on while being more conceptually similar to the construct they should have loaded on. Repeat the Q-sort (with different participants) until you arrive at adequate consensus. (A minimal agreement computation is sketched after this list.)
  4. Identify target sample and, if necessary, get approval to contact
    1. Before you can submit your study for IRB approval, you must identify whom you will collect data from, and obtain approval and confirmation from whoever has stewardship over that population. For example, if you plan to collect data from employees at your current or former organization, you should obtain approval from the proper manager over the group you plan to solicit. If you are going to collect data from students, get approval from their professor(s).
  5. Conduct a Pilot Study
    1. It is exceptionally helpful to conduct a pilot study if time and target population permit. A pilot study is a smaller data collection effort (between 30 and 100 participants) used to obtain reliability scores (like Cronbach's alpha) for your reflective latent factors, to confirm the direction of relationships, and to do preliminary manipulation checks (where applicable). Usually the sample size of a pilot study will not allow you to test the full model (either measurement or structural) all at once, but it can give you sufficient power to test pieces at a time. For example, you could do an EFA with 20 items at a time, or run simple linear regressions between an IV and a DV. (A minimal Cronbach's alpha computation is sketched after this list.)
    2. Often time and target population do not make a pilot study feasible. For example, you would never want to cannibalize your target population if that population is difficult to access and you are concerned about final sample size. Surgeons, for example, are a hard population to access; doing a pilot study of surgeons will cannibalize your final sample size. Instead, you could do a pilot study of nurses, or possibly resident surgeons. Deadlines are also real, and pilot studies take time, although they may save you time in the end. If the results of the pilot study reveal poor Cronbach's alphas, poor loadings, or significant cross-loadings, you should revise your items accordingly. Poor Cronbach's alphas and poor loadings indicate too much conceptual inconsistency among the items within a construct. Significant cross-loadings indicate too much conceptual overlap between items across separate constructs.
  6. Get IRB approval
    1. Once you've identified your population and obtained confirmation that you'll be able to collect data from them, you are ready to submit your study for approval to your local IRB. You cannot publish any work that includes data collected prior to obtaining IRB approval. This means that if you did a pilot study before obtaining approval, you cannot use that data in the final sample (although you can still say that you did a pilot study). IRB approval can take between 3 days and 6 weeks (or more), depending on the nature of your study and the population you intend to target. Typically, studies of organizations regarding performance and employee dispositions and intentions are simple and do not get held up in IRB review. Studies that involve any form of deception or risk (physical, psychological, or financial) to participants require extra consideration and may require an oral defense before the IRB.
  7. Collect Data
    1. You've made it! Time to collect your data. This could take anywhere between three days and three months, depending on many factors. Be prepared to send reminders. Incentives won't hurt either. Also be prepared to obtain only a fraction of the responses you expected. For example, if you are targeting an email list of 10,000 brand managers, expect half of the emails to bounce, three quarters of the remainder to go unread, and 90% of the rest to be ignored. That leaves only 125 responses, 20% of which may be unusable, leaving just 100 usable responses from the original 10,000. (This funnel is worked out in a short sketch after this list.)
  8. Test your model
    1. see next section
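
If you track your Q-sort results in a spreadsheet, here is a minimal sketch (in Python) of the >70% consensus check described above. The file layout and column names are hypothetical assumptions, not a standard; adapt them to however you record your sorts.

 # qsort_agreement.py - per-item Q-sort hit rates (illustrative sketch;
 # the CSV layout and column names below are assumptions).
 import pandas as pd

 # One row per (participant, item): the construct the item belongs to,
 # and the pile the participant actually sorted it into.
 sorts = pd.read_csv("qsort_results.csv")  # columns: participant, item, intended, sorted
 sorts["hit"] = sorts["sorted"] == sorts["intended"]

 # Percentage agreement per item across all participants.
 agreement = sorts.groupby("item")["hit"].mean()

 # Items below the 70% consensus threshold are candidates for rewording.
 print(agreement[agreement < 0.70].sort_values())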
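If you want to compute pilot reliabilities yourself rather than through a stats package, Cronbach's alpha is simple enough to script. A minimal Python sketch, assuming a pilot.csv with one column per item (the item names are hypothetical) and reverse-coded items already recoded:

 # cronbach_alpha.py - classic Cronbach's alpha from raw item scores.
 import pandas as pd

 def cronbach_alpha(items: pd.DataFrame) -> float:
     # items: one row per respondent, one column per item of a single scale.
     k = items.shape[1]
     item_vars = items.var(axis=0, ddof=1)      # variance of each item
     total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
     return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

 # e.g., alpha for the hypothetical five-item Enjoyment scale from the pilot
 pilot = pd.read_csv("pilot.csv")
 print(cronbach_alpha(pilot[["enj1", "enj2", "enj3", "enj4", "enj5"]]))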
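And the 10,000-to-100 arithmetic from the data collection step, written out as a sketch you can re-run with your own planning assumptions (the rates are the illustrative ones from above, not empirical constants):

 # response_funnel.py - back-of-envelope response planning.
 invited = 10_000
 delivered = invited * 0.50     # half of the emails bounce
 read = delivered * 0.25        # three quarters of delivered go unread
 responded = read * 0.10        # 90% of read messages are ignored -> 125
 usable = responded * 0.80      # 20% of responses are unusable
 print(f"{usable:.0f} usable responses from {invited:,} invitations")  # -> 100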

Order of Operations for Testing your Model

Some general guidelines for the order to conduct each procedure

  1. Develop a good theoretical model
    1. See the Ten Steps above
    2. Develop hypotheses to represent your model
  2. Case Screening (this step and the next are illustrated in the screening sketch after this list)
    1. Missing data in rows
    2. Unengaged responses
    3. Outliers (on continuous variables)
  3. Variable Screening
    1. Missing data in columns
    2. Skewness & Kurtosis
  4. Exploratory Factor Analysis (see the EFA sketch after this list)
    1. Iterate until you arrive at a clean pattern matrix
    2. Adequacy
    3. Convergent validity
    4. Discriminant validity
    5. Reliability
  5. Confirmatory Factor Analysis (see the CFA sketch after this list)
    1. Obtain a roughly decent model quickly (cursory model fit, validity)
    2. Do configural, metric, and scalar invariance tests (if using grouping variable in causal model)
    3. Validity and Reliability check
    4. Common method bias (marker if possible, CLF either way)
    5. Final measurement model fit
    6. Optionally, impute factor scores
  6. Structural Models
    1. Multivariate Assumptions
      1. Outliers and Influentials
      2. Multicollinearity
    2. Include control variables in all of the following analyses
    3. Mediation
      1. Test indirect effects using bootstrapping (a simplified bootstrap is sketched after this list)
      2. If you have multiple indirect paths from the same IV to the same DV, use the AxB estimand
    4. Interactions (see the interaction sketch after this list)
      1. Optionally standardize constituent variables
      2. Compute new product terms
      3. Plot significant interactions
    5. Multigroup Comparisons
      1. Create multiple models
      2. Assign them the proper group data
      3. Test the significance of moderation via a chi-square difference test (see the sketch after this list)
  7. Report findings in a concise table
    1. Ensure global and local tests are met
    2. Include post-hoc power analyses for unsupported direct-effect hypotheses
  8. Write paper
    1. See guidelines below
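
The sketches below use Python, though the same steps are usually done in SPSS/AMOS or Excel; all file and column names are hypothetical. First, case and variable screening (steps 2-3); the thresholds shown are common rules of thumb, not fixed standards.

 # screening.py - case screening (rows), then variable screening (columns).
 import pandas as pd

 df = pd.read_csv("survey.csv")
 likert = df.filter(regex=r"^(enj|int|use)\d+$")  # the Likert item columns

 # Case screening: drop cases missing >10% of items, and unengaged
 # respondents who straight-lined (zero variance across their answers).
 keep = (likert.isna().mean(axis=1) < 0.10) & (likert.std(axis=1) > 0)
 df, likert = df[keep], likert[keep]

 # Variable screening: missingness, skewness, and (excess) kurtosis per item.
 screen = pd.DataFrame({
     "pct_missing": likert.isna().mean().round(3),
     "skew": likert.skew().round(2),      # flag |skew| > ~2
     "kurtosis": likert.kurt().round(2),  # flag extreme values
 })
 print(screen)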
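For the EFA, one scriptable option is the open-source factor_analyzer package (the same iteration is commonly done in SPSS). A sketch under assumed item names; the loading cutoffs are conventions, not laws.

 # efa.py - iterate until the pattern matrix is clean.
 import pandas as pd
 from factor_analyzer import (FactorAnalyzer, calculate_kmo,
                              calculate_bartlett_sphericity)

 items = pd.read_csv("survey.csv").filter(regex=r"^(enj|int|use)\d+$").dropna()

 # Adequacy: KMO comfortably above 0.6 and a significant Bartlett's test.
 chi_sq, p = calculate_bartlett_sphericity(items)
 _, kmo_total = calculate_kmo(items)
 print(f"Bartlett p = {p:.4f}, overall KMO = {kmo_total:.2f}")

 # Fit, inspect, drop low- or cross-loading items, and repeat until clean:
 # loadings > ~0.5 on the home factor, cross-loadings < ~0.3 elsewhere.
 fa = FactorAnalyzer(n_factors=3, rotation="promax", method="ml")
 fa.fit(items)
 print(pd.DataFrame(fa.loadings_, index=items.columns).round(2))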
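The CFA itself is typically done in AMOS; for a scriptable alternative, the open-source semopy package fits the same kind of measurement model. A minimal sketch with assumed construct and item names:

 # cfa.py - measurement model fit check (semopy; lavaan-style syntax).
 import pandas as pd
 import semopy

 items = pd.read_csv("survey.csv").dropna()
 desc = """
 Enjoyment =~ enj1 + enj2 + enj3 + enj4
 Intention =~ int1 + int2 + int3
 Usefulness =~ use1 + use2 + use3 + use4
 """
 model = semopy.Model(desc)
 model.fit(items)
 print(model.inspect().round(3))    # loadings and factor covariances
 print(semopy.calc_stats(model).T)  # chi2, CFI, TLI, RMSEA, etc.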
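For mediation, the bootstrap is normally run inside AMOS on the full latent model; the sketch below is a deliberately simplified percentile bootstrap of a single indirect effect (a x b) on composite scores, just to make the logic concrete. Variable names are hypothetical.

 # mediation_bootstrap.py - percentile bootstrap of an indirect effect a*b.
 import numpy as np
 import pandas as pd

 df = pd.read_csv("scores.csv")  # hypothetical composites: iv, mediator, dv
 rng = np.random.default_rng(42)

 def indirect(s: pd.DataFrame) -> float:
     a = np.polyfit(s["iv"], s["mediator"], 1)[0]  # a-path: IV -> mediator
     X = np.column_stack([np.ones(len(s)), s["mediator"], s["iv"]])
     b = np.linalg.lstsq(X, s["dv"], rcond=None)[0][1]  # b-path, controlling for IV
     return a * b

 boot = [indirect(df.sample(frac=1, replace=True, random_state=rng))
         for _ in range(5000)]
 lo, hi = np.percentile(boot, [2.5, 97.5])
 print(f"indirect effect 95% CI: [{lo:.3f}, {hi:.3f}]")  # significant if 0 excluded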
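For interactions, here is a sketch of the standardize-multiply-plot sequence using statsmodels and matplotlib, again on hypothetical composite scores:

 # interaction.py - product-term moderation with a simple-slopes style plot.
 import pandas as pd
 import matplotlib.pyplot as plt
 import statsmodels.formula.api as smf

 df = pd.read_csv("scores.csv")  # hypothetical composites: iv, mod, dv
 for c in ("iv", "mod"):         # standardize the constituent variables
     df[f"z_{c}"] = (df[c] - df[c].mean()) / df[c].std()
 df["z_ixm"] = df["z_iv"] * df["z_mod"]  # the new product term

 fit = smf.ols("dv ~ z_iv + z_mod + z_ixm", data=df).fit()
 print(fit.summary())

 # If z_ixm is significant, plot the IV-DV slope at +/- 1 SD of the moderator.
 b = fit.params
 for m, style in ((-1, "--"), (1, "-")):
     x = pd.Series([-2.0, 2.0])
     y = b["Intercept"] + b["z_iv"] * x + b["z_mod"] * m + b["z_ixm"] * x * m
     plt.plot(x, y, style, label=f"moderator at {m:+d} SD")
 plt.xlabel("IV (z)"); plt.ylabel("DV"); plt.legend(); plt.show()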
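Finally, the multigroup chi-square difference test reduces to comparing a constrained model (paths held equal across groups) against an unconstrained one. The fit values below are made-up placeholders; plug in the chi-square and degrees of freedom your SEM tool reports.

 # chisq_diff.py - significance of moderation across groups.
 from scipy.stats import chi2

 chi2_constrained, df_constrained = 312.4, 148  # hypothetical values
 chi2_free, df_free = 301.2, 146                # hypothetical values

 delta, df_delta = chi2_constrained - chi2_free, df_constrained - df_free
 p = chi2.sf(delta, df_delta)  # p < .05 -> paths differ across groups
 print(f"Delta chi2 = {delta:.1f} on {df_delta} df, p = {p:.4f}")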

Structuring a Quantitative Paper

Standard outline for quantitative model building/testing paper

  • Title (something catchy and accurate)
  • Abstract (concise, 150-250 words, explaining the paper): roughly one sentence each:
    • What is the problem?
    • Why does it matter?
    • How do you address the problem?
    • What did you find?
    • How does this change practice (what people in business do), and how does it change research (existing or future)?
  • Keywords (4-10 keywords that capture the contents of the study)
  • Introduction (2-4 pages)
    • What is the problem and why does it matter? And what have others done to try to address this problem, and why have their efforts been insufficient (i.e., what is the gap in the literature)? (1-2 paragraphs)
    • What is your DV(s) and what is the context you are studying it in? Also briefly define the DV(s). (1-2 paragraphs)
    • One sentence about sample (e.g., "377 undergraduate university students using Excel").
    • How does studying this DV(s) in this context adequately address the problem? (1-2 paragraphs)
    • What existing theory/theories do you leverage, if any, to pursue this study, and why are these appropriate? (1-2 paragraphs)
    • Briefly discuss the primary contributions of this study in general terms without discussing exact findings (i.e., no p-values here).
    • How is the rest of the paper organized? (1 paragraph)
  • Literature review (1-3 pages)
    • Fully define your dependent variable(s) and summarize how they have been studied in existing literature within your broader context (e.g., information systems, organizations, etc.).
    • If you are basing your model on an existing theory/model, use this next space to explain that theory (1 page) and then explain how you have adapted that theory to your study.
    • If you are not basing your model on an existing theory/model, then use this next space to explain how existing literature in your field has tried to predict your DV(s) or tried to understand related research questions.
    • (Optionally) Explain what other constructs you suspect will help predict your DV(s) and why. Inclusion of a construct should have good logical/theoretical and/or literature support. For example, “we are including construct xyz because the theory we are basing our model on includes xyz.” Or, “we are including construct xyz because the following logic (abc) constrains us to include this variable lest we be careless”. Try to do this without repeating everything you are just going to say in the theory section anyway.
    • (Optionally) Briefly discuss control variables and why they are being included.
  • Theory & Hypotheses (take what space you need, but try to be parsimonious)
    • Briefly summarize your conceptual model and show it with the Hypotheses labeled (if possible).
    • Begin supporting H1 then state H1 formally. Support should include strong causal logic and literature.
    • H2, H3, etc. If you have sub-hypotheses, list them as H1a, H1b, H2a, H2b, etc.
  • Methods (keep it brief; many approaches; this is just a common template)
    • Construct operationalization (where did you get your measures?)
    • Instrument development (if you created your own measures)
    • Explanation of study design (e.g., pretest, pilot, and online survey)
    • Sampling: some descriptive statistics, like demographics (education, experience, etc.) and sample size; don't forget to discuss response rate (the number of responses as a percentage of the number of people invited to do the study).
    • Mention that IRB exempt status was granted and protocols were followed if applicable.
    • Method for testing hypotheses (e.g., structural equation modeling in AMOS). If you conducted multi-group comparisons, mediation, and/or interaction, explain how you kept them all straight and how you went about analyzing them. For example, if you did mediation, what approach did you take (hopefully bootstrapping)? Were there multiple models tested, or did you keep all the variables in for all analyses? If you did interaction, did you add that in afterward, or was it in from the beginning?
  • Analysis (1-3 pages; sometimes combined with methods section)
    • Data Screening
    • EFA (report pattern matrix and Cronbach's alphas in appendix) – mention if items were dropped.
    • CFA (just mention that you did it and bring up any issues you found) – mention any items dropped during CFA. Report model fit for the final measurement model. Supporting material can be placed in the Appendices if necessary.
    • Mention CMB approach and results and actions taken if any (e.g., if you found CMB and had to keep the CLF).
    • Report the correlation matrix, CR, and AVE (you can include MSV and ASV if you want), and briefly discuss any issues with validity and reliability.
    • Report whether you used the full latent SEM, or if you imputed factor scores for a path model.
    • Report the final structural model(s), including R-squared values and betas, and the model fit for the model(s).
  • Findings (1-2 pages)
    • Report the results for each hypothesis (supported or not, with evidence).
    • Point out any unsupported hypotheses and any counter-evidence (effects significant in the opposite direction).
    • Provide a table that concisely summarizes your findings.
  • Discussion (2-5 pages)
    • Summarize briefly the study and its intent and findings, focusing mainly on the research question(s) (one paragraph).
    • What insights did we gain from the study that we could not have gained without doing the study?
    • How do these insights change the way practitioners do their work?
    • How do these insights shed light on existing literature and shape future research in this area?
    • What limitations is our study subject to (e.g., surveying students, a survey-only design rather than an experiment, statistical limitations like CMB, etc.)?
    • What are some opportunities for future research based on the insights of this study?
  • Conclusion (1-2 paragraphs)
    • Summarize the insights gained from this study and how they address existing gaps or problems.
    • Explain the primary contribution of the study.
    • Express your vision for moving forward or how you hope this work will affect the world.
  • References (Please use a reference manager like EndNote)
  • Appendices (Any additional information, like the instrument and measurement model stuff that is necessary for validating or understanding or clarifying content in the main body text.)
    • DO NOT pad the appendices with unnecessary statistics tables and illegible statistical models. Everything in the appendix should add value to the manuscript. If it doesn't add value, remove it.