Measurement Invariance

  • Investigators: Jordan Brace and Dr. Victoria Savalei

    Measurement invariance is the property of a psychometric instrument indicating that it functions in the same way when applied to different populations. When measurement invariance does not hold across populations, comparing scores of individuals from different populations (e.g., in cross-cultural research, or when using a cutoff score for diagnosis) is inappropriate. Our research on this topic focuses on evaluating methods for testing measurement invariance, particularly when data are non-normal, as well as on meaningfully quantifying the impact of a lack of measurement invariance in applications of psychometric instruments.
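
    As a rough sketch of the standard approach (generic multiple-group CFA notation, not a description of a specific study from our lab), measurement invariance is typically evaluated by fitting a sequence of increasingly constrained models across groups g = 1, ..., G and comparing adjacent models with chi-square difference tests:

        \text{Configural: } x_{ig} = \nu_g + \Lambda_g \eta_{ig} + \varepsilon_{ig} \quad \text{(same loading pattern in every group)}
        \text{Metric: } \Lambda_1 = \Lambda_2 = \cdots = \Lambda_G
        \text{Scalar: } \nu_1 = \nu_2 = \cdots = \nu_G \text{ (in addition to equal loadings)}
        \Delta\chi^2 = \chi^2_{\text{constrained}} - \chi^2_{\text{less constrained}}, \qquad \Delta df = df_{\text{constrained}} - df_{\text{less constrained}}

    Score comparisons across populations are generally considered defensible only once (at least partial) scalar invariance holds; with non-normal data, these chi-square statistics typically require robust corrections (e.g., Satorra-Bentler-type adjustments).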

Bayesian Statistics in Social Sciences

  • Investigators: Bill Chen and Dr. Victoria Savalei

    In recent years, some researchers have advocated a wider adoption of Bayesian data analysis in the social sciences. The usual arguments are that significance testing is unintuitive and prone to misuse, whereas Bayesian statistics overcomes these problems. We aim to investigate empirically how social scientists would apply some of these Bayesian techniques, and whether there are potential pitfalls that researchers need to be more aware of.
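
    As a minimal illustrative sketch of the kind of contrast at issue (a hypothetical binomial data set and a flat Beta(1, 1) prior; this is not a procedure from our studies), the following compares a frequentist significance test with a conjugate Bayesian analysis of the same data:

      # Minimal sketch: frequentist test vs. conjugate Bayesian analysis of the
      # same (hypothetical) binomial data; illustrative only.
      from scipy import stats

      successes, trials = 36, 50  # hypothetical data

      # Frequentist: exact two-sided test of H0: p = 0.5
      p_value = stats.binomtest(successes, trials, p=0.5).pvalue

      # Bayesian: Beta(1, 1) prior updated to a Beta posterior
      posterior = stats.beta(1 + successes, 1 + trials - successes)
      ci_low, ci_high = posterior.interval(0.95)   # 95% credible interval
      prob_above_half = 1 - posterior.cdf(0.5)     # P(p > 0.5 | data)

      print(f"p-value: {p_value:.4f}")
      print(f"95% credible interval: ({ci_low:.3f}, {ci_high:.3f})")
      print(f"Posterior P(p > 0.5): {prob_above_half:.4f}")

    The Bayesian output is a direct probability statement about the parameter, which is often what researchers mistakenly read a p-value as providing.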

Expanded Format Project

  • Investigators: Cathy Xijuan Zhang and Dr. Victoria Savalei
    Undergraduate Helpers: Bernice Liang, Yu Luo, Ramsha Noor and Kurtis Stewart

    The traditional Likert format scale usually contains both positively worded (PW) items and reverse worded (RW) items. The main rationale for including RW items on scales is to control for acquiescence bias, which is the tendency for respondents to endorse an item regardless of its content (Ray, 1983). However, many researchers have questioned the benefit of including RW items in scales (e.g., Sonderen, Sanderman & Coyne, 2013). First, if the tendency to engage in acquiescence bias is an individual difference variable, then it will always contaminate the covariance structure of the data (a sketch of this contamination appears after the example below). Second, some RW items may cause confusion and lead to careless errors among some respondents. Finally, RW items may also create method effects that represent a consistent behavioral trait, such as fear of negative evaluation (e.g., DiStefano & Motl, 2006), and these method effects may lower the validity and reliability of scales (e.g., Rodebaugh et al., 2011; Roszkowski & Soven, 2010). Unlike scales in the Likert format, scales in the Expanded format do not contain RW items and thus avoid these problems. In the Expanded format, a full sentence replaces each of the response options in the Likert format. For instance, an item from the Rosenberg Self-Esteem Scale (RSES) that reads "On the whole, I am satisfied with myself." and has four response options (i.e., Strongly disagree, Somewhat disagree, Somewhat agree, and Strongly agree) would be written in the Expanded format as follows:

    • On the whole, I am very satisfied with myself.
    • On the whole, I am satisfied with myself.
    • On the whole, I am disappointed with myself.
    • On the whole, I am very disappointed with myself.
    In this scale format, because both PW and RW statements are presented as response options for each scale item, acquiescence bias and possible method effects due to item wording are theoretically eliminated. In addition, by using response options that are unique to each item, the Expanded format forces participants to pay closer attention in order to recognize the subtle differences between options. Therefore, in this format, participants should be less likely to engage in the type of carelessness where they miss a negative particle (e.g., "I am not happy" misread as "I am happy").
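
    The concern that acquiescence contaminates the covariance structure can be sketched in standard factor-analytic notation (an illustrative model, not one taken from a particular study). Suppose responses to item j from respondent i follow a one-factor model with an added acquiescence factor:

        x_{ij} = \nu_j + \lambda_j \eta_i + \gamma_j a_i + \varepsilon_{ij}
        \operatorname{Cov}(x_{ij}, x_{ik}) = \lambda_j \lambda_k \operatorname{Var}(\eta) + \gamma_j \gamma_k \operatorname{Var}(a), \quad j \neq k, \; \eta \perp a

    Here \eta_i is the substantive trait and a_i is the respondent's acquiescence tendency; as long as a_i varies across respondents, the \gamma_j \gamma_k \operatorname{Var}(a) term distorts every inter-item covariance. The Expanded format aims to drive the \gamma_j toward zero by building both PW and RW content into each item's response options.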

Bootstrap Fit Indices

  • Investigators: Cathy Xijuan Zhang and Dr. Victoria Savalei

    Bootstrapping approximate fit indices in structural equation modeling (SEM) is of great importance because most fit indices do not have tractable analytic distributions. The model-based bootstrap, which has been proposed to obtain the distribution of the model chi-square statistic under the null hypothesis (Bollen & Stine, 1992), is not theoretically appropriate for obtaining confidence intervals for fit indices because it assumes the null is exactly true. On the other hand, the naive bootstrap (resampling cases directly from the raw data) is not expected to work well for fit indices that are based on the chi-square statistic, such as the RMSEA and the CFI, because the sample noncentrality is a biased estimate of the population noncentrality. We argue that a recently proposed bootstrap approach due to Yuan, Hayashi, and Yanagihara (YHY; 2007) is ideal for bootstrapping fit indices such as the RMSEA and the CFI that are based on the chi-square. This method transforms the data so that the parent population's noncentrality parameter equals the noncentrality estimated from the original sample. Our lab is investigating the performance of the YHY bootstrap and the naive bootstrap for four indices: RMSEA, CFI, GFI, and SRMR. We are finding that for the RMSEA and the CFI, the confidence intervals (CIs) under the YHY bootstrap have relatively good coverage rates in all conditions, whereas the CIs under the naive bootstrap have very low coverage rates when the fitted model has large degrees of freedom.
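
    For reference, the two chi-square-based indices mentioned above are commonly defined as follows (T denotes the fitted target model, B the baseline model of uncorrelated variables, and N the sample size; some software uses N in place of N - 1):

        \widehat{\mathrm{RMSEA}} = \sqrt{\frac{\max(\chi^2_T - df_T,\, 0)}{df_T\,(N - 1)}}, \qquad
        \widehat{\mathrm{CFI}} = 1 - \frac{\max(\chi^2_T - df_T,\, 0)}{\max(\chi^2_T - df_T,\; \chi^2_B - df_B,\; 0)}

    Both are functions of the estimated noncentrality \hat{\lambda} = \chi^2 - df, which is why a bootstrap scheme that does not reproduce the population noncentrality, such as the naive bootstrap, can yield poorly calibrated confidence intervals.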

Impact of Reverse Wording

  • Investigators: Cathy Xijuan Zhang, Ramsha Noor and Dr. Victoria Savalei
    Undergraduate Helper: Kurtis Stewart

    Reverse wording is frequently employed across a variety of psychological scales to reduce or eliminate acquiescence bias, but there is rising concern about its harmful effects, one being its potential to contaminate the covariance structure of the scale. As a result, findings obtained via traditional covariance analyses may be distorted. Our lab examines the impact of reverse wording on the factor structure of the abbreviated 18-item Need for Cognition (NFC) scale using confirmatory and exploratory factor analysis. Data are fit to four previously developed models: a unidimensional single-factor model, a two-factor model distinguishing items of positive polarity from those of negative polarity, and two two-factor models, each with one general factor and one method factor. The NFC scale is modified to form three revised versions, ranging from no reverse-worded items to all reverse-worded items. The original scale and the revised versions are each fit to the four models in the hope of gaining further insight not only into the dimensionality of the scale, but also into the effect of reverse wording on its factor structure. Our current results show that the degree and type of reverse wording differentially impact the factor structure and functioning of the NFC scale.
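
    As a minimal sketch of this kind of model comparison (using the semopy package for Python, hypothetical column names nfc1-nfc18, and an assumed, illustrative split of which items are reverse worded; this is not the lab's actual analysis code or the scale's real item ordering):

      # Sketch: compare a one-factor model with a two-factor (PW vs. RW) model.
      # Column names and the PW/RW split below are hypothetical.
      import pandas as pd
      import semopy

      df = pd.read_csv("nfc_responses.csv")  # hypothetical item-level data

      pw_items = [f"nfc{i}" for i in range(1, 10)]    # assumed positively worded
      rw_items = [f"nfc{i}" for i in range(10, 19)]   # assumed reverse worded

      one_factor = "NFC =~ " + " + ".join(pw_items + rw_items)
      two_factor = ("PW =~ " + " + ".join(pw_items) + "\n" +
                    "RW =~ " + " + ".join(rw_items))

      for name, desc in [("one-factor", one_factor), ("two-factor", two_factor)]:
          model = semopy.Model(desc)
          model.fit(df)
          print(name)
          print(semopy.calc_stats(model).T)  # chi-square, CFI, RMSEA, etc.

    Comparing the resulting fit indices across the four candidate models, and across the original and revised scale versions, is the basic logic of the study described above.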