Mastering SPSS Output Interpretation: Understanding Factor Analysis Results

Factor analysis

Factor analysis is a statistical method used to identify underlying factors or latent variables that explain the relationships among a set of observed variables, also known as manifest variables. It is commonly used in social sciences and psychology to understand the underlying structure of data and to reduce the dimensionality of a dataset.

In factor analysis, the observed variables are assumed to be influenced by one or more latent variables, which are not directly measured but can be inferred from the patterns of correlations among the observed variables. The goal of factor analysis is to estimate the relationships between the latent variables and the observed variables, and to determine how many factors are needed to explain the data.

There are several extraction methods for factor analysis, including principal axis factoring (PAF), maximum likelihood (ML), generalized least squares (GLS), and unweighted least squares (ULS). These methods differ in their assumptions about the distribution of the data and in their estimation procedures. The choice of method depends on the research question and the characteristics of the dataset.


Applications

– Factor analysis can be used in market research to identify underlying dimensions or factors that drive consumer preferences for certain products or services.
– In educational research, factor analysis can help identify latent constructs such as intelligence or learning styles that influence academic performance.
– In healthcare, factor analysis can be used to identify clusters of symptoms that may represent different subtypes of a disease.
– In finance, factor analysis can be used to identify common risk factors that drive returns on a portfolio of assets.


Advantages

– Factor analysis provides a systematic approach for identifying underlying dimensions in complex datasets.
– It allows researchers to reduce the dimensionality of data by summarizing multiple observed variables into a smaller number of latent factors.
– Factor analysis can provide insights into the relationships among variables and help generate hypotheses for further investigation.


Limitations

– Factor analysis assumes that the observed variables are linearly related to the latent factors, which may not always be the case in practice.
– The results of factor analysis are dependent on the quality and representativeness of the data. If the dataset is small or contains outliers, the results may be less reliable.
– Interpreting the extracted factors can be subjective and requires careful consideration of theory and context.

Data reduction

Data reduction is a crucial step in factor analysis, as it involves reducing the number of variables to a smaller set of underlying factors. This process helps simplify the data and identify the common dimensions that explain the relationships among the variables. There are various techniques used for data reduction, including principal component analysis (PCA) and common factor analysis.

Principal Component Analysis (PCA)

PCA is a widely used technique in data reduction. It aims to transform a large number of variables into a smaller set of uncorrelated variables called principal components. These components are linear combinations of the original variables and are ordered based on their ability to explain the variance in the data. PCA is particularly useful when there is no specific theoretical framework guiding the selection of factors.
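As a sketch of the mechanics, PCA can be computed from an eigendecomposition of the correlation matrix. The NumPy example below uses synthetic data; the dataset shape, seed, and noise level are illustrative assumptions, not SPSS output.

```python
import numpy as np

# Synthetic data: 6 observed variables driven by 2 underlying dimensions
# (the shapes, seed, and noise level are illustrative assumptions).
rng = np.random.default_rng(0)
scores = rng.standard_normal((100, 2))
weights = rng.standard_normal((2, 6))
X = scores @ weights + 0.3 * rng.standard_normal((100, 6))

# PCA on the correlation matrix: eigenvectors give the component weights,
# eigenvalues give the variance each component explains.
R = np.corrcoef(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]          # order components by explained variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

explained_ratio = eigvals / eigvals.sum()  # proportion of variance per component
```

Because the correlation matrix has ones on its diagonal, the eigenvalues sum to the number of variables; when the data share strong underlying dimensions, the first few components absorb most of the variance.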

Common Factor Analysis

Common factor analysis is another approach to data reduction that assumes that observed variables are influenced by a smaller number of latent factors. Unlike PCA, which focuses on explaining variance, common factor analysis aims to identify underlying factors that account for correlations among observed variables. The goal is to find a simpler structure underlying the observed variables by estimating factor loadings, which represent the strength and direction of the relationship between each variable and each factor.

Overall, data reduction techniques like PCA and common factor analysis play a crucial role in simplifying complex datasets and identifying latent dimensions that drive relationships among variables.

Latent Variables

In factor analysis, latent variables refer to unobservable constructs or dimensions that underlie patterns of correlations among observed variables. These latent variables cannot be directly measured but can be inferred from observable indicators or manifest variables. Latent variables are often used to explain relationships among multiple observed variables by representing shared variance or commonality.


Constructs

Latent variables are often conceptualized as constructs, which represent abstract ideas or concepts that cannot be directly measured. For example, intelligence or personality traits are latent variables that cannot be directly observed but can be inferred from observable behaviors or responses.


Indicators

Manifest variables, also known as indicators, are the observed variables that are used to measure or assess latent variables. These indicators can take various forms, such as survey items, test scores, or physiological measurements. The relationship between the latent variable and its indicators is quantified using factor loadings, which represent the strength and direction of the relationship.


To illustrate this concept, let’s consider a study on job satisfaction. Job satisfaction is a latent variable that cannot be directly measured but can be assessed using various indicators such as self-reported happiness at work, satisfaction with salary, and overall job engagement. Each indicator contributes to measuring the underlying construct of job satisfaction and is influenced by it to varying degrees.

Overall, understanding latent variables and their relationship with manifest indicators is essential in factor analysis as it allows researchers to uncover the underlying dimensions that explain patterns of correlations among observed variables.

Manifest variables


Manifest variables, also known as observed variables, are the measurable indicators or items that are used to represent a latent construct in factor analysis. These variables directly measure the construct of interest and can be observed or measured directly through surveys, questionnaires, or other measurement instruments.


Examples

Some examples of manifest variables include age, gender, income level, education level, job satisfaction rating, and customer loyalty score. These variables are typically represented by numerical values or categories that can be easily quantified and analyzed.


Advantages

– Manifest variables provide concrete and observable data that can be easily understood and interpreted.
– They allow researchers to measure specific aspects of a construct and examine their relationships with other variables.


Limitations

– Manifest variables may not fully capture the complexity of a latent construct as they only represent a subset of its characteristics.
– The accuracy and reliability of manifest variable measurements can be influenced by various factors such as response bias or measurement error.

Principal axis factoring


Principal axis factoring (PAF) is a method used in factor analysis to extract factors from a correlation matrix. It is based on finding linear combinations of observed variables that maximize the common variance they share.


Steps in principal axis factoring

1. Calculate the correlation matrix for the set of observed variables.
2. Obtain initial communality estimates (for example, squared multiple correlations) and the initial eigenvalues of the correlation matrix.
3. Select the number of factors to extract based on eigenvalues greater than 1 or a scree plot.
4. Perform principal axis factoring, iteratively re-estimating communalities and factor loadings until they stabilize.
5. Rotate the extracted factors using a method such as varimax rotation to enhance interpretability.
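The core iteration, re-estimating communalities on the diagonal of the reduced correlation matrix, can be sketched in NumPy. This is a simplified illustration, not SPSS's exact implementation; the synthetic data, seed, and noise level are assumptions.

```python
import numpy as np

def principal_axis_factoring(R, n_factors, n_iter=50):
    """Simplified PAF sketch: iterate communality estimates on the
    diagonal of the reduced correlation matrix."""
    # Initial communalities: squared multiple correlations (SMC).
    h2 = 1.0 - 1.0 / np.diag(np.linalg.inv(R))
    for _ in range(n_iter):
        R_reduced = R.copy()
        np.fill_diagonal(R_reduced, h2)              # reduced correlation matrix
        eigvals, eigvecs = np.linalg.eigh(R_reduced)
        top = np.argsort(eigvals)[::-1][:n_factors]
        # Loadings: eigenvector columns scaled by the root of their eigenvalues.
        lam = eigvecs[:, top] * np.sqrt(np.clip(eigvals[top], 0.0, None))
        h2 = (lam ** 2).sum(axis=1)                  # updated communalities
    return lam, h2

# Synthetic two-factor data (shapes, seed, and noise level are assumptions).
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 2)) @ rng.standard_normal((2, 6))
X += 0.5 * rng.standard_normal((200, 6))
loadings, communalities = principal_axis_factoring(np.corrcoef(X, rowvar=False), 2)
```

Each communality is the share of a variable's variance explained by the retained factors, so the estimates should settle between 0 and 1 for well-behaved data.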


Advantages

– PAF is a widely used method that provides a straightforward approach to factor extraction.
– It allows for the identification of underlying factors that explain the common variance among observed variables.


Limitations

– PAF assumes that the observed variables are linearly related to the latent factors, which may not always be the case.
– The interpretation of factor loadings in PAF can be subjective and dependent on the chosen rotation method.

Maximum likelihood

Estimating Parameters

Maximum likelihood estimation is a commonly used method in factor analysis to estimate the parameters of the model. The goal is to find the values of the parameters that maximize the likelihood of observing the given data. This involves specifying a probability distribution for the observed variables and then finding the parameter values that make the observed data most likely under that distribution. Maximum likelihood estimation provides estimates for both factor loadings and uniquenesses.
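To make the estimated quantity concrete, here is a minimal NumPy sketch of the Gaussian log-likelihood of a factor model, where the model-implied covariance is ΛΛ' + Ψ. The loading values are hypothetical, chosen only for illustration; actual ML extraction iterates over Λ and Ψ to maximize this quantity.

```python
import numpy as np

def fa_log_likelihood(S, loadings, uniqueness, n):
    """Factor-model log-likelihood (up to an additive constant) for a
    sample covariance S under Sigma = Lambda Lambda' + Psi."""
    sigma = loadings @ loadings.T + np.diag(uniqueness)
    _, logdet = np.linalg.slogdet(sigma)
    return -0.5 * n * (logdet + np.trace(np.linalg.solve(sigma, S)))

# Hypothetical one-factor model: S is generated exactly by these parameters.
lam_true = np.array([[0.8], [0.7], [0.6]])
psi_true = 1.0 - (lam_true ** 2).ravel()
S = lam_true @ lam_true.T + np.diag(psi_true)

ll_true = fa_log_likelihood(S, lam_true, psi_true, n=200)
ll_bad = fa_log_likelihood(S, 0.5 * lam_true, psi_true, n=200)
# The true parameters reproduce S exactly, so they attain the higher likelihood.
```

Because the likelihood is maximized when the model-implied covariance matches the sample covariance, shrinking the loadings away from their true values lowers it.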

Advantages and Limitations

One advantage of maximum likelihood estimation is that it produces efficient estimates: among unbiased estimators, they have minimum variance. It also supports hypothesis testing and model comparison through statistical tests such as the chi-square goodness-of-fit test. However, it assumes multivariate normality, can be sensitive to violations of that assumption, and requires a reasonably large sample to yield reliable estimates.

Generalized least squares

Handling Non-Normal Data

Generalized least squares (GLS) is an alternative method for estimating parameters in factor analysis when the assumption of multivariate normality is violated. GLS takes into account non-normality by allowing for heteroscedasticity and/or correlated errors in the observed variables. It uses weighted least squares to estimate the parameters, where weights are based on estimated variances and covariances.

Advantages and Limitations

GLS is more robust than maximum likelihood estimation when dealing with non-normal data. It can handle skewed or heavy-tailed distributions better, making it suitable for analyzing real-world data that often deviate from normality assumptions. However, GLS requires knowledge or assumptions about the form of heteroscedasticity or correlation structure in order to specify appropriate weights. Estimating these weights accurately can be challenging, especially with limited sample sizes.

Unweighted least squares

Equal Treatment of Variables

Unweighted least squares (ULS) is a simpler approach to estimating parameters in factor analysis. It assumes equal treatment of all variables, regardless of their variances or covariances. ULS estimates the parameters by minimizing the sum of squared differences between the observed and predicted covariance matrix.
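The ULS discrepancy itself is easy to write down: the sum of squared residuals between the observed correlation matrix and the model-implied matrix ΛΛ', with the diagonal ignored because the uniquenesses absorb it. A minimal NumPy sketch, with hypothetical loading values:

```python
import numpy as np

def uls_objective(R, loadings):
    """ULS discrepancy: sum of squared off-diagonal residuals between the
    observed correlation matrix R and the model-implied matrix Lambda Lambda'."""
    residual = R - loadings @ loadings.T
    np.fill_diagonal(residual, 0.0)   # the diagonal is absorbed by uniquenesses
    return float(np.sum(residual ** 2))

# Hypothetical one-factor model that reproduces R exactly off the diagonal.
lam = np.array([[0.8], [0.7], [0.6]])
R = lam @ lam.T
np.fill_diagonal(R, 1.0)

fit_true = uls_objective(R, lam)                  # ~0: perfect fit
fit_null = uls_objective(R, np.zeros_like(lam))   # > 0: no-factor model fits poorly
```

ULS estimation searches for the loadings that drive this quantity to its minimum.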

Advantages and Limitations

ULS is computationally efficient and less sensitive to violations of multivariate normality assumptions compared to maximum likelihood estimation. It can be used when the sample size is small or when there are missing data points. However, ULS does not account for variations in variable variances or covariances, which may lead to biased estimates if these variations are substantial. It is also less flexible in handling non-normal data compared to GLS.

Orthogonal rotations

Orthogonal rotations are a family of rotation methods used in factor analysis to simplify and interpret the factor structure. These rotations constrain the factors to remain uncorrelated with each other. Common orthogonal rotations include Varimax and Equimax.

Varimax rotation

Varimax rotation is one of the most commonly used orthogonal rotation methods. It aims to achieve simple structure by maximizing the variance of squared loadings within each factor. This means that after Varimax rotation, each factor will have a small number of variables with high loadings, while the remaining variables will have close to zero loadings on that factor. This makes it easier to interpret and understand the relationship between variables and factors.
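Varimax can be sketched with the standard SVD-based iteration. The loading matrix below is hypothetical; two properties worth verifying are that the rotation matrix is orthogonal and that each variable's communality (row sum of squared loadings) is unchanged by the rotation.

```python
import numpy as np

def varimax(loadings, n_iter=100, tol=1e-8):
    """Minimal sketch of the standard SVD-based varimax iteration."""
    p, k = loadings.shape
    rotation = np.eye(k)
    criterion = 0.0
    for _ in range(n_iter):
        rotated = loadings @ rotation
        # Varimax update: SVD of the gradient of the rotation criterion.
        target = rotated ** 3 - rotated @ np.diag((rotated ** 2).sum(axis=0)) / p
        u, s, vt = np.linalg.svd(loadings.T @ target)
        rotation = u @ vt
        if s.sum() < criterion * (1 + tol):       # stop when no further improvement
            break
        criterion = s.sum()
    return loadings @ rotation, rotation

# Hypothetical unrotated loadings for 4 variables on 2 factors.
L = np.array([[0.7, 0.5], [0.6, 0.4], [-0.4, 0.6], [-0.3, 0.7]])
L_rot, T = varimax(L)
```

Because the rotation is orthogonal, it only redistributes each variable's explained variance across factors; it cannot create or destroy any.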

Equimax rotation

Equimax rotation is another orthogonal rotation method that aims to achieve simple structure. It is a compromise between Varimax, which simplifies the factors (the columns of the loading matrix), and Quartimax, which simplifies the variables (the rows). Equimax therefore distributes loadings across factors more evenly than Varimax alone, which can help when neither a purely variable-focused nor a purely factor-focused simplification yields an interpretable solution.

Overall, orthogonal rotations like Varimax and Equimax are valuable tools in factor analysis: they clarify the relationship between variables and factors while keeping the factors uncorrelated.

Oblique rotations

Unlike orthogonal rotations, oblique rotations allow for correlation between factors. Oblique rotations are useful when there is a theoretical basis or empirical evidence suggesting that factors may be correlated with each other. Promax rotation is one example of an oblique rotation method commonly used in factor analysis.

Promax rotation

Promax rotation is an oblique method that simplifies the interpretation of factor analysis results while allowing factors to correlate. It starts from an orthogonal (typically Varimax) solution, raises the loadings to a power to sharpen the contrast between high and low loadings, and then finds the oblique transformation that best fits this simplified target. Promax rotation is particularly useful when there is a prior expectation or theoretical basis for correlated factors.

Overall, oblique rotations like Promax provide a more flexible approach to factor analysis by allowing for correlation between factors. This can be beneficial in situations where there is a theoretical or empirical basis for expecting correlated factors.

Number of factors to extract

Determining the appropriate number of factors to extract in factor analysis is an important consideration. The goal is to identify the smallest number of factors that adequately explain the observed variance in the data. There are several methods and criteria that can be used to determine the number of factors to extract, including eigenvalues, scree plot, and parallel analysis.


Eigenvalues

Eigenvalues are one method used to determine the number of factors to extract. Eigenvalues represent the amount of variance explained by each factor. In general, factors with eigenvalues greater than 1 are considered significant and should be retained. However, this criterion may not always be sufficient as it tends to overestimate the number of factors.
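Applied to a small hypothetical correlation matrix with two clear variable clusters, the eigenvalue-greater-than-one (Kaiser) criterion looks like this in NumPy; the correlation values are chosen purely for illustration.

```python
import numpy as np

# Hypothetical correlation matrix: variables 1-3 form one cluster,
# variables 4-5 another (values chosen for illustration).
R = np.array([
    [1.0, 0.6, 0.6, 0.1, 0.1],
    [0.6, 1.0, 0.6, 0.1, 0.1],
    [0.6, 0.6, 1.0, 0.1, 0.1],
    [0.1, 0.1, 0.1, 1.0, 0.5],
    [0.1, 0.1, 0.1, 0.5, 1.0],
])
eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]   # descending eigenvalues
n_retained = int((eigvals > 1.0).sum())          # Kaiser criterion
```

Here exactly two eigenvalues exceed 1, matching the two built-in clusters; the eigenvalues always sum to the number of variables because the diagonal of a correlation matrix is all ones.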

Scree plot

A scree plot is another graphical method used to determine the number of factors to extract. It plots the eigenvalues against their corresponding factor numbers in descending order. The point at which the curve levels off (the "elbow", below which the remaining factors form the flat "scree") indicates the number of factors to retain.

Parallel analysis

Parallel analysis is a statistical technique that compares observed eigenvalues with randomly generated eigenvalues based on randomly generated data sets with similar characteristics as the original data set. Factors with observed eigenvalues higher than those obtained from random data sets are considered significant and should be retained.
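A minimal NumPy sketch of Horn's parallel analysis follows; the simulation count, quantile, synthetic loadings, and seeds are all illustrative assumptions.

```python
import numpy as np

def parallel_analysis(X, n_sims=100, quantile=0.95, seed=0):
    """Horn's parallel analysis sketch: retain factors whose observed
    eigenvalues exceed those of random data of the same shape."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    observed = np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]
    random_eigs = np.empty((n_sims, p))
    for i in range(n_sims):
        Z = rng.standard_normal((n, p))
        random_eigs[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(Z, rowvar=False)))[::-1]
    threshold = np.quantile(random_eigs, quantile, axis=0)  # per-position cutoff
    return int((observed > threshold).sum())

# Synthetic data with two strong factors (loadings and seeds are assumptions).
rng = np.random.default_rng(3)
F = rng.standard_normal((300, 2))
W = np.array([[0.8, 0.7, 0.6, 0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 0.8, 0.7, 0.6]])
X = F @ W + 0.4 * rng.standard_normal((300, 6))
n_factors = parallel_analysis(X)
```

With this clearly two-dimensional structure, only the first two observed eigenvalues rise above what random noise of the same shape produces.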

Determining the appropriate number of factors to extract requires careful consideration and often involves using multiple criteria to ensure robust results.

Simple structure

Simple structure is a desirable outcome in factor analysis as it indicates that each variable loads highly on only one factor and minimally on all other factors. This makes the interpretation of the factors more straightforward and meaningful. There are several methods and criteria used to assess simple structure, including factor loadings, communalities, and pattern matrix.

Factor loadings

Factor loadings represent the correlation between variables and factors. In simple structure, each variable should have high loadings on only one factor and close to zero loadings on all other factors. High loadings indicate that the variable is strongly associated with a particular factor, while low or near-zero loadings suggest weak or no association.


Communalities

Communalities represent the proportion of variance in each variable that is accounted for by the extracted factors. In simple structure, communalities should be high for variables that are strongly associated with a specific factor and low for variables that have weak associations with any of the factors.
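Computing communalities from a loading matrix is a one-liner: sum the squared loadings across factors for each variable. The rotated loadings below are hypothetical values chosen for illustration.

```python
import numpy as np

# Hypothetical rotated loadings for 4 variables on 2 factors.
loadings = np.array([
    [0.85, 0.10],   # loads mainly on factor 1
    [0.78, 0.05],
    [0.12, 0.80],   # loads mainly on factor 2
    [0.08, 0.72],
])
communalities = (loadings ** 2).sum(axis=1)  # shared variance per variable
uniqueness = 1.0 - communalities             # variance left unexplained
```

For standardized variables, communality and uniqueness always sum to 1, so a high communality directly means little variance is left unexplained by the factors.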

Pattern matrix

A pattern matrix displays the relationship between variables and factors after rotation. In simple structure, each variable should have high loadings on only one factor and close to zero loadings on all other factors in the pattern matrix.

Assessing simple structure is crucial in factor analysis as it ensures that the extracted factors are interpretable and meaningful representations of the underlying constructs being measured.

Sample size requirements for factor analysis

The sample size required for conducting factor analysis depends on various factors such as the complexity of the data, number of variables, number of factors to extract, desired level of statistical power, and type of analysis (exploratory or confirmatory). While there is no fixed rule regarding sample size requirements for factor analysis, some general guidelines can be followed.

Rule of thumb

A commonly cited rule of thumb is to have a minimum of 5-10 observations per variable. This means that if you have 10 variables, you should aim for a sample size of at least 50-100 participants. However, this guideline may not always be sufficient, especially when dealing with complex data or when conducting confirmatory factor analysis.

Power analysis

Conducting a power analysis can help determine the appropriate sample size for factor analysis. Power analysis takes into account factors such as desired statistical power, effect size, and significance level to estimate the required sample size. This approach ensures that the study has adequate power to detect meaningful relationships between variables and factors.

Simulation studies

Simulation studies involve generating artificial data sets with known characteristics and analyzing them using factor analysis. These studies can provide insights into the minimum sample size required to obtain accurate and reliable results based on specific data characteristics and research objectives.

In summary, determining the sample size requirements for factor analysis involves considering various factors such as complexity of data, number of variables and factors, desired statistical power, and type of analysis. While there are general guidelines available, it is important to tailor the sample size based on the specific research context and goals.

Correlation matrix

The correlation matrix is a key component in factor analysis as it provides information about the relationships between variables. It displays pairwise correlations between all variables included in the analysis. The correlation matrix serves as input for various calculations involved in factor extraction, rotation, and interpretation.


Interpreting relationships between variables

The correlation matrix allows researchers to examine how variables relate to each other. Positive correlations indicate that variables tend to increase or decrease together, while negative correlations suggest an inverse relationship where one variable increases while the other decreases. The strength of these relationships can be assessed by examining the magnitude of the correlation coefficients.

Factor extraction

The correlation matrix is used in factor extraction to determine the initial factor structure. It provides information about the interrelationships between variables, which helps identify potential underlying factors. Variables that are highly correlated with each other are likely to load on the same factor.
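This clustering of correlated variables is easy to see in a small NumPy example; the seed, noise level, and one-factor structure are illustrative assumptions.

```python
import numpy as np

# Synthetic example: three variables share one latent factor, the fourth
# is unrelated (seed and noise level are illustrative assumptions).
rng = np.random.default_rng(42)
f = rng.standard_normal(150)
X = np.column_stack([
    f + 0.3 * rng.standard_normal(150),
    f + 0.3 * rng.standard_normal(150),
    f + 0.3 * rng.standard_normal(150),
    rng.standard_normal(150),
])
R = np.corrcoef(X, rowvar=False)   # 4x4 matrix of pairwise correlations
# R[0,1], R[0,2], R[1,2] are high (shared factor); column 3 stays near zero.
```

A factor extraction run on this matrix would group the first three variables on one factor and leave the fourth with low loadings everywhere.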

Factor rotation

During factor rotation, the correlation structure among variables guides how factors are repositioned relative to each other. The goal is a simpler, more interpretable factor structure, achieved either by keeping factors uncorrelated (orthogonal rotations) or by allowing them to correlate (oblique rotations).

The correlation matrix plays a crucial role in factor analysis as it provides insights into the relationships between variables and serves as a foundation for subsequent analyses such as factor extraction and rotation.

Mean and standard deviation of variables

In factor analysis, understanding the mean and standard deviation of variables is important for several reasons. These descriptive statistics provide insights into the distribution and variability of data, which can impact the results and interpretation of factor analysis.


Mean

The mean represents the average value of a variable across all observations. It provides information about the central tendency or typical value of a variable. In factor analysis, examining means can help identify variables that have extreme values or outliers that may influence the results. Additionally, comparing means across different groups or conditions can provide insights into potential differences in variable distributions.

Standard deviation

The standard deviation measures the dispersion or spread of values around the mean. It indicates how much individual observations deviate from the average value. In factor analysis, examining standard deviations can help identify variables with high variability or those that have similar values across all observations. Variables with low standard deviations may not contribute much to explaining variance in the data.

Examining means and standard deviations also helps check whether the data meet distributional assumptions, such as the multivariate normality assumed by maximum likelihood extraction. Marked deviations from these assumptions can affect the validity and reliability of factor analysis results.

Overall, considering the mean and standard deviation of variables provides valuable insights into the distribution and variability of data, which can impact the interpretation and generalizability of factor analysis findings.

Sample size requirements for factor analysis

When conducting factor analysis, it is important to consider the sample size requirements in order to obtain reliable and valid results. The sample size should be large enough to ensure adequate statistical power and stability of the factor structure. However, there is no fixed rule for determining the exact sample size needed for factor analysis as it depends on various factors such as the complexity of the data, number of variables, and desired level of precision.

Factors influencing sample size

Several factors influence the required sample size for factor analysis:

  • Number of variables: As a general guideline, having a larger number of variables requires a larger sample size to ensure stable estimates of factor loadings.
  • Desired level of precision: If you aim for higher precision in estimating factor loadings or wish to detect smaller effect sizes, a larger sample size is recommended.
  • Complexity of data: If your dataset contains complex relationships among variables or if there are non-linear associations, a larger sample size may be necessary to capture these nuances accurately.

Suggested minimum sample sizes

While there is no universally agreed-upon minimum sample size for factor analysis, some researchers suggest guidelines based on rules-of-thumb. For exploratory factor analysis (EFA), a common recommendation is to have at least 5-10 subjects per variable. For confirmatory factor analysis (CFA), where specific hypotheses are tested, larger samples ranging from 10-20 subjects per variable are often suggested.

Correlation matrix

In factor analysis, one of the key inputs is the correlation matrix. The correlation matrix provides information about the strength and direction of relationships between pairs of variables. It is a square matrix where each cell represents the correlation coefficient between two variables.

Interpreting the correlation matrix

The correlation matrix can be interpreted in several ways:

  • Strength of relationship: The magnitude of the correlation coefficient indicates the strength of the relationship between two variables. A value close to +1 or -1 suggests a strong positive or negative association, respectively, while values closer to 0 indicate weaker or no relationship.
  • Direction of relationship: The sign (positive or negative) of the correlation coefficient indicates the direction of the relationship. Positive values imply a direct positive association, whereas negative values suggest an inverse relationship.
  • Potential multicollinearity: High correlations between pairs of variables may indicate multicollinearity, which can affect the stability and interpretability of factor analysis results. Identifying and addressing multicollinearity is crucial for obtaining reliable factor solutions.

Obtaining the correlation matrix

To perform factor analysis, you need to calculate the correlation matrix based on your dataset. This involves computing pairwise correlations between all variables using appropriate statistical methods such as Pearson’s correlation coefficient or Spearman’s rank-order correlation coefficient for non-parametric data.

Mean and standard deviation of variables

The mean and standard deviation are important descriptive statistics that provide insights into the central tendency and variability of variables in a dataset. In factor analysis, understanding these measures helps in assessing variable characteristics and identifying potential outliers or extreme values that may influence factor analysis results.

Mean: Measure of central tendency

The mean represents the average value of a variable across all observations. It is calculated by summing up all the values of a variable and dividing it by the total number of observations. The mean provides an indication of the typical value or central tendency around which data points tend to cluster.

Standard deviation: Measure of variability

The standard deviation quantifies the spread or dispersion of values around the mean. It measures how much individual data points deviate from the average. A higher standard deviation indicates greater variability, while a lower standard deviation suggests that most values are close to the mean.

Assessing variable characteristics

Examining the means and standard deviations of variables can help identify potential issues in factor analysis:

  • Outliers: Variables with extreme values (outliers) may disproportionately influence factor analysis results. Identifying and addressing outliers is crucial for obtaining accurate factor solutions.
  • Variance differences: Variables with significantly different variances may affect factor loadings and lead to biased results. In such cases, transformations or adjustments may be necessary to equalize variance across variables.
  • Near-zero variance: Variables with very low variance (close to zero) provide little information and might not contribute meaningfully to factor analysis. These variables can be considered for exclusion from further analysis.
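The screening checks above can be sketched with a few lines of NumPy; the seed, the injected outlier, and the near-constant column are all illustrative assumptions.

```python
import numpy as np

# Synthetic screening example (seed and injected values are assumptions).
rng = np.random.default_rng(7)
X = rng.standard_normal((100, 3))
X[:, 2] = 5.0 + 0.001 * rng.standard_normal(100)  # near-constant variable
X[0, 0] = 12.0                                    # injected extreme value

means = X.mean(axis=0)
stds = X.std(axis=0, ddof=1)

near_zero_var = stds < 0.01              # candidates for exclusion
z_scores = (X - means) / stds            # standardized values
outlier_flags = np.abs(z_scores) > 4.0   # flag extreme observations
```

Variables flagged for near-zero variance carry almost no information for factor extraction, while flagged observations deserve inspection before the correlation matrix is computed.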

In summary, understanding the mean and standard deviation of variables helps in assessing their characteristics, identifying potential issues, and making informed decisions during factor analysis.

Factor analysis is a useful statistical technique in SPSS that helps interpret complex data. By analyzing patterns and relationships, it uncovers underlying factors affecting the variables. The output provides valuable information on factor loadings, eigenvalues, and communalities. Understanding this output aids researchers in comprehending the data structure and making informed decisions.