Title: | Determining the Number of Factors in Exploratory Factor Analysis |
---|---|
Description: | Provides a collection of standard factor retention methods in Exploratory Factor Analysis (EFA), making it easier to determine the number of factors. Traditional methods such as the scree plot by Cattell (1966) <doi:10.1207/s15327906mbr0102_10>, Kaiser-Guttman Criterion (KGC) by Guttman (1954) <doi:10.1007/BF02289162> and Kaiser (1960) <doi:10.1177/001316446002000116>, and flexible Parallel Analysis (PA) by Horn (1965) <doi:10.1007/BF02289447> based on eigenvalues from PCA or EFA are readily available. This package also implements several newer methods, such as the Empirical Kaiser Criterion (EKC) by Braeken and van Assen (2017) <doi:10.1037/met0000074>, Comparison Data (CD) by Ruscio and Roche (2012) <doi:10.1037/a0025697>, and Hull method by Lorenzo-Seva et al. (2011) <doi:10.1080/00273171.2011.564527>, as well as some AI-based methods like Comparison Data Forest (CDF) by Goretzko and Ruscio (2024) <doi:10.3758/s13428-023-02122-4> and Factor Forest (FF) by Goretzko and Buhner (2020) <doi:10.1037/met0000262>. Additionally, it includes a deep neural network (DNN) trained on large-scale datasets that can efficiently and reliably determine the number of factors. |
Authors: | Haijiang Qin [aut, cre, cph], Lei Guo [aut, cph] |
Maintainer: | Haijiang Qin <[email protected]> |
License: | GPL-3 |
Version: | 1.1.1 |
Built: | 2024-11-20 07:40:50 UTC |
Source: | CRAN |
This function computes the softmax of a numeric vector. The softmax function transforms a vector of real values into a probability distribution, where each element is between 0 and 1 and the sum of all elements is 1. @seealso DNN_predictor
af.softmax(x)
x |
A numeric vector for which the softmax transformation is to be computed. |
The softmax function is calculated as:

$$\mathrm{softmax}(x_i) = \frac{\exp(x_i)}{\sum_{j=1}^{n} \exp(x_j)}$$

In the case of overflow (i.e., when exp(x_i) is too large), this function handles Inf values by assigning 1 to the corresponding positions and 0 to the others before applying the softmax. @seealso DNN_predictor

A numeric vector representing the softmax-transformed values of x.
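Below is a minimal sketch of the overflow handling described above; it is illustrative only, and the internal implementation of af.softmax may differ in detail.

softmax.safe <- function(x) {
  e <- exp(x)
  if (any(is.infinite(e))) {
    ## overflow: assign 1 to the overflowing positions and 0 to the others
    e <- as.numeric(is.infinite(e))
  }
  e / sum(e) ## normalize to a probability distribution
}
softmax.safe(c(1, 2, 3))     ## ordinary case
softmax.safe(c(1000, 1, 2))  ## overflow case: returns c(1, 0, 0)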
x <- c(1, 2, 3)
af.softmax(x)
This function runs the comparison data (CD) approach of Ruscio & Roche (2012).
CD( response, nfact.max = 10, N.pop = 10000, N.Samples = 500, Alpha = 0.3, cor.type = "pearson", use = "pairwise.complete.obs", vis = TRUE, plot = TRUE )
response |
A required N × I matrix or data.frame consisting of the responses of N examinees to I items. |
nfact.max |
The maximum number of factors considered by the CD approach. (default = 10) |
N.pop |
Size of the finite populations used for simulating comparison data. (default = 10,000) |
N.Samples |
Number of samples drawn from each population. (default = 500) |
Alpha |
Alpha level for testing the statistical significance (Wilcoxon Rank Sum and Signed Rank Tests) of the improvement from an additional factor. (default = .30) |
cor.type |
A character string indicating which correlation coefficient (or covariance) is to be computed. One of "pearson" (default), "kendall", or "spearman". @seealso cor. |
use |
An optional character string giving a method for computing covariances in the presence of missing values. This must be one of the strings "everything", "all.obs", "complete.obs", "na.or.complete", or "pairwise.complete.obs" (default). @seealso cor. |
vis |
A Boolean variable that will print the factor retention results when set to TRUE, and will not print when set to FALSE. (default = TRUE) |
plot |
A Boolean variable that will print the CD plot when set to TRUE, and will not print it when set to FALSE. @seealso plot.CD. (Default = TRUE) |
Ruscio and Roche (2012) proposed a method for determining the number of factors through comparison data (CD). This method identifies the appropriate number of factors by finding the solution that best reproduces the pattern of eigenvalues. CD employs an iterative procedure when generating comparison data with a known factor structure, taking into account previous factors. Initially, CD compares whether the simulated comparison data with one latent factor (j=1) reproduces the empirical eigenvalue pattern significantly worse than the two-factor solution (j+1). If so, CD increases the value of j until further improvements are no longer significant or a preset maximum number of factors is reached. Specifically, CD involves five steps:
1. Generate random data with either j or j+1 latent factors and calculate the eigenvalues of the respective correlation matrices.
2. Compute the root mean square error (RMSE) of the difference between the empirical and simulated eigenvalues using the formula

$$RMSE = \sqrt{\frac{1}{I} \sum_{i=1}^{I} \left(\lambda_{emp,i} - \lambda_{sim,i}\right)^2}$$

where $\lambda_{emp,i}$ is the i-th empirical eigenvalue, $\lambda_{sim,i}$ is the i-th simulated eigenvalue, and I is the number of items or eigenvalues. This step produces two RMSEs, corresponding to the two different numbers of latent factors.
3. Repeat steps 1 and 2 a total of 500 times (the default in this package).
4. Use a one-sided Wilcoxon test (alpha = 0.30) to assess whether the RMSE is significantly reduced under the two-factor condition (see the sketch after these steps).
5. If the difference in RMSE is not significant, CD suggests selecting j factors. Otherwise, j is increased by 1, and steps 1 to 4 are repeated.
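A minimal sketch of the decision rule in steps 2 to 5, using placeholder RMSE values in place of the simulation results (the package's internal C++ implementation differs):

rmse <- function(emp, sim) sqrt(mean((emp - sim)^2)) ## step 2, per replication

set.seed(123)
## placeholder RMSEs over N.Samples = 500 replications for j and j+1 factors
RMSE.j  <- rnorm(500, mean = 0.20, sd = 0.03)
RMSE.j1 <- rnorm(500, mean = 0.18, sd = 0.03)

## step 4: one-sided Wilcoxon test at alpha = .30
improved <- wilcox.test(RMSE.j, RMSE.j1, alternative = "greater")$p.value < 0.30
improved ## TRUE -> increase j and repeat; FALSE -> retain j factors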
The code is implemented based on the resources available at:
https://ruscio.pages.tcnj.edu/quantitative-methods-program-code/
https://osf.io/gqma2/?view_only=d03efba1fd0f4c849a87db82e6705668
Since the CD approach requires extensive data simulation and computation, C++ code is used to speed up the process.
An object of class CD is a list containing the following components:
nfact |
The number of factors to be retained. |
RMSE.Eigs |
A matrix containing the root mean square error (RMSE) of the eigenvalues produced by each simulation for every discussed number of factors. |
Sig |
A boolean variable indicating whether the significance level of the Wilcoxon Rank Sum and Signed Rank Tests has reached Alpha. |
Haijiang Qin <[email protected]>
Auerswald, M., & Moshagen, M. (2019). How to determine the number of factors to retain in exploratory factor analysis: A comparison of extraction methods under realistic conditions. Psychological Methods, 24(4), 468-491. https://doi.org/10.1037/met0000200.
Goretzko, D., & Buhner, M. (2020). One model to rule them all? Using machine learning algorithms to determine the number of factors in exploratory factor analysis. Psychological Methods, 25(6), 776-786. https://doi.org/10.1037/met0000262.
Ruscio, J., & Roche, B. (2012). Determining the number of factors to retain in an exploratory factor analysis using comparison data of known factorial structure. Psychological Assessment, 24, 282–292. http://dx.doi.org/10.1037/a0025697.
library(EFAfactors)
set.seed(123)

## Take the data.bfi dataset as an example.
data(data.bfi)
response <- as.matrix(data.bfi[, 1:25]) ## loading data
response <- na.omit(response) ## Remove samples with NA/missing values

## Transform the scores of reverse-scored items to normal scoring
response[, c(1, 9, 10, 11, 12, 22, 25)] <- 6 - response[, c(1, 9, 10, 11, 12, 22, 25)] + 1

## Run CD function with default parameters.
CD.obj <- CD(response)
print(CD.obj)

## CD plot
plot(CD.obj)

## Get the RMSE.Eigs and nfact results.
RMSE.Eigs <- CD.obj$RMSE.Eigs
nfact <- CD.obj$nfact
head(RMSE.Eigs)
print(nfact)

## Limit the maximum number of factors to 8, with populations set to 5000.
CD.obj <- CD(response, nfact.max = 8, N.pop = 5000)
print(CD.obj)

## CD plot
plot(CD.obj)

## Get the RMSE.Eigs and nfact results.
RMSE.Eigs <- CD.obj$RMSE.Eigs
nfact <- CD.obj$nfact
head(RMSE.Eigs)
print(nfact)
The Comparison Data Forest (CDF; Goretzko & Ruscio, 2024) approach is a combination of Random Forest with the comparison data (CD) approach.
CDF( response, num.trees = 500, mtry = 13, nfact.max = 10, N.pop = 10000, N.Samples = 500, cor.type = "pearson", use = "pairwise.complete.obs", vis = TRUE, plot = TRUE )
response |
A required N × I matrix or data.frame consisting of the responses of N examinees to I items. |
num.trees |
The number of trees in the Random Forest. (default = 500) See details. |
mtry |
The number of features randomly sampled at each split of each tree in the Random Forest. (default = 13) See details. |
nfact.max |
The maximum number of factors considered by the CDF approach. (default = 10) |
N.pop |
Size of the finite populations used for simulating comparison data. (default = 10,000) |
N.Samples |
Number of samples drawn from each population. (default = 500) |
cor.type |
A character string indicating which correlation coefficient (or covariance) is to be computed. One of "pearson" (default), "kendall", or "spearman". @seealso cor. |
use |
An optional character string giving a method for computing covariances in the presence of missing values. This must be one of the strings "everything", "all.obs", "complete.obs", "na.or.complete", or "pairwise.complete.obs" (default). @seealso cor. |
vis |
A Boolean variable that will print the factor retention results when set to TRUE, and will not print when set to FALSE. (default = TRUE) |
plot |
A Boolean variable that will print the CDF plot when set to TRUE, and will not print it when set to FALSE. @seealso plot.CDF. (Default = TRUE) |
The Comparison Data Forest (CDF; Goretzko & Ruscio, 2024) approach is a combination of random forest with the comparison data (CD) approach. Its basic steps involve using the method of Ruscio & Roche (2012) to simulate data with different factor counts, then extracting features from these data to train a random forest model. Once the model is trained, it can be used to predict the number of factors in empirical data. The algorithm consists of the following steps:
1. **Simulation Data:**
For each value of j in the range from 1 to nfact.max, generate population data using the GenData function. Each population is based on j factors and consists of N.pop observations. For each generated population, repeat the following N.Samples times; for the i-th replication:
a. Draw a sample S_{j,i} from the population that matches the size of the empirical data;
b. Compute a feature set f_{j,i} from each S_{j,i}.
Combine all the generated feature sets f_{j,i} into a data frame D_j, and combine all D_j into the final training dataset D.

2. **Training RF:**
Train a Random Forest model using the combined training dataset D.

3. **Predicting the Empirical Data:**
Calculate the feature set for the empirical data, and use the trained Random Forest model to predict the number of factors for the empirical data.
According to Goretzko & Ruscio (2024) and Breiman (2001), the number of trees in the Random Forest, num.trees, is recommended to be 500. The Random Forest in CDF performs a classification task, so the recommended mtry (the number of features randomly sampled at each split of each tree) is sqrt(p), where p is the number of features; with the p = 181 features used here, this yields mtry = floor(sqrt(181)) = 13.
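A toy sketch of steps 2 and 3 using the randomForest package (stand-in random data; in CDF the training set is built from features of the GenData simulations):

library(randomForest)
set.seed(123)
## toy stand-in for the CDF training set: 300 rows, 10 features, labels 1-3
train.x <- matrix(rnorm(300 * 10), 300, 10)
colnames(train.x) <- paste0("f", 1:10)
train.y <- factor(sample(1:3, 300, replace = TRUE))
rf <- randomForest(x = train.x, y = train.y, ntree = 500, mtry = 3) ## mtry ~ sqrt(10)

## classify a single "empirical" feature vector
features.emp <- matrix(rnorm(10), 1, 10, dimnames = list(NULL, paste0("f", 1:10)))
predict(rf, newdata = features.emp)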
Since the CDF approach requires extensive data simulation and computation, which is much more time-consuming than the CD approach, C++ code is used to speed up the process.
An object of class CDF is a list containing the following components:
nfact |
The number of factors to be retained. |
RF |
The trained Random Forest model. |
probability |
A matrix containing the probabilities for factor numbers ranging from 1 to nfact.max (1xnfact.max), where the number in the f-th column represents the probability that the number of factors for the response is f. |
features |
A matrix (1×181) containing all the features for determining the number of factors. @seealso extractor.feature.FF |
Haijiang Qin <[email protected]>
Breiman, L. (2001). Random Forests. Machine Learning, 45(1), 5-32. https://doi.org/10.1023/A:1010933404324
Goretzko, D., & Ruscio, J. (2024). The comparison data forest: A new comparison data approach to determine the number of factors in exploratory factor analysis. Behavior Research Methods, 56(3), 1838-1851. https://doi.org/10.3758/s13428-023-02122-4
Ruscio, J., & Roche, B. (2012). Determining the number of factors to retain in an exploratory factor analysis using comparison data of known factorial structure. Psychological Assessment, 24, 282–292. http://dx.doi.org/10.1037/a0025697.
library(EFAfactors)
set.seed(123)

## Take the data.bfi dataset as an example.
data(data.bfi)
response <- as.matrix(data.bfi[, 1:25]) ## loading data
response <- na.omit(response) ## Remove samples with NA/missing values

## Transform the scores of reverse-scored items to normal scoring
response[, c(1, 9, 10, 11, 12, 22, 25)] <- 6 - response[, c(1, 9, 10, 11, 12, 22, 25)] + 1

## Run CDF function with default parameters.
CDF.obj <- CDF(response)
print(CDF.obj)

## CDF plot
plot(CDF.obj)

## Get the nfact results.
nfact <- CDF.obj$nfact
print(nfact)

## Limit the maximum number of factors to 8, with populations set to 5000.
CDF.obj <- CDF(response, nfact.max = 8, N.pop = 5000)
print(CDF.obj)

## CDF plot
plot(CDF.obj)

## Get the nfact results.
nfact <- CDF.obj$nfact
print(nfact)
This dataset includes 25 self-report personality items sourced from the International Personality Item Pool (ipip.ori.org) as part of the Synthetic Aperture Personality Assessment (SAPA) web-based personality assessment project. The dataset contains responses from 2,800 examinees. Additionally, three demographic variables (sex, education, and age) are included.
A data frame with 2,800 observations on 28 variables. The variables include:
A1 - Am indifferent to the feelings of others. (q_146)
A2 - Inquire about others' well-being. (q_1162)
A3 - Know how to comfort others. (q_1206)
A4 - Love children. (q_1364)
A5 - Make people feel at ease. (q_1419)
C1 - Am exacting in my work. (q_124)
C2 - Continue until everything is perfect. (q_530)
C3 - Do things according to a plan. (q_619)
C4 - Do things in a half-way manner. (q_626)
C5 - Waste my time. (q_1949)
E1 - Don't talk a lot. (q_712)
E2 - Find it difficult to approach others. (q_901)
E3 - Know how to captivate people. (q_1205)
E4 - Make friends easily. (q_1410)
E5 - Take charge. (q_1768)
N1 - Get angry easily. (q_952)
N2 - Get irritated easily. (q_974)
N3 - Have frequent mood swings. (q_1099)
N4 - Often feel blue. (q_1479)
N5 - Panic easily. (q_1505)
O1 - Am full of ideas. (q_128)
O2 - Avoid difficult reading material. (q_316)
O3 - Carry the conversation to a higher level. (q_492)
O4 - Spend time reflecting on things. (q_1738)
O5 - Will not probe deeply into a subject. (q_1964)
gender - Gender: Males = 1, Females = 2
education - Education level: 1 = High School, 2 = Finished High School, 3 = Some College, 4 = College Graduate, 5 = Graduate Degree
age - Age in years
The 25 items are organized by five factors: Agreeableness, Conscientiousness, Extraversion, Neuroticism, and Openness. The scoring key is created using make.keys, and scores are calculated using score.items. These factors are useful for IRT-based latent factor analysis of the polychoric correlation matrix. Endorsement plots and item information functions reveal variations in item quality. Responses were collected on a 6-point scale: 1 = Very Inaccurate, 2 = Moderately Inaccurate, 3 = Slightly Inaccurate, 4 = Slightly Accurate, 5 = Moderately Accurate, 6 = Very Accurate, as part of the Synthetic Aperture Personality Assessment (SAPA) project (https://www.sapa-project.org/). For examples of data collection techniques, visit https://www.sapa-project.org/ or the International Cognitive Ability Resource at https://icar-project.org. The items were sampled from the International Personality Item Pool of Lewis Goldberg using SAPA sampling techniques. This dataset is a sample from the larger SAPA data bank.
The data.bfi data set and items should not be confused with the BFI (Big Five Inventory) of Oliver John and colleagues (John, O. P., Donahue, E. M., & Kentle, R. L. (1991). The Big Five Inventory Versions 4a and 54. Berkeley, CA: University of California, Berkeley, Institute of Personality and Social Research.)
The items are from the IPIP (Goldberg, 1999). The data are from the SAPA project (Revelle, Wilt and Rosenthal, 2010), collected in Spring 2010 (https://www.sapa-project.org/).
Goldberg, L.R. (1999). A broad-bandwidth, public domain, personality inventory measuring the lower-level facets of several five-factor models. In Mervielde, I., Deary, I., De Fruyt, F., & Ostendorf, F. (Eds.), Personality psychology in Europe (Vol. 7, pp. 7-28). Tilburg University Press.
Revelle, W., Wilt, J., & Rosenthal, A. (2010). Individual Differences in Cognition: New Methods for Examining the Personality-Cognition Link. In Gruszka, A., Matthews, G., & Szymura, B. (Eds.), Handbook of Individual Differences in Cognition: Attention, Memory and Executive Control (pp. 117-144). Springer.
Revelle, W., Condon, D., Wilt, J., French, J.A., Brown, A., & Elleman, L.G. (2016). Web and phone-based data collection using planned missing designs. In Fielding, N.G., Lee, R.M., & Blank, G. (Eds.), SAGE Handbook of Online Research Methods (2nd ed., pp. 100-116). Sage Publications.
data(data.bfi)
head(data.bfi)
This dataset is a subset of the full datasets, consisting of 1,000 samples from the original 10,000,000-sample datasets.
A 1,000×55 matrix, where the first 54 columns represent feature values and the last column represents the labels, which correspond to the number of factors associated with the features.
Methods for generating and extracting features from the dataset can be found in DNN_predictor.
DNN_predictor, load_scaler, data.scaler, normalizor
data(data.datasets)
head(data.datasets)
This dataset contains the means and standard deviations of the features extracted from the 10,000,000 datasets used to train the pre-trained deep neural network (DNN), which can be used to determine the number of factors.
A list containing two vectors, each of length 54:
A numeric vector representing the means of the 54 features extracted from the 10,000,000 datasets.
A numeric vector representing the standard deviations of the 54 features extracted from the 10,000,000 datasets.
DNN_predictor, load_scaler, data.datasets, normalizor
data(data.scaler)
print(data.scaler)

data.scaler <- load_scaler()
print(data.scaler)
This function will invoke a pre-trained deep neural network (DNN) that can reliably determine the number of factors. The maximum number of factors that the network can consider is 10. The DNN model is implemented in Python and trained on PyTorch (https://pytorch.org/) with CUDA 11.8 for acceleration. After training, the DNN was saved as a DNN.onnx file. The DNN_predictor function performs inference by loading the DNN.onnx file in both Python and R environments. Therefore, please note that Python (suggested >= 3.10) and the libraries numpy and onnxruntime are required.

To run this function, Python is required, along with the installation of numpy and onnxruntime. See more in Details and Note.
DNN_predictor( response, cor.type = "pearson", use = "pairwise.complete.obs", vis = TRUE, plot = TRUE )
response |
A required N × I matrix or data.frame consisting of the responses of N examinees to I items. |
cor.type |
A character string indicating which correlation coefficient (or covariance) is to be computed. One of "pearson" (default), "kendall", or "spearman". @seealso cor. |
use |
An optional character string giving a method for computing covariances in the presence of missing values. This must be one of the strings "everything", "all.obs", "complete.obs", "na.or.complete", or "pairwise.complete.obs" (default). @seealso cor. |
vis |
A Boolean variable that will print the factor retention results when set to TRUE, and will not print when set to FALSE. (default = TRUE) |
plot |
A Boolean variable that will print the DNN_predictor plot when set to TRUE, and will not print it when set to FALSE. @seealso plot.DNN_predictor. (Default = TRUE) |
Due to the improved performance of deep learning models with larger datasets (Chen et al., 2017), a total of 10,000,000 datasets (data.datasets) were simulated to extract features for training deep learning neural networks. Each dataset was generated following the methods described by Auerswald & Moshagen (2019) and Goretzko & Buhner (2020), with the following specifications:
Factor number: F ~ U[1,10]
Sample size: N ~ U[100,1000]
Number of variables per factor: vpf ~ [3,20]
Factor correlation: fc ~ U[0.0,0.4]
Primary loadings: pl ~ U[0.35,0.80]
Cross-loadings: cl ~ U[-0.2,0.2]
A population correlation matrix was created for each data set based on the following decomposition:
$$\Sigma = \Lambda \Phi \Lambda^T + \Delta$$

where $\Lambda$ is the loading matrix, $\Phi$ is the factor correlation matrix, and $\Delta$ is a diagonal matrix, with $\Delta = I - \mathrm{diag}(\Lambda \Phi \Lambda^T)$. The purpose of $\Delta$ is to ensure that the diagonal elements of $\Sigma$ are 1.
The response data for each subject was simulated using the following formula:
$$X = \Lambda \xi + \epsilon$$

where $\xi \sim \mathcal{N}(0, \Phi)$ represents the contribution of the latent factors, and $\epsilon$ is the residual term following a standard normal distribution. $\xi$ and $\epsilon$ are uncorrelated, and the elements of $\epsilon$ are mutually uncorrelated as well.
For each simulated dataset, a total of 6 types of features (which can be classified into 2 types; @seealso extractor.feature.DNN) are extracted and compiled into a feature vector, consisting of 54 features: 8 + 8 + 8 + 10 + 10 + 10. These features are as follows:
1. Clustering-Based Features
Hierarchical clustering is performed with correlation coefficients as dissimilarity. The top 9 tree node heights are calculated, and all heights are divided by the maximum height. The heights from the 2nd to 9th nodes are used as features. @seealso EFAhclust
Hierarchical clustering with Euclidean distance as dissimilarity is performed. The top 9 tree node heights are calculated, and all heights are divided by the maximum height. The heights from the 2nd to 9th nodes are used as features. @seealso EFAhclust
K-means clustering is applied with the number of clusters ranging from 1 to 9. The within-cluster sum of squares (WSS) for clusters 2 to 9 are divided by the WSS for a single cluster. @seealso EFAkmeans
These three features are based on clustering algorithms. The purpose of division is to normalize the data. These clustering metrics often contain information unrelated to the number of factors, such as the number of items and the number of respondents, which can be avoided by normalization. The reason for using the 2nd to 9th data is that only the top F-1 data are needed to determine the number of factors F. The first data point is fixed at 1 after the division operations, so it is excluded. This approach helps in model simplification.
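A rough sketch of how such clustering-based features can be computed (the exact dissimilarity transforms are assumptions; the package's extractor is extractor.feature.DNN):

library(EFAfactors)
data(data.bfi)
response <- na.omit(as.matrix(data.bfi[, 1:25]))
R <- cor(response)

## hierarchical clustering with correlation-based dissimilarity (assumed 1 - r)
hc.R <- hclust(as.dist(1 - R), method = "ward.D")
h.R <- sort(hc.R$height, decreasing = TRUE)[1:9]
feat.hc.R <- (h.R / max(h.R))[2:9] ## normalized heights 2 to 9

## hierarchical clustering with Euclidean distance between items
hc.E <- hclust(dist(t(scale(response))), method = "ward.D")
h.E <- sort(hc.E$height, decreasing = TRUE)[1:9]
feat.hc.E <- (h.E / max(h.E))[2:9]

## K-means WSS for K = 2..9 relative to K = 1
wss <- sapply(1:9, function(k) kmeans(t(scale(response)), centers = k, nstart = 5)$tot.withinss)
feat.km <- wss[2:9] / wss[1]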
2. Traditional Exploratory Factor Analysis Features (Eigenvalues)
The top 10 largest eigenvalues.
The ratio of the top 10 largest eigenvalues to the corresponding reference eigenvalues from Empirical Kaiser Criterion (EKC; Braeken & van Assen, 2017). @seealso EKC
The cumulative variance proportion of the top 10 largest eigenvalues.
Only the top 10 elements are used to simplify the model.
The DNN model is implemented in Python and trained on PyTorch (https://download.pytorch.org/whl/cu118) with CUDA 11.8 for acceleration. After training, the DNN was saved as a DNN.onnx file. The DNN_predictor function performs inference by loading the DNN.onnx file in both Python and R environments.
An object of class DNN_predictor is a list containing the following components:
nfact |
The number of factors to be retained. |
features |
A matrix (1×54) containing all the features for determining the number of factors by the DNN. |
probability |
A matrix containing the probabilities for factor numbers ranging from 1 to 10 (1x10), where the number in the f-th column represents the probability that the number of factors for the response is f. |
Note that Python and the libraries numpy and onnxruntime are required.

First, please ensure that Python is installed on your computer and that Python is included in the system's PATH environment variable. If not, please download and install it from the official website (https://www.python.org/).

If you encounter an error when running this function stating that the numpy or onnxruntime module is missing:

Error in py_module_import(module, convert = convert) :
ModuleNotFoundError: No module named 'numpy'

or

Error in py_module_import(module, convert = convert) :
ModuleNotFoundError: No module named 'onnxruntime'

this means that the numpy or onnxruntime library is missing from your Python environment. If you are using Windows or macOS, please run the command pip install numpy or pip install onnxruntime in Command Prompt or Windows PowerShell (Windows), or Terminal (macOS). If you are using Linux, please ensure that pip is installed and use the command pip install numpy or pip install onnxruntime to install the missing libraries.
Haijiang Qin <[email protected]>
Auerswald, M., & Moshagen, M. (2019). How to determine the number of factors to retain in exploratory factor analysis: A comparison of extraction methods under realistic conditions. Psychological Methods, 24(4), 468-491. https://doi.org/10.1037/met0000200.
Braeken, J., & van Assen, M. A. L. M. (2017). An empirical Kaiser criterion. Psychological Methods, 22(3), 450-466. https://doi.org/10.1037/met0000074.
Goretzko, D., & Buhner, M. (2020). One model to rule them all? Using machine learning algorithms to determine the number of factors in exploratory factor analysis. Psychological Methods, 25(6), 776-786. https://doi.org/10.1037/met0000262.
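A minimal usage sketch, following the same calling pattern as the other retention functions (requires the Python setup described in the Note):

library(EFAfactors)
set.seed(123)
data(data.bfi)
response <- as.matrix(data.bfi[, 1:25])
response <- na.omit(response)
response[, c(1, 9, 10, 11, 12, 22, 25)] <- 6 - response[, c(1, 9, 10, 11, 12, 22, 25)] + 1

## requires Python with numpy and onnxruntime; see Note
DNN.obj <- DNN_predictor(response)
print(DNN.obj$nfact)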
A function that performs clustering on items by calling hclust, which carries out hierarchical cluster analysis on a set of dissimilarities. The items are continuously clustered in pairs until all items are grouped into a single cluster, at which point the process stops.
EFAhclust( response, dissimilarity.type = "R", method = "ward.D", cor.type = "pearson", use = "pairwise.complete.obs", nfact.max = 10, plot = TRUE )
EFAhclust( response, dissimilarity.type = "R", method = "ward.D", cor.type = "pearson", use = "pairwise.complete.obs", nfact.max = 10, plot = TRUE )
response |
A required |
dissimilarity.type |
A character indicating which kind of dissimilarity is to be computed. One of "R" (default) for the correlation coefficient or "E" for Euclidean distance. |
method |
The agglomeration method to be used. This should be (an unambiguous abbreviation of) one of "ward.D", "ward.D2", "single", "complete", "average" (= UPGMA), "mcquitty" (= WPGMA), "median" (= WPGMC) or "centroid" (= UPGMC). (default = "ward.D") @seealso hclust. |
cor.type |
A character string indicating which correlation coefficient (or covariance) is to be computed. One of "pearson" (default), "kendall", or "spearman". @seealso cor. |
use |
An optional character string giving a method for computing covariances in the presence of missing values. This must be one of the strings "everything", "all.obs", "complete.obs", "na.or.complete", or "pairwise.complete.obs" (default). @seealso cor. |
nfact.max |
The maximum number of factors considered by the Second-Order Difference (SOD) approach. (default = 10) |
plot |
A Boolean variable that will print the EFAhclust plot when set to TRUE, and will not print it when set to FALSE. @seealso plot.EFAhclust. (Default = TRUE) |
Hierarchical cluster analysis always merges the two nodes with the smallest dissimilarity, forming a new node in the process. This continues until all nodes are merged into one large node, at which point the algorithm terminates. This method undoubtedly creates a hierarchical structure by the end of the process, which encompasses the relationships between all items: items with high correlation have short connecting lines between them, while items with low correlation have longer lines. This hierarchical structure is well-suited to be represented as a binary tree. In this representation, the dissimilarity between two nodes can be indicated by the height of the tree nodes; the greater the difference between nodes, the higher the height of the tree nodes connecting them (the longer the line). Researchers can decide whether two nodes belong to the same cluster based on the height differences between nodes, which, in exploratory factor analysis, represents whether these two nodes belong to the same latent factor.
The Second-Order Difference (SOD) approach is a commonly used method for finding the "elbow" (the point of greatest slope change). According to the principles of exploratory factor analysis, items belonging to different latent factors have lower correlations, while items under the same factor are more highly correlated. In hierarchical clustering, this is reflected in the height of the nodes in the dendrogram, with differences in node heights representing the relationships between items. By sorting all node heights in descending order and applying the SOD method to locate the elbow, the number of factors can be determined. @seealso EFAkmeans
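The SOD rule itself reduces to a second-order difference over the sorted heights; a toy sketch follows, where the exact indexing is an assumption (see the nfact.SOD component returned by EFAhclust):

heights <- c(4.8, 3.9, 1.6, 1.1, 0.9, 0.8) ## toy node heights, descending
sod <- diff(heights, differences = 2)      ## second-order differences
nfact.SOD <- which.max(sod) + 1            ## elbow -> suggested number of factors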
An object of class EFAhclust is a list containing the following components:
hc |
An object of class hclust, as returned by the hclust function. |
cor.response |
A matrix of dimension I × I containing the correlations between the I items. |
clusters |
A list containing all the clusters. |
heights |
A vector containing all the heights of the cluster tree. The heights are arranged in descending order. |
nfact.SOD |
The number of factors to be retained by the Second-Order Difference (SOD) approach. |
Batagelj, V. (1988). Generalized Ward and related clustering problems. In H. H. Bock (Ed.), Classification and Related Methods of Data Analysis: The First Conference of the International Federation of Classification Societies (IFCS), Amsterdam.
Murtagh, F., & Legendre, P. (2014). Ward’s Hierarchical Agglomerative Clustering Method: Which Algorithms Implement Ward’s Criterion? Journal of Classification, 31(3), 274-295. https://doi.org/10.1007/s00357-014-9161-z.
library(EFAfactors)
set.seed(123)

## Take the data.bfi dataset as an example.
data(data.bfi)
response <- as.matrix(data.bfi[, 1:25]) ## loading data
response <- na.omit(response) ## Remove samples with NA/missing values

## Transform the scores of reverse-scored items to normal scoring
response[, c(1, 9, 10, 11, 12, 22, 25)] <- 6 - response[, c(1, 9, 10, 11, 12, 22, 25)] + 1

## Run EFAhclust function with default parameters.
EFAhclust.obj <- EFAhclust(response)
plot(EFAhclust.obj)

## Get the heights.
heights <- EFAhclust.obj$heights
print(heights)

## Get the nfact retained by SOD
nfact.SOD <- EFAhclust.obj$nfact.SOD
print(nfact.SOD)
A function that applies the Very Simple Structure (VSS), Comparative Fit Index (CFI), MAP, and other criteria to determine the appropriate number of factors, by calling VSS and fa.
EFAindex( response, nfact.max = 10, cor.type = "cor", use = "pairwise.complete.obs" )
response |
A required N × I matrix or data.frame consisting of the responses of N examinees to I items. |
nfact.max |
The maximum number of factors considered. (default = 10) |
cor.type |
How to find the correlations: "cor" is Pearson, "cov" is covariance, "tet" is tetrachoric, "poly" is polychoric, "mixed" uses mixed cor for a mixture of tetrachorics, polychorics, Pearsons, biserials, and polyserials, "Yuleb" is Yule-Bonett, "Yuleq" and "YuleY" are the obvious Yule coefficients as appropriate. |
use |
An optional character string giving a method for computing covariances in the presence of missing values. This must be one of the strings "everything", "all.obs", "complete.obs", "na.or.complete", or "pairwise.complete.obs" (default). @seealso cor. |
A matrix with the following components:
The Comparative Fit Index (CFI) for each number of factors.
Root Mean Square Error of Approximation (RMSEA) for each number of factors.
Standardized Root Mean Square Residual.
Velicer's MAP values (lower values are better).
Bayesian Information Criterion (BIC) for each number of factors.
Sample-size Adjusted Bayesian Information Criterion (SABIC) for each number of factors.
Chi-square statistic from the factor analysis output.
Degrees of freedom.
Probability that the residual matrix is greater than 0.
Empirically found chi-square statistic.
Empirically found mean residual corrected for degrees of freedom.
Empirically found BIC based on the empirically found chi-square statistic.
VSS fit with complexity 1.
Squared residual correlations.
Factor fit of the complete model.
library(EFAfactors)
set.seed(123)

## Take the data.bfi dataset as an example.
data(data.bfi)
response <- as.matrix(data.bfi[, 1:25]) ## loading data
response <- na.omit(response) ## Remove samples with NA/missing values

## Transform the scores of reverse-scored items to normal scoring
response[, c(1, 9, 10, 11, 12, 22, 25)] <- 6 - response[, c(1, 9, 10, 11, 12, 22, 25)] + 1

## Run EFAindex function with default parameters.
EFAindex.matrix <- EFAindex(response)
print(EFAindex.matrix)
A function that performs the K-means algorithm on items by calling kmeans.
EFAkmeans(response, nfact.max = 10, plot = TRUE)
response |
A required N × I matrix or data.frame consisting of the responses of N examinees to I items. |
nfact.max |
The maximum number of factors considered by the EFAkmeans approach. (default = 10) |
plot |
A Boolean variable that will print the EFAkmeans plot when set to TRUE, and will not print it when set to
FALSE. @seealso |
K-means is a well-established and widely used classical clustering algorithm. It is an unsupervised machine learning algorithm that requires the number of clusters K to be specified in advance. After K-means terminates, the total within-cluster sum of squares (WSS) can be calculated to represent the goodness of fit of the clustering:
$$WSS = \sum_{C_k \in C} \sum_{x_i \in C_k} \left\| x_i - \mu_k \right\|^2$$

where $C$ is the set of all clusters, $C_k$ is the k-th cluster, $x_i$ represents each item in the cluster $C_k$, and $\mu_k$ is the centroid of cluster $C_k$.
Similar to the scree plot where eigenvalues decrease as the number of factors increases,
WSS also decreases as K increases. A "significant reduction" in WSS at a particular K may suggest that K is the
most appropriate number of clusters, which in exploratory factor analysis implies that the number of factors is K.
The "significant reduction" can be identified using the Second-Order Difference (SOD) approach. @seealso EFAkmeans
An object of class EFAkmeans is a list containing the following components:
wss |
A vector containing all within-cluster sum of squares (WSS). |
nfact.SOD |
The number of factors to be retained by the Second-Order Difference (SOD) approach. |
library(EFAfactors)
set.seed(123)

## Take the data.bfi dataset as an example.
data(data.bfi)
response <- as.matrix(data.bfi[, 1:25]) ## loading data
response <- na.omit(response) ## Remove samples with NA/missing values

## Transform the scores of reverse-scored items to normal scoring
response[, c(1, 9, 10, 11, 12, 22, 25)] <- 6 - response[, c(1, 9, 10, 11, 12, 22, 25)] + 1

## Run EFAkmeans function with default parameters.
EFAkmeans.obj <- EFAkmeans(response)
plot(EFAkmeans.obj)

## Get the WSS values.
wss <- EFAkmeans.obj$wss
print(wss)

## Get the nfact retained by SOD
nfact.SOD <- EFAkmeans.obj$nfact.SOD
print(nfact.SOD)
This function generates a scree plot to display the eigenvalues of the correlation matrix computed from the given response data. The scree plot helps in determining the number of factors to retain in exploratory factor analysis by examining the point at which the eigenvalues start to level off, indicating less variance explained by additional factors.
EFAscreet( response, fa = "pc", nfact = 1, cor.type = "pearson", use = "pairwise.complete.obs" )
response |
A required N × I matrix or data.frame consisting of the responses of N examinees to I items. |
fa |
A string that determines the method used to obtain eigenvalues. If 'pc', it represents Principal Component Analysis (PCA); if 'fa', it represents Principal Axis Factoring (a widely used Factor Analysis method; @seealso factor.analysis). (default = 'pc') |
nfact |
A numeric value that specifies the number of factors to extract, only effective when fa = 'fa'. (default = 1) |
cor.type |
A character string indicating which correlation coefficient (or covariance) is to be computed. One of "pearson" (default), "kendall", or "spearman". @seealso cor. |
use |
An optional character string giving a method for computing covariances in the presence of missing values. This must be one of the strings "everything", "all.obs", "complete.obs", "na.or.complete", or "pairwise.complete.obs" (default). @seealso cor. |
An object of class EFAscreet is a list containing the following components:
eigen.value |
A vector containing the empirical eigenvalues |
library(EFAfactors)
set.seed(123)

## Take the data.bfi dataset as an example.
data(data.bfi)
response <- as.matrix(data.bfi[, 1:25]) ## loading data
response <- na.omit(response) ## Remove samples with NA/missing values

## Transform the scores of reverse-scored items to normal scoring
response[, c(1, 9, 10, 11, 12, 22, 25)] <- 6 - response[, c(1, 9, 10, 11, 12, 22, 25)] + 1

## Run EFAscreet function with default parameters.
EFAscreet.obj <- EFAscreet(response)
plot(EFAscreet.obj)
This function is used to simulate data that conforms to the theory of exploratory factor analysis, with a high degree of customization for the variables involved.
EFAsim.data( nfact, vpf, N = 500, distri = "normal", fc = "R", pl = "R", cl = "R", low.vpf = 5, up.vpf = 15, a = NULL, b = NULL, vis = TRUE, seed = NULL )
nfact |
A numeric value specifying the number of factors to simulate. |
vpf |
A numeric or character value specifying the number of items under each factor. If a numeric value is provided, it must be larger than 2, and the number of items under each factor will be fixed to this value. If a character value is provided, it must be one of 'S', 'M', 'L', or 'R'. These represent random selection of items under each factor from |
N |
A numeric value specifying the number of examinees to simulate. |
distri |
A character, either 'normal' or 'beta', indicating whether the simulated data will follow a standard multivariate normal distribution or a multivariate beta distribution. |
fc |
A numeric or character value specifying the degree of correlation between factors. If a numeric value is provided, it must be within the range of 0 to 0.75, and the correlation between all factors will be fixed at this value. If a character value is provided, it must be 'R', and the correlations between factors will be randomly selected from |
pl |
A numeric or character value specifying the size of the primary factor loadings. If a numeric value is provided, it must be within the range of 0 to 1, and all primary factor loadings in the loading matrix will be fixed at this value. If a character value is provided, it must be one of 'L', 'M', 'H', or 'R', representing |
cl |
A numeric or character value specifying the size of cross-loadings. If a numeric value is provided, it must be within the range of 0 to 0.5, and all cross-loadings in the loading matrix will be fixed at this value. If a character value is provided, it must be one of 'L', 'H', 'None', or 'R', representing |
low.vpf |
A numeric value specifying the minimum number of items per factor, must be larger than 2, effective only when |
up.vpf |
A numeric value specifying the maximum number of items per factor, effective only when |
a |
A numeric or NULL specifying the 'a' parameter of the beta distribution, effective only when |
b |
A numeric or NULL specifying the 'b' parameter of the beta distribution, effective only when |
vis |
A logical value indicating whether to print process information. (default = TRUE) |
seed |
A numeric or NULL specifying the random seed. If a numeric value is provided, it will be used as the seed. If NULL, the current timestamp will be used. (default = NULL) |
A population correlation matrix was created for each data set based on the following decomposition:
$$\Sigma = \Lambda \Phi \Lambda^T + \Delta$$

where $\Lambda$ is the loading matrix, $\Phi$ is the factor correlation matrix, and $\Delta$ is a diagonal matrix, with $\Delta = I - \mathrm{diag}(\Lambda \Phi \Lambda^T)$. The purpose of $\Delta$ is to ensure that the diagonal elements of $\Sigma$ are 1.
The response data for each subject was simulated using the following formula:
$$X = \Lambda \xi + \epsilon$$

where $\xi$ follows a standard multivariate normal distribution (distri = 'normal') or a multivariate beta distribution (distri = 'beta'), representing the contribution of the latent factors, and $\epsilon$ is the residual term following a standard normal distribution (distri = 'normal') or a beta distribution (distri = 'beta'). $\xi$ and $\epsilon$ are uncorrelated, and the elements of $\epsilon$ are mutually uncorrelated as well.
An object of class EFAdata is a list containing the following components:
loadings |
A simulated loading matrix. |
items |
A |
cor.factors |
A simulated factor correlation matrix. |
cor.items |
A simulated item correlation matrix. |
response |
A simulated response data matrix. |
Goretzko, D., & Buhner, M. (2020). One model to rule them all? Using machine learning algorithms to determine the number of factors in exploratory factor analysis. Psychological Methods, 25(6), 776-786. https://doi.org/10.1037/met0000262.
Auerswald, M., & Moshagen, M. (2019). How to determine the number of factors to retain in exploratory factor analysis: A comparison of extraction methods under realistic conditions. Psychological Methods, 24(4), 468-491. https://doi.org/10.1037/met0000200.
library(EFAfactors)

## Run EFAsim.data function with default parameters.
data.obj <- EFAsim.data(nfact = 3, vpf = 5, N = 500, distri = "normal",
                        fc = "R", pl = "R", cl = "R",
                        low.vpf = 5, up.vpf = 15, a = NULL, b = NULL,
                        vis = TRUE, seed = NULL)
head(data.obj$loadings)
This function implements a voting method to determine the most appropriate number of factors in exploratory factor analysis (EFA). The function accepts a vector of votes, where each value represents the number of factors suggested by different EFA approaches. If there is a clear winner (a single number of factors with the most votes), that number is returned. In case of a tie, the function returns the first value among the tied results and outputs a message. The result is returned as an object of class EFAvote, which can be printed and plotted.
EFAvote(votes, vis = TRUE, plot = TRUE)
votes |
A vector of integers, where each element corresponds to the number of factors suggested by an EFA method. |
vis |
Logical, whether to print the results of the voting. Defaults to TRUE. |
plot |
Logical, whether to display a pie chart of the voting results. Defaults to TRUE. |
An object of class EFAvote, which is a list containing:
nfact |
The number of factors with the most votes. If there is a tie, the first one in the order is returned. |
votes |
The original vector of votes. |
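At its core the vote is a majority count; a one-line sketch of the idea (the real function's tie handling follows the description above):

votes <- c(Hull = 5, CD = 5, PA = 5, EKC = 6, XGB = 6, DNN = 4)
as.integer(names(which.max(table(votes)))) ## 5, the modal number of factors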
library(EFAfactors)

nfacts <- c(5, 5, 5, 6, 6, 4)
names(nfacts) <- c("Hull", "CD", "PA", "EKC", "XGB", "DNN")
EFAvote.obj <- EFAvote(votes = nfacts)

# Visualize the voting results
plot(EFAvote.obj)
This function will apply the Empirical Kaiser Criterion (Braeken & van Assen, 2017) method to determine the number of factors. The method assumes that the distribution of eigenvalues asymptotically follows a Marcenko-Pastur distribution (Marcenko & Pastur, 1967). It calculates the reference eigenvalues based on this distribution and determines whether to retain a factor by comparing the size of the empirical eigenvalues to the reference eigenvalues.
EKC( response, cor.type = "pearson", use = "pairwise.complete.obs", vis = TRUE, plot = TRUE )
response |
A required N × I matrix or data.frame consisting of the responses of N examinees to I items. |
cor.type |
A character string indicating which correlation coefficient (or covariance) is to be computed. One of "pearson" (default), "kendall", or "spearman". @seealso cor. |
use |
An optional character string giving a method for computing covariances in the presence of missing values. This must be one of the strings "everything", "all.obs", "complete.obs", "na.or.complete", or "pairwise.complete.obs" (default). @seealso cor. |
vis |
A Boolean variable that will print the factor retention results when set to TRUE, and will not print when set to FALSE. (default = TRUE) |
plot |
A Boolean variable that will print the EKC plot when set to TRUE, and will not print it when set to FALSE. @seealso plot.EKC. (Default = TRUE) |
The Empirical Kaiser Criterion (EKC; Auerswald & Moshagen, 2019; Braeken & van Assen, 2017)
refines Kaiser-Guttman Criterion
by accounting for random sample variations in eigenvalues. At the population level, the EKC is
equivalent to the original Kaiser-Guttman Criterion, extracting all factors whose eigenvalues
from the correlation matrix are greater than one. However, at the sample level, it adjusts for
the distribution of eigenvalues in normally distributed data. Under the null model, the eigenvalue
distribution follows the Marčenko-Pastur distribution (Marčenko & Pastur, 1967) asymptotically.
The upper bound of this distribution serves as the reference eigenvalue for the first eigenvalue, so

$$\lambda_1^{ref} = \left(1 + \sqrt{I/N}\right)^2$$

which is determined by N individuals and I items. For subsequent eigenvalues, adjustments are made based on the variance explained by previous factors. The j-th reference eigenvalue is:

$$\lambda_j^{ref} = \max\left[ \frac{I - \sum_{i=1}^{j-1} \lambda_i}{I - j + 1} \left(1 + \sqrt{I/N}\right)^2,\; 1 \right]$$

The j-th reference eigenvalue is reduced according to the magnitude of earlier eigenvalues, since higher previous values mean less unexplained variance remains. As in the original Kaiser-Guttman Criterion, the reference eigenvalue cannot drop below one. The number of factors is then

$$F = \sum_{j=1}^{I} 1\left(\lambda_j > \lambda_j^{ref}\right)$$

Here, F represents the number of factors determined by the EKC, and 1(.) is the indicator function, which equals 1 when the condition is true, and 0 otherwise.
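A direct transcription of the reference-eigenvalue formula above (illustrative; EKC() computes this internally and returns it as eigen.ref):

library(EFAfactors)
data(data.bfi)
response <- na.omit(as.matrix(data.bfi[, 1:25]))
N <- nrow(response); I <- ncol(response)
lambda <- eigen(cor(response))$values
ref <- sapply(1:I, function(j)
  max((I - sum(lambda[seq_len(j - 1)])) / (I - j + 1) * (1 + sqrt(I / N))^2, 1))
nfact <- sum(lambda > ref) ## indicator-sum form of F given above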
An object of class EKC is a list containing the following components:
nfact |
The number of factors to be retained. |
eigen.value |
A vector containing the empirical eigenvalues |
eigen.ref |
A vector containing the reference eigenvalues |
Haijiang Qin <[email protected]>
Auerswald, M., & Moshagen, M. (2019). How to determine the number of factors to retain in exploratory factor analysis: A comparison of extraction methods under realistic conditions. Psychological Methods, 24(4), 468-491. https://doi.org/10.1037/met0000200.
Braeken, J., & van Assen, M. A. L. M. (2017). An empirical Kaiser criterion. Psychological Methods, 22(3), 450-466. https://doi.org/10.1037/met0000074.
Marčenko, V. A., & Pastur, L. A. (1967). Distribution of eigenvalues for some sets of random matrices. Mathematics of the USSR-Sbornik, 1, 457-483. http://dx.doi.org/10.1070/SM1967v001n04ABEH001994
library(EFAfactors)
set.seed(123)

## Take the data.bfi dataset as an example.
data(data.bfi)
response <- as.matrix(data.bfi[, 1:25]) ## loading data
response <- na.omit(response) ## Remove samples with NA/missing values

## Transform the scores of reverse-scored items to normal scoring
response[, c(1, 9, 10, 11, 12, 22, 25)] <- 6 - response[, c(1, 9, 10, 11, 12, 22, 25)] + 1

## Run EKC function with default parameters.
EKC.obj <- EKC(response)
print(EKC.obj)
plot(EKC.obj)

## Get the eigen.value, eigen.ref and nfact results.
eigen.value <- EKC.obj$eigen.value
eigen.ref <- EKC.obj$eigen.ref
nfact <- EKC.obj$nfact
print(eigen.value)
print(eigen.ref)
print(nfact)
This function is used to extract the features required by the Pre-Trained Deep Neural Network (DNN). @seealso DNN_predictor
extractor.feature.DNN( response, cor.type = "pearson", use = "pairwise.complete.obs" )
response |
A required N × I matrix or data.frame consisting of the responses of N examinees to I items. |
cor.type |
A character string indicating which correlation coefficient (or covariance) is to be computed. One of "pearson" (default), "kendall", or "spearman". @seealso cor. |
use |
An optional character string giving a method for computing covariances in the presence of missing values. This must be one of the strings "everything", "all.obs", "complete.obs", "na.or.complete", or "pairwise.complete.obs" (default). @seealso cor. |
A total of two types of features (6 kinds, making up 54 features in total) will be extracted, and they are as follows:

1. Clustering-Based Features
Hierarchical clustering is performed with correlation coefficients as dissimilarity. The top 9 tree node heights are calculated, and all heights are divided by the maximum height. The heights from the 2nd to 9th nodes are used as features. @seealso EFAhclust
Hierarchical clustering with Euclidean distance as dissimilarity is performed. The top 9 tree node heights are calculated, and all heights are divided by the maximum height. The heights from the 2nd to 9th nodes are used as features. @seealso EFAhclust
K-means clustering is applied with the number of clusters ranging from 1 to 9. The within-cluster sum of squares (WSS) for clusters 2 to 9 are divided by the WSS for a single cluster. @seealso EFAkmeans
These three features are based on clustering algorithms. The purpose of division is to normalize the data. These clustering metrics often contain information unrelated to the number of factors, such as the number of items and the number of respondents, which can be avoided by normalization. The reason for using the 2nd to 9th data is that only the top F-1 data are needed to determine the number of factors F. The first data point is fixed at 1 after the division operations, so it is excluded. This approach helps in model simplification.
2. Traditional Exploratory Factor Analysis Features (Eigenvalues)
The top 10 largest eigenvalues.
The ratio of the top 10 largest eigenvalues to the corresponding reference eigenvalues from Empirical Kaiser Criterion (EKC; Braeken & van Assen, 2017). @seealso EKC
The cumulative variance proportion of the top 10 largest eigenvalues.
Only the top 10 elements are used to simplify the model.
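A sketch of the three eigenvalue-based feature blocks (the reference values follow the formula in the EKC documentation; assumptions noted in comments):

library(EFAfactors)
data(data.bfi)
response <- na.omit(as.matrix(data.bfi[, 1:25]))
N <- nrow(response); I <- ncol(response)
lambda <- eigen(cor(response))$values
## EKC reference eigenvalues (Braeken & van Assen, 2017)
ref <- sapply(1:I, function(j)
  max((I - sum(lambda[seq_len(j - 1)])) / (I - j + 1) * (1 + sqrt(I / N))^2, 1))
feat.eigs  <- lambda[1:10]                         ## top 10 eigenvalues
feat.ratio <- lambda[1:10] / ref[1:10]             ## ratios to the EKC reference values
feat.cum   <- (cumsum(lambda) / sum(lambda))[1:10] ## cumulative variance proportions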
A matrix (1×54) containing all the features for determining the number of factors by the DNN.
Haijiang Qin <[email protected]>
library(EFAfactors)
set.seed(123)

## Take the data.bfi dataset as an example.
data(data.bfi)
response <- as.matrix(data.bfi[, 1:25]) ## loading data
response <- na.omit(response) ## Remove samples with NA/missing values

## Transform the scores of reverse-scored items to normal scoring
response[, c(1, 9, 10, 11, 12, 22, 25)] <- 6 - response[, c(1, 9, 10, 11, 12, 22, 25)] + 1

## Run extractor.feature.DNN function with default parameters.
features <- extractor.feature.DNN(response)
print(features)
This function will extract 181 features from the data according to the method by Goretzko & Buhner (2020).
extractor.feature.FF( response, cor.type = "pearson", use = "pairwise.complete.obs" )
response |
A required N × I matrix or data.frame consisting of the responses of N examinees to I items. |
cor.type |
A character string indicating which correlation coefficient (or covariance) is to be computed. One of "pearson" (default), "kendall", or "spearman". @seealso cor. |
use |
An optional character string giving a method for computing covariances in the presence of missing values. This must be one of the strings "everything", "all.obs", "complete.obs", "na.or.complete", or "pairwise.complete.obs" (default). @seealso cor. |
The code for the extractor.feature.FF function is implemented based on the publicly available code by Goretzko & Buhner (2020) (https://osf.io/mvrau/). The extracted features are completely consistent with the 181 features described in the original text by Goretzko & Buhner (2020).
These features include:
1. Number of examinees
2. Number of items
3. Number of eigenvalues greater than 1
4. Proportion of variance explained by the 1st eigenvalue
5. Proportion of variance explained by the 2nd eigenvalue
6. Proportion of variance explained by the 3rd eigenvalue
7. Number of eigenvalues greater than 0.7
8. Standard deviation of the eigenvalues
9. Number of eigenvalues accounting for 50% of variance
10. Number of eigenvalues accounting for 75% of variance
11. L1-norm of the correlation matrix
12. Frobenius-norm of the correlation matrix
13. Maximum-norm of the correlation matrix
14. Average of the off-diagonal correlations
15. Spectral-norm of the correlation matrix
16. Number of correlations smaller or equal to 0.1
17. Average of the initial communality estimates
18. Determinant of the correlation matrix
19. Measure of sampling adequacy (MSA; Kaiser, 1970)
20. Gini coefficient (Gini, 1921) of the correlation matrix
21. Kolm measure of inequality (Kolm, 1999) of the correlation matrix
22-101. Eigenvalues from Principal Component Analysis (PCA), padded with -1000 if insufficient
102-181. Eigenvalues from Factor Analysis (FA), fixed at 1 factor, padded with -1000 if insufficient
A matrix (1×181) containing all the 181 features (Goretzko & Buhner, 2020).
Goretzko, D., & Buhner, M. (2020). One model to rule them all? Using machine learning algorithms to determine the number of factors in exploratory factor analysis. Psychological Methods, 25(6), 776-786. https://doi.org/10.1037/met0000262.
library(EFAfactors)
set.seed(123)

## Take the data.bfi dataset as an example.
data(data.bfi)
response <- as.matrix(data.bfi[, 1:25]) ## loading data
response <- na.omit(response) ## Remove samples with NA/missing values

## Transform the scores of reverse-scored items to normal scoring
response[, c(1, 9, 10, 11, 12, 22, 25)] <- 6 - response[, c(1, 9, 10, 11, 12, 22, 25)] + 1

## Run extractor.feature.FF function with default parameters.
features <- extractor.feature.FF(response)
print(features)
This function performs factor analysis using the Principal Axis Factoring (PAF) method. The process involves extracting factors from an initial correlation matrix and iteratively refining the factor estimates until convergence is achieved.
factor.analysis( data, nfact = 1, iter.max = 1000, criterion = 0.001, cor.type = "pearson", use = "pairwise.complete.obs" )
data |
A data.frame or matrix of responses. If the matrix is square, it is assumed to be a correlation matrix. Otherwise, correlations (with pairwise deletion) will be computed. |
nfact |
The number of factors to extract. (default = 1) |
iter.max |
The maximum number of iterations for the factor extraction process. Default is 1000. |
criterion |
The convergence criterion for the iterative process. The extraction process will stop when the change in communalities is less than this value. Default is 0.001. |
cor.type |
A character string indicating which correlation coefficient (or covariance) is to be computed. One of "pearson" (default), "kendall", or "spearman". @seealso cor. |
use |
An optional character string giving a method for computing covariances in the presence of missing values. This must be one of the strings "everything", "all.obs", "complete.obs", "na.or.complete", or "pairwise.complete.obs" (default). @seealso cor. |
The Principal Axis Factoring (PAF) method involves the following steps:
Step 1. **Basic Principle**: The core principle of factor analysis using Principal Axis Factoring (PAF) is expressed as:

\[ R \approx \Lambda \Lambda^T + \Psi, \qquad h_i^2 = \sum_{j=1}^{F} \lambda_{ij}^2 \]

where \( \Lambda \) is the matrix of factor loadings and \( \Psi \) is the diagonal matrix of unique variances. Here, \( h_i^2 \) represents the portion of the i-th item's variance explained by the factor model. \( h_i^2 \) reflects the amount of total variance in the variable accounted for by the factors in the model, indicating the explanatory power of the factor model for that variable.
Step 2. **Factor Extraction by Iteration**:

- Initial Communalities: Compute the initial communalities as the squared multiple correlations:

\[ h_i^{2(0)} = 1 - \frac{1}{r^{ii(0)}} \]

where \( h_i^{2(t)} \) is the communality of the i-th item in the \( t \)-th iteration, and \( r^{ii(t)} \) is the i-th diagonal element of the inverse of the correlation matrix in the \( t \)-th iteration.

- Extract Factors and Update Communalities:

\[ \lambda_{ij} = \sqrt{\tau_j} \, v_{ij}, \qquad h_i^{2(t)} = \sum_{j=1}^{F} \lambda_{ij}^2 \]

where \( \lambda_{ij} \) represents the j-th factor loading for the i-th item, \( \tau_j \) is the j-th eigenvalue, \( h_i^{2(t)} \) is the communality of the i-th item in the \( t \)-th iteration, and \( v_{ij} \) is the j-th value of the i-th item in the eigenvector matrix \( V \).
Step 3. **Iterative Refinement**:

- Calculate the change between \( h_i^{2(t)} \) and \( h_i^{2(t+1)} \):

\[ \Delta h = \max_i \left| h_i^{2(t+1)} - h_i^{2(t)} \right| \]

where \( \Delta h \) represents the change in communalities between iterations \( t \) and \( t+1 \).

- Convergence Criterion: Continue iterating until the change in communalities is less than the specified criterion \( \varepsilon \):

\[ \Delta h < \varepsilon \]

The iterative process is implemented using C++ code to ensure computational speed.
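For illustration, the whole procedure can be sketched in R as follows; this is a simplified sketch (convergence is checked on the maximum absolute change in communalities), not the package's C++ implementation:

## Minimal PAF sketch; a simplified illustration, not the package's C++ implementation.
paf_sketch <- function(R, nfact = 1, iter.max = 1000, criterion = 0.001) {
  h2 <- 1 - 1 / diag(solve(R))          # initial communalities: squared multiple correlations
  for (t in seq_len(iter.max)) {
    Rh <- R
    diag(Rh) <- h2                      # reduced correlation matrix
    e <- eigen(Rh, symmetric = TRUE)
    L <- e$vectors[, 1:nfact, drop = FALSE] %*%
      diag(sqrt(pmax(e$values[1:nfact], 0)), nfact)
    h2.new <- rowSums(L^2)              # updated communalities
    if (max(abs(h2.new - h2)) < criterion) { h2 <- h2.new; break }
    h2 <- h2.new
  }
  list(loadings = L, eigen.value = e$values, H2 = h2)
}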
A list containing:
loadings |
The extracted factor loadings. |
eigen.value |
The eigenvalues of the correlation matrix. |
H2 |
A vector that contains the explanatory power of the factor model for all items. |
Haijiang Qin <[email protected]>
library(EFAfactors)
set.seed(123)

## Take the data.bfi dataset as an example.
data(data.bfi)
response <- as.matrix(data.bfi[, 1:25]) ## loading data
response <- na.omit(response) ## Remove samples with NA/missing values

## Transform the scores of reverse-scored items to normal scoring
response[, c(1, 9, 10, 11, 12, 22, 25)] <- 6 - response[, c(1, 9, 10, 11, 12, 22, 25)] + 1

## Run factor.analysis function to extract 5 factors
PAF.obj <- factor.analysis(response, nfact = 5)

## Get the loadings, eigen.value and H2 results.
loadings <- PAF.obj$loadings
eigen.value <- PAF.obj$eigen.value
H2 <- PAF.obj$H2
print(loadings)
print(eigen.value)
print(H2)
This function invokes a tuned XGBoost model (Goretzko & Buhner, 2020; Goretzko, 2022; Goretzko & Ruscio, 2024) that can reliably determine the number of factors. The maximum number of factors the model can evaluate is 8.
FF( response, cor.type = "pearson", use = "pairwise.complete.obs", vis = TRUE, plot = TRUE )
response |
A required |
cor.type |
A character string indicating which correlation coefficient (or covariance) is to be computed. One of "pearson" (default), "kendall", or "spearman". @seealso cor. |
use |
An optional character string giving a method for computing covariances in the presence of missing values. This must be one of the strings "everything", "all.obs", "complete.obs", "na.or.complete", or "pairwise.complete.obs" (default). @seealso cor. |
vis |
A Boolean variable that will print the factor retention results when set to TRUE, and will not print when set to FALSE. (default = TRUE) |
plot |
A Boolean variable that will print the FF plot when set to TRUE, and will not print it when set to FALSE. @seealso plot.FF. (Default = TRUE) |
A total of 500,000 datasets were simulated to extract features for training the tuned XGBoost model (Goretzko & Buhner, 2020; Goretzko, 2022). Each dataset was generated according to the following specifications:
Factor number: F ~ U[1,8]
Sample size: N ~ U[200,1000]
Number of variables per factor: vpf ~ U[3,10]
Factor correlation: fc ~ U[0.0,0.4]
Primary loadings: pl ~ U[0.35,0.80]
Cross-loadings: cl ~ U[0.0,0.2]
A population correlation matrix was created for each dataset based on the following decomposition:

\[ \Sigma = \Lambda \Phi \Lambda^T + \Delta \]

where \( \Lambda \) is the loading matrix, \( \Phi \) is the factor correlation matrix, and \( \Delta \) is a diagonal matrix with \( \Delta = I - \mathrm{diag}(\Lambda \Phi \Lambda^T) \). The purpose of \( \Delta \) is to ensure that the diagonal elements of \( \Sigma \) are 1.

The response data for each subject were simulated using the following formula:

\[ x = \Lambda f + \Delta^{1/2} e \]

where \( f \) follows a normal distribution \( N(0, \Phi) \), representing the contribution of the latent factors, and \( e \) is the residual term following a standard normal distribution. \( f \) and \( e \) are uncorrelated, and the elements of \( e \) are also uncorrelated with each other.
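For illustration, one dataset under this design could be generated as follows; this is a hedged sketch (MASS::mvrnorm is assumed available, and the uniquenesses are floored at 0 for safety), not the original simulation code:

## Hedged sketch of one simulated dataset under the design above.
set.seed(1)
F.n <- sample(1:8, 1)                       # number of factors
vpf <- sample(3:10, 1)                      # variables per factor
N   <- sample(200:1000, 1)                  # sample size
I   <- F.n * vpf
L <- matrix(runif(I * F.n, 0, 0.2), I, F.n)               # cross-loadings
for (f in seq_len(F.n))                                   # primary loadings per factor
  L[((f - 1) * vpf + 1):(f * vpf), f] <- runif(vpf, 0.35, 0.80)
Phi <- matrix(runif(1, 0, 0.4), F.n, F.n); diag(Phi) <- 1 # factor correlations
Delta <- diag(pmax(1 - diag(L %*% Phi %*% t(L)), 0))      # uniquenesses, floored at 0
f.scores <- MASS::mvrnorm(N, rep(0, F.n), Phi)
X <- f.scores %*% t(L) + matrix(rnorm(N * I), N, I) %*% sqrt(Delta)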
For each simulated dataset, a total of 184 features are extracted and compiled into a feature vector. These features include:
1. Number of examinees
2. Number of items
3. Number of eigenvalues greater than 1
4. Proportion of variance explained by the 1st eigenvalue
5. Proportion of variance explained by the 2nd eigenvalue
6. Proportion of variance explained by the 3rd eigenvalue
7. Number of eigenvalues greater than 0.7
8. Standard deviation of the eigenvalues
9. Number of eigenvalues accounting for 50% of variance
10. Number of eigenvalues accounting for 75% of variance
11. L1-norm of the correlation matrix
12. Frobenius-norm of the correlation matrix
13. Maximum-norm of the correlation matrix
14. Average of the off-diagonal correlations
15. Spectral-norm of the correlation matrix
16. Number of correlations smaller or equal to 0.1
17. Average of the initial communality estimates
18. Determinant of the correlation matrix
19. Measure of sampling adequacy (MSA; Kaiser, 1970)
20. Gini coefficient (Gini, 1921) of the correlation matrix
21. Kolm measure of inequality (Kolm, 1999) of the correlation matrix
22. Number of factors retained by the PA method @seealso PA
23. Number of factors retained by the EKC method @seealso EKC
24. Number of factors retained by the CD method @seealso CD
25-104. Eigenvalues from Principal Component Analysis (PCA), padded with -1000 if insufficient
105-184. Eigenvalues from Factor Analysis (FA), fixed at 1 factor, padded with -1000 if insufficient
The code for the FF function is implemented based on the publicly available code by Goretzko & Buhner (2020) (https://osf.io/mvrau/). The tuned XGBoost model is also obtained from this site. However, to keep the R package streamlined, only the core components of the tuned XGBoost model are saved here. The discarded non-core parts do not affect predictive performance, but they contain a great deal of information about the model itself, such as the number of features, sample subsets, and data from the training process. For the complete tuned XGBoost model, please download it from https://osf.io/mvrau/.
An object of class FF
is a list
containing the following components:
nfact |
The number of factors to be retained. |
probability |
A matrix containing the probabilities for factor numbers ranging from 1 to 8 (1×8), where the number in the f-th column represents the probability that the number of factors for the response is f. |
features |
A matrix (1×184) containing all the features for determining the number of factors by the tuned XGBoost Model. |
Goretzko, D., & Buhner, M. (2020). One model to rule them all? Using machine learning algorithms to determine the number of factors in exploratory factor analysis. Psychological Methods, 25(6), 776-786. https://doi.org/10.1037/met0000262.
Goretzko, D. (2022). Factor retention in exploratory factor analysis with missing data. Educational and Psychological Measurement, 82(3), 444-464. https://doi.org/10.1177/00131644211022031.
library(EFAfactors)
set.seed(123)

## Take the data.bfi dataset as an example.
data(data.bfi)
response <- as.matrix(data.bfi[, 1:25]) ## loading data
response <- na.omit(response) ## Remove samples with NA/missing values

## Transform the scores of reverse-scored items to normal scoring
response[, c(1, 9, 10, 11, 12, 22, 25)] <- 6 - response[, c(1, 9, 10, 11, 12, 22, 25)] + 1

## Run FF function with default parameters.
FF.obj <- FF(response)
print(FF.obj)
plot(FF.obj)

## Get the probability and nfact results.
probability <- FF.obj$probability
nfact <- FF.obj$nfact
print(probability)
print(nfact)
This function simulates data with factors based on empirical data.
It represents the simulation data part of the CD function
and the CDF function. This function improves upon
GenDataPopulation by utilizing C++ code to achieve faster data simulation.
GenData( response, nfact = 1, N.pop = 10000, Max.Trials = 5, lr = 1, cor.type = "pearson", use = "pairwise.complete.obs", isSort = FALSE )
response |
A required |
nfact |
The number of factors to extract in factor analysis. (default = 1) |
N.pop |
Size of the finite population to simulate. (default = 10,000) |
Max.Trials |
The maximum number of consecutive trials without obtaining a lower RMSR. (default = 5) |
lr |
The learning rate for updating the correlation matrix during iteration. (default = 1) |
cor.type |
A character string indicating which correlation coefficient (or covariance) is to be computed. One of "pearson" (default), "kendall", or "spearman". @seealso cor. |
use |
An optional character string specifying a method for computing covariances in the presence of missing values. This must be one of the strings "everything", "all.obs", "complete.obs", "na.or.complete", or "pairwise.complete.obs" (default). @seealso cor. |
isSort |
Logical, determines whether the simulated data needs to be sorted in descending order. (default = FALSE) |
The core idea of GenData
is to start with the empirical data's correlation matrix
and iteratively approach data with nfact
factors. Any value in the simulated data must come
from the empirical data. The specific steps of GenData
are as follows:
1. Use the correlation matrix of the empirical data as the target, \( R_{target} \).

2. Simulate scores for \( N.pop \) examinees on \( nfact \) factors from a multivariate standard normal distribution, giving the factor-score matrix \( S \).

3. Simulate noise for \( N.pop \) examinees on \( I \) items, giving the noise matrix \( E \).

4. Initialize \( R_{temp} = R_{target} \), set the minimum Root Mean Square Residual \( RMSR_{min} = \infty \), and start the iteration process.

5. Extract nfact factors from \( R_{temp} \) and obtain the factor loadings matrix \( \Lambda \). Ensure that the first element of \( \Lambda \) is positive to standardize the direction.

6. Calculate the unique-factor loadings \( U \):

\[ u_i = \sqrt{1 - \sum_j \lambda_{ij}^2} \]

7. Calculate the simulated data \( X \):

\[ X = S \Lambda^T + E \, \mathrm{diag}(U) \]

8. Compute the correlation matrix of the simulated data, \( R_{simu} \).

9. Calculate the residual correlation matrix \( R_{resid} \) between the target matrix \( R_{target} \) and the simulated data's correlation matrix \( R_{simu} \):

\[ R_{resid} = R_{target} - R_{simu} \]

10. Calculate the current RMSR from the off-diagonal elements of \( R_{resid} \):

\[ RMSR = \sqrt{ \frac{ \sum_{i<j} R_{resid,ij}^2 }{ I(I-1)/2 } } \]

11. If \( RMSR < RMSR_{min} \), update \( R_{temp} = R_{temp} + lr \times R_{resid} \), set \( RMSR_{min} = RMSR \), save the current simulated data as the best solution, and reset the count of consecutive trials without improvement, \( cou = 0 \). Otherwise, increment \( cou = cou + 1 \).

Repeat steps (5) through (11) until \( cou = Max.Trials \). C++ code is used to speed up the iteration; a condensed R sketch of this loop follows.
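## Condensed sketch of the GenData iteration; a hedged illustration that reuses this
## package's factor.analysis for the PAF step, not the actual C++ implementation.
gen_data_sketch <- function(response, nfact = 1, N.pop = 10000, Max.Trials = 5, lr = 1) {
  R.target <- cor(response, use = "pairwise.complete.obs")
  I <- ncol(response)
  S <- matrix(rnorm(N.pop * nfact), N.pop, nfact)  # factor scores
  E <- matrix(rnorm(N.pop * I), N.pop, I)          # noise
  R.temp <- R.target; RMSR.min <- Inf; cou <- 0; X.best <- NULL
  while (cou < Max.Trials) {
    L <- factor.analysis(R.temp, nfact = nfact)$loadings  # PAF step
    if (L[1, 1] < 0) L <- -L                              # standardize direction
    U <- sqrt(pmax(1 - rowSums(L^2), 0))                  # unique-factor loadings
    X <- S %*% t(L) + E %*% diag(U)
    R.resid <- R.target - cor(X)
    RMSR <- sqrt(mean(R.resid[lower.tri(R.resid)]^2))
    if (RMSR < RMSR.min) {
      X.best <- X; RMSR.min <- RMSR; cou <- 0
      R.temp <- R.temp + lr * R.resid                     # move toward the target
    } else {
      cou <- cou + 1
    }
  }
  X.best
}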
An N.pop × I matrix containing the simulated data.
Ruscio, J., & Roche, B. (2012). Determining the number of factors to retain in an exploratory factor analysis using comparison data of known factorial structure. Psychological Assessment, 24, 282–292. http://dx.doi.org/10.1037/a0025697.
library(EFAfactors)
set.seed(123)

## Take the data.bfi dataset as an example.
data(data.bfi)
response <- as.matrix(data.bfi[, 1:25]) ## loading data
response <- na.omit(response) ## Remove samples with NA/missing values

## Transform the scores of reverse-scored items to normal scoring
response[, c(1, 9, 10, 11, 12, 22, 25)] <- 6 - response[, c(1, 9, 10, 11, 12, 22, 25)] + 1

data.simulated <- GenData(response, nfact = 1, N.pop = 10000)
head(data.simulated)
The Hull method is a heuristic approach used to determine the optimal number of common factors in factor analysis. It evaluates models with increasing numbers of factors and uses goodness-of-fit indices relative to the model degrees of freedom to select the best-fitting model. The method is known for its effectiveness and reliability compared to other methods like the scree plot.
Hull( response, fa = "pc", nfact = 1, cor.type = "pearson", use = "pairwise.complete.obs", vis = TRUE, plot = TRUE )
response |
A required |
fa |
A string that determines the method used to obtain eigenvalues in PA. If 'pc', it represents
Principal Component Analysis (PCA); if 'fa', it represents Principal Axis Factoring (a widely
used Factor Analysis method; @seealso |
nfact |
A numeric value that specifies the number of factors to extract, only effective when |
cor.type |
A character string indicating which correlation coefficient (or covariance) is to be computed. One of "pearson" (default), "kendall", or "spearman". @seealso cor. |
use |
An optional character string giving a method for computing covariances in the presence of missing values. This must be one of the strings "everything", "all.obs", "complete.obs", "na.or.complete", or "pairwise.complete.obs" (default). @seealso cor. |
vis |
A Boolean variable that will print the factor retention results when set to TRUE, and will not print when set to FALSE. (default = TRUE) |
plot |
A Boolean variable that will print the Hull plot when set to TRUE, and will not print it when set to FALSE. @seealso plot.Hull. (Default = TRUE) |
The Hull method (Lorenzo-Seva & Timmerman, 2011) is a heuristic approach used to determine the number of common factors in factor analysis. This method is similar to non-graphical variants of Cattell's scree plot but relies on goodness-of-fit indices relative to the model degrees of freedom. The Hull method finds the optimal number of factors by following these steps:
Calculate the goodness-of-fit index (CFI) and the model degrees of freedom (df; Lorenzo-Seva & Timmerman, 2011), which depend on the number of items \( I \) and the number of factors \( F \), for models with an increasing number of factors, up to a prespecified maximum equal to the nfact suggested by the PA method. The goodness-of-fit index is always the Comparative Fit Index (CFI), since it performs better under a wide range of conditions than alternatives such as RMSEA and SRMR (Auerswald & Moshagen, 2019; Lorenzo-Seva & Timmerman, 2011). @seealso EFAindex
Identify and exclude solutions that are less complex (with fewer factors) but have a higher fit index.
Further exclude solutions if their fit indices fall below the line connecting adjacent viable solutions.
Determine the number of factors as the solution where the ratio of the difference in goodness-of-fit indices to the difference in degrees of freedom is maximized (the st index; a sketch follows).
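A hedged R sketch of the st index of Lorenzo-Seva et al. (2011), given the CFI and df values of the surviving solutions; the handling of the end points is an assumption:

## Hedged sketch of the st index; end-point handling is an assumption.
st_index <- function(CFI, df) {
  n <- length(CFI)
  st <- rep(NA_real_, n)
  for (i in 2:(n - 1))
    st[i] <- ((CFI[i] - CFI[i - 1]) / (df[i] - df[i - 1])) /
             ((CFI[i + 1] - CFI[i]) / (df[i + 1] - df[i]))
  which.max(st)  # position of the solution with the sharpest elbow on the Hull
}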
A list with the following components:
nfact |
The optimal number of factors according to the Hull method. |
CFI |
A numeric vector of CFI values for each number of factors considered. |
df |
A numeric vector of model degrees of freedom for each number of factors considered. |
Hull.CFI |
A numeric vector of CFI values with points below the convex Hull curve removed. |
Hull.df |
A numeric vector of model degrees of freedom with points below the convex Hull curve removed. |
Haijiang Qin <[email protected]>
Auerswald, M., & Moshagen, M. (2019). How to determine the number of factors to retain in exploratory factor analysis: A comparison of extraction methods under realistic conditions. Psychological Methods, 24(4), 468-491. https://doi.org/10.1037/met0000200.
Lorenzo-Seva, U., Timmerman, M. E., & Kiers, H. A. L. (2011). The Hull Method for Selecting the Number of Common Factors. Multivariate Behavioral Research, 46(2), 340-364. https://doi.org/10.1080/00273171.2011.564527.
library(EFAfactors)
set.seed(123)

## Take the data.bfi dataset as an example.
data(data.bfi)
response <- as.matrix(data.bfi[, 1:25]) ## loading data
response <- na.omit(response) ## Remove samples with NA/missing values

## Transform the scores of reverse-scored items to normal scoring
response[, c(1, 9, 10, 11, 12, 22, 25)] <- 6 - response[, c(1, 9, 10, 11, 12, 22, 25)] + 1

## Run Hull function with default parameters.
Hull.obj <- Hull(response)
print(Hull.obj)
plot(Hull.obj)

## Get the CFI, df and nfact results.
CFI <- Hull.obj$CFI
df <- Hull.obj$df
nfact <- Hull.obj$nfact
print(CFI)
print(df)
print(nfact)
This function implements the Kaiser-Guttman criterion (Guttman, 1954; Kaiser, 1960) for determining the number of factors to retain in factor analysis. It is based on the eigenvalues of the correlation matrix of the responses. According to the criterion, factors are retained if their corresponding eigenvalues are greater than 1.
KGC( response, fa = "pc", nfact = 1, cor.type = "pearson", use = "pairwise.complete.obs", vis = TRUE, plot = TRUE )
response |
A required |
fa |
A string that determines the method used to obtain eigenvalues. If 'pc', it represents
Principal Component Analysis (PCA); if 'fa', it represents Principal Axis Factoring (a widely
used Factor Analysis method; @seealso |
nfact |
A numeric value that specifies the number of factors to extract, only effective when |
cor.type |
A character string indicating which correlation coefficient (or covariance) is to be computed. One of "pearson" (default), "kendall", or "spearman". @seealso cor. |
use |
An optional character string giving a method for computing covariances in the presence of missing values. This must be one of the strings "everything", "all.obs", "complete.obs", "na.or.complete", or "pairwise.complete.obs" (default). @seealso cor. |
vis |
A Boolean variable that will print the factor retention results when set to TRUE, and will not print when set to FALSE. (default = TRUE) |
plot |
A Boolean variable that will print the KGC plot when set to TRUE, and will not print it when set to FALSE. @seealso plot.KGC. (Default = TRUE) |
An object of class KGC
is a list
containing the following components:
nfact |
The number of factors to be retained. |
eigen.value |
A vector containing the empirical eigenvalues |
Guttman, L. (1954). Some necessary conditions for common-factor analysis. Psychometrika, 19, 149–161. http://dx.doi.org/10.1007/BF02289162.
Kaiser, H. F. (1960). The application of electronic computers to factor analysis. Educational and Psychological Measurement, 20, 141–151. http://dx.doi.org/10.1177/001316446002000116.
library(EFAfactors)
set.seed(123)

## Take the data.bfi dataset as an example.
data(data.bfi)
response <- as.matrix(data.bfi[, 1:25]) ## loading data
response <- na.omit(response) ## Remove samples with NA/missing values

## Transform the scores of reverse-scored items to normal scoring
response[, c(1, 9, 10, 11, 12, 22, 25)] <- 6 - response[, c(1, 9, 10, 11, 12, 22, 25)] + 1

## Run KGC function with default parameters.
KGC.obj <- KGC(response)
print(KGC.obj)
plot(KGC.obj)

## Get the eigen.value and nfact results.
eigen.value <- KGC.obj$eigen.value
nfact <- KGC.obj$nfact
print(eigen.value)
print(nfact)
Loads the Pre-Trained Deep Neural Network (DNN) from the DNN.onnx file. The function uses the reticulate package to import the onnxruntime Python library and create an inference session for the model.
load_DNN()
An ONNX runtime inference session object for the DNN model.
Note that Python and the libraries numpy
and onnxruntime
are required.
First, please ensure that Python is installed on your computer and that Python is included in the system's PATH environment variable. If not, please download and install it from the official website (https://www.python.org/).
If you encounter an error when running this function stating that the numpy
and onnxruntime
modules are missing:
Error in py_module_import(module, convert = convert) :
ModuleNotFoundError: No module named 'numpy'
or
Error in py_module_import(module, convert = convert) :
ModuleNotFoundError: No module named 'onnxruntime'
this means that the numpy
or onnxruntime
library is missing from your Python environment. If you are using Windows or macOS,
please run the command pip install numpy
or pip install onnxruntime
in Command Prompt or Windows PowerShell (Windows), or Terminal (macOS).
If you are using Linux, please ensure that pip
is installed and use the command pip install numpy
or
pip install onnxruntime
to install the missing libraries.
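Within R, the availability of both modules can be checked through the reticulate package before calling load_DNN; a small convenience sketch:

## Check the Python dependencies from R (a hedged convenience sketch).
library(reticulate)
if (!py_module_available("numpy"))       message("Missing numpy: run `pip install numpy`")
if (!py_module_available("onnxruntime")) message("Missing onnxruntime: run `pip install onnxruntime`")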
Loads the scaler object within the EFAfactors
package. This object is a list
containing a mean vector and
a standard deviation vector, which were computed from the 10,000,000 datasets data.datasets
used for training the Pre-Trained Deep Neural Network (DNN). It serves as a tool for normalizing features in
DNN_predictor.
load_scaler()
The scaler object.
DNN_predictor, normalizor, data.datasets, data.scaler
library(EFAfactors)

scaler <- load_scaler()
print(scaler)
Loads the tuned XGBoost model object within the EFAfactors
package
into the global environment and retrieves it for use. Only the core model is retained to reduce the size.
load_xgb()
The tuned XGBoost model object
library(EFAfactors)

xgb_model <- load_xgb()
print(xgb_model)
The Tuned XGBoost Model for Determining the Number of Factors

An object of class TuneModel is the tuned XGBoost model for determining the number of factors.
data(model.xgb)
print(model.xgb)

model.xgb <- load_xgb()
print(model.xgb)
This function normalizes a matrix of features using precomputed means and standard deviations.
The function automatically runs load_scaler to read the standard deviations and means of the features,
which are organized into a list
object named scaler
. These means and standard deviations are computed from
the 10,000,000 datasets data.datasets
for training the Pre-Trained Deep Neural Network (DNN).
normalizor(features)
features |
A numeric matrix where each row represents an observation and each column represents a feature. |
The function applies z-score normalization to each element in the features
matrix. It uses
the scaler
object, which is expected to contain precomputed means and standard deviations for each feature.
The normalized value for each element is computed as:

\[ z = \frac{x - \mu}{\sigma} \]

where \( x \) is the original value, \( \mu \) is the mean, and \( \sigma \) is the standard deviation.
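Equivalently, in R (the element names scaler$mean and scaler$sd are assumptions for illustration):

## Hedged sketch of the normalization; the scaler element names are assumptions.
scaler <- load_scaler()
normalized <- sweep(sweep(features, 2, scaler$mean, "-"), 2, scaler$sd, "/")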
A matrix of the same dimensions as features
, where each feature has been normalized.
DNN_predictor, load_scaler, data.datasets, data.scaler
This function performs Parallel Analysis (PA), which is a method used to determine the number of factors to retain in exploratory factor analysis. It compares the empirical eigenvalues with those obtained from simulated random data to identify the point where the observed eigenvalues are larger than those expected by chance. The number of empirical eigenvalues that are greater than the corresponding reference eigenvalues is the number of factors recommended to be retained by the PA method.
PA( response, fa = "pc", n.iter = 100, type = "quant", nfact = 1, quant = 0.95, cor.type = "pearson", use = "pairwise.complete.obs", vis = TRUE, plot = TRUE )
response |
A required |
fa |
A string that determines the method used to obtain eigenvalues in PA. If 'pc', it represents
Principal Component Analysis (PCA); if 'fa', it represents Principal Axis Factoring (a widely
used Factor Analysis method; @seealso |
n.iter |
A numeric value that determines the number of simulations for the random data. (Default = 100) |
type |
A string that determines the method used to calculate the reference eigenvalues from the simulated data.
If 'mean', the reference eigenvalue ( |
nfact |
A numeric value that specifies the number of factors to extract, only effective when |
quant |
A numeric value between 0 and 1, representing the quantile to be used for the reference
eigenvalues calculation when |
cor.type |
A character string indicating the correlation coefficient (or covariance) to be computed. One of "pearson" (default), "kendall", or "spearman". @seealso cor. |
use |
An optional character string specifying the method for computing covariances when there are missing values. This must be one of "everything", "all.obs", "complete.obs", "na.or.complete", or "pairwise.complete.obs" (default). @seealso cor. |
vis |
A Boolean that determines whether to print the factor retention results. Set to |
plot |
A Boolean that determines whether to display the PA plot. Set to |
This function performs Parallel Analysis (PA; Horn, 1965; Auerswald & Moshagen, 2019) to determine the number of factors to retain.
PA is a widely used method and is considered the "gold standard" for factor retention due to its high accuracy and stability,
although it may underperform compared to methods like CD or EKC under certain conditions.
The core idea of PA is to simulate random data many times, for example, 100 times, and compute the eigenvalues from each simulation. These simulated eigenvalues are then summarized using either the mean or a quantile to obtain the reference eigenvalues, such as the i-th reference eigenvalue \( \lambda_{i,ref} \). The relationship between the i-th empirical eigenvalue \( \lambda_i \) and \( \lambda_{i,ref} \) indicates whether the i-th factor should be retained. If \( \lambda_i > \lambda_{i,ref} \), the explanatory power of the i-th factor of the original data is stronger than that of the i-th factor of the random data, and the factor should be retained. Conversely, if \( \lambda_i \leq \lambda_{i,ref} \), the explanatory power of the i-th factor of the original data is weaker than or equal to that of the random data, making it indistinguishable from noise, and the factor should not be retained. So,

\[ F = \sum_{i} I(\lambda_i > \lambda_{i,ref}) \]

Here, \( F \) represents the number of factors determined by PA, and \( I(\cdot) \) is the indicator function, which equals 1 when the condition is true, and 0 otherwise.
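For illustration, the reference eigenvalues can be computed with a minimal R sketch like the following; this is a simplified illustration of the procedure, not the package's implementation:

## Minimal sketch of PA reference eigenvalues; a simplified illustration.
pa_reference <- function(N, I, n.iter = 100, type = "quant", quant = 0.95) {
  sims <- replicate(n.iter, eigen(cor(matrix(rnorm(N * I), N, I)))$values)
  if (type == "mean") rowMeans(sims) else apply(sims, 1, quantile, probs = quant)
}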
Auerswald & Moshagen (2019) found that the most accurate results for PA were obtained when
using PCA to extract eigenvalues and using the 95th percentile of the simulated
eigenvalues to calculate the reference eigenvalues. Therefore,
the recommended settings for this function are fa = 'pc'
, type = 'quant'
, and quant = 0.95
.
An object of class PA
, which is a list
containing the following components:
nfact |
The number of factors to retain. |
fa |
Indicates the method used to obtain eigenvalues in PA. 'pc' represents Principal Component Analysis, and 'fa' represents Principal Axis Factoring. |
type |
Indicates the method used to calculate |
eigen.value |
A vector containing the empirical eigenvalues. |
eigen.ref |
A vector containing the reference eigenvalues, which depend on |
eigen.sim |
A matrix containing the simulated eigenvalues for all iterations. |
Haijiang Qin <[email protected]>
Auerswald, M., & Moshagen, M. (2019). How to determine the number of factors to retain in exploratory factor analysis: A comparison of extraction methods under realistic conditions. Psychological Methods, 24(4), 468-491. https://doi.org/10.1037/met0000200.
Horn, J. L. (1965). A rationale and test for the number of factors in factor analysis. Psychometrika, 30, 179–185. http://dx.doi.org/10.1007/BF02289447.
library(EFAfactors)
set.seed(123)

## Take the data.bfi dataset as an example.
data(data.bfi)
response <- as.matrix(data.bfi[, 1:25]) ## loading data
response <- na.omit(response) ## Remove samples with NA/missing values

## Transform the scores of reverse-scored items to normal scoring
response[, c(1, 9, 10, 11, 12, 22, 25)] <- 6 - response[, c(1, 9, 10, 11, 12, 22, 25)] + 1

## Run PA function with default parameters.
PA.obj <- PA(response)
print(PA.obj)
plot(PA.obj)

## Get the eigen.value, eigen.ref and nfact results.
eigen.value <- PA.obj$eigen.value
eigen.ref <- PA.obj$eigen.ref
nfact <- PA.obj$nfact
print(eigen.value)
print(eigen.ref)
print(nfact)
This function generates a Comparison Data plot to visualize the Root Mean Square Error (RMSE) of eigenvalues for various numbers of factors. This plot helps in evaluating the fit of different factor models and identifying the optimal number of factors based on RMSE values.
## S3 method for class 'CD'
plot(x, ...)
x |
An object of class |
... |
Additional arguments to be passed to the plotting function. |
None. This function is used for side effects (plotting).
library(EFAfactors)
set.seed(123)

## Take the data.bfi dataset as an example.
data(data.bfi)
response <- as.matrix(data.bfi[, 1:25]) ## loading data
response <- na.omit(response) ## Remove samples with NA/missing values

## Transform the scores of reverse-scored items to normal scoring
response[, c(1, 9, 10, 11, 12, 22, 25)] <- 6 - response[, c(1, 9, 10, 11, 12, 22, 25)] + 1

CD.obj <- CD(response)

## CD plot
plot(CD.obj)
This function generates a bar plot of the classification probabilities predicted by the Comparison Data Forest for determining the number of factors. The plot displays the probability distribution across different numbers of factors, with each bar representing the probability for a specific number of factors.
## S3 method for class 'CDF'
plot(x, ...)
x |
An object of class |
... |
Additional arguments to be passed to the plotting function. |
None. This function is used for side effects (plotting).
library(EFAfactors)
set.seed(123)

## Take the data.bfi dataset as an example.
data(data.bfi)
response <- as.matrix(data.bfi[, 1:25]) ## Load data
response <- na.omit(response) ## Remove samples with NA/missing values

## Transform the scores of reverse-scored items to normal scoring
response[, c(1, 9, 10, 11, 12, 22, 25)] <- 6 - response[, c(1, 9, 10, 11, 12, 22, 25)] + 1

CDF.obj <- CDF(response)

## Plot the CDF probabilities
plot(CDF.obj)
This function generates a bar plot of the classification probabilities predicted by the pre-trained deep neural network for determining the number of factors. The plot displays the probability distribution across different numbers of factors, with each bar representing the probability for a specific number of factors. The maximum number of factors that the network can evaluate is 10. The function also annotates each bar with its probability value.
## S3 method for class 'DNN_predictor'
plot(x, ...)
x |
An object of class |
... |
Additional arguments to be passed to the plotting function. |
None. This function is used for side effects (plotting).
This function generates a dendrogram from hierarchical cluster analysis results. The hierarchical clustering method merges the two nodes with the smallest dissimilarity at each step, forming a new node until all nodes are combined into a single hierarchical structure. The resulting dendrogram represents the hierarchical relationships between items, where items with high correlation are connected by shorter lines, and items with low correlation are connected by longer lines. The height of the tree nodes indicates the dissimilarity between nodes: a greater height reflects a larger difference. Researchers can use this representation to determine if two nodes belong to the same cluster, which in exploratory factor analysis, helps identify whether items belong to the same latent factor.
## S3 method for class 'EFAhclust'
plot(x, ...)
x |
An object of class |
... |
Additional arguments to be passed to the plotting function. |
None. This function is used for side effects (plotting).
library(EFAfactors)
set.seed(123)

## Take the data.bfi dataset as an example.
data(data.bfi)
response <- as.matrix(data.bfi[, 1:25]) ## Load data
response <- na.omit(response) ## Remove samples with NA/missing values

## Transform the scores of reverse-scored items to normal scoring
response[, c(1, 9, 10, 11, 12, 22, 25)] <- 6 - response[, c(1, 9, 10, 11, 12, 22, 25)] + 1

EFAhclust.obj <- EFAhclust(response)

## Plot the hierarchical clustering dendrogram
plot(EFAhclust.obj)
This function creates a plot to visualize the Within-cluster Sum of Squares (WSS) for different numbers of clusters (K) in the context of exploratory factor analysis. The plot helps identify the most appropriate number of factors by showing how WSS decreases as the number of factors (or clusters) increases.
## S3 method for class 'EFAkmeans'
plot(x, ...)
x |
An object of class |
... |
Additional arguments to be passed to the plotting function. |
None. This function is used for side effects (plotting).
library(EFAfactors)
set.seed(123)

## Take the data.bfi dataset as an example.
data(data.bfi)
response <- as.matrix(data.bfi[, 1:25]) ## Load data
response <- na.omit(response) ## Remove samples with NA/missing values

## Transform the scores of reverse-scored items to normal scoring
response[, c(1, 9, 10, 11, 12, 22, 25)] <- 6 - response[, c(1, 9, 10, 11, 12, 22, 25)] + 1

EFAkmeans.obj <- EFAkmeans(response)

## Plot the EFA K-means clustering results
plot(EFAkmeans.obj)
Plots the Scree Plot from an object of class EFAscreet
. The scree plot visualizes the eigenvalues
of the correlation matrix in descending order and helps in identifying the optimal number of factors
by showing where the eigenvalues start to plateau.
## S3 method for class 'EFAscreet'
plot(x, ...)
x |
An object of class |
... |
Additional arguments to be passed to the |
The scree plot is a graphical tool used in exploratory factor analysis. It shows the eigenvalues corresponding to the factors. The number of factors is typically determined by finding the point where the plot levels off ("elbow" point).
A scree plot displaying the eigenvalues against the number of factors.
library(EFAfactors)
set.seed(123)

## Take the data.bfi dataset as an example.
data(data.bfi)
response <- as.matrix(data.bfi[, 1:25]) ## loading data
response <- na.omit(response) ## Remove samples with NA/missing values

## Transform the scores of reverse-scored items to normal scoring
response[, c(1, 9, 10, 11, 12, 22, 25)] <- 6 - response[, c(1, 9, 10, 11, 12, 22, 25)] + 1

## Run EFAscreet function with default parameters.
EFAscreet.obj <- EFAscreet(response)
plot(EFAscreet.obj)
This function creates a pie chart to visualize the results of a voting method used to determine the number of factors in exploratory factor analysis (EFA). The voting method combines the results from multiple EFA techniques, and the pie chart displays the proportions of votes each number of factors received. Each slice of the pie represents the percentage of votes for a specific number of factors, providing a clear visual representation of the most commonly suggested number of factors.
## S3 method for class 'EFAvote'
plot(x, ...)
x |
An object of class |
... |
Additional arguments to be passed to the plotting function. |
None. This function is used for side effects (plotting).
library(EFAfactors)

nfacts <- c(5, 5, 5, 6, 6, 4)
names(nfacts) <- c("Hull", "CD", "PA", "EKC", "XGB", "DNN")

EFAvote.obj <- EFAvote(votes = nfacts)
plot(EFAvote.obj)
This function generates an Empirical Kaiser Criterion (EKC) plot to visualize the eigenvalues of the actual data. The EKC method helps in determining the number of factors to retain by identifying the point where the eigenvalues exceed the reference eigenvalue. The plot provides a graphical representation to assist in factor selection.
## S3 method for class 'EKC'
plot(x, ...)
x |
An object of class |
... |
Additional arguments to be passed to the plotting function. |
None. This function is used for side effects (plotting).
library(EFAfactors)
set.seed(123)

## Take the data.bfi dataset as an example.
data(data.bfi)
response <- as.matrix(data.bfi[, 1:25]) ## loading data
response <- na.omit(response) ## Remove samples with NA/missing values

## Transform the scores of reverse-scored items to normal scoring
response[, c(1, 9, 10, 11, 12, 22, 25)] <- 6 - response[, c(1, 9, 10, 11, 12, 22, 25)] + 1

EKC.obj <- EKC(response)

## EKC plot
plot(EKC.obj)
This function generates a bar plot of the classification probabilities predicted by the Factor Forest for determining the number of factors. The plot displays the probability distribution across different numbers of factors, with each bar representing the probability for a specific number of factors. Unlike the deep neural network (DNN) model, the Factor Forest can evaluate up to a maximum of 8 factors. The function also annotates each bar with its probability value.
## S3 method for class 'FF'
plot(x, ...)
x |
An object of class |
... |
Additional arguments to be passed to the plotting function. |
None. This function is used for side effects (plotting).
library(EFAfactors)
set.seed(123)

## Take the data.bfi dataset as an example.
data(data.bfi)
response <- as.matrix(data.bfi[, 1:25]) ## Load data
response <- na.omit(response) ## Remove samples with NA/missing values

## Transform the scores of reverse-scored items to normal scoring
response[, c(1, 9, 10, 11, 12, 22, 25)] <- 6 - response[, c(1, 9, 10, 11, 12, 22, 25)] + 1

FF.obj <- FF(response)

## Plot the FF probabilities
plot(FF.obj)
This function creates a Hull plot to visualize the relationship between the Comparative Fit Index (CFI) and the degrees of freedom (df) for a range of models with different numbers of factors. The Hull plot helps in assessing model fit and identifying optimal models.
## S3 method for class 'Hull'
plot(x, ...)
x |
An object of class |
... |
Additional arguments to be passed to the plotting function. |
None. This function is used for side effects (plotting).
library(EFAfactors)
set.seed(123)

## Take the data.bfi dataset as an example.
data(data.bfi)
response <- as.matrix(data.bfi[, 1:25]) ## loading data
response <- na.omit(response) ## Remove samples with NA/missing values

## Transform the scores of reverse-scored items to normal scoring
response[, c(1, 9, 10, 11, 12, 22, 25)] <- 6 - response[, c(1, 9, 10, 11, 12, 22, 25)] + 1

Hull.obj <- Hull(response)

## Hull plot
plot(Hull.obj)
This function generates a Kaiser-Guttman Criterion (KGC) plot to visualize the eigenvalues of the actual data. The Kaiser-Guttman Criterion, also known as the Kaiser criterion, suggests retaining factors with eigenvalues greater than 1. The plot shows the eigenvalues and includes a reference line at 1 to indicate the threshold for factor retention.
## S3 method for class 'KGC'
plot(x, ...)
x |
An object of class |
... |
Additional arguments to be passed to the plotting function. |
None. This function is used for side effects (plotting).
library(EFAfactors)
set.seed(123)

## Take the data.bfi dataset as an example.
data(data.bfi)
response <- as.matrix(data.bfi[, 1:25]) ## Load data
response <- na.omit(response) ## Remove samples with NA/missing values

## Transform the scores of reverse-scored items to normal scoring
response[, c(1, 9, 10, 11, 12, 22, 25)] <- 6 - response[, c(1, 9, 10, 11, 12, 22, 25)] + 1

KGC.obj <- KGC(response)

## Plot the Kaiser-Guttman Criterion
plot(KGC.obj)
This function creates a Parallel Analysis (PA) scree plot to compare the eigenvalues of the actual data with the eigenvalues from simulated data. The plot helps in determining the number of factors by visualizing where the eigenvalues of the actual data intersect with those from simulated data. It provides a graphical representation of the results from a parallel analysis to aid in factor selection.
## S3 method for class 'PA'
plot(x, ...)
x |
An object of class |
... |
Additional arguments to be passed to the plotting function. |
None. This function is used for side effects (plotting).
library(EFAfactors)
set.seed(123)

## Take the data.bfi dataset as an example.
data(data.bfi)
response <- as.matrix(data.bfi[, 1:25]) ## loading data
response <- na.omit(response) ## Remove samples with NA/missing values

## Transform the scores of reverse-scored items to normal scoring
response[, c(1, 9, 10, 11, 12, 22, 25)] <- 6 - response[, c(1, 9, 10, 11, 12, 22, 25)] + 1

PA.obj <- PA(response)

## PA plot
plot(PA.obj)
This function performs predictions using a trained XGBoost model with early stopping. It is not intended for direct use; it exists solely to ensure the proper operation of FF.
## S3 method for class 'classif.xgboost.earlystop'
predictLearner(.learner, .model, .newdata, ...)
.learner |
An object representing the learner. |
.model |
The trained XGBoost model used to make predictions. |
.newdata |
A data frame or matrix containing new observations for which predictions are to be made. |
... |
Additional parameters passed to the |
A vector of predicted class labels or a matrix of predicted probabilities.
This function prints the number of factors suggested by the Comparison Data (CD) method.
## S3 method for class 'CD'
print(x, ...)
x |
An object of class |
... |
Additional arguments to be passed to the print function. |
None. This function is used for side effects (printing).
This function prints the number of factors suggested by the Comparison Data Forest.
## S3 method for class 'CDF'
print(x, ...)
x |
An object of class |
... |
Additional arguments to be passed to the print function. |
None. This function is used for side effects (printing).
This function prints the number of factors suggested by the deep neural network (DNN) predictor.
## S3 method for class 'DNN_predictor'
print(x, ...)
x |
An object of class |
... |
Additional arguments to be passed to the print function. |
None. This function is used for side effects (printing).
This function prints the items belonging to each factor.
## S3 method for class 'EFAdata'
print(x, ...)
x |
An object of class |
... |
Additional arguments to be passed to the print function. |
None. This function is used for side effects (printing).
This function prints the number of factors suggested by the EFAhclust method using the Second-Order Difference (SOD) approach.
## S3 method for class 'EFAhclust'
print(x, ...)
x |
An object of class |
... |
Additional arguments to be passed to the print function. |
None. This function is used for side effects (printing).
This function prints the number of factors suggested by the EFAkmeans method using the Second-Order Difference (SOD) approach.
## S3 method for class 'EFAkmeans'
print(x, ...)
x |
An object of class |
... |
Additional arguments to be passed to the print function. |
None. This function is used for side effects (printing).
Prints a brief summary of an object of class EFAscreet
. This function will display the scree plot
of the eigenvalues when called, providing a visual representation of the factor analysis results.
## S3 method for class 'EFAscreet'
print(x, ...)
x |
An object of class |
... |
Additional arguments to be passed to the print function. |
None. This function is used for side effects (printing).
This function prints the number of factors suggested by the voting method.
## S3 method for class 'EFAvote'
print(x, ...)
x |
An object of class |
... |
Additional arguments to be passed to the print function. |
None. This function is used for side effects (printing).
This function prints the number of factors suggested by the Empirical Kaiser Criterion (EKC).
## S3 method for class 'EKC'
print(x, ...)
x |
An object of class |
... |
Additional arguments to be passed to the print function. |
None. This function is used for side effects (printing).
This function prints the number of factors suggested by the Factor Forest.
## S3 method for class 'FF'
print(x, ...)
x |
An object of class |
... |
Additional arguments to be passed to the print function. |
None. This function is used for side effects (printing).
This function prints the number of factors suggested by the Hull method.
## S3 method for class 'Hull'
print(x, ...)
x |
An object of class |
... |
Additional arguments to be passed to the print function. |
None. This function is used for side effects (printing).
This function prints the number of factors suggested by the Kaiser-Guttman Criterion (KGC).
## S3 method for class 'KGC'
print(x, ...)
x |
An object of class |
... |
Additional arguments to be passed to the print function. |
None. This function is used for side effects (printing).
This function prints the number of factors suggested by the Parallel Analysis (PA) method.
## S3 method for class 'PA'
print(x, ...)
x |
An object of class |
... |
Additional arguments to be passed to the print function. |
None. This function is used for side effects (printing).