Title: | PD-Clustering and Related Methods |
---|---|
Description: | Probabilistic distance clustering (PD-clustering) is an iterative, distribution-free, probabilistic clustering method. PD-clustering assigns units to a cluster according to their probability of membership, under the constraint that the product of the probability and the distance of each point to any cluster centre is a constant. PD-clustering is a flexible method that can be used with non-spherical clusters, outliers, or noisy data. PDQ is an extension of the algorithm for clusters of different sizes. GPDC and TPDC use a dissimilarity measure based on densities. Factor PD-clustering (FPDC) is a factor clustering method that involves a linear transformation of the variables and a clustering step optimizing the PD-clustering criterion. It works on high-dimensional data sets. |
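The constraint can be illustrated with a short R sketch (an illustrative derivation, not the package implementation): for a point with distances d_1, ..., d_K from the K centres, p_k * d_k = constant implies that p_k is proportional to the product of the distances to the other centres.

```r
# Membership probabilities under the constraint p_k * d_k = constant:
# p_k is proportional to prod_{j != k} d_j, normalised to sum to 1.
pd_probabilities <- function(d) {
  p <- sapply(seq_along(d), function(k) prod(d[-k]))
  p / sum(p)
}
p <- pd_probabilities(c(1, 2, 4))
p                # the closest centre gets the highest probability
p * c(1, 2, 4)   # constant across clusters, as required
```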
Authors: | Cristina Tortora [aut, cre, cph], Noe Vidales [aut], Francesco Palumbo [aut], Tina Kalra [aut], and Paul D. McNicholas [fnd] |
Maintainer: | Cristina Tortora <[email protected]> |
License: | GPL (>= 2) |
Version: | 2.3.1 |
Built: | 2024-11-25 06:40:49 UTC |
Source: | CRAN |
Data obtained to study sex, sport and body-size dependency of hematology in highly trained athletes.
data(ais)
A data frame with 202 observations and 13 variables.
red blood cell count
white blood cell count, per liter
hematocrit, percent
hemoglobin concentration, in g per decaliter
plasma ferritins, ng
body mass index, kg/m^2
sum of skin folds
percent body fat
lean body mass, kg
height, cm
weight, kg
a factor with levels f m
a factor with levels B_Ball Field Gym Netball Row Swim T_400m T_Sprnt Tennis W_Polo
R package DAAG
Telford, R.D. and Cunningham, R.B. 1991. Sex, sport and body-size dependency of hematology in highly trained athletes. Medicine and Science in Sports and Exercise 23: 788-794.
data(ais)
pairs(ais[,1:11], col = ais$sex)
Each cluster has been generated according to a multivariate asymmetric Gaussian distribution, with shape 20, covariance matrix equal to the identity matrix and randomly generated centres.
data(asymmetric20)
A data frame with 800 observations on 101 variables. The first variable contains the membership labels.
Generated with R using the package sn (The skew-normal and skew-t distributions), function rsn
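For reference, data of this kind could be generated along the following lines (a hedged sketch: the centres, dimensions, and seed are assumptions, not the original generating script):

```r
library(sn)  # rsn(): skew-normal random numbers
set.seed(1)
k <- 4; n <- 200; p <- 100
centres <- matrix(runif(k * p, -10, 10), nrow = k)  # randomly generated centres
x <- do.call(rbind, lapply(1:k, function(g)
  sapply(1:p, function(j) rsn(n, xi = centres[g, j], omega = 1, alpha = 20))))
asym <- data.frame(membership = rep(1:k, each = n), x)
```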
data(asymmetric20)
plot(asymmetric20[,2:3])
Each cluster has been generated according to a multivariate asymmetric Gaussian distribution, with shape 3, covariance matrix equal to the identity matrix and randomly generated centres.
data(asymmetric3)
A data frame with 800 observations on 101 variables. The first variable is the membership labels.
Generated with R using the package sn (The skew-normal and skew-t distributions), function rsn
data(asymmetric3)
plot(asymmetric3[,2:3])
Ten variables recorded on 167 countries. The goal is to categorize the countries using socio-economic and health indicators that determine the country's overall development. The data set has been donated by HELP International, an international humanitarian NGO that needs to identify the countries most in need of aid and asked analysts to categorize them.
data(Country_data)
A data frame with 167 observations and 10 variables.
country name
Death of children under 5 years of age per 1000 live births
Exports of goods and services per capita, given as a percentage of the GDP per capita
Total health spending per capita, given as a percentage of the GDP per capita
Imports of goods and services per capita, given as a percentage of the GDP per capita
Net income per person
Annual growth rate of the total GDP
The average number of years a newborn child would live if current mortality patterns remain the same
The number of children that would be born to each woman if current age-fertility rates remain the same
The GDP per capita. Calculated as the Total GDP divided by the total population.
https://www.kaggle.com/datasets/rohan0301/unsupervised-learning-on-country-data/metadata?resource=download
R. Kokkula. Unsupervised learning on country data. kaggle, 2022. URL https://www.kaggle.com/datasets/rohan0301/unsupervised-learning-on-country-data/metadata?resource=download
data(Country_data) pairs(Country_data[,2:10])
data(Country_data)
pairs(Country_data[,2:10])
An implementation of FPDC, a probabilistic factor clustering algorithm that involves a linear transformation of the variables and a clustering step optimizing the PD-clustering criterion.
FPDC(data = NULL, k = 2, nf = 2, nu = 2)
data |
A matrix or data frame such that rows correspond to observations and columns correspond to variables. |
k |
A numerical parameter giving the number of clusters |
nf |
A numerical parameter giving the number of factors for variables |
nu |
A numerical parameter giving the number of factors for units |
A class FPDclustering list with components
label |
A vector of integers indicating the cluster membership for each unit |
centers |
A matrix of cluster centers |
probability |
A matrix of probability of each point belonging to each cluster |
JDF |
The value of the Joint distance function |
iter |
The number of iterations |
explained |
The explained variability |
data |
the data set |
Cristina Tortora and Paul D. McNicholas
Tortora, C., M. Gettler Summa, M. Marino, and F. Palumbo. Factor probabilistic distance clustering (FPDC): a new clustering method for high dimensional data sets. Advances in Data Analysis and Classification, 10(4), 441-464, 2016. doi:10.1007/s11634-015-0219-5.
Tortora C., Gettler Summa M., and Palumbo F. Factor PD-clustering. In Lausen et al., editors, Algorithms from and for Nature and Life, Studies in Classification, Data Analysis, and Knowledge Organization, 115-123, 2013. doi:10.1007/978-3-319-00035-0_11.
Tortora C., Non-hierarchical clustering methods on factorial subspaces, 2012.
## Not run: 
# Asymmetric data set clustering example (with shape 3).
data('asymmetric3')
x <- asymmetric3[,-1]
# Clustering
fpdas3 = FPDC(x, 4, 3, 3)
# Results
table(asymmetric3[,1], fpdas3$label)
Silh(fpdas3$probability)
summary(fpdas3)
plot(fpdas3)
## End(Not run)

## Not run: 
# Asymmetric data set clustering example (with shape 20).
data('asymmetric20')
x <- asymmetric20[,-1]
# Clustering
fpdas20 = FPDC(x, 4, 3, 3)
# Results
table(asymmetric20[,1], fpdas20$label)
Silh(fpdas20$probability)
summary(fpdas20)
plot(fpdas20)
## End(Not run)

## Not run: 
# Clustering example with outliers.
data('outliers')
x <- outliers[,-1]
# Clustering
fpdout = FPDC(x, 4, 5, 4)
# Results
table(outliers[,1], fpdout$label)
Silh(fpdout$probability)
summary(fpdout)
plot(fpdout)
## End(Not run)
An implementation of Gaussian PD-clustering (GPDC), an extension of PD-clustering adjusted for cluster size that uses a dissimilarity measure based on the Gaussian density.
GPDC(data=NULL,k=2,ini="kmedoids", nr=5,iter=100)
data |
A matrix or data frame such that rows correspond to observations and columns correspond to variables. |
k |
A numerical parameter giving the number of clusters |
ini |
A parameter that selects center starts. Options available are random ("random"), k-medoids ("kmedoids", the default), and PDC ("PDclust"). |
nr |
Number of random starts when ini is set to "random" |
iter |
Maximum number of iterations |
A class FPDclustering list with components
label |
A vector of integers indicating the cluster membership for each unit |
centers |
A matrix of cluster means |
sigma |
A list of K elements, with the variance-covariance matrix per cluster |
probability |
A matrix of probability of each point belonging to each cluster |
JDF |
The value of the Joint distance function |
iter |
The number of iterations |
data |
the data set |
Cristina Tortora and Francesco Palumbo
Tortora C., McNicholas P.D., and Palumbo F. A probabilistic distance clustering algorithm using Gaussian and Student-t multivariate density distributions. SN Computer Science, 1:65, 2020.
C. Rainey, C. Tortora and F.Palumbo. A parametric version of probabilistic distance clustering. In: Greselin F., Deldossi L., Bagnato L., Vichi M. (eds) Statistical Learning of Complex Data. CLADAG 2017. Studies in Classification, Data Analysis, and Knowledge Organization. Springer, Cham, 33-43 2019. doi.org/10.1007/978-3-030-21140-0_4
# Load the data
data(ais)
dataSEL = ais[,c(10,3,5,8)]
# Clustering
res = GPDC(dataSEL, k = 2, ini = "kmedoids")
# Results
table(res$label, ais$sex)
plot(res)
summary(res)
Each cluster has been generated according to a multivariate Gaussian distribution with randomly generated centres c. For each cluster, 20% uniformly distributed outliers have been generated at a distance between max(x-c) and max(x-c)+5 from the centre.
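This scheme can be sketched for a single cluster as follows (an illustrative reconstruction; the dimensions, centre, and seed are assumptions and may differ from the shipped data):

```r
set.seed(2)
n <- 200; p <- 100
centre <- runif(p, -10, 10)                             # randomly generated centre
x <- sweep(matrix(rnorm(n * p), n, p), 2, centre, `+`)  # Gaussian cluster
r_max <- max(sqrt(rowSums(sweep(x, 2, centre)^2)))      # max distance from the centre
n_out <- round(0.2 * n)                                 # 20% outliers
dirs  <- matrix(rnorm(n_out * p), n_out, p)
dirs  <- dirs / sqrt(rowSums(dirs^2))                   # random unit directions
radii <- runif(n_out, r_max, r_max + 5)                 # distance in [max, max + 5]
outl  <- sweep(dirs * radii, 2, centre, `+`)            # uniformly placed outliers
```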
data(outliers)
A data frame with 960 observations on 101 variables. The first variable contains the membership labels.
generated with R
data(outliers)
plot(outliers[,2:3])
Probabilistic distance clustering (PD-clustering) is an iterative, distribution-free, probabilistic clustering method. PD-clustering is based on the constraint that the product of the probability and the distance of each point to any cluster centre is a constant.
PDC(data = NULL, k = 2)
data |
A matrix or data frame such that rows correspond to observations and columns correspond to variables. |
k |
A numerical parameter giving the number of clusters |
A class FPDclustering list with components
label |
A vector of integers indicating the cluster membership for each unit |
centers |
A matrix of cluster centers |
probability |
A matrix of probability of each point belonging to each cluster |
JDF |
The value of the Joint distance function |
iter |
The number of iterations |
data |
the data set |
Cristina Tortora and Paul D. McNicholas
Ben-Israel A. and Iyigun C. Probabilistic D-Clustering. Journal of Classification, 25(1), 5-26, 2008.
# Normally generated clusters
c1 = c(+2,+2,2,2)
c2 = c(-2,-2,-2,-2)
c3 = c(-3,3,-3,3)
n = 200
x1 = cbind(rnorm(n, c1[1]), rnorm(n, c1[2]), rnorm(n, c1[3]), rnorm(n, c1[4]))
x2 = cbind(rnorm(n, c2[1]), rnorm(n, c2[2]), rnorm(n, c2[3]), rnorm(n, c2[4]))
x3 = cbind(rnorm(n, c3[1]), rnorm(n, c3[2]), rnorm(n, c3[3]), rnorm(n, c3[4]))
x = rbind(x1, x2, x3)
# Clustering
pdn = PDC(x, 3)
# Results
plot(pdn)
An implementation of probabilistic distance clustering adjusted for cluster size (PDQ), a probabilistic distance clustering algorithm that involves optimizing the PD-clustering criterion. The algorithm can be used on continuous, count, or mixed-type data, setting Euclidean, chi-square, or Gower as the dissimilarity measure.
PDQ(data=NULL,k=2,ini='kmd',dist='euc',cent=NULL, ord=NULL,cat=NULL,bin=NULL,cont=NULL,w=NULL)
data |
A matrix or data frame such that rows correspond to observations and columns correspond to variables. |
k |
A numerical parameter giving the number of clusters. |
ini |
A parameter that selects center starts. Options available are random ("random"), k-medoids ("kmd", the default), user-supplied centers ("center", the user inputs the centers), and k-modes ("kmode", for categorical data sets). |
dist |
A parameter that selects the distance measure used. Options available are Euclidean ("euc"), Gower ("gower"), and chi-square ("chi"). |
cent |
User inputted centers if ini is set to "center". |
ord |
column indices of the x matrix indicating which columns are ordinal variables. |
cat |
column indices of the x matrix indicating which columns are categorical variables. |
bin |
column indices of the x matrix indicating which columns are binary variables. |
cont |
column indices of the x matrix indicating which columns are continuous variables. |
w |
A numerical vector of the same length as the number of columns of the data, containing the variable weights when using the Gower distance; equal weights by default. |
A class FPDclustering list with components
label |
A vector of integers indicating the cluster membership for each unit |
centers |
A matrix of cluster centers |
probability |
A matrix of probability of each point belonging to each cluster |
JDF |
The value of the Joint distance function |
iter |
The number of iterations |
jdfvector |
The collection of JDF values at each iteration |
data |
the data set |
Cristina Tortora and Noe Vidales
Iyigun, Cem, and Adi Ben-Israel. Probabilistic distance clustering adjusted for cluster size. Probability in the Engineering and Informational Sciences 22.4 (2008): 603-621. doi.org/10.1017/S0269964808000351.
Tortora and Palumbo. Clustering mixed-type data using a probabilistic distance algorithm. submitted.
# Mixed-type data
library(mvtnorm)  # for rmvnorm()
sig = matrix(0.7, 4, 4)
diag(sig) = 1  # create a correlation matrix
x1 = rmvnorm(200, c(0,0,3,3))               # cluster 1
x2 = rmvnorm(200, c(4,4,6,6), sigma = sig)  # cluster 2
x = rbind(x1, x2)  # data set with 2 clusters
l = c(rep(1,200), rep(2,200))  # creating the labels
x1 = cbind(x1, rbinom(200,4,0.2), rbinom(200,4,0.2))  # categorical variables
x2 = cbind(x2, rbinom(200,4,0.7), rbinom(200,4,0.7))
x = rbind(x1, x2)  # data set
# Performing PDQ
pdq_class <- PDQ(data=x, k=2, ini="random", dist="gower", cont=1:4, cat=5:6)
# Output
table(l, pdq_class$label)
plot(pdq_class)
summary(pdq_class)

# Continuous data example
# Gaussian generated data, no overlap
x <- rmvnorm(100, mean=c(1,5,10), sigma=diag(1,3))
y <- rmvnorm(100, mean=c(4,8,13), sigma=diag(1,3))
data <- rbind(x, y)
# Performing PDQ
pdq1 = PDQ(data, 2, ini="random", dist="euc")
table(rep(c(2,1), each=100), pdq1$label)
Silh(pdq1$probability)
plot(pdq1)
summary(pdq1)

# Gaussian generated data with overlap
x2 <- rmvnorm(100, mean=c(1,5,10), sigma=diag(1,3))
y2 <- rmvnorm(100, mean=c(2,6,11), sigma=diag(1,3))
data2 <- rbind(x2, y2)
# Performing PDQ
pdq2 = PDQ(data2, 2, ini="random", dist="euc")
table(rep(c(1,2), each=100), pdq2$label)
plot(pdq2)
summary(pdq2)
Probability Silhouette plot, Scatterplot up to MaxVar variables, and parallel coordinate plot up to MaxVar variables, for objects of class FPDclustering.
## S3 method for class 'FPDclustering'
plot(x, maxVar = 30, ...)
x |
an object of class FPDclustering |
maxVar |
a scalar indicating the maximum number of variables to display on the parallel plot, 30 by default |
... |
Additional parameters for the function pairs |
Cristina Tortora
Graphical tool to evaluate the clustering partition.
Silh(p)
p |
A matrix of probabilities such that rows correspond to observations and columns correspond to clusters. |
The probabilistic silhouettes are an adaptation of the ones proposed by Menardi (2011), according to the following formula:

dbs_i = log( p(m_i | x_i) / p(m'_i | x_i) ) / max_i | log( p(m_i | x_i) / p(m'_i | x_i) ) |

where m_i is such that x_i belongs to cluster m_i, and m'_i is such that p(m'_i | x_i) is maximum for m'_i different from m_i.
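The computation behind the plot can be sketched as follows (an illustrative reimplementation of the probabilistic silhouettes, not the package code):

```r
# Probabilistic silhouette from a matrix of membership probabilities.
prob_silhouette <- function(p) {
  best   <- apply(p, 1, max)                                    # assigned cluster
  second <- apply(p, 1, function(r) sort(r, decreasing = TRUE)[2])
  dbs <- log(best / second)
  dbs / max(abs(dbs))                                           # normalise by the maximum
}
p <- rbind(c(0.9, 0.1), c(0.6, 0.4), c(0.5, 0.5))
prob_silhouette(p)  # values near 1: well separated; near 0: uncertain assignment
```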
Probabilistic silhouette plot
Cristina Tortora
Menardi G. Density-based Silhouette diagnostics for clustering methods.Statistics and Computing, 21, 295-308, 2011.
## Not run: 
# Asymmetric data set silhouette example (with shape=3).
data('asymmetric3')
x <- asymmetric3[,-1]
fpdas3 = FPDC(x, 4, 3, 3)
Silh(fpdas3$probability)
## End(Not run)

## Not run: 
# Asymmetric data set silhouette example (with shape=20).
data('asymmetric20')
x <- asymmetric20[,-1]
fpdas20 = FPDC(x, 4, 3, 3)
Silh(fpdas20$probability)
## End(Not run)

## Not run: 
# Silhouette example with outliers.
data('outliers')
x <- outliers[,-1]
fpdout = FPDC(x, 4, 4, 3)
Silh(fpdout$probability)
## End(Not run)
A 6-class star data set for star classification with deep learning approaches.
data(Star)
A data frame with 240 observations and 7 variables.
Absolute Temperature (in K)
Relative Luminosity (L/Lo)
Relative Radius (R/Ro)
Absolute Magnitude (Mv)
Star Color (white, Red, Blue, Yellow, yellow-orange, etc.)
Spectral Class (O, B, A, F, G, K, M)
Star Type (Red Dwarf, Brown Dwarf, White Dwarf, Main Sequence, SuperGiants, HyperGiants)
https://www.kaggle.com/deepu1109/star-dataset
data(Star)
Data set collected in 2022 containing 10 variables recorded on a convenience sample of 253 students enrolled in the first year at the University of Naples Federico II and attending an introductory statistics course.
data(Students)
A data frame with 253 observations and 10 variables.
gender, binary
high school type, categorical
prior knowledge of statistics, binary
course modality of attendance (in presence, online, mixed), categorical
parents' education degree, categorical
mathematical prerequisites for psychometrics, continuous
statistical anxiety scale, continuous
relative autonomy index, continuous
self-efficacy, continuous
cognitive competence, continuous
R. Fabbricatore. Latent class analysis for proficiency assessment in higher education: integrating multidimensional latent traits and learning topics. Ph.D. thesis, University of Naples Federico II, 2023
data(Students)
Number of elements per cluster.
## S3 method for class 'FPDclustering'
summary(object, ...)
object |
an object of class FPDclustering |
... |
Additional parameters. |
Cristina Tortora
An implementation of Student-t PD-clustering (TPDC), an extension of PD-clustering adjusted for cluster size that uses a dissimilarity measure based on the multivariate Student-t density.
TPDC(data=NULL,k=2,ini="kmedoids", nr=5,iter=100)
data |
A matrix or data frame such that rows correspond to observations and columns correspond to variables. |
k |
A numerical parameter giving the number of clusters |
ini |
A parameter that selects center starts. Options available are random ("random"), k-medoids ("kmedoids", the default), and PDC ("PDclust"). |
nr |
Number of random starts if ini is "random" |
iter |
Maximum number of iterations |
A class FPDclustering list with components
label |
A vector of integers indicating the cluster membership for each unit |
centers |
A matrix of cluster means |
sigma |
A list of K elements, with the variance-covariance matrix per cluster |
df |
A vector of K degrees of freedom |
probability |
A matrix of probability of each point belonging to each cluster |
JDF |
The value of the Joint distance function |
iter |
The number of iterations |
data |
the data set |
Cristina Tortora and Francesco Palumbo
Tortora C., McNicholas P.D., and Palumbo F. A probabilistic distance clustering algorithm using Gaussian and Student-t multivariate density distributions. SN Computer Science, 1:65, 2020.
C. Rainey, C. Tortora and F.Palumbo. A parametric version of probabilistic distance clustering. In: Greselin F., Deldossi L., Bagnato L., Vichi M. (eds) Statistical Learning of Complex Data. CLADAG 2017. Studies in Classification, Data Analysis, and Knowledge Organization. Springer, Cham, 33-43 2019. doi.org/10.1007/978-3-030-21140-0_4
# Load the data
data(ais)
dataSEL = ais[,c(10,3,5,8)]
# Clustering
res = TPDC(dataSEL, k = 2, ini = "kmedoids")
# Results
table(res$label, ais$sex)
summary(res)
plot(res)
An empirical way of choosing the number of factors for FPDC. The function returns a graph and a table representing the explained variability varying the number of factors.
TuckerFactors(data = NULL, k = 2)
data |
A matrix or data frame such that rows correspond to observations and columns correspond to variables. |
k |
A numerical parameter giving the number of clusters |
A table containing the explained variability varying the number of factors for units (column) and for variables (row) and the corresponding plot
Cristina Tortora
Kiers H., Kinderen A. A fast method for choosing the numbers of components in Tucker3 analysis. British Journal of Mathematical and Statistical Psychology, 56(1), 119-125, 2003.
Kroonenberg P. Applied Multiway Data Analysis. Ebooks Corporation, Hoboken, New Jersey, 2008.
Tortora C., Gettler Summa M., and Palumbo F. Factor PD-clustering. In Lausen et al., editors, Algorithms from and for Nature and Life, Studies in Classification, Data Analysis, and Knowledge Organization, 115-123, 2013. doi:10.1007/978-3-319-00035-0_11.
## Not run: 
# Asymmetric data set example (with shape=3).
data('asymmetric3')
xp = TuckerFactors(asymmetric3[,-1], nc = 4)
## End(Not run)

## Not run: 
# Asymmetric data set example (with shape=20).
data('asymmetric20')
xp = TuckerFactors(asymmetric20[,-1], nc = 4)
## End(Not run)