Title: | Slow Feature Analysis |
---|---|
Description: | Slow Feature Analysis (SFA), ported to R based on 'matlab' implementations of SFA: 'SFA toolkit' 1.0 by Pietro Berkes and 'SFA toolkit' 2.8 by Wolfgang Konen. |
Authors: | Wolfgang Konen <[email protected]>, Martin Zaefferer, Patrick Koch; Bug hunting and testing by Ayodele Fasika, Ashwin Kumar, Prawyn Jebakumar |
Maintainer: | Martin Zaefferer <[email protected]> |
License: | GPL (>= 2) |
Version: | 1.5 |
Built: | 2024-11-01 06:48:17 UTC |
Source: | CRAN |
Slow Feature Analysis
Package: | rSFA |
Type: | Package |
Version: | 1.5 |
Date: | 29.03.2022 |
LazyLoad: | yes |
Given training data X with true labels REALCLASS, add new records to X and REALCLASS, which are noisy copies of the training data.
addNoisyCopies(realclass, x, pars)
realclass |
true class of training data (can be a vector of numerics, integers, or factors) |
x |
a matrix containing the training data |
pars |
list of parameters: |
list res
contains two list entries: realclass and x (including added copies)
Computes the eta value of a signal (slowness)
etaval(x, T = length(x))
x |
The signal to analyze; its columns correspond to different input components. Must be normalized (zero mean, unit variance) |
T |
Time interval |
Returns the eta value of the signal over a time interval T time units long.
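A minimal usage sketch (assuming the rSFA package is installed and loaded; the input signal and its normalization are illustrative):

```r
library(rSFA)

## a slowly varying signal: one full sine period sampled at 1000 points
t <- seq(0, 2*pi, length.out = 1000)
x <- sin(t)
## normalize to zero mean, unit variance, as etaval requires
x <- (x - mean(x)) / sd(x)

## eta value over the whole interval; slower signals yield smaller eta
etaval(x, T = length(x))
```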
Train or apply a Gaussian classifier.
gaussClassifier(gauss, y, realC, method = "train")
gauss |
List created by gaussCreate; also contains further elements. |
y |
K x M matrix where K is the total number of patterns and M is the number of variables used for classification. I.e. each row of y contains the data for one pattern. |
realC |
1 x K matrix with NCLASS distinct real class labels; needed only for method="train". For method="apply", realC is not used and can have any value |
method |
either "train" (default) or "apply" |
list gauss
containing
gauss$predC |
1 x K matrix: the predicted class |
gauss$prob |
K x NCLASS matrix: prob(k,n) is the estimated probability that pattern k belongs to class n |
Create a Gaussian classifier object
gaussCreate(nclass, dimY)
nclass |
number of classes |
dimY |
dimension of the classification data (number of variables) |
list of defaults for the Gaussian classifier
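A sketch of the create/train/apply workflow, inferred from the argument descriptions above (the synthetic two-class data are purely illustrative):

```r
library(rSFA)

## two Gaussian classes in 2 dimensions (synthetic data, 50 patterns each)
set.seed(1)
y <- rbind(matrix(rnorm(100, mean = 0), ncol = 2),
           matrix(rnorm(100, mean = 3), ncol = 2))
realC <- rep(c(1, 2), each = 50)

gauss <- gaussCreate(nclass = 2, dimY = 2)            ## create classifier object
gauss <- gaussClassifier(gauss, y, realC)             ## method = "train" (default)
gauss <- gaussClassifier(gauss, y, realC, method = "apply")
## gauss$predC holds the predicted classes, gauss$prob the class probabilities
```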
Y = sfa1(X) performs linear Slow Feature Analysis on the input data X and returns the output signals Y ordered by increasing temporal variation, i.e. the first signal Y[,1] is the slowest varying one, Y[,2] the next slowest and so on. The input data have to be organized with each variable in a column and each data (time) point in a row, i.e. X(t,i) is the value of variable nr. i at time t.
sfa1(x)
x |
Input data, each column a different variable |
list sfaList
with all learned information, where sfaList$y
contains the outputs
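A short sketch of linear SFA on a toy signal (the input data are illustrative, modeled on the sfa2 example below):

```r
library(rSFA)

## linear SFA on a two-dimensional signal
t <- seq(0, 2*pi, by = 0.01)
x <- data.frame(x1 = sin(t) + 0.5*cos(7*t), x2 = cos(7*t))
sfaList <- sfa1(x)

## sfaList$y holds the output signals, ordered slowest first
plot(t, sfaList$y[, 1], type = "l", main = "slowest linear SFA output")
```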
Create structured list for linear SFA
sfa1Create(sfaRange, axType = "ORD1", regCt = 0)
sfaRange |
number of slowly-varying functions to be kept |
axType |
is the type of derivative approximation to be used, see sfaTimediff |
regCt |
regularization constant, currently not used |
list sfaList
contains all arguments passed into sfa1Create plus
deg |
2 |
This list will be expanded by other SFA functions with further SFA results
Y = sfa2(X) performs expanded Slow Feature Analysis on the input data X and returns the output signals Y ordered by increasing temporal variation, i.e. the first signal Y[,1] is the slowest varying one, Y[,2] the next slowest varying one and so on. The input data have to be organized with each variable in a column and each data (time) point in a row, i.e. X(t,i) is the value of variable i at time t. By default an expansion to the space of 2nd degree polynomials is done, this can be changed by using different functions for xpDimFun and sfaExpandFun.
sfa2( x, method = "SVDSFA", ppType = "PCA", xpDimFun = xpDim, sfaExpandFun = sfaExpand )
x |
input data |
method |
eigenvector calculation method: ="SVDSFA" for singular value decomposition (recommended) or ="GENEIG" for generalized eigenvalues (unstable!). GENEIG is not implemented in the current version, since R lacks an easy option to calculate generalized eigenvalues. |
ppType |
preprocessing type: ="PCA" (principal component analysis) or ="SFA1" (linear sfa) |
xpDimFun |
function to calculate dimension of expanded data |
sfaExpandFun |
function to expand data |
list sfaList
with all SFA information, among them are
y |
a matrix containing the output Y (as described above) |
- |
all input parameters to |
- |
all elements of |
sfa2Step
sfa2Create
sfaExecute
sfa1
## prepare input data for simple demo
t=seq.int(from=0,by=0.011,to=2*pi)
x1=sin(t)+cos(11*t)^2
x2=cos(11*t)
x=data.frame(x1,x2)
## perform sfa2 algorithm with data
res = sfa2(x)
## plot slowest varying function of result
plot(t, res$y[,1],type="l",main="output of the slowest varying function")
## see http://www.scholarpedia.org/article/Slow_feature_analysis#The_algorithm
## for detailed description of this example
'Expanded' SFA means that the input data are expanded into a higher-dimensional
space with the function sfaExpandFun. See sfaExpand
for the default
expansion function.
sfa2Create( ppRange, sfaRange, ppType = "SFA1", axType = "ORD1", regCt = 0, opts = NULL, xpDimFun = xpDim, sfaExpandFun = sfaExpand )
ppRange |
number of dimensions to be kept after the preprocessing step, or a two-number vector with lower and upper dimension number |
sfaRange |
number of slowly-varying functions to be kept |
ppType |
preprocessing type: ="PCA", "PCA2" (principal component analysis) or ="SFA1" (linear sfa) |
axType |
is the type of derivative approximation to be used, see sfaTimediff |
regCt |
regularization constant, currently not used |
opts |
optional list of additional options |
xpDimFun |
Function to calculate dimension of expanded data |
sfaExpandFun |
Function to expand data |
list sfaList
contains all arguments passed into sfa2Create plus
xpRange |
evaluates to |
deg |
2 |
This list will be expanded by other SFA functions with further SFA results
Create an SFA classification model, predict & evaluate on new data (xtst, realcTst).
Author of orig. matlab version: Wolfgang Konen, May 2009 - Jan 2010
See also [Berkes05] Pietro Berkes: Pattern recognition with Slow Feature Analysis.
Cognitive Sciences EPrint Archive (CogPrint) 4104, http://cogprints.org/4104/ (2005)
sfaClassify(x, realclass, xtst = 0, realcTst = 0, opts)
x |
NREC x IDIM, training input data |
realclass |
1 x NREC, training class labels |
xtst |
NTST x IDIM, test input data |
realcTst |
1 x NTST, test class labels |
opts |
list with several parameter settings |
list res
containing
res$errtrn |
1 x 2 matrix: error rate with / w/o SFA on training set |
res$errtst |
1 x 2 matrix: error rate with / w/o SFA on test set |
res$y |
output from SFA when applied to training data |
res$ytst |
output from SFA when applied to test data |
res$predT |
predictions with SFA + GaussClassifier on test set |
res$predX |
predictions w/o SFA (only GaussClassifier) on test set (only if opts$xFilename exists) |
Use an SFA classification model (stored in opts$*Filename), predict & evaluate on new data (xtst, realcTst).
Author of orig. matlab version: Wolfgang Konen, Jan 2011-Mar 2011.
See also [Berkes05] Pietro Berkes: Pattern recognition with Slow Feature Analysis.
Cognitive Sciences EPrint Archive (CogPrint) 4104, http://cogprints.org/4104/ (2005)
sfaClassPredict(xtst, realcTst, opts)
xtst |
NTST x IDIM, test input data |
realcTst |
1 x NTST, test class labels |
opts |
list with several parameter settings |
list res
containing
res$errtst |
1 x 2 matrix: error rate with / w/o SFA on test set |
res$ytst |
output from SFA when applied to test data |
res$predT |
predictions with SFA + GaussClassifier on test set |
res$predX |
predictions w/o SFA (only GaussClassifier) on test set (only if opts$xFilename exists) |
After completion of the learning phase (step="sfa") this function can be used
to apply the learned function to the input data.
The execution is completed in 4 steps:
1. projection on the input principal components (dimensionality reduction)
2. expansion (if necessary)
3. projection on the whitened (expanded) space
4. projection on the slow functions
sfaExecute(sfaList, DATA, prj = NULL, ncomp = NULL)
sfaList |
A list that contains all information about the handled sfa-structure |
DATA |
Input data, each column a different variable |
prj |
If not NULL, the preprocessing step 1 is skipped for SFA2 |
ncomp |
number of learned functions to be used |
matrix DATA
containing the calculated output
Expand a signal in the space of polynomials of degree 2. This is the default expansion function used by rSFA.
sfaExpand(sfaList, DATA)
sfaList |
A list that contains all information about the handled sfa-structure |
DATA |
Input data, each column a different variable |
expanded matrix DATA
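A sketch relating the default expansion to xpDim (the setup follows the sfaStep example elsewhere in this manual; the input signal is illustrative):

```r
library(rSFA)

t <- seq(0, 2*pi, by = 0.01)
x <- as.matrix(data.frame(x1 = sin(t), x2 = cos(2*t)))

## set up an SFA2 structure for 2 input dimensions (as in the sfaStep example)
sfaList <- sfa2Create(ncol(x), xpDim(ncol(x)))
xp <- sfaExpand(sfaList, x)
## the number of expanded columns should equal xpDim(ncol(x))
ncol(xp)
```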
Given the data in arg, expand them nonlinearly in the same way as it was
done in the SFA-object sfaList (expanded dimension M) and search the vector
RCOEF of M constant coefficients, such that the sum of squared residuals
between a given function in time FUNC and the function
R(t) = (v(t) - v0)' * RCOEF, t=1,...,T,
is minimal
sfaNlRegress(sfaList, arg, func)
sfaList |
A list that contains all information about the handled sfa-structure |
arg |
Input data, each column a different variable |
func |
(T x 1) the function to be fitted nonlinearly |
returns a list res
with elements
res$R |
(T x 1) the function fitted by NL-regression |
res$rcoef |
(M x 1) the coefficients for the NL-expanded dimensions |
If the training set is too small, augment it with a parametric bootstrap
sfaPBootstrap(realclass, x, sfaList)
realclass |
true class of training data (can be vector, numerics, integers, factors) |
x |
matrix containing the training data |
sfaList |
list with several parameter settings, e.g. as created by |
a list
containing:
x |
training set, extended to the minimum number of records 1.5*(xpdim+nclass), if necessary |
realclass |
training class labels, extended analogously |
sfaStep() updates the current step of the SFA algorithm. Depending on sfaList$deg
it calls either sfa1Step
or sfa2Step
to do the main work.
See further documentation there
sfaStep(sfaList, arg, step = NULL, method = NULL)
sfaList |
A list that contains all information about the handled sfa-structure |
arg |
Input data, each column a different variable |
step |
Specifies the current SFA step. Must be given in the right sequence:
for SFA1 objects: "preprocessing", "sfa"; for SFA2 objects: "preprocessing", "expansion", "sfa" |
method |
Method to be used: For |
list sfaList
taken from the input, with new information added to this list.
See sfa1Step
or sfa2Step
for details.
sfa1Step
sfa2Step
sfa1Create
sfa2Create
sfaExecute
## Suppose you have divided your training data into two chunks,
## DATA1 and DATA2. Let the number of input dimensions be N. To apply
## SFA on them write:
## Not run:
sfaList = sfa2Create(N,xpDim(N))
sfaList = sfaStep(sfaList, DATA1, "preprocessing")
sfaList = sfaStep(sfaList, DATA2)
sfaList = sfaStep(sfaList, DATA1, "expansion")
sfaList = sfaStep(sfaList, DATA2)
sfaList = sfaStep(sfaList, NULL, "sfa")
output1 = sfaExecute(sfaList, DATA1)
output2 = sfaExecute(sfaList, DATA2)
## End(Not run)
Calculates the first derivative of signal data
sfaTimediff(DATA, axType = "ORD1")
DATA |
The matrix of signals for which the derivative is calculated (one column per signal) |
axType |
Type of interpolation: "ORD1" (default) first order, "SCD" second order, "TRD" third order, "ORD3a" cubic polynomial |
matrix DATA
contains the derivative signals, with the same structure as the input data.
Setting axType to invalid values will lead to first-order interpolation.
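A small sketch (first-order differencing of a sine wave; the input data are illustrative):

```r
library(rSFA)

t <- seq(0, 2*pi, by = 0.01)
DATA <- matrix(sin(t), ncol = 1)

## first-order approximation of the derivative; output has the same
## column structure as the input
d1 <- sfaTimediff(DATA, axType = "ORD1")
## the derivative of sin resembles cos (up to scaling by the step size)
```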
Compute the dimension of a vector expanded in the space of polynomials of 2nd degree.
xpDim(n)
n |
Dimension of input vector |
Dimension of expanded vector
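For polynomials of degree 2, the expanded vector should contain the n linear terms plus the n(n+1)/2 quadratic monomials. This closed form is inferred from the standard quadratic expansion, not stated explicitly in the documentation:

```r
library(rSFA)

n <- 2
xpDim(n)   ## inferred formula: n + n*(n+1)/2, i.e. 5 for n = 2
```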