Title: | Linguistic Fuzzy Logic |
---|---|
Description: | Various algorithms related to linguistic fuzzy logic: mining for linguistic fuzzy association rules, composition of fuzzy relations, performing perception-based logical deduction (PbLD), and forecasting time-series using fuzzy rule-based ensemble (FRBE). The package also contains basic fuzzy-related algebraic functions capable of handling missing values in different styles (Bochvar, Sobocinski, Kleene etc.), computation of Sugeno integrals and fuzzy transform. |
Authors: | Michal Burda [aut, cre] |
Maintainer: | Michal Burda <[email protected]> |
License: | GPL-3 |
Version: | 2.2.1 |
Built: | 2024-11-22 06:26:08 UTC |
Source: | CRAN |
Take a character vector of consequent names, a numeric vector representing the degree of consequents' firing and a matrix that models fuzzy sets corresponding to the consequent names, and perform an aggregation of the consequents into a resulting fuzzy set.
aggregateConsequents(
  conseq,
  degrees,
  partition,
  firing = lukas.residuum,
  aggreg = pgoedel.tnorm
)
conseq |
A character vector of consequents. Each value in the vector
must correspond to a name of some column of the partition matrix. |
degrees |
A numeric vector of membership degrees at which the
corresponding consequents (see the conseq argument) fire. |
partition |
A matrix of membership degrees that describes the meaning of
the consequents in vector conseq: each column corresponds to a consequent
name and each row to an element of the universe. |
firing |
A two-argument function used to compute the resulting truth value of the consequent.
The function is evaluated for each consequent in conseq. |
aggreg |
An aggregation function to be used to combine the fuzzy sets resulting from
firing the consequents with the firing function. |
This function is typically used within an inference mechanism, after a set of
firing rules is determined and membership degrees of their antecedents are
computed, to combine the consequents of the firing rules into a resulting
fuzzy set. The result of this function is then typically defuzzified
(see defuzz()
) to obtain a crisp result of the inference.
The function assumes a set of rules with antecedents firing at degrees given in
degrees
and with consequents in conseq
. The meaning of the consequents is
modeled with fuzzy sets whose membership degree values are captured in the
partition
matrix.
With default values of firing
and aggreg
parameters, the function
computes a fuzzy set that results from a conjunction (Goedel minimum t-norm)
of all provided implicative (Lukasiewicz residuum) rules.
In detail, the function first computes the fuzzy set of each fired consequent
by calling part[i] <- firing(degrees[i], partition[, conseq[i]])
for the i-th consequent, and the results are aggregated using the aggreg
parameter: aggreg(part[1], part[2], ...). In order to aggregate consequents
in a Mamdani-Assilian fashion, set firing
to pgoedel.tnorm()
and aggreg
to pgoedel.tconorm()
.
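As an illustration, the default behavior described above can be sketched in plain base R, without the lfl package; lukas_residuum and aggregate_sketch below are hypothetical stand-ins for the package's lukas.residuum and its aggregation loop, not the actual implementation:

```r
# A minimal base-R sketch of the default aggregation: Lukasiewicz
# residuum as the firing function, Goedel minimum (pmin) as aggregation.
lukas_residuum <- function(x, y) pmin(1, 1 - x + y)

aggregate_sketch <- function(conseq, degrees, partition) {
  if (length(conseq) == 0)
    return(rep(1, nrow(partition)))          # empty rule base => vector of 1s
  parts <- lapply(seq_along(conseq), function(i)
    lukas_residuum(degrees[i], partition[, conseq[i]]))
  Reduce(pmin, parts)                        # Goedel t-norm aggregation
}

partition <- cbind(a = c(0, 0.5, 1), b = c(1, 0.5, 0))
aggregate_sketch(c('a', 'b'), c(0.5, 0.8), partition)
# equals pmin(1, partition[, 'a'] + 0.5, partition[, 'b'] + 0.2)
```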
A vector of membership degrees of fuzzy set elements that correspond
to rows in the partition
matrix. If an empty vector of consequents is
provided, a vector of 1s is returned. The length of the resulting vector
equals the number of rows of the partition
matrix.
Michal Burda
fire()
, perceive()
, defuzz()
, fcut()
, lcut()
# create a partition matrix
partition <- matrix(c(0:10/10, 10:0/10, rep(0, 5),
                      rep(0, 5), 0:10/10, 10:0/10,
                      0:12/12, 1, 12:0/12),
                    byrow=FALSE, ncol=3)
colnames(partition) <- c('a', 'b', 'c')

# the result of aggregation is equal to:
# pmin(1, partition[, 1] + (1 - 0.5), partition[, 2] + (1 - 0.8))
aggregateConsequents(c('a', 'b'), c(0.5, 0.8), partition)
Compute triangular norms (t-norms), triangular conorms (t-conorms), residua, bi-residua, and negations.
algebra(name, stdneg = FALSE, ...)

is.algebra(a)

goedel.tnorm(...)
lukas.tnorm(...)
goguen.tnorm(...)
pgoedel.tnorm(...)
plukas.tnorm(...)
pgoguen.tnorm(...)

goedel.tconorm(...)
lukas.tconorm(...)
goguen.tconorm(...)
pgoedel.tconorm(...)
plukas.tconorm(...)
pgoguen.tconorm(...)

goedel.residuum(x, y)
lukas.residuum(x, y)
goguen.residuum(x, y)

goedel.biresiduum(x, y)
lukas.biresiduum(x, y)
goguen.biresiduum(x, y)

invol.neg(x)
strict.neg(x)
name |
The name of the algebra to be created. Must be one of: "goedel", "lukasiewicz", "goguen" (or an unambiguous abbreviation). |
stdneg |
(Deprecated.) |
... |
For t-norms and t-conorms, these arguments are numeric vectors
of values to compute t-norms or t-conorms from. Values outside the
[0, 1] interval cause an error. |
a |
An object to be checked if it is a valid algebra (i.e. a list
returned by the algebra function). |
x |
Numeric vector of values to compute a residuum or bi-residuum from.
Values outside the [0, 1] interval cause an error. |
y |
Numeric vector of values to compute a residuum or bi-residuum from.
Values outside the [0, 1] interval cause an error. |
goedel.tnorm
, lukas.tnorm
, and goguen.tnorm
compute the
Goedel, Lukasiewicz, and Goguen triangular norms (t-norms), respectively,
from all values in the arguments. If the arguments are vectors, all of their
elements are combined together so that a numeric vector of length 1 is returned.
pgoedel.tnorm
, plukas.tnorm
, and pgoguen.tnorm
compute
the same t-norms, but in an element-wise manner: the first elements of all
arguments are used to compute the first value of the result, then the
second elements (with shorter vectors being recycled), and so on, so that
the result is a vector of values.
goedel.tconorm
, lukas.tconorm
, goguen.tconorm
, are
similar to the previously mentioned functions, except that they compute
triangular conorms (t-conorms). pgoedel.tconorm
,
plukas.tconorm
, and pgoguen.tconorm
are their element-wise alternatives.
goedel.residuum
, lukas.residuum
, and goguen.residuum
compute residua (i.e. implications) and goedel.biresiduum
,
lukas.biresiduum
, and goguen.biresiduum
compute bi-residua. Residua and
bi-residua are computed in an element-wise manner, for each corresponding
pair of values in x
and y
arguments.
invol.neg
and strict.neg
compute the involutive and strict
negation, respectively.
Let x, y be values from the interval [0, 1]. The realized functions
can be defined as follows:
Goedel t-norm: min(x, y);
Goguen t-norm: x * y;
Lukasiewicz t-norm: max(0, x + y - 1);
Goedel t-conorm: max(x, y);
Goguen t-conorm: x + y - x * y;
Lukasiewicz t-conorm: min(1, x + y);
Goedel residuum (standard Goedel implication): 1 if x <= y
and y
otherwise;
Goguen residuum (implication): 1 if x <= y
and y / x
otherwise;
Lukasiewicz residuum (standard Lukasiewicz implication): 1 if x <= y
and 1 - x + y
otherwise;
Involutive negation: 1 - x;
Strict negation: 1 if x = 0
and 0
otherwise.
Bi-residuum is derived from residuum
as follows: B(x, y) = inf(R(x, y), R(y, x)),
where inf is the operation of infimum, which for all three algebras
corresponds to the min
operation.
The arguments have to be numbers from the interval [0, 1]. Values
outside that range cause an error. NaN values are treated as NA.
If some argument is NA or NaN, the result is NA. For other ways of handling missing values, see algebraNA.
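For reference, the operations defined above can be written down directly in base R. These one-line sketches mirror the definitions only; unlike the package functions, they perform no input validation and no special NA handling:

```r
# Element-wise sketches of the three algebras' operations (no NA handling)
goedel.t <- function(x, y) pmin(x, y)
goguen.t <- function(x, y) x * y
lukas.t  <- function(x, y) pmax(0, x + y - 1)
goedel.s <- function(x, y) pmax(x, y)
goguen.s <- function(x, y) x + y - x * y
lukas.s  <- function(x, y) pmin(1, x + y)
goedel.r <- function(x, y) ifelse(x <= y, 1, y)
goguen.r <- function(x, y) ifelse(x <= y, 1, y / x)
lukas.r  <- function(x, y) pmin(1, 1 - x + y)
invol.n  <- function(x) 1 - x
strict.n <- function(x) ifelse(x == 0, 1, 0)

# bi-residuum derived from a residuum r via the infimum (minimum)
bires <- function(r) function(x, y) pmin(r(x, y), r(y, x))

lukas.r(0.7, 0.3)          # approximately 0.6
bires(goedel.r)(0.2, 0.9)  # 0.2
```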
Selection of a t-norm may serve as a basis for definition of other operations.
From the t-norm, the operation of a residual implication may be defined, which
in turn allows the definition of a residual negation. If the residual negation
is not involutive, the involutive negation is often added as a new operation
and together with the t-norm can be used to define the t-conorm. Therefore,
the algebra
function returns a named list of operations derived from the selected
Goedel, Goguen, or Lukasiewicz t-norm. Concretely:
algebra("goedel")
: returns the strict negation as the residual negation,
the involutive negation, and also the Goedel t-norm, t-conorm, residuum, and bi-residuum;
algebra("goguen")
: returns the strict negation as the residual negation,
the involutive negation, and also the Goguen t-norm, t-conorm, residuum, and bi-residuum;
algebra("lukasiewicz")
: returns involutive negation as both residual and involutive
negation, and also the Lukasiewicz t-norm, t-conorm, residuum, and bi-residuum.
Moreover, algebra
returns the supremum and infimum functions computed as maximum and minimum,
respectively.
is.algebra
tests whether the given a
argument is a valid
algebra, i.e. a list returned by the algebra
function.
Functions for t-norms and t-conorms (such as goedel.tnorm
)
return a numeric vector of size 1 that is the result of the appropriate
t-norm or t-conorm applied on all values of all arguments.
Element-wise versions of t-norms and t-conorms (such as pgoedel.tnorm
)
return a vector of results after applying the appropriate t-norm or t-conorm
on the arguments in an element-wise (i.e. by indices) way. The
resulting vector has the length of the longest argument (shorter arguments
are recycled).
Residua and bi-residua functions return a numeric vector of length of the longest argument (shorter argument is recycled).
strict.neg
and invol.neg
compute negations and return a
numeric vector of the same size as the argument x
.
algebra
returns a list of functions of the requested algebra:
"n"
(residual negation), "ni"
(involutive negation), "t"
(t-norm),
"pt"
(element-wise t-norm),
"c"
(t-conorm), "pc"
(element-wise t-conorm), "r"
(residuum),
"b"
(bi-residuum), "s"
(supremum),
"ps"
(element-wise supremum), "i"
(infimum), and
"pi"
(element-wise infimum).
For Lukasiewicz algebra, the elements "n"
and "ni"
are the same, i.e.
the invol.neg
function. For Goedel and Goguen algebra, "n"
(the residual
negation) equals strict.neg
and "ni"
(the involutive negation) equals
invol.neg
.
"s"
, "ps"
, "i"
, "pi"
are the same for each type of algebra:
goedel.tconorm
, pgoedel.tconorm
, goedel.tnorm
, and pgoedel.tnorm
.
Michal Burda
# direct and element-wise version of functions
goedel.tnorm(c(0.3, 0.2, 0.5), c(0.8, 0.1, 0.5))   # 0.1
pgoedel.tnorm(c(0.3, 0.2, 0.5), c(0.8, 0.1, 0.5))  # c(0.3, 0.1, 0.5)

# algebras
x <- runif(10)
y <- runif(10)
a <- algebra('goedel')
a$n(x)      # residual negation
a$ni(x)     # involutive negation
a$t(x, y)   # t-norm
a$pt(x, y)  # element-wise t-norm
a$c(x, y)   # t-conorm
a$pc(x, y)  # element-wise t-conorm
a$r(x, y)   # residuum
a$b(x, y)   # bi-residuum
a$s(x, y)   # supremum
a$ps(x, y)  # element-wise supremum
a$i(x, y)   # infimum
a$pi(x, y)  # element-wise infimum

is.algebra(a)  # TRUE
Given a list of rules or an instance of the S3 farules()
class,
the function returns a list of their antecedents (i.e.
left-hand side of rules).
antecedents(rules)
rules |
Either a list of character vectors or an object of class farules. |
This function assumes rules
to be a valid farules()
object or
a list of character vectors where
the first element of each vector is a consequent part and the
rest is an antecedent part of rules. Function returns a list of
antecedents.
A list of character vectors.
Michal Burda
consequents()
, farules()
, searchrules()
rules <- list(c('a', 'b', 'c'), c('d'), c('a', 'e'))
antecedents(rules)
Convert an instance of the farules()
S3 class into a data frame.
An empty farules()
object is converted into an empty data.frame()
.
## S3 method for class 'farules' as.data.frame(x, ...)
x |
An instance of class farules. |
... |
Unused. |
A data frame of statistics of the rules that are stored in the given
farules()
object. Row names of the resulting data frame are in
the form: A1 & A2 & ... & An => C
, where Ai
are antecedent
predicates and C
is a consequent. An empty farules()
object
is converted into an empty data.frame()
object.
Michal Burda
Convert an object of the fsets
class into a matrix or a data frame.
This function converts an instance of S3 class fsets into a
matrix or a data frame. The vars()
and specs()
attributes
of the original object are deleted.
## S3 method for class 'fsets' as.data.frame(x, ...) ## S3 method for class 'fsets' as.matrix(x, ...)
x |
An instance of class fsets to be converted |
... |
arguments further passed to |
A numeric matrix or data frame of membership degrees.
Michal Burda
Take a sequence of instances of the S3 class farules()
and combine them into a single
object. An error is thrown if some argument does not inherit from the farules()
class.
## S3 method for class 'farules' c(..., recursive = FALSE)
... |
A sequence of objects of class farules. |
recursive |
This argument currently has no function and is added here
only for compatibility with the generic c() function. |
An object of class farules()
that is created by merging the
arguments together, i.e. by concatenating the rules and row-binding the
statistics of given objects.
Michal Burda
ori1 <- farules(rules=list(letters[1:3], letters[2:5]),
                statistics=matrix(runif(16), nrow=2))
ori2 <- farules(rules=list(letters[4], letters[3:8]),
                statistics=matrix(runif(16), nrow=2))
res <- c(ori1, ori2)
print(res)
Take a sequence of objects of class 'fsets' and combine them by columns.
This version of cbind takes care of the vars()
and specs()
attributes of the arguments and merges them to the result. If some argument
does not inherit from the 'fsets' class, an error is thrown.
## S3 method for class 'fsets' cbind(..., deparse.level = 1, warn = TRUE)
... |
A sequence of objects of class 'fsets' to be merged by columns. |
deparse.level |
This argument currently has no function and is added
here only for compatibility with the generic cbind() function. |
warn |
Whether to issue a warning when combining two fsets objects that share the same vars, since the specs of the result may not be accurate. |
The vars()
attribute is merged by concatenating the vars()
attributes
of each argument. Also the specs()
attributes of the arguments are merged together.
An object of class 'fsets' that is created by merging the arguments
by columns. Also the arguments' attributes vars()
and specs()
are merged together.
Michal Burda
vars()
, specs()
, fcut()
, lcut()
d1 <- fcut(CO2[, 1:2])
d2 <- fcut(CO2[, 3:4], breaks=list(conc=1:4*1000/4))
r <- cbind(d1, d2)
print(colnames(d1))
print(colnames(d2))
print(colnames(r))
print(vars(d1))
print(vars(d2))
print(vars(r))
print(specs(d1))
print(specs(d2))
print(specs(r))
Composition of Fuzzy Relations
compose(
  x,
  y,
  e = NULL,
  alg = c("goedel", "goguen", "lukasiewicz"),
  type = c("basic", "sub", "super", "square"),
  quantifier = NULL,
  sorting = sort
)
x |
A first fuzzy relation to be composed. It must be a numeric matrix
with values within the [0, 1] interval. |
y |
A second fuzzy relation to be composed. It must be a numeric matrix
with values within the [0, 1] interval. |
e |
Deprecated. An excluding fuzzy relation. If not NULL,
it must be a numeric matrix with dimensions equal to the dimensions of y. |
alg |
An algebra to be used for composition. It must be one of
"goedel" (default), "goguen", or "lukasiewicz". |
type |
A type of composition to be performed. It must be one of
"basic" (default), "sub", "super", or "square". |
quantifier |
Deprecated. If not NULL, it must be a function taking a single
argument, a vector of relative cardinalities, that would be translated into
membership degrees. A result of the quantifier() function. |
sorting |
Deprecated. Sorting function used within quantifier application. The given function
must sort the membership degrees and allow the |
The function composes a fuzzy relation x
(i.e. a numeric matrix of size
n x k) with a fuzzy relation y
(i.e. a numeric matrix of size
k x m) and possibly with the deprecated use of an exclusion fuzzy relation
e
(i.e. a numeric matrix of the same size as y).
The style of composition is determined by the algebra alg
, the
composition type type
, and possibly also by a deprecated quantifier
.
This function performs four main composition types, the basic composition (
also known as direct product), the Bandler-Kohout subproduct (also subdirect
product), the Bandler-Kohout superproduct (also supdirect product), and finally,
the Bandler-Kohout square product. More complicated composition operations
may be performed by using the mult()
function and/or by combining multiple
composition results with the algebra()
operations.
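For the Goedel algebra and the basic type, the composition reduces to the classical sup-min matrix product. It can be sketched in base R as follows (sup_min_compose is an illustrative name, not a package function):

```r
# Sup-min composition sketch: res[i, j] = max over k of min(x[i, k], y[k, j]);
# compose() generalizes this to other algebras and composition types.
sup_min_compose <- function(x, y) {
  stopifnot(ncol(x) == nrow(y))
  res <- matrix(0, nrow(x), ncol(y))
  for (i in seq_len(nrow(x)))
    for (j in seq_len(ncol(y)))
      res[i, j] <- max(pmin(x[i, ], y[, j]))
  res
}

x <- matrix(c(0.1, 0.6, 1, 0), nrow = 2)
y <- matrix(c(0.9, 1, 0.3, 0.7), nrow = 2)
sup_min_compose(x, y)
```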
A matrix with n rows and m columns, where n is the
number of rows of x
and m is the number of columns of
y
.
Michal Burda
algebra()
, mult()
, lingexpr()
R <- matrix(c(0.1, 0.6, 1, 0, 0, 0,
              0, 0.3, 0.7, 0.9, 1, 1,
              0, 0, 0.6, 0.8, 1, 0,
              0, 1, 0.5, 0, 0, 0,
              0, 0, 1, 1, 0, 0), byrow=TRUE, nrow=5)
S <- matrix(c(0.9, 1, 0.9, 1,
              1, 1, 1, 1,
              0.1, 0.2, 0, 0.2,
              0, 0, 0, 0,
              0.7, 0.6, 0.5, 0.4,
              1, 0.9, 0.7, 0.6), byrow=TRUE, nrow=6)
RS <- matrix(c(0.6, 0.6, 0.6, 0.6,
               1, 0.9, 0.7, 0.6,
               0.7, 0.6, 0.5, 0.4,
               1, 1, 1, 1,
               0.1, 0.2, 0, 0.2), byrow=TRUE, nrow=5)
compose(R, S, alg='goedel', type='basic')  # should be equal to RS
Given a list of rules or an instance of the S3 farules()
class,
the function returns a list of their consequents (i.e.
right-hand side of rules).
consequents(rules)
rules |
Either a list of character vectors or an object of class farules. |
This function assumes rules
to be a valid farules()
object or
a list of character vectors where
the first element of each vector is a consequent part and the
rest is an antecedent part of rules. Function returns a list of
consequents.
A list of character vectors.
Michal Burda
antecedents()
, farules()
, searchrules()
rules <- list(c('a', 'b', 'c'), c('d'), c('a', 'e'))
consequents(rules)
unlist(consequents(rules))  # as vector
A context describes a range of allowed values for a data column.
ctx3(
  low = 0,
  center = low + (high - low) * relCenter,
  high = 1,
  relCenter = 0.5
)

ctx3bilat(
  negMax = -1,
  negCenter = origin + (negMax - origin) * relCenter,
  origin = 0,
  center = origin + (max - origin) * relCenter,
  max = 1,
  relCenter = 0.5
)

ctx5(
  low = 0,
  lowerCenter = mean(c(low, center)),
  center = low + (high - low) * relCenter,
  upperCenter = mean(c(center, high)),
  high = 1,
  relCenter = 0.5
)

ctx5bilat(
  negMax = -1,
  negUpperCenter = mean(c(negCenter, negMax)),
  negCenter = origin + (negMax - origin) * relCenter,
  negLowerCenter = mean(c(origin, negCenter)),
  origin = 0,
  lowerCenter = mean(c(origin, center)),
  center = origin + (max - origin) * relCenter,
  upperCenter = mean(c(center, max)),
  max = 1,
  relCenter = 0.5
)

as.ctx3(x)
## S3 method for class 'ctx3'
as.ctx3(x)
## S3 method for class 'ctx3bilat'
as.ctx3(x)
## S3 method for class 'ctx5'
as.ctx3(x)
## S3 method for class 'ctx5bilat'
as.ctx3(x)
## Default S3 method:
as.ctx3(x)

as.ctx3bilat(x)
## S3 method for class 'ctx3bilat'
as.ctx3bilat(x)
## S3 method for class 'ctx3'
as.ctx3bilat(x)
## S3 method for class 'ctx5'
as.ctx3bilat(x)
## S3 method for class 'ctx5bilat'
as.ctx3bilat(x)
## Default S3 method:
as.ctx3bilat(x)

as.ctx5(x)
## S3 method for class 'ctx5'
as.ctx5(x)
## S3 method for class 'ctx3'
as.ctx5(x)
## S3 method for class 'ctx3bilat'
as.ctx5(x)
## S3 method for class 'ctx5bilat'
as.ctx5(x)
## Default S3 method:
as.ctx5(x)

as.ctx5bilat(x)
## S3 method for class 'ctx5bilat'
as.ctx5bilat(x)
## S3 method for class 'ctx3'
as.ctx5bilat(x)
## S3 method for class 'ctx3bilat'
as.ctx5bilat(x)
## S3 method for class 'ctx5'
as.ctx5bilat(x)
## Default S3 method:
as.ctx5bilat(x)

is.ctx3(x)
is.ctx3bilat(x)
is.ctx5(x)
is.ctx5bilat(x)
low |
Lowest value of an unilateral context. |
center |
A positive middle value of a bilateral context, or simply a middle value of an unilateral context. |
high |
Highest value of an unilateral context. |
relCenter |
A relative quantity used to compute the center if it is not given explicitly. |
negMax |
Lowest negative value of a bilateral context. |
negCenter |
A negative middle value. |
origin |
Origin, i.e. the initial point of the bilateral context. It is typically a value of zero. |
max |
Highest value of a bilateral context. |
lowerCenter |
A typical positive value between origin and center. |
upperCenter |
A typical positive value between center and maximum. |
negUpperCenter |
A typical negative value between negCenter and negMax. |
negLowerCenter |
A typical negative value between origin and negCenter. |
x |
A value to be examined or converted. For the as.ctx* functions, it may
also be a numeric vector of a size corresponding to the context type (see below). |
A context describes a range of allowed values for a data column. For that, only the borders of the interval, i.e. minimum and maximum, are usually needed, but we use contexts to hold more additional information that is crucial for the construction of linguistic expressions.
Currently, four different contexts are supported that determine the types of
possible linguistic expressions, as constructed with lingexpr()
.
Unilateral or bilateral context is allowed in the variants of trichotomy or
pentachotomy. Trichotomy distinguishes three points in the interval: the
lowest value, highest value, and center. Pentachotomy adds lower center and
upper center to them. As opposed to the unilateral context, the bilateral
context handles negative values explicitly. That is, a bilateral context
expects some middle point, the origin (usually 0), around which the positive
and negative values are placed.
Concretely, the type of the context determines the allowed atomic expressions as follows:
ctx3
: trichotomy (low, center, high) enables atomic expressions:
small, medium, big;
ctx5
: pentachotomy (low, lowerCenter, center, upperCenter, high) enables
atomic expressions: small, lower medium, medium, upper medium, big;
ctx3bilat
: bilateral trichotomy (negMax, negCenter, origin, center, max)
enables atomic expressions: negative big, negative medium, negative small,
zero, small, medium, big;
ctx5bilat
: bilateral pentachotomy (negMax, negUpperCenter, negCenter,
negLowerCenter, origin, lowerCenter, center, upperCenter, max) enables atomic
expressions: negative big, negative upper medium, negative medium, negative
lower medium, negative small, zero, small, lower medium, medium, upper
medium, big.
The as.ctx*
functions return an instance of the appropriate class. The
functions perform the conversion so that missing points of the new context
are computed from the old context that is being transformed. In the
following table, rows represent compatible values of the different context
types:
ctx3   | ctx5        | ctx3bilat | ctx5bilat      |
       |             | negMax    | negMax         |
       |             |           | negUpperCenter |
       |             | negCenter | negCenter      |
       |             |           | negLowerCenter |
low    | low         | origin    | origin         |
       | lowerCenter |           | lowerCenter    |
center | center      | center    | center         |
       | upperCenter |           | upperCenter    |
high   | high        | max       | max            |
The as.ctx*
conversion is performed by replacing values by rows, as
indicated in the table above. When converting from a context with less
points to a context with more points (e.g. from unilateral to bilateral, or
from trichotomy to pentachotomy), missing points are computed as follows:
center
is computed as a mean of origin
(or low
) and max
(or high
).
lowerCenter
is computed as a mean of origin
(or low
) and center
.
upperCenter
is computed as a mean of max
(or high
) and center
.
negative points (such as negMax
, negCenter
etc.) are computed
symmetrically around origin
to the corresponding positive points.
The as.ctx*
functions also allow the parameter to be a numeric
vector of size equal to the number of points required for the given context
type, i.e. 3 (ctx3
), 5 (ctx3bilat
, ctx5
), or 9 (ctx5bilat
).
ctx*
and as.ctx*
return an instance of the appropriate
class. is.ctx*
returns TRUE
or FALSE
.
Michal Burda
minmax()
, lingexpr()
, horizon()
, hedge()
, fcut()
, lcut()
ctx3(low=0, high=10)
as.ctx3bilat(ctx3(low=0, high=10))
A list of the parameters that define the shape of the hedges.
defaultHedgeParams
An object of class list
of length 9.
Take a discretized fuzzy set (i.e. a vector of membership degrees and a vector of the numeric values to which these degrees correspond) and perform a selected type of defuzzification, i.e. conversion of the fuzzy set into a single crisp value.
defuzz(
  degrees,
  values,
  type = c("mom", "fom", "lom", "dee", "cog", "expun", "expw1", "expw2")
)
degrees |
A fuzzy set in the form of a numeric vector of membership
degrees of the values provided in the values argument. |
values |
A universe for the fuzzy set. |
type |
Type of the requested defuzzification method. The possibilities are:
|
The function converts the input fuzzy set into a crisp value. The definition of
the input fuzzy set is provided by the arguments degrees
and
values
. These arguments should be numeric vectors of the same length,
the former containing membership degrees in the interval [0, 1] and
the latter containing the corresponding crisp values: i.e.,
values[i]
has the membership degree degrees[i]
.
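For instance, the 'mom' (mean of maxima) method can be sketched in base R as follows (mom_defuzz is a hypothetical helper, not the package API):

```r
# Mean-of-maxima sketch: average the universe values at which the
# membership degree reaches its maximum.
mom_defuzz <- function(degrees, values) {
  mean(values[degrees == max(degrees)])
}

mom_defuzz(c(0, 0, 0, 0.1, 0.3, 0.9, 0.9, 0.9, 0.2, 0), 1:10)  # 7
```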
A defuzzified value.
Michal Burda
fire()
, aggregateConsequents()
, perceive()
, pbld()
, fcut()
, lcut()
# returns mean of maxima, i.e., mean of 6, 7, 8
defuzz(c(0, 0, 0, 0.1, 0.3, 0.9, 0.9, 0.9, 0.2, 0), 1:10, type='mom')
If both left
and right
equal "none"
, the function returns a vector of n
values from x
that divide the range of values in x
into n - 1
equidistant intervals.
equidist(
  x,
  n,
  left = c("infinity", "same", "none"),
  right = c("infinity", "same", "none")
)
x |
A numeric vector of input values |
n |
The number of breaks to be returned. |
left |
The left border of the returned vector of breaks: one of
"infinity" (default), "same", or "none". |
right |
The right border of the returned vector of breaks: one of
"infinity" (default), "same", or "none". |
If the left
(resp. right
) argument equals "infinity"
, -Inf
(resp. Inf
) is prepended
(resp. appended) to the result. If it equals "same"
, the first (resp. last) value is doubled.
Such functionality is beneficial when using the result of this function with e.g. the fcut()
function:
an Inf
value at the beginning (resp. end) of the vector of breaks means that the fuzzy set
partition starts (resp. ends) with a fuzzy set whose kernel goes to negative (resp. positive) infinity; a doubled
value at the beginning (resp. end) results in a half-cut (trimmed) fuzzy set.
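A rough base-R sketch of this behavior (equidist_sketch is a hypothetical stand-in for the package's equidist, shown only to illustrate the border handling):

```r
# n equidistant breaks over the range of x, with optional border handling:
# "infinity" prepends/appends -Inf/Inf, "same" doubles the first/last value.
equidist_sketch <- function(x, n,
                            left = c("infinity", "same", "none"),
                            right = c("infinity", "same", "none")) {
  left <- match.arg(left)
  right <- match.arg(right)
  br <- seq(min(x), max(x), length.out = n)  # n breaks, n - 1 equal intervals
  if (left == "infinity")  br <- c(-Inf, br)
  if (left == "same")      br <- c(br[1], br)
  if (right == "infinity") br <- c(br, Inf)
  if (right == "same")     br <- c(br, br[length(br)])
  br
}

equidist_sketch(0:10, 3, left = "none", right = "none")  # 0 5 10
equidist_sketch(0:10, 3)  # -Inf 0 5 10 Inf
```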
A vector of equidistant breaks, which can be used e.g. in fcut()
Michal Burda
If both left
and right
equal "none"
, the function returns a vector of n
values from x
that divide the range of values in x
into n - 1
intervals containing (approximately) equal numbers of observations
(i.e. equifrequent intervals). If the left
(resp. right
)
argument equals "infinity"
, -Inf
(resp. Inf
) is prepended (resp. appended) to the result. If
it equals "same"
, the first (resp. last) value is doubled. See fcut()
for what such vectors
mean.
equifreq(
  x,
  n,
  left = c("infinity", "same", "none"),
  right = c("infinity", "same", "none")
)
x |
A numeric vector of input values |
n |
The number of breaks to be returned. |
left |
The left border of the returned vector of breaks: one of
"infinity" (default), "same", or "none". |
right |
The right border of the returned vector of breaks: one of
"infinity" (default), "same", or "none". |
If the left
(resp. right
) argument equals to "infinity"
, -Inf
(resp. Inf
) is prepended
(resp. appended) to the result. If it equals to "same"
, the first (resp. last) value is doubled.
Such functionality is beneficial if using the result of this function with e.g. the fcut()
function:
Inf
values at the beginning (resp. at the end) of the vector of breaks means that the fuzzy set
partition starts with a fuzzy set with kernel going to negative (resp. positive) infinity; the doubled
value at the beginning (resp. end) results in half-cut (trimmed) fuzzy set.
A vector of equifrequent breaks
Michal Burda
Take a FRBE forecast and compare it with real values using arbitrary error function.
evalfrbe(fit, real, error = c("smape", "mase", "rmse"))
fit |
A FRBE model of class frbe. |
real |
A numeric vector of real (known) values. The vector must
correspond to the values being forecasted, i.e. its length must be the same
as the horizon forecasted by the frbe() function. |
error |
Error measure to be computed. It can be either Symmetric Mean
Absolute Percentage Error ( |
Take an FRBE forecast and compare it with real values by evaluating a given error measure. The FRBE forecast should be made for a horizon equal to the length of the vector of real values.
The function returns a data frame with a single row and with columns corresponding to the errors of the individual forecasting methods from which the FRBE is computed. In addition, a column "avg" is added with the error of the simple average of the individual forecasting methods, and a column "frbe" with the error of the FRBE forecasts.
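Of the supported error measures, SMAPE can be sketched as follows (a language-neutral Python illustration of the usual definition; the package's exact formula may differ, e.g. in how zero denominators are treated):

```python
# Sketch of the Symmetric Mean Absolute Percentage Error (SMAPE), one of
# the error measures selectable via the `error` argument: the mean of
# |real - forecast| scaled by the average magnitude of the two values.
def smape(forecast, real):
    terms = [abs(r - f) / ((abs(r) + abs(f)) / 2)
             for f, r in zip(forecast, real)]
    return sum(terms) / len(terms)
```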
Michal Burda
Štěpnička, M., Burda, M., Štěpničková, L. Fuzzy Rule Base Ensemble Generated from Data by Linguistic Associations Mining. FUZZY SET SYST. 2015.
frbe(), smape(), mase(), rmse()
# prepare data (from the forecast package)
library(forecast)
horizon <- 10
train <- wineind[-1 * (length(wineind)-horizon+1):length(wineind)]
test <- wineind[(length(wineind)-horizon+1):length(wineind)]
f <- frbe(ts(train, frequency=frequency(wineind)), h=horizon)
evalfrbe(f, test)
An S3 class farules represents a set of fuzzy association rules and their statistical characteristics.
This function is a constructor that returns an instance of the farules S3 class. To search for fuzzy association rules, refer to the searchrules() function.
farules(rules, statistics)
rules |
A list of character vectors, where each vector represents a rule and each value of the vector represents a predicate. The first value of the vector is assumed to be a rule's consequent, the rest is a rule's antecedent. |
statistics |
A numeric matrix of various statistical characteristics of
the rules. Each column of that matrix corresponds to some statistic (such as
support, confidence, etc.) and each row corresponds to a rule in the list of
|
Returns an object of class farules
.
Michal Burda
Transform data into an object of the fsets S3 class using shapes derived from triangles or raised cosines.
This function creates a set of fuzzy attributes from crisp data. Factors, numeric vectors, matrix or data frame columns are transformed into a set of fuzzy attributes, i.e. columns with membership degrees. Unlike lcut(), the transformation is based not on the linguistic approach but on partitioning with regularly shaped fuzzy sets (such as triangles or raised cosines).
fcut(x, ...)

## Default S3 method:
fcut(x, ...)

## S3 method for class 'factor'
fcut(x, name = deparse(substitute(x)), ...)

## S3 method for class 'logical'
fcut(x, name = deparse(substitute(x)), ...)

## S3 method for class 'numeric'
fcut(
  x,
  breaks,
  name = deparse(substitute(x)),
  type = c("triangle", "raisedcos"),
  merge = 1,
  parallel = FALSE,
  ...
)

## S3 method for class 'data.frame'
fcut(
  x,
  breaks = NULL,
  name = NULL,
  type = c("triangle", "raisedcos"),
  merge = 1,
  parallel = FALSE,
  ...
)

## S3 method for class 'matrix'
fcut(x, ...)
x |
Data to be transformed: a vector, matrix, or data frame. Non-numeric data are allowed. |
... |
Other parameters to some methods. |
name |
A name to be added as a suffix to the created fuzzy attribute
names. This parameter can be used only if |
breaks |
This argument determines the break-points of the positions of
the fuzzy sets (see also I.e. the minimum number of break-points is 3; If considering an i-th fuzzy set (of The resulting fuzzy sets would be named after the original data by adding dot (".") and a number Unlike For non-numeric data, this argument is ignored. For |
dot (".") and a number Unlike For non-numeric data, this argument is ignored. For |
type |
The type of fuzzy sets to create. Currently,
|
merge |
This argument determines whether to derive additional fuzzy
sets by merging the elementary fuzzy sets (whose position is determined with
the
The names of the derived (merged) fuzzy sets is derived from the names of the original elementary fuzzy sets by concatenating them with the "|" (pipe) separator. |
parallel |
Whether the processing should be run in parallel or not.
Parallelization is implemented using the |
The aim of this function is to transform numeric data into a set of fuzzy attributes. The result is in the form of the object of class "fsets", i.e. a numeric matrix whose columns represent fuzzy sets (fuzzy attributes) with values being the membership degrees.
The function behaves differently according to the type of the input x.
If x is a factor or a logical vector (or other non-numeric data), then for each distinct value of the input a fuzzy set is created, and the data are transformed into crisp membership degrees of 0 or 1 only.
If x is a numeric vector, then fuzzy sets are created according to the break-points specified in the breaks argument, with the 1st, 2nd and 3rd break-points specifying the first fuzzy set, the 2nd, 3rd and 4th break-points specifying the second fuzzy set, etc. The shape of the fuzzy sets is determined by the type argument, which may be either the string 'triangle' or 'raisedcos', or a function that computes the membership degrees itself (see the triangular() or raisedcosine() functions for details). Additionally, super-sets of these elementary sets may be created by specifying the merge argument. Values of this argument specify how many consecutive fuzzy sets should be combined (by using the Lukasiewicz t-conorm) to produce super-sets - see the description of merge above.
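The way three consecutive break-points define one elementary fuzzy set can be sketched as follows (a language-neutral Python illustration of the 'triangle' shape; boundary cases such as doubled break-points or infinite breaks are omitted here):

```python
# Sketch of a triangular fuzzy set defined by break-points (a, b, c):
# membership rises linearly from 0 at a to the peak (1) at b and falls
# back to 0 at c. The 'raisedcos' type would use a raised-cosine shape
# over the same break-points instead.
def triangle(a, b, c):
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        if x < b:
            return (x - a) / (b - a)   # rising edge
        if x == b:
            return 1.0                 # peak
        return (c - x) / (c - b)       # falling edge
    return mu
```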
If a matrix (resp. data frame) is provided to this function instead of a single vector, all columns are processed separately as described above and the results are combined with the cbind.fsets() function.
The function properly sets up the vars() and specs() properties of the result.
An object of class "fsets" is returned, which is a numeric matrix with columns representing the fuzzy attributes. Each source column of the x argument corresponds to multiple columns in the resulting matrix. Column names indicate the name of the source as well as an index of the fuzzy set(s) - see the description of the breaks and merge arguments above.
The resulting object also has the vars() and specs() properties set, the former being created from the original column names (if x is a matrix or data frame) or from the name argument (if x is a numeric vector). The specs() incidence matrix is created to reflect the superset-hood of the merged fuzzy sets.
Michal Burda
lcut(), equidist(), farules(), pbld(), vars(), specs(), cbind.fsets()
# fcut on non-numeric data
ff <- factor(substring("statistics", 1:10, 1:10), levels = letters)
fcut(ff)

# transform a single vector into a single fuzzy set
x <- runif(10)
fcut(x, breaks=c(0, 0.5, 1), name='age')

# transform single vector into a partition of the interval 0-1
# (the boundary triangles are right-angled)
fcut(x, breaks=c(0, 0, 0.5, 1, 1), name='age')

# also create supersets
fcut(x, breaks=c(0, 0, 0.5, 1, 1), name='age', merge=c(1, 2))

# transform all columns of a data frame
# with different breakpoints
data <- CO2[, c('conc', 'uptake')]
fcut(data, breaks=list(conc=c(95, 95, 350, 1000, 1000),
                       uptake=c(7, 7, 28.3, 46, 46)))

# using a custom 3-argument function (a function factory):
f <- function(a, b, c) {
  return(function(x) ifelse(a <= x & x <= b, 1, 0))
}
fcut(x, breaks=c(0, 0.5, 1), name='age', type=f)

# using a custom 4-argument function:
f <- function(x, a, b, c) {
  return(ifelse(a <= x & x <= b, 1, 0))
}
fcut(x, breaks=c(0, 0.5, 1), name='age', type=f)
Given truth degrees of predicates, compute the truth values of a given list of rules.
fire( x, rules, tnorm = c("goedel", "goguen", "lukasiewicz"), onlyAnte = TRUE, parallel = FALSE )
x |
Truth degrees of predicates. |
rules |
Either an object of S3 class |
tnorm |
A character string representing a triangular norm to be used
(either |
onlyAnte |
If |
parallel |
Deprecated parameter. Computation is done sequentially. |
The aim of this function is to compute the truth value of each rule in the rules list by assigning truth values to the rule's predicates given by data x.
x is a numeric vector or numeric matrix of truth values of predicates. If x is a vector, then names(x) must correspond to the predicate names in rules. If x is a matrix, then each column should represent a predicate and thus colnames(x) must correspond to predicate names in rules. Values of x are interpreted as truth values, i.e., they must be from the interval [0, 1]. If a matrix is given, the resulting truth values are computed row-wise.
rules may be a list of character vectors or an instance of the S3 class farules(). The character vectors in the rules list represent formulae in conjunctive form. If onlyAnte=FALSE, fire() treats each rule as a conjunction of all of its predicates, i.e., a conjunction of all predicates is computed. If onlyAnte=TRUE, the first element of each rule is removed prior to evaluation, i.e., a conjunction of all predicates except the first is computed: this is useful if rules is a farules() object, since farules() objects store a rule's consequent as the first element (see also the antecedents() and consequents() functions).
The type of conjunction to be computed can be specified with the tnorm parameter.
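The conjunction step can be sketched as follows (a language-neutral Python illustration; `fire_rule` is an illustrative name, not part of the package):

```python
from functools import reduce

# Sketch of rule firing: the truth value of a rule is the conjunction
# (t-norm) of the truth degrees of its predicates. The three t-norms
# selectable via the `tnorm` argument are shown below.
TNORMS = {
    "goedel":      min,                                  # minimum
    "goguen":      lambda a, b: a * b,                   # product
    "lukasiewicz": lambda a, b: max(0.0, a + b - 1.0),   # Lukasiewicz
}

def fire_rule(x, rule, tnorm="goedel", only_ante=True):
    preds = rule[1:] if only_ante else rule   # drop the consequent if requested
    degrees = [x[p] for p in preds]           # truth degrees of the predicates
    return reduce(TNORMS[tnorm], degrees, 1.0)
```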
If x is a matrix, then the result of this function is a list of numeric vectors with the truth values of each rule, i.e., each element of the resulting list corresponds to a rule and each value of a vector in the resulting list corresponds to a row of the original data matrix x. A vector x is treated as a single-row matrix.
Michal Burda
aggregateConsequents(), defuzz(), perceive(), pbld(), fcut(), lcut(), farules()
# fire whole rules on a vector
x <- 1:10 / 10
names(x) <- letters[1:10]
rules <- list(c('a', 'c', 'e'),
              c('b'),
              c('d', 'a'),
              c('c', 'a', 'b'))
fire(x, rules, tnorm='goguen', onlyAnte=FALSE)

# fire antecedents of the rules on a matrix
x <- matrix(1:20 / 20, nrow=2)
colnames(x) <- letters[1:10]
rules <- list(c('a', 'c', 'e'),
              c('b'),
              c('d', 'a'),
              c('c', 'a', 'b'))
fire(x, rules, tnorm='goedel', onlyAnte=TRUE)

# the former command should be equal to
fire(x, antecedents(rules), tnorm='goedel', onlyAnte=FALSE)
This function computes the fuzzy rule-based ensemble of time-series forecasts. Several forecasting methods are used to predict future values of given time-series and a weighted sum is computed from them with weights being determined from a fuzzy rule base.
frbe(d, h = 10)
d |
A source time-series in the ts time-series format. Note that the frequency of the time-series must be set properly. |
h |
A forecasting horizon, i.e. the number of values to forecast. |
This function computes the fuzzy rule-based ensemble of time-series forecasts. The evaluation comprises the following steps:
Several features are extracted from the given time-series d
:
length of the time-series;
strength of trend;
strength of seasonality;
skewness;
kurtosis;
variation coefficient;
stationarity;
frequency.
These features are used later to infer the weights of the forecasting methods.
Several forecasting methods are applied on the given time-series d
to
obtain forecasts. Actually, the following methods are used:
ARIMA - by calling forecast::auto.arima()
;
Exponential Smoothing - by calling forecast::ets()
;
Random Walk with Drift - by calling forecast::rwf()
;
Theta - by calling forecast::thetaf().
The computed features are input to the fuzzy rule-based inference mechanism, which yields the weights of the forecasting methods. The fuzzy rule base is hard-wired in this package; it was obtained by performing data mining with the use of the farules() function.
A weighted sum of forecasts is computed and returned as a result.
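The ensembling step can be sketched as follows (a language-neutral Python illustration; the names are illustrative and not part of the package):

```python
# Sketch of the final ensembling step: each method's forecast is
# weighted by the weight inferred from the fuzzy rule base, and the
# weighted mean over methods is returned for every horizon step.
def ensemble(forecasts, weights):
    # forecasts: dict method -> list of h forecasted values
    # weights:   dict method -> inferred weight of that method
    total = sum(weights.values())
    h = len(next(iter(forecasts.values())))
    return [sum(weights[m] * forecasts[m][i] for m in forecasts) / total
            for i in range(h)]
```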
The result is a list of class frbe with the following elements:
features - a data frame with the computed features of the given time-series;
forecasts - a data frame with the forecasts to be ensembled;
weights - the weights of the forecasting methods as inferred from the features and the hard-wired fuzzy rule base;
mean - the resulting ensembled forecast (computed as a weighted sum of forecasts).
Michal Burda
Štěpnička, M., Burda, M., Štěpničková, L. Fuzzy Rule Base Ensemble Generated from Data by Linguistic Associations Mining. FUZZY SET SYST. 2015.
# prepare data (from the forecast package)
library(forecast)
horizon <- 10
train <- wineind[-1 * (length(wineind)-horizon+1):length(wineind)]
test <- wineind[(length(wineind)-horizon+1):length(wineind)]

# perform FRBE
f <- frbe(ts(train, frequency=frequency(wineind)), h=horizon)

# evaluate FRBE forecasts
evalfrbe(f, test)

# display forecast results
f$mean
The aim of the fsets S3 class is to store several fuzzy sets in the form of a numeric matrix where columns represent fuzzy sets and rows are elements from the universe, so that the value in the i-th row and j-th column is the membership degree of the i-th element of the universe to the j-th fuzzy set. The fsets object also stores information about the origin of the fuzzy sets as well as a relation of specificity among them.
fsets(
  x,
  vars = rep(deparse(substitute(x)), ncol(x)),
  specs = matrix(0, nrow = ncol(x), ncol = ncol(x))
)

vars(f)

vars(f) <- value

specs(f)

specs(f) <- value
x |
A matrix of membership degrees. Columns of the matrix represent fuzzy sets, colnames are names of the fuzzy sets (and must not be NULL). Rows of the matrix represent elements of the universe. |
vars |
A character vector that must correspond to the
columns of |
specs |
A square numeric matrix containing values from |
f |
An instance of S3 class |
value |
Attribute values to be set to the object. |
The fsets() function is a constructor of an object of type fsets. Each object stores two attributes: vars and specs. The functions vars() and specs() can be used to access these attributes. It is assumed that the fuzzy sets are derived from some raw variables, e.g. numeric vectors or factors. The vars attribute is a character vector of names of the raw variables, with size equal to the number of fuzzy sets in the fsets object. It is then assumed that two fuzzy sets with the same name in the vars() attribute are derived from the same variable.
The specs attribute is a square numeric matrix of size equal to the number of fuzzy sets in fsets. specs[i, j] == 1 if and only if the i-th fuzzy set is more specific than the j-th fuzzy set. Specificity of fuzzy sets means nestedness of fuzzy sets: for instance, very small is more specific than small; similarly, extremely big is more specific than very big; on the other hand, very big and extremely small are incomparable. A necessary condition for specificity is subsethood.
fsets() returns an object of the S3 class fsets. vars() returns a vector of the original variable names of the fsets object. specs() returns the specificity matrix.
Michal Burda
# create a matrix of random membership degrees
m <- matrix(runif(30), ncol=5)
colnames(m) <- c('a1', 'a2', 'a12', 'b1', 'b2')

# create vars - the first three (a1, a2, a12) and the next two (b1, b2)
# fuzzy sets originate from the same variable
v <- c('a', 'a', 'a', 'b', 'b')
names(v) <- colnames(m)

# create specificity matrix - a1 and a2 are more specific than a12,
# the rest is incomparable
s <- matrix(c(0, 0, 1, 0, 0,
              0, 0, 1, 0, 0,
              0, 0, 0, 0, 0,
              0, 0, 0, 0, 0,
              0, 0, 0, 0, 0), byrow=TRUE, ncol=5)
colnames(s) <- colnames(m)
rownames(s) <- colnames(m)

# create a valid instance of the fsets class
o <- fsets(m, v, s)
Compute a fuzzy transform of the given input matrix x.
ft(x, xmemb, y, order = 1)
x |
the numeric matrix of input values |
xmemb |
the partitioning of input values, i.e., a |
y |
the numeric vector of target values |
order |
the order of the fuzzy transform (0, 1, 2, ...) |
the instance of the S3 class ft
Michal Burda
Perfilieva I. Fuzzy transforms: Theory and applications. FUZZY SET SYST, volume 157, issue 8, p. 993-1023. 2006.
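For order 0, the components of the direct fuzzy transform are weighted means, which can be sketched as follows (a language-neutral Python illustration of Perfilieva's definition; higher orders extend each component to a local polynomial fit, which this sketch omits):

```python
# Sketch of the zero-order fuzzy transform: the k-th component is the
# weighted mean of the target values y, weighted by the membership
# degrees of the inputs to the k-th fuzzy set of the partition.
def ft0_components(memb, y):
    # memb: list of rows; row i holds the membership degrees of the
    #       i-th input to each fuzzy set of the partition
    # y:    target values, one per input
    k = len(memb[0])
    return [
        sum(memb[i][j] * y[i] for i in range(len(y))) /
        sum(memb[i][j] for i in range(len(y)))
        for j in range(k)
    ]
```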
# create the fuzzy transform object
y <- (1:30)^2
x <- as.matrix(data.frame(a = 1:30, b = 30:1))
xmemb <- fcut(x,
              breaks = list(a = equidist(x[, 'a'], 3),
                            b = equidist(x[, 'b'], 3)))
fit <- ft(x, xmemb, y, order = 1)

# obtain function values
x2 <- as.matrix(data.frame(a = 10:20, b = 20:10))
xmemb2 <- fcut(x2,
               breaks = list(a = equidist(x[, 'a'], 3),
                             b = equidist(x[, 'b'], 3)))
y2 <- ftinv(fit, x2, xmemb2)
print(y2)

# compare original values with those obtained by the fuzzy transform
y[10:20] - y2
Compute an inverse of the fuzzy transform fit for values x with corresponding membership degrees xmemb.
ftinv(fit, x, xmemb)
fit |
The fuzzy transform object as the instance of the |
x |
The numeric matrix of input values, for which the inverse fuzzy transform has to be computed |
xmemb |
the partitioning of input values, i.e., a |
The inverse of the fuzzy transform fit, i.e., the approximated values of the original function that was the subject of the fuzzy transform.
Perfilieva I. Fuzzy transforms: Theory and applications. FUZZY SET SYST, volume 157, issue 8, p. 993-1023. 2006.
# create the fuzzy transform object
y <- (1:30)^2
x <- as.matrix(data.frame(a = 1:30, b = 30:1))
xmemb <- fcut(x,
              breaks = list(a = equidist(x[, 'a'], 3),
                            b = equidist(x[, 'b'], 3)))
fit <- ft(x, xmemb, y, order = 1)

# obtain function values
x2 <- as.matrix(data.frame(a = 10:20, b = 20:10))
xmemb2 <- fcut(x2,
               breaks = list(a = equidist(x[, 'a'], 3),
                             b = equidist(x[, 'b'], 3)))
y2 <- ftinv(fit, x2, xmemb2)
print(y2)

# compare original values with those obtained by the fuzzy transform
y[10:20] - y2
Returns a function that realizes linguistic hedging, i.e. the transformation of a linguistic horizon (see horizon()) into a linguistic expression.
hedge( type = c("ex", "si", "ve", "ty", "-", "ml", "ro", "qr", "vr"), hedgeParams = defaultHedgeParams )
type |
The type of the required linguistic hedge |
hedgeParams |
Parameters that determine the shape of the hedges |
hedge() returns a function that realizes the selected linguistic hedge on its argument:
ex: extremely,
si: significantly,
ve: very,
ty: typically,
-: empty hedge (no hedging),
ml: more or less,
ro: roughly,
qr: quite roughly,
vr: very roughly.
This function is quite low-level. Perhaps a more convenient way to create linguistic expressions is to use the lingexpr() function.
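The general idea of hedging can be sketched as follows. Note that the textbook power functions below are only an illustration of how intensifying and widening hedges act on membership degrees; the hedges in this package use different, parametrised shapes (see hedgeParams), and the exponent table here is an assumption of the sketch:

```python
# Textbook-style sketch of hedging: intensifying hedges ("very",
# "extremely") lower membership degrees via powers > 1, while widening
# hedges ("more or less", "roughly") raise them via powers < 1.
# NOTE: illustrative only - not the package's actual hedge shapes.
HEDGE_POWER = {"ex": 4, "ve": 2, "-": 1, "ml": 0.5, "ro": 0.25}

def hedge(kind):
    p = HEDGE_POWER[kind]
    return lambda degrees: [d ** p for d in degrees]
```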
Returns a function with a single argument, which has to be a numeric vector.
Michal Burda
horizon(), lingexpr(), fcut(), lcut(), ctx()
a <- horizon(ctx3(), 'sm')
plot(a)

h <- hedge('ve')
plot(h)

verySmall <- function(x) h(a(x))
plot(verySmall)

# the last plot should be equal to:
plot(lingexpr(ctx3(), atomic='sm', hedge='ve'))
Based on the given context and atomic expression, this function returns a function that computes a linguistic horizon, i.e. a triangular function representing the basic limits of what humans treat as "small", "medium", "big", etc. within the given context. A linguistic horizon stands as a base for the creation of linguistic expressions. A linguistic expression is created by applying a hedge() on a horizon. (An atomic linguistic expression is created from a horizon by applying the empty (-) hedge.)
horizon( context, atomic = c("sm", "me", "bi", "lm", "um", "ze", "neg.sm", "neg.me", "neg.bi", "neg.lm", "neg.um") )
context |
A context of linguistic expressions (see |
atomic |
An atomic expression whose horizon we would like to obtain |
The values of the atomic parameter have the following meaning (in ascending order):
neg.bi: big negative (far from zero)
neg.um: upper medium negative (between medium negative and big negative)
neg.me: medium negative
neg.lm: lower medium negative (between medium negative and small negative)
neg.sm: small negative (close to zero)
ze: zero
sm: small
lm: lower medium
me: medium
um: upper medium
bi: big
Based on the context type, the following atomic expressions are allowed:
ctx3() (trichotomy): small, medium, big;
ctx5() (pentachotomy): small, lower medium, medium, upper medium, big;
ctx3bilat() (bilateral trichotomy): negative big, negative medium, negative small, zero, small, medium, big;
ctx5bilat() (bilateral pentachotomy): negative big, negative upper medium, negative medium, negative lower medium, negative small, zero, small, lower medium, medium, upper medium, big.
This function is quite low-level. Perhaps a more convenient way to create linguistic expressions is to use the lingexpr() function.
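The triangular shapes of the horizons can be sketched as follows (a language-neutral Python illustration; the exact shapes assumed here, with "sm" saturating at the context's low bound, "bi" at the high bound, and "me" peaking at the center, are a plausible reading of the plots above and may differ from the package's horizon() at the borders):

```python
# Sketch of linguistic horizons over a trichotomous context
# (low, center, high): "sm" is 1 at the low bound and falls to 0 at
# the center, "bi" mirrors it from the high bound, and "me" is a
# triangle peaking at the center.
def horizon(context, atomic):
    low, center, high = context
    def tri(a, b, c):
        def mu(x):
            if x < a or x > c:
                return 0.0
            if x < b:
                return (x - a) / (b - a)
            if x == b:
                return 1.0
            return (c - x) / (c - b)
        return mu
    if atomic == "sm":
        return lambda x: 1.0 if x <= low else tri(2 * low - center, low, center)(x)
    if atomic == "me":
        return tri(low, center, high)
    if atomic == "bi":
        return lambda x: 1.0 if x >= high else tri(center, high, 2 * high - center)(x)
    raise ValueError(atomic)
```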
A function of single argument that must be a numeric vector
Michal Burda
ctx3(), ctx5(), ctx3bilat(), ctx5bilat(), hedge(), fcut(), lcut()
plot(horizon(ctx3(), 'sm'), from=-1, to=2)
plot(horizon(ctx3(), 'me'), from=-1, to=2)
plot(horizon(ctx3(), 'bi'), from=-1, to=2)

a <- horizon(ctx3(), 'sm')
plot(a)
h <- hedge('ve')
plot(h)
verySmall <- function(x) h(a(x))
plot(verySmall)
Test whether x inherits from the S3 farules class.
is.farules(x)
x |
An object being tested. |
TRUE if x is a valid farules() object and FALSE otherwise.
Michal Burda
Test whether x has a valid format for objects of the S3 frbe class.
is.frbe(x)
x |
An object being tested. |
This function tests whether x inherits from frbe, i.e. whether it is a list with the following elements: a forecasts data frame, a features data frame, a weights vector, and a mean vector.
Instances of the S3 class frbe are usually created by the frbe() function.
TRUE if x is a valid frbe object and FALSE otherwise.
Michal Burda
Štěpnička, M., Burda, M., Štěpničková, L. Fuzzy Rule Base Ensemble Generated from Data by Linguistic Associations Mining. FUZZY SET SYST. 2015.
Test whether x inherits from the S3 fsets class.
is.fsets(x)
x |
An object being tested. |
TRUE if x is a valid fsets object and FALSE otherwise.
Michal Burda
Test whether x has a valid format for objects of the S3 ft class, which represents the fuzzy transform.
is.ft(x)
x |
An object to be tested |
This function tests whether x is an instance of the ft class, i.e. whether it is a list with the following elements: an inputs character vector, a partitions list, an order number, an antecedents matrix and a consequents matrix.
TRUE if x is a valid ft object and FALSE otherwise.
Michal Burda
Determine whether a set x of predicates is more specific than (or equally specific as) a set y, with respect to vars and specs.
The function takes two character vectors of predicates and determines whether x is more specific than (or equally specific as) y. The specificity relation is fully determined by the values of the vars() vector and the specs() incidence matrix that are encapsulated in the given fsets object.
is.specific(x, y, fsets, vars = NULL, specs = NULL)
x |
The first character vector of predicates. |
y |
The second character vector of predicates. |
fsets |
A valid instance of the |
vars |
Deprecated parameter must be |
specs |
Deprecated parameter must be |
Let x_i and y_j denote arbitrary predicates of the vectors x and y, respectively. The function assumes that neither x nor y contains two or more predicates with the same value of vars().
This function returns TRUE if and only if all of the following conditions hold:
for any y_j there exists an x_i such that vars(x_i) = vars(y_j);
for any x_i, there either does not exist a y_j such that vars(x_i) = vars(y_j), or x_i = y_j, or x_i is more specific than y_j according to the specs() incidence matrix.
TRUE or FALSE (see description).
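The specificity test described above can be sketched as follows (a language-neutral Python illustration; `var_of` and `more_specific` stand in for the vars() vector and specs() incidence matrix):

```python
# Sketch of the two specificity conditions: var_of maps a predicate to
# its variable; more_specific[(p, q)] is True iff p is strictly more
# specific than q.
def is_specific(x, y, var_of, more_specific):
    # 1) every predicate of y must have a predicate of x on the same variable
    for q in y:
        if not any(var_of[p] == var_of[q] for p in x):
            return False
    # 2) each predicate of x either has no counterpart in y, or equals it,
    #    or is more specific than it
    for p in x:
        for q in y:
            if var_of[p] == var_of[q]:
                if p != q and not more_specific.get((p, q), False):
                    return False
    return True
```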
Michal Burda
perceive(), pbld(), fsets(), vars(), specs()
# prepare fsets object
v <- c(rep('a', 3), rep('b', 3), rep('c', 3), rep('d', 3))
s <- matrix(c(0,1,0, 0,0,0, 0,0,0, 0,0,0,
              0,0,0, 0,0,0, 0,0,0, 0,0,0,
              0,0,0, 0,0,0, 0,0,0, 0,0,0,
              0,0,0, 0,1,0, 0,0,0, 0,0,0,
              0,0,0, 0,0,0, 0,0,0, 0,0,0,
              0,0,0, 0,0,0, 0,0,0, 0,0,0,
              0,0,0, 0,0,0, 0,1,0, 0,0,0,
              0,0,0, 0,0,0, 0,0,0, 0,0,0,
              0,0,0, 0,0,0, 0,0,0, 0,0,0,
              0,0,0, 0,0,0, 0,0,0, 0,1,0,
              0,0,0, 0,0,0, 0,0,0, 0,0,0,
              0,0,0, 0,0,0, 0,0,0, 0,0,0),
            byrow=TRUE, ncol=12)
m <- matrix(0, nrow=1, ncol=12)
colnames(m) <- paste(rep(c('VeSm', 'Sm', 'Bi'), times=4),
                     rep(c('a', 'b', 'c', 'd'), each=3),
                     sep='.')
f <- fsets(m, v, s)

# returns TRUE
is.specific(c('VeSm.a', 'Bi.c'), c('VeSm.a', 'Bi.c'), f)

# returns TRUE (x and y swapped return FALSE)
is.specific(c('VeSm.a', 'Bi.c', 'Sm.d'), c('Sm.a', 'Bi.c', 'Sm.d'), f)

# returns TRUE (x and y swapped return FALSE)
is.specific(c('VeSm.a', 'Bi.c', 'Sm.d'), c('VeSm.a', 'Bi.c'), f)

# returns TRUE (x and y swapped return FALSE)
is.specific(c('VeSm.a', 'Bi.c', 'Sm.d'), character(), f)

# returns FALSE
is.specific(c('Sm.a'), c('Bi.c'), f)

# returns FALSE
is.specific(c('VeSm.a', 'Sm.c'), c('Sm.a', 'Bi.c'), f)
Transform data into an object of the fsets S3 class of linguistic fuzzy attributes.
This function creates a set of linguistic fuzzy attributes from crisp data. Numeric vectors, matrix or data frame columns are transformed into a set of fuzzy attributes, i.e. columns with membership degrees. Factors and other data types are transformed to fuzzy attributes by calling the fcut() function.
lcut(x, ...)

## Default S3 method:
lcut(x, ...)

## S3 method for class 'factor'
lcut(x, name = deparse(substitute(x)), ...)

## S3 method for class 'logical'
lcut(x, name = deparse(substitute(x)), ...)

## S3 method for class 'numeric'
lcut(
  x,
  context = minmax,
  atomic = c("sm", "me", "bi", "lm", "um", "ze",
    "neg.sm", "neg.me", "neg.bi", "neg.lm", "neg.um"),
  hedges = c("ex", "si", "ve", "ty", "-", "ml", "ro", "qr", "vr"),
  name = deparse(substitute(x)),
  hedgeParams = defaultHedgeParams,
  ...
)

## S3 method for class 'data.frame'
lcut(
  x,
  context = minmax,
  atomic = c("sm", "me", "bi", "lm", "um", "ze",
    "neg.sm", "neg.me", "neg.bi", "neg.lm", "neg.um"),
  hedges = c("ex", "si", "ve", "ty", "-", "ml", "ro", "qr", "vr"),
  ...
)

## S3 method for class 'matrix'
lcut(x, ...)
x |
Data to be transformed: if it is a numeric vector, matrix, or data
frame, then the creation of linguistic fuzzy attributes takes place. For
other data types, the fcut() function is called instead. |
... |
Other parameters to some methods. |
name |
A name to be added as a suffix to the created fuzzy attribute
names. This parameter can be used only if x is a vector (i.e. not a matrix or data frame). |
context |
A definition of context of a numeric attribute. It must be an
instance of an S3 class ctx3, ctx5, ctx3bilat, or ctx5bilat (see e.g.
ctx3()). If x is a matrix or data frame, a named list of contexts for the
individual columns may be provided. |
atomic |
A vector of atomic linguistic expressions to be used for creation of fuzzy attributes. |
hedges |
A vector of linguistic hedges to be used for creation of fuzzy attributes. |
hedgeParams |
Parameters that determine the shape of the hedges |
The aim of this function is to transform numeric data into a set of fuzzy
attributes. The resulting fuzzy attributes have direct linguistic
interpretation. This is a unique variant of fuzzification that is suitable
for the inference mechanism based on Perception-based Linguistic Description
(PbLD) – see pbld().
A numeric vector is transformed into a set of fuzzy attributes named
according to the scheme <hedge>.<atomic expression>.<name>, where
<atomic expression> is a value
from the following possibilities (note that the set of allowed atomic
expressions depends on the
context
being used - see ctx for details):
neg.bi
: big negative (far from zero)
neg.um
: upper medium negative (between medium negative and big negative)
neg.me
: medium negative
neg.lm
: lower medium negative (between medium negative and small
negative)
neg.sm
: small negative (close to zero)
ze
: zero
sm
: small
lm
: lower medium
me
: medium
um
: upper medium
bi
: big
A <hedge> is a modifier that further concretizes the atomic expression
(note that not each combination of hedge and atomic expression is allowed -
see allowed.lingexpr for more details):
ex
: extremely,
si
: significantly,
ve
: very,
ty
: typically,
-
: empty hedge (no hedging),
ml
: more or less,
ro
: roughly,
qr
: quite roughly,
vr
: very roughly.
According to the theory developed by Novak (2008), not every hedge is
suitable with each atomic expression (see the description of the hedges
argument). The hedges to be used can be selected with the hedges
argument.
The function itself takes care of never combining a hedge with an
inapplicable atomic expression.
Obviously, what counts as "small", "medium",
or "big" etc. differs from data to data. Therefore, a context
has to be set that specifies sensible
values for these linguistic expressions.
If a matrix (resp. data frame) is provided to this function instead of a single vector, all columns are processed the same way.
The function also sets up properly the vars()
and specs()
properties of
the result.
An object of S3 class fsets
is returned, which is a numeric matrix
with columns representing the fuzzy attributes. Each source column of the
x
argument corresponds to multiple columns in the resulting matrix.
Columns will have names derived from the used hedge, atomic expression,
and the name
specified as the optional parameter.
The resulting object would also have set the vars()
and specs()
properties with the former being created from original column names (if x
is a matrix or data frame) or the name
argument (if x
is a numeric
vector). The specs()
incidency matrix would be created to reflect the
natural specificity ordering of the hedges: the narrowing hedges (ex, si,
ve) produce expressions more specific than the empty hedge, which in turn
is more specific than the widening hedges (ml, ro, qr, vr). Fuzzy
attributes created from the same source numeric vector (or column) would be
ordered that way, with fuzzy attributes from other sources being
incomparable.
Michal Burda
V. Novak, A comprehensive theory of trichotomous evaluative linguistic expressions, Fuzzy Sets and Systems 159 (22) (2008) 2939–2969.
fcut()
, fsets()
, vars()
, specs()
# transform a single vector
x <- runif(10)
lcut(x, name='age')

# transform single vector with a custom context
lcut(x, context=ctx5(0, 0.2, 0.5, 0.7, 1), name='age')

# transform all columns of a data frame
# and do not use any hedges
data <- CO2[, c('conc', 'uptake')]
lcut(data)

# definition of custom contexts for different columns
# of a data frame while selecting only "ve" and "ro" hedges.
lcut(data,
     context=list(conc=minmax, uptake=ctx3(0, 25, 50)),
     hedges=c('ve', 'ro'))

# lcut on non-numeric data is the same as fcut()
ff <- factor(substring("statistics", 1:10, 1:10), levels=letters)
lcut(ff)
A linguistic expression represents vague human terms such as "very small", "extremely big" etc. Such notions are
always reasoned within a given context. lingexpr
returns a function that models a selected linguistic expression.
According to the given context
, atomic
expression (such as "small", "big"), and linguistic hedge
(such as
very
, extremely
), the returned function transforms numeric values into the degrees (from the [0, 1]
interval)
to which the values correspond to the expression.
lingexpr(
  context,
  atomic = c("sm", "me", "bi", "lm", "um", "ze",
             "neg.sm", "neg.me", "neg.bi", "neg.lm", "neg.um"),
  hedge = c("ex", "si", "ve", "ty", "-", "ml", "ro", "qr", "vr"),
  negated = FALSE,
  hedgeParams = defaultHedgeParams
)

allowed.lingexpr
context |
A context of linguistic expressions (see |
atomic |
An atomic expression whose horizon we would like to obtain |
hedge |
The type of the required linguistic hedge ('-' for no hedging) |
negated |
Negate the expression? (For instance, "not very small".) Negation
is done using the |
hedgeParams |
Parameters that determine the shape of the hedges |
An object of class matrix
(inherits from array
) with 9 rows and 11 columns.
Based on the context type, the following atomic expressions are allowed:
ctx3()
(trichotomy): small, medium, big;
ctx5()
(pentachotomy): small, lower medium, medium, upper medium, big;
ctx3bilat()
(bilateral trichotomy): negative big, negative medium, negative small,
zero, small, medium, big;
ctx5bilat()
(bilateral pentachotomy): negative big, negative upper medium, negative
medium, negative lower medium, negative small, zero, small, lower medium,
medium, upper medium, big.
The values of the atomic
parameter have the following meaning (in ascending order):
neg.bi
: big negative (far from zero)
neg.um
: upper medium negative (between medium negative and big negative)
neg.me
: medium negative
neg.lm
: lower medium negative (between medium negative and small negative)
neg.sm
: small negative (close to zero)
ze
: zero
sm
: small
lm
: lower medium
me
: medium
um
: upper medium
bi
: big
The hedge
parameter has the following meaning:
ex
: extremely,
si
: significantly,
ve
: very,
ty
: typically,
-
: empty hedge,
ml
: more or less,
ro
: roughly,
qr
: quite roughly,
vr
: very roughly.
According to the theory of linguistic expressions by Novak, not every hedge is applicable to each atomic expression. The combinations of allowed pairs can be found in allowed.lingexpr. Trying to create a forbidden combination results in an error.
Returns a function with a single argument, which has to be a numeric vector.
Michal Burda
horizon()
, hedge()
, fcut()
, lcut()
, ctx()
small <- lingexpr(ctx3(0, 0.5, 1), atomic='sm', hedge='-')
small(0)    # 1
small(0.8)  # 0
plot(small)

verySmall <- lingexpr(ctx3(0, 0.5, 1), atomic='sm', hedge='ve')
plot(verySmall)
MASE is computed as sum(|forecast_i - validation_i|) / (n/(n-1) * sum_{i=2..n} |validation_i - validation_{i-1}|), where n is the common length of the two vectors.
mase(forecast, validation)
forecast |
A numeric vector of forecasted values |
validation |
A numeric vector of actual (real) values |
A Mean Absolute Scaled Error (MASE)
Michal Burda
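As a cross-check of the formula above, the computation can be sketched in plain R (mase_sketch is a hypothetical helper written for this illustration; it is not part of the package):

```r
# Hypothetical sketch of the MASE formula above: the sum of absolute
# forecast errors, scaled by the mean absolute one-step change of
# the validation series.
mase_sketch <- function(forecast, validation) {
  n <- length(forecast)
  sum(abs(validation - forecast)) /
    (n / (n - 1) * sum(abs(diff(validation))))
}

mase_sketch(c(1, 2, 3), c(1, 2, 5))  # 2 / 6 = 1/3
```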
This function creates a context (i.e. an instance of S3 class
ctx3()
, ctx3bilat()
, ctx5()
, or ctx5bilat()
) based on values
of the numeric vector x
. By default, the context is based on the minimum
and maximum value of x
in the following way:
ctx3
, ctx5
: low = minimum, high = maximum value of x
;
ctx3bilat
, ctx5bilat
: negMax = minimum, max = maximum value of x
,
origin = mean of minimum and maximum.
minmax(x, type = c("ctx3", "ctx5", "ctx3bilat", "ctx5bilat"), ...)
x |
A numeric vector to compute the context from |
type |
A type of the context to be returned. Must be one of:
|
... |
other parameters to be passed to the appropriate constructor
( |
Other values are computed according to the defaults defined in the constructors
ctx3()
, ctx3bilat()
, ctx5()
, and ctx5bilat()
.
minmax(0:100)                 # returns ctx3: 0, 50, 100
minmax(0:100, high=80)        # returns ctx3: 0, 40, 80
minmax(0:100, relCenter=0.4)  # returns ctx3: 0, 40, 100
minmax(0:100, type='ctx5')    # returns ctx5: 0, 25, 50, 75, 100
Perform a custom multiplication of the matrices x
and y
by
using the callback function f
.
mult(x, y, f, ...)
x |
A first matrix. The number of columns must match with the number of
rows of the |
y |
A second matrix. The number of rows must match with the number of
columns of the |
f |
A function to be applied to the matrices in order to compute the multiplication. It must accept at least two arguments. |
... |
Additional arguments that are passed to the function |
For a matrix x
of size (n, m) and a matrix
y
of size (m, p),
mult
calls the function f
exactly (n * p)-times to
create a resulting matrix of size (n, p). The (i, j)-th element
of the resulting matrix is obtained from a call of the function
f
with x
's i-th row and
y
's j-th column passed as its arguments.
A matrix with n rows and
p columns, where
n is the
number of rows of
x
and p is the number of columns of
y
.
Michal Burda
x <- matrix(runif(24, -100, 100), ncol=6)
y <- matrix(runif(18, -100, 100), nrow=6)
mult(x, y, function(xx, yy) sum(xx * yy))  # the same as "x %*% y"
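The looping behavior described above can be emulated in plain R; the sketch below (mult_sketch, a hypothetical stand-in for mult, not part of the package) also shows how a max-min composition of fuzzy relations fits this scheme:

```r
# Hypothetical plain-R stand-in for mult(): applies f to every
# combination of a row of x and a column of y.
mult_sketch <- function(x, y, f, ...) {
  stopifnot(ncol(x) == nrow(y))
  res <- matrix(0, nrow = nrow(x), ncol = ncol(y))
  for (i in seq_len(nrow(x)))
    for (j in seq_len(ncol(y)))
      res[i, j] <- f(x[i, ], y[, j], ...)
  res
}

x <- matrix(c(0.1, 0.9, 0.4, 0.8), nrow = 2)

# sum-product yields the ordinary matrix product (same as x %*% t(x))
mult_sketch(x, t(x), function(xx, yy) sum(xx * yy))

# max-min composition of fuzzy relations
mult_sketch(x, t(x), function(xx, yy) max(pmin(xx, yy)))
```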
Take a set of rules (a rule-base) and perform a Perception-based Logical
Deduction (PbLD) on each row of a given fsets()
object.
pbld(
  x,
  rules,
  partition,
  values,
  type = c("global", "local"),
  parallel = FALSE
)
x |
Input to the inference. It should be an object of class
Each row represents a single case of inference. Columns should be named after predicates in rules' antecedents. |
rules |
A rule-base (a.k.a. linguistic description) either in the form
of the |
partition |
A |
values |
Crisp values that correspond to rows of membership degrees in
the |
type |
The type of inference to use. It can be either |
parallel |
Whether the processing should be run in parallel or not.
Parallelization is implemented using the |
Perform a Perception-based Logical Deduction (PbLD) with the given rule-base
rules
on each row of input x
. Columns of x
are truth
values of predicates that appear in the antecedent part of rules
;
partition
together with values
determines the shape of
predicates in consequents: each element in values
corresponds to a
row of membership degrees in partition
.
A vector of inferred defuzzified values. The number of resulting
values corresponds to the number of rows of the x
argument.
Michal Burda
A. Dvořák, M. Štěpnička, On perception-based logical deduction and its variants, in: Proc. 16th World Congress of the International Fuzzy Systems Association and 9th Conference of the European Society for Fuzzy Logic and Technology (IFSA-EUSFLAT 2015), Advances in Intelligent Systems Research, Atlantic Press, Gijon, 2015.
lcut()
, searchrules()
, fire()
, aggregateConsequents()
, defuzz()
# --- TRAINING PART ---

# custom context of the RHS variable
uptakeContext <- ctx3(7, 28.3, 46)

# convert data into fuzzy sets
d <- lcut(CO2, context=list(uptake=uptakeContext))

# split data into the training and testing set
testingIndices <- 1:5
trainingIndices <- setdiff(seq_len(nrow(CO2)), testingIndices)
training <- d[trainingIndices, ]
testing <- d[testingIndices, ]

# search for rules
r <- searchrules(training, lhs=1:38, rhs=39:58, minConfidence=0.5)

# --- TESTING PART ---

# prepare values and partition
v <- seq(uptakeContext[1], uptakeContext[3], length.out=1000)
p <- lcut(v, name='uptake', context=uptakeContext)

# do the inference
pbld(testing, r, p, v)
Examine rules in a list and remove all rules for which another, more specific
rule is present in the list. The specificity is determined by calling the
is.specific()
function. This operation is a part of the
pbld()
inference mechanism.
perceive(
  rules,
  fsets,
  type = c("global", "local"),
  fired = NULL,
  vars = NULL,
  specs = NULL
)
rules |
A list of character vectors where each element is a fuzzy set name (a predicate) and thus each such vector forms a rule. |
fsets |
A valid instance of the |
type |
The type of perception to use. It can be either |
fired |
If |
vars |
A deprecated parameter that must be |
specs |
A deprecated parameter that must be |
In other words, for each rule x
in the rules
list, the function searches for another
rule y
such that is.specific(y, x)
returns TRUE. If such a rule is found,
x
is removed from the list.
A modified list of rules for which no other more specific rule exists. (Each rule is a vector.)
Michal Burda
is.specific()
, fsets()
, fcut()
, lcut()
# prepare fsets
f <- lcut(data.frame(a=0:1, b=0:1, c=0:1, d=0:1))

# run perceive function: (sm.a, bi.c) has
# more specific rule (ve.sm.a, bi.c)
perceive(list(c('sm.a', 'bi.c'),
              c('ve.sm.a', 'bi.c'),
              c('sm.b', 'sm.d')), f)
Plot an instance of the fsets()
class as a line diagram

This function plots the membership degrees stored in the instance of the
as a line diagram.This function plots the membership degrees stored in the instance of the
fsets()
class. Internally, the membership degrees are
transformed into a time-series object and viewed in a plot using the
ts.plot()
function. This function is useful mainly to see the
shape of fuzzy sets on regularly sampled inputs.
## S3 method for class 'fsets'
plot(x, ...)
x |
An instance of class |
... |
Other arguments that are passed to the underlying |
Result of the ts.plot()
method.
Michal Burda
fsets()
, fcut()
, lcut()
, ts.plot()
d <- lcut(0:1000/1000, name='x')
plot(d)

# Additional arguments are passed to the ts.plot method.
# Here thick lines represent atomic linguistic expressions,
# i.e. "small", "medium", and "big".
plot(d,
     ylab='membership degree',
     xlab='values',
     gpars=list(lwd=c(rep(1, 3), 5, rep(1, 5), 5, rep(1, 7), 5, rep(1, 4))))
Print an instance of the algebra()
S3 class in a human readable form

Print an instance of the algebra()
S3 class in a human readable form.
## S3 method for class 'algebra'
print(x, ...)
x |
An instance of the |
... |
Unused. |
None.
Michal Burda
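A minimal usage sketch (assuming the lfl package is attached):

```r
library(lfl)

# print a human-readable description of the Lukasiewicz algebra
print(algebra('lukasiewicz'))
```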
Format an object of the ctx3()
, ctx5()
, ctx3bilat()
and the ctx5bilat()
class into human readable form and print it to the output.
## S3 method for class 'ctx3'
print(x, ...)

## S3 method for class 'ctx5'
print(x, ...)

## S3 method for class 'ctx3bilat'
print(x, ...)

## S3 method for class 'ctx5bilat'
print(x, ...)
x |
A linguistic context to be printed |
... |
Unused. |
Nothing.
Michal Burda
ctx3()
, ctx5()
, ctx3bilat()
, ctx5bilat()
, minmax()
context <- ctx3()
print(context)
Print an instance of the farules()
S3 class in a human readable form

Print an instance of the farules()
S3 class in a human readable form.
## S3 method for class 'farules'
print(x, ...)
x |
An instance of the |
... |
Unused. |
None.
Michal Burda
Print an instance of the frbe()
class

Format an object of the frbe()
class into human readable form
and print it to the output.
## S3 method for class 'frbe'
print(x, ...)
x |
An instance of |
... |
Unused. |
Format an object of the frbe()
class into human readable form
and print it to the output.
None.
Michal Burda
Štěpnička, M., Burda, M., Štěpničková, L. Fuzzy Rule Base Ensemble Generated from Data by Linguistic Associations Mining. FUZZY SET SYST. 2015.
# prepare data (from the forecast package)
library(forecast)
horizon <- 10
train <- wineind[-1 * (length(wineind)-horizon+1):length(wineind)]
test <- wineind[(length(wineind)-horizon+1):length(wineind)]
f <- frbe(ts(train, frequency=frequency(wineind)), h=horizon)
print(f)
print(test)
Print an instance of the fsets()
class

Format an object of the fsets()
class into human readable form
and print it to the output.
## S3 method for class 'fsets'
print(x, ...)
x |
An instance of the |
... |
Unused. |
Nothing
Michal Burda
d <- fcut(CO2[, 1:2])
print(d)
A quantifier is a function that computes a fuzzy truth value of a claim about the quantity. This function creates the <1>-type quantifier. (See the examples below on how to use it as a quantifier of the <1,1> type.)
quantifier(
  quantity = c("all", "almost.all", "most", "many", "few",
               "several", "some", "at.least"),
  n = NULL,
  alg = c("lukasiewicz", "goedel", "goguen")
)
quantity |
the quantity to be evaluated. 'all' computes the degree of
truth to which all elements of the universe have the given property,
'almost.all', 'most', and 'many' evaluate whether the property is
present in an extremely big, very big, or not small number of elements from
the universe, where these linguistic expressions are internally modeled
using the |
n |
the number of elements in the 'at.least n' quantifier |
alg |
the underlying algebra in which to compute the quantifier.
Note that the algebra must have properly defined the |
A two-argument function, which expects two numeric vectors of equal length
(the vector elements are recycled to ensure equal lengths). The first argument, x
,
is a vector of membership degrees to be measured; the second argument, w
, is
the vector of weights, i.e. the degrees to which the elements belong to the universe.
Let I be the set of input vector indices (1 to
length(x)
). The quantifier
then computes the truth value as a Sugeno integral of the membership
degrees x
with respect to a fuzzy measure that models the selected quantity over
the weights w
(the measure is constructed differently for
"some"
and "at.least"
than for the remaining quantities).
See
sugeno()
for more details on how the quantifier is evaluated.
Setting w
to 1 yields the operation of the <1> quantifier as developed by Dvořák et al.
To compute the <1,1> quantifier as developed by Dvořák et al., e.g. "almost all A are B", w
must
again be set to 1 and x
to the result of the implication A => B (e.g. lukas.residuum(a, b)).
To compute the <1,1> quantifier as proposed by Murinová et al., e.g. "almost all A are B",
x
must be set to the result of the implication A => B and
w
to the membership
degrees of A. See the examples below.
Michal Burda
Dvořák, A., Holčapek, M. L-fuzzy quantifiers of type <1> determined by fuzzy measures. Fuzzy Sets and Systems vol.160, issue 23, 3425-3452, 2009.
Dvořák, A., Holčapek, M. Type <1,1> fuzzy quantifiers determined by fuzzy measures. IEEE International Conference on Fuzzy Systems (FuzzIEEE), 2010.
Murinová, P., Novák, V. The theory of intermediate quantifiers in fuzzy natural logic revisited and the model of "Many". Fuzzy Sets and Systems, vol 388, 2020.
# Dvorak <1> "almost all" quantifier
q <- quantifier('almost.all')
a <- c(0.9, 1, 1, 0.2, 1)
q(x=a, w=1)

# Dvorak <1,1> "almost all" quantifier (w set to 1)
a <- c(0.9, 1, 1, 0.2, 1)
b <- c(0.2, 1, 0, 0.5, 0.8)
q <- quantifier('almost.all')
q(x=lukas.residuum(a, b), w=1)

# Murinová <1,1> "almost all" quantifier (note w set to a)
a <- c(0.9, 1, 1, 0.2, 1)
b <- c(0.2, 1, 0, 0.5, 0.8)
q <- quantifier('almost.all')
q(x=lukas.residuum(a, b), w=a)

# Murinová <1,1> "some" quantifier
a <- c(0.9, 1, 1, 0.2, 1)
b <- c(0.2, 1, 0, 0.5, 0.8)
q <- quantifier('some')
q(x=plukas.tnorm(a, b), w=a)
This function computes the rule base coverage, i.e. an average of the maximum membership degrees at which each row of data fires the rules of the rule base.
rbcoverage(
  x,
  rules,
  tnorm = c("goedel", "goguen", "lukasiewicz"),
  onlyAnte = TRUE
)
x |
Data for the rules to be evaluated on. Could be either a numeric
matrix or numeric vector. If matrix is given then the rules are evaluated
on rows. Each value of the vector or column of the matrix represents a
predicate - its numeric value represents the truth value (a value in the
interval [0, 1]). |
rules |
Either an object of class "farules" or list of character
vectors where each vector is a rule with consequent being the first element
of the vector. Elements of the vectors (predicate names) must correspond to
the |
tnorm |
A character string representing a triangular norm to be used
(either |
onlyAnte |
TRUE if only antecedent-part of a rule should be evaluated. Antecedent-part of a rule are all predicates in rule vector starting from the 2nd position. (First element of a rule is the consequent - see above.) If FALSE, then the whole rule will be evaluated (antecedent part together with consequent). |
Let t_ij be the truth value of the
j-th rule on the
i-th row of
data
x
. Then m_i = max_j t_ij is the maximum truth value that
is reached for the
i-th data row with the rule base. The rule
base coverage is then the mean of these truth values, i.e.
(1/n) * sum_i m_i.
A numeric value of the rule base coverage of given data.
Michal Burda
M. Burda, M. Štěpnička, Reduction of Fuzzy Rule Bases Driven by the Coverage of Training Data, in: Proc. 16th World Congress of the International Fuzzy Systems Association and 9th Conference of the European Society for Fuzzy Logic and Technology (IFSA-EUSFLAT 2015), Advances in Intelligent Systems Research, Atlantic Press, Gijon, 2015.
x <- matrix(1:20 / 20, nrow=2)
colnames(x) <- letters[1:10]

rules <- list(c('a', 'c', 'e'),
              c('b'),
              c('d', 'a'),
              c('c', 'a', 'b'))
rbcoverage(x, rules, "goguen", TRUE)  # returns 1

rules <- list(c('d', 'a'),
              c('c', 'a', 'b'))
rbcoverage(x, rules, "goguen", TRUE)  # returns 0.075
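The coverage formula from the Details section can be sketched in plain R (coverage_sketch and the truth matrix are illustrative helpers, not part of the package):

```r
# truth[i, j] holds the truth value t_ij of the j-th rule on the
# i-th data row; the coverage is the mean of the row maxima m_i.
coverage_sketch <- function(truth) {
  mean(apply(truth, 1, max))
}

truth <- matrix(c(0.2, 0.8,
                  0.5, 0.1),
                nrow = 2, byrow = TRUE)
coverage_sketch(truth)  # mean(c(0.8, 0.5)) = 0.65
```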
From given rule base, select such set of rules that influence mostly the rule base coverage of the input data.
reduce(
  x,
  rules,
  ratio,
  tnorm = c("goedel", "goguen", "lukasiewicz"),
  tconorm = c("goedel", "goguen", "lukasiewicz"),
  numThreads = 1
)
x |
Data for the rules to be evaluated on. Could be either a numeric
matrix or numeric vector. If matrix is given then the rules are evaluated
on rows. Each value of the vector or column of the matrix represents a
predicate - its numeric value represents the truth value (a value in the
interval [0, 1]). |
rules |
Either an object of class "farules" or list of character
vectors where each vector is a rule with consequent being the first element
of the vector. Elements of the vectors (predicate names) must correspond to
the |
ratio |
A percentage of rule base coverage that must be preserved. It
must be a value within the [0, 1] interval. |
tnorm |
Which t-norm to use as a conjunction of antecedents. The
default is |
tconorm |
Which t-norm to use as a disjunction, i.e. to combine
multiple antecedents to get coverage of the rule base. The default is
|
numThreads |
How many threads to use for computation. Value higher than 1 causes that the algorithm runs in several parallel threads (using the OpenMP library). |
From the given rule base, a rule with the greatest coverage is selected first. After
that, additional rules are selected that increase the rule base coverage the
most. The addition stops after the coverage exceeds ratio-times the coverage of the
whole original rule base.
Note that the size of the resulting rule base is not necessarily minimal because the algorithm does not search all possible combinations of rules: it only finds a local minimum of the rule base size.
Function returns an instance of class farules()
or a
list depending on the type of the rules
argument.
Michal Burda
M. Burda, M. Štěpnička, Reduction of Fuzzy Rule Bases Driven by the Coverage of Training Data, in: Proc. 16th World Congress of the International Fuzzy Systems Association and 9th Conference of the European Society for Fuzzy Logic and Technology (IFSA-EUSFLAT 2015), Advances in Intelligent Systems Research, Atlantic Press, Gijon, 2015.
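A usage sketch tying reduce() to the searchrules() workflow shown elsewhere in this manual (assumes the lfl package is attached; the column indices follow the CO2 example of pbld()):

```r
library(lfl)

d <- lcut(CO2)
r <- searchrules(d, lhs=1:38, rhs=39:58, minConfidence=0.5)

# keep a subset of rules that preserves at least 80 % of the
# rule base coverage on the training data
reduced <- reduce(d, r, ratio=0.8)
```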
RMSE is computed as sqrt(mean((forecast - validation)^2)), i.e. the square root of the mean squared difference between the forecasted and actual values.
rmse(forecast, validation)
forecast |
A numeric vector of forecasted values |
validation |
A numeric vector of actual (real) values |
A Root Mean Squared Error (RMSE)
Michal Burda
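Expressed as a plain-R sketch (rmse_sketch is a hypothetical helper for illustration, not part of the package):

```r
# root of the mean squared forecast error
rmse_sketch <- function(forecast, validation) {
  sqrt(mean((forecast - validation)^2))
}

rmse_sketch(c(1, 2), c(1, 4))  # sqrt((0 + 4) / 2) = sqrt(2)
```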
This function searches the given fsets()
object d
for all
fuzzy association rules that satisfy defined constraints. It returns a list
of fuzzy association rules together with some statistics characterizing them
(such as support, confidence etc.).
searchrules(
  d,
  lhs = 2:ncol(d),
  rhs = 1,
  tnorm = c("goedel", "goguen", "lukasiewicz"),
  n = 100,
  best = c("confidence"),
  minSupport = 0.02,
  minConfidence = 0.75,
  maxConfidence = 1,
  maxLength = 4,
  numThreads = 1,
  trie = (maxConfidence < 1)
)
d |
An object of class |
lhs |
Indices of fuzzy attributes that may appear on the left-hand-side (LHS) of association rules, i.e. in the antecedent. |
rhs |
Indices of fuzzy attributes that may appear on the right-hand-side (RHS) of association rules, i.e. in the consequent. |
tnorm |
A t-norm to be used for computation of conjunction of fuzzy attributes. (Even just the starting letters of "lukasiewicz", "goedel" and "goguen" are accepted.) |
n |
The non-negative number of rules to be found. If zero, the function
returns all rules satisfying the given conditions. If positive, only
the n best rules (with respect to the best measure) are returned. |
best |
Specifies measure accordingly to which the rules are ordered
from best to worst. This argument is used mainly in combination with the
|
minSupport |
The minimum support degree of a rule. Rules with support
below that number are filtered out. It must be a numeric value from interval
|
minConfidence |
The minimum confidence degree of a rule. Rules with
confidence below that number are filtered out. It must be a numeric value
from interval |
maxConfidence |
Maximum confidence threshold. After finding a rule that
has confidence degree above the maxConfidence threshold, no more specific descendants of that rule are searched. If you want to disable this feature, set maxConfidence to 1. |
maxLength |
Maximum allowed length of the rule, i.e. maximum number of predicates that are allowed on the left-hand + right-hand side of the rule. If negative, the maximum length of rules is unlimited. |
numThreads |
Number of threads used to perform the algorithm in
parallel. If greater than 1, the OpenMP library (not to be confused with
Open MPI) is used for parallelization. Please note that there are known
problems of using OpenMP together with another means of parallelization that
may be used within R. Therefore, if you plan to use the |
trie |
Whether or not to use the internal mechanism of tries. If FALSE,
then the output may contain rules that are descendants of rules with
confidence above maxConfidence. Tries consume very much memory, so if you encounter problems with
insufficient memory, set this argument to FALSE. On the other hand, the size
of the result (if trie = FALSE) may be very large. |
The function searches data frame d
for fuzzy association rules that
satisfy conditions specified by the parameters.
A list of the following elements: rules
and statistics
.
rules
is a list of mined fuzzy association rules. Each element of
that list is a character vector with consequent attribute being on the first
position.
statistics
is a data frame of statistical characteristics about mined
rules. Each row corresponds to a rule in the rules
list. Let us
consider a rule "a & b => c", let T be the t-norm specified with
the
tnorm
parameter, let i go over all n rows of the data table
d
, and let a_i, b_i, c_i be the membership degrees of the rule's predicates
on the i-th row. Then columns of the statistics
data frame are as follows:
support: a rule's support degree: (1/n) * sum_i T(a_i, b_i, c_i)
lhsSupport: a support of rule's antecedent (LHS): (1/n) * sum_i T(a_i, b_i)
rhsSupport: a support of rule's consequent (RHS): (1/n) * sum_i c_i
confidence: a rule's confidence degree: support / lhsSupport
Michal Burda
fcut()
, lcut()
, farules()
, fsets()
, pbld()
d <- lcut(CO2)
searchrules(d, lhs=1:ncol(d), rhs=1:ncol(d))
Returns an ordered vector of values from given interval, of given size, generated by equal steps.
slices(from, to, n)
from |
The lower bound of the interval. |
to |
The upper bound of the interval. |
n |
The length of the vector to be produced. |
Returns a vector of values from from
to to
(inclusive), with
equal difference between two consecutive values, with total length n
.
The function is useful e.g. together with the pbld()
or
defuzz()
functions (for their values
argument; see also
lcut()
or fcut()
).
A numeric vector of the given length with values from the given interval.
Michal Burda
## Not run: 
slices(1, 5, 9)  # 1, 1.5, 2, 2.5, 3, 3.5, 4, 4.5, 5
## End(Not run)
# is the same as seq(1, 5, length.out=9)
SMAPE is computed as (1/n) * ∑_i |F_i - A_i| / ((|A_i| + |F_i|) / 2),
where F is the vector of forecasted values and A the vector of actual values.
smape(forecast, validation)
forecast |
A numeric vector of forecasted values |
validation |
A numeric vector of actual (real) values |
A Symmetric Mean Absolute Percentage Error (SMAPE)
Michal Burda
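A hand-rolled sketch of the SMAPE formula above in base R (for illustration only; not necessarily identical in edge-case handling to the package's smape()):

```r
# SMAPE: mean of absolute errors scaled by the mean magnitude of each pair
smape_sketch <- function(forecast, validation) {
  mean(abs(forecast - validation) / ((abs(forecast) + abs(validation)) / 2))
}

smape_sketch(c(110, 90, 100), c(100, 100, 100))
```

Note that the denominator makes the error symmetric: over- and under-forecasting by the same absolute amount are penalized comparably.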
Handling of NA values. By default, the objects created with the algebra()
function represent a mathematical algebra capable of working on the [0, 1] interval. If
NA appears as a value, it is propagated to the result; that is, by default,
any operation with NA results in NA.
This scheme of handling missing values is also known as Bochvar's. To change this default
behavior, the following functions may be applied.
sobocinski(algebra)
kleene(algebra)
dragonfly(algebra)
nelson(algebra)
lowerEst(algebra)
algebra |
the underlying algebra object to be modified – see the |
The sobocinski(), kleene(), nelson(), lowerEst() and dragonfly() functions modify the algebra to
handle NA in a different way than the default. Sobocinski's algebra simply ignores NA values,
whereas Kleene's algebra treats NA as an "unknown value". The dragonfly approach is a combination
of Sobocinski's and Bochvar's approaches: it preserves the ordering 0 <= NA <= 1
so that compositions (see compose()) yield a lower estimate in the presence of missing values.
In detail, the behavior of the algebra modifiers is defined as follows:

Sobocinski's negation, for n being the negation of the underlying algebra:

| a | n(a) |
|---|---|
| NA | 0 |
Sobocinski's operation, for op being one of t, pt, c, pc, i, pi, s, ps
from the underlying algebra (rows indexed by the first argument a, columns by the second argument b):

|  | b | NA |
|---|---|---|
| a | op(a, b) | a |
| NA | b | NA |
Sobocinski's operation, for r being the residuum of the underlying algebra:

|  | b | NA |
|---|---|---|
| a | r(a, b) | n(a) |
| NA | b | NA |
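The Sobocinski tables above amount to simply "dropping" an NA operand. A minimal base-R sketch of such a wrapper for a two-argument operation (a hypothetical helper for illustration, not the package's implementation):

```r
# Wrap a binary operation so that an NA operand is ignored (Sobocinski-style)
sobocinski_op <- function(op) {
  function(a, b) {
    ifelse(is.na(a), b,
           ifelse(is.na(b), a, op(a, b)))
  }
}

luk_t <- function(a, b) pmax(0, a + b - 1)  # Lukasiewicz t-norm
st <- sobocinski_op(luk_t)

st(0.3, NA)   # NA is ignored: 0.3
st(0.7, 0.6)  # ordinary case: 0.3
st(NA, NA)    # both missing: NA
```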
Kleene's negation is identical to n from the underlying algebra.

Kleene's operation, for op being one of t, pt, i, pi from the underlying algebra:

|  | b | NA | 0 |
|---|---|---|---|
| a | op(a, b) | NA | 0 |
| NA | NA | NA | 0 |
| 0 | 0 | 0 | 0 |
Kleene's operation, for op being one of c, pc, s, ps from the underlying algebra:

|  | b | NA | 1 |
|---|---|---|---|
| a | op(a, b) | NA | 1 |
| NA | NA | NA | 1 |
| 1 | 1 | 1 | 1 |
Kleene's operation, for r being the residuum of the underlying algebra:

|  | b | NA | 1 |
|---|---|---|---|
| a | r(a, b) | NA | 1 |
| NA | NA | NA | 1 |
| 0 | 1 | 1 | 1 |
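By contrast, in Kleene's tables NA behaves as "unknown": it survives every operation unless an annihilator (0 for conjunctions, 1 for disjunctions) forces the result. A base-R sketch of a Kleene-style minimum (a hypothetical helper for illustration, not the package code):

```r
# Kleene-style conjunction based on min: NA propagates unless a 0 decides
kleene_min <- function(a, b) {
  r <- pmin(a, b)                       # NA if either operand is NA
  zero <- (!is.na(a) & a == 0) | (!is.na(b) & b == 0)
  r[zero] <- 0                          # 0 annihilates even NA
  r
}

kleene_min(NA, 0)     # 0
kleene_min(NA, 0.7)   # NA
kleene_min(0.3, 0.8)  # 0.3
```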
Dragonfly negation is identical to n from the underlying algebra.

Dragonfly operation, for op being one of t, pt, i, pi from the underlying algebra:

|  | b | NA | 0 | 1 |
|---|---|---|---|---|
| a | op(a, b) | NA | 0 | a |
| NA | NA | NA | 0 | NA |
| 0 | 0 | 0 | 0 | 0 |
| 1 | b | NA | 0 | 1 |
Dragonfly operation, for op being one of c, pc, s, ps from the underlying algebra:

|  | b | NA | 0 | 1 |
|---|---|---|---|---|
| a | op(a, b) | a | a | 1 |
| NA | b | NA | NA | 1 |
| 0 | b | NA | 0 | 1 |
| 1 | 1 | 1 | 1 | 1 |
Dragonfly operation, for r being the residuum of the underlying algebra:

|  | b | NA | 0 | 1 |
|---|---|---|---|---|
| a | r(a, b) | NA | n(a) | 1 |
| NA | b | 1 | NA | 1 |
| 0 | 1 | 1 | 1 | 1 |
| 1 | b | NA | 0 | 1 |
A list of functions with the same structure as the list returned by the algebra()
function.
Michal Burda
a <- algebra('lukas')
b <- sobocinski(a)
a$t(0.3, NA)  # NA
b$t(0.3, NA)  # 0.3
A factory function for the creation of Sugeno integrals.
sugeno( measure, relative = TRUE, strong = FALSE, alg = c("lukasiewicz", "goedel", "goguen") )
measure |
A non-decreasing function that assigns a truth value from the
[0, 1] interval to the measured quantity. |
relative |
Whether the measure assumes relative or absolute quantity.
Relative quantity is always a number from the [0, 1] interval. |
strong |
Whether to use the strong conjunction ( |
alg |
The underlying algebra must be either a string (one from 'lukasiewicz',
'goedel' or 'goguen') or an instance of the S3 class |
A two-argument function, which expects two numeric vectors of equal length
(the vector elements are recycled to ensure equal lengths). The first argument, x,
is a vector of membership degrees to be measured; the second argument, w, is
the vector of weights.

Let U be the set of input vector indices (1 to length(x)). Then the Sugeno integral
computes the truth value according to the following formula:

sup over z ⊆ U of CONJ(measure(m(z)), inf_{i ∈ z} x_i),

where m(z) = ∑_{i ∈ z} w_i / ∑_{i ∈ U} w_i if relative==TRUE,
or m(z) = ∑_{i ∈ z} w_i if relative==FALSE,
and where CONJ is the strong conjunction (i.e. alg$pt) or the weak conjunction
(i.e. alg$pi) according to the strong parameter.
Michal Burda
# Dvorak <1> "almost all" quantifier
q <- sugeno(lingexpr(ctx3(), atomic='bi', hedge='ex'))
a <- c(0.9, 1, 1, 0.2, 1)
q(x=a, w=1)

# Dvorak <1,1> "almost all" quantifier
a <- c(0.9, 1, 1, 0.2, 1)
b <- c(0.2, 1, 0, 0.5, 0.8)
q <- sugeno(lingexpr(ctx3(), atomic='bi', hedge='ex'))
q(x=lukas.residuum(a, b), w=1)

# Murinová <1,1> "almost all" quantifier
a <- c(0.9, 1, 1, 0.2, 1)
b <- c(0.2, 1, 0, 0.5, 0.8)
q <- sugeno(lingexpr(ctx3(), atomic='bi', hedge='ex'))
q(x=lukas.residuum(a, b), w=a)
These functions compute membership degrees of numeric fuzzy sets with
triangular or raised-cosine shapes. They are deprecated;
please use the triangular() or raisedcosine() functions instead.
triangle(x, lo, center, hi)
raisedcos(x, lo, center, hi)
x |
A numeric vector to be transformed. |
lo |
A lower bound (can be -Inf). |
center |
A peak value. |
hi |
An upper bound (can be Inf). |
A numeric vector of membership degrees of x to a fuzzy set whose shape
is determined by lo, center, and hi.
Michal Burda
These functions create single-argument functions that compute membership
degrees of x to a fuzzy set of either triangular or raised-cosine shape
defined by lo, center, and hi.
triangular(lo, center, hi)
raisedcosine(lo, center, hi)
lo |
A lower bound (can be -Inf). |
center |
A peak value. |
hi |
An upper bound (can be Inf). |
The arguments must satisfy lo <= center <= hi. The functions compute membership degrees of triangular or
raised-cosine fuzzy sets: x values equal to center obtain membership degree 1,
and x values lower than lo or greater than hi obtain membership degree 0.
A transition of the triangular (resp. raised-cosine) shape (with peak at center)
is computed for x values between lo and hi.

If lo == -Inf, then any value lower than or equal to center gets membership degree 1. Similarly, if hi == Inf,
then any value greater than or equal to center gets membership degree 1. NA
and NaN values remain unchanged.
triangular()
produces fuzzy sets of a triangular shape (with peak at center
), raisedcosine()
produces
fuzzy sets defined as a raised cosine hill.
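The triangular shape described above is easy to reproduce in base R. A sketch assuming strictly finite bounds with lo < center < hi (a hypothetical re-implementation for illustration, not the package's triangular()):

```r
# Triangular membership: rises from lo to center, falls from center to hi
tri <- function(lo, center, hi) {
  stopifnot(lo < center, center < hi)   # finite, strictly ordered bounds
  function(x) {
    y <- ifelse(x <= center,
                (x - lo) / (center - lo),   # rising edge
                (hi - x) / (hi - center))   # falling edge
    pmin(pmax(y, 0), 1)                     # clamp to [0, 1]
  }
}

f <- tri(1, 2, 3)
f(c(0, 1.5, 2, 2.5, 4))  # 0.0 0.5 1.0 0.5 0.0
```

Unlike the package's triangular(), this sketch does not handle infinite bounds, for which the corresponding edge degenerates to a constant 1.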
A function of a single argument x, which should be a numeric vector to be converted.
Michal Burda
tr <- triangular(1, 2, 3)
tr(1:30 / 3)

rc <- raisedcosine(1, 2, 3)
rc(1:30 / 3)

plot(triangular(-1, 0, 1), from=-2, to=3)
plot(triangular(-1, 0, 2), from=-2, to=3)
plot(triangular(-Inf, 0, 1), from=-2, to=3)
plot(triangular(-1, 0, Inf), from=-2, to=3)
plot(raisedcosine(-1, 0, 1), from=-2, to=3)
plot(raisedcosine(-1, 0, 2), from=-2, to=3)
plot(raisedcosine(-Inf, 0, 1), from=-2, to=3)
plot(raisedcosine(-1, 0, Inf), from=-2, to=3)