- Removed the `$loglik()` method from all learners.
- Updated `lrn("classif.ranger")` and `lrn("regr.ranger")` for ranger 0.17.0, adding the `na.action` parameter and the `"missings"` property, as well as the `poisson` splitrule for regression with a new `poisson.tau` parameter.
- Breaking changes to `lrn("classif.ranger")` and `lrn("regr.ranger")`:
  - Removed the `alpha` and `minprop` hyperparameters.
  - Removed the default of `respect.unordered.factors`.
  - Changed the lower bound of `max_depth` from 0 to 1.
  - Removed `se.method` from `lrn("classif.ranger")`.
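A brief sketch of the new ranger options above. The parameter values are illustrative only; `splitrule = "poisson"`, `poisson.tau` and the `na.action` values follow ranger 0.17.0, and `"na.learn"` is assumed to be one of its accepted options:

```r
library(mlr3)
library(mlr3learners)

# Poisson splitrule for count-like regression targets; na.action lets
# ranger handle missing values itself (hence the new "missings" property).
learner = lrn("regr.ranger",
  splitrule   = "poisson",
  poisson.tau = 1,
  na.action   = "na.learn"
)
```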
- Fixed `base_margin` in xgboost learners (#205).
- `lrn("regr.xgboost")` now works properly; previously, the training data was used.
- `eval_metric` must now be set. This ensures that one makes a conscious decision about which performance metric to use for early stopping.
- `LearnerClassifXgboost` and `LearnerRegrXgboost` now support internal tuning and validation. This also works in conjunction with `mlr3pipelines`.
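A minimal sketch of the validation and early-stopping workflow described above, assuming a recent mlr3 with `set_validate()`; the metric and task are illustrative:

```r
library(mlr3)
library(mlr3learners)

learner = lrn("classif.xgboost",
  nrounds = 500,
  early_stopping_rounds = 10,
  eval_metric = "logloss"  # must now be chosen explicitly
)
# Hold out 30% of the training data for validation / early stopping.
set_validate(learner, validate = 0.3)
learner$train(tsk("sonar"))
learner$internal_tuned_values  # e.g. the nrounds found by early stopping
```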
- The `nnet` learner now supports the feature type `"integer"`.
- Added the `min.bucket` parameter to `classif.ranger` and `regr.ranger`.
- Unloading `mlr3learners` removes the learners from the dictionary.
- Added the `regr.nnet` learner.
- `classif.log_reg`.
- Added a `default_values()` function for the ranger and svm learners.
- `eval_metric()` is now explicitly set for xgboost learners to silence a deprecation warning.
- `mtry.ratio` is converted to `mtry` to simplify tuning.
- Fixed `glm` and `glmnet` (#199): while predictions in previous versions were correct, the estimated coefficients had the wrong sign.
- `lambda` and `s` for `glmnet` learners (#197).
- `glmnet` learners now support extracting the selected features (#200).
- `kknn` learners now raise an exception if `k >= n` (#191).
- `ranger` learners now come with the virtual hyperparameter `mtry.ratio`, which sets the hyperparameter `mtry` based on the proportion of features to use.
- Support for extracting the log-likelihood (method `$loglik()`), allowing measures like AIC or BIC to be calculated in mlr3 (#182).
- `e1071`.
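As a sketch of the `mtry.ratio` mechanic described above (the task and ratio are illustrative):

```r
library(mlr3)
library(mlr3learners)

task = tsk("sonar")  # a task with 60 features
learner = lrn("classif.ranger", mtry.ratio = 0.5)
learner$train(task)
# Before ranger is called, mtry.ratio is converted into a concrete mtry
# value proportional to the number of features of the task, so the same
# ratio transfers across tasks of different widths.
```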
- `set_threads()` in mlr3 provides a generic way to set the respective hyperparameter to the desired number of parallel threads.
- Added the `survival:aft` objective to `surv.xgboost`.
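A minimal sketch of the generic threading helper mentioned above, assuming the learner's threading hyperparameter is tagged `"threads"` (for ranger this is `num.threads`):

```r
library(mlr3)
library(mlr3learners)

learner = lrn("regr.ranger")
# Generic: set_threads() looks up the hyperparameter tagged "threads"
# and sets it, regardless of its learner-specific name.
set_threads(learner, n = 4)
learner$param_set$values$num.threads
```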
- Removed `predict.all` from ranger learners (#172).
- Fixed `surv.ranger`, cf. https://github.com/mlr-org/mlr3proba/issues/165.
- Added the `classif.nnet` learner (moved from `mlr3extralearners`).
- Removed `LearnerSurvRanger`.
- Disabled `glmnet` tests on Solaris.
- `bibtex`.
- Fixed `classif.glmnet` and `classif.cv_glmnet` with `predict_type` set to `"prob"` (#155).
- Fixed `glmnet` to be more robust if the order of features has changed between train and predict.
- The `$model` slot of the {kknn} learner now returns a list containing information used during the predict step. Previously, the slot was empty because there is no training step for kknn.
- Serialization via `saveRDS()`, `serialize()`, etc.
- `penalty.factor` is a vector param, not a `ParamDbl` (#141).
- Added `mxitnr` and `epsnr` from the glmnet v4.0 update.
- `surv.glmnet` (#130).
- `mlr3proba` (#144).
- `surv.xgboost` (#135).
- `surv.ranger` (#134).
- `cv_glmnet` and `glmnet` (#99).
- `predict.gamma` and `newoffset` arg (#98).
- A parameter test in `inst/paramtest` was added. It checks against the arguments of the upstream train and predict functions and ensures that all parameters are implemented in the respective mlr3 learner (#96).
- Added `interaction_constraints` to {xgboost} learners (#97).
- Added `classif.multinom` from package `nnet`.
- `regr.lm` and `classif.log_reg` now ignore the global option `"contrasts"`.
- Added a vignette `additional-learners.Rmd` listing all mlr3 custom learners.
- `interaction_constraints` (#95).
- Added the feature type `logical()` to multiple learners.
- `regr.glmnet`, `regr.km`, `regr.ranger`, `regr.svm`, `regr.xgboost`, `classif.glmnet`, `classif.lda`, `classif.naivebayes`, `classif.qda`, `classif.ranger` and `classif.svm`.
- `glmnet`: Added the `relax` parameter (v3.0).
- `xgboost`: Updated parameters for v0.90.0.2.
- Fixed a bug in `*.xgboost` and `*.svm` which was triggered if columns were reordered between `$train()` and `$predict()`.
- Changes to work with the new `mlr3::Learner` API.
- Improved documentation.
- Added references.
- Added new parameters of xgboost version 0.90.2.
- Added parameter dependencies for xgboost.