* `parents` and `leaves` are renamed to `jt_parents` and `jt_leaves` respectively to avoid name clashes with `bnlearn`.
* `jt_binary_ops` is renamed to `jt_nbinary_ops`, and new triangulation methods are added, breaking the API slightly. The algorithm is now much faster though.
* `compile` gets a new argument, `initialize_cpts`, in order to speed up computations when insertion of evidence at the CPT level is of interest.
* New function `initialize` to initialize an object from `compile` if `initialize_cpts` was set to `FALSE`.
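The speed-up from deferring CPT initialization can be pictured as follows: if evidence is entered at the CPT level before the clique potentials are built, every affected table shrinks first. A minimal Python sketch of the idea (the dict representation and the `slice_cpt` helper are invented for this illustration and are not jti's internals):

```python
from itertools import product

# A CPT stored as a dict from assignments (tuples of states) to
# probabilities. Two variables A, B with 3 states each: 9 entries.
states = ["0", "1", "2"]
cpt = {(a, b): 1.0 / 3 for a, b in product(states, states)}  # P(B | A)

def slice_cpt(table, var_index, observed_state):
    """Keep only the rows consistent with the evidence and drop the
    observed variable from the assignment, shrinking the table
    before any clique potential is formed from it."""
    return {
        key[:var_index] + key[var_index + 1:]: p
        for key, p in table.items()
        if key[var_index] == observed_state
    }

reduced = slice_cpt(cpt, var_index=0, observed_state="1")  # evidence: A = "1"
print(len(cpt), len(reduced))  # 9 entries before slicing, 3 after
```

Every table touched by the evidence shrinks by a factor equal to the number of states of the observed variable, before any multiplication happens.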
* `pot_list` is deprecated. Use `cpt_list` for both BNs and MRFs.
* `jt_nbinary_ops` is more than twice as fast.
* New function `pot_list` for Markov random fields, which is more efficient and idiomatic.
* When `joint_vars` is specified, the root node is automatically set to the clique where these variables are located. Hence, one only needs to collect in order to query probabilities about these variables.
* When evidence is inconsistent, the `jt` algorithm now proceeds assuming a uniform prior distribution on the affected tables. In this regard, one cannot query the evidence since it has no meaning. The print method flags if there are inconsistencies, and they can be obtained via `has_inconsistencies`. This new feature means that `jti` can now also be seen as a "machine-learning" algorithm that can be very useful in connection with, e.g., class prediction.
* `new_mpd` is renamed to `mpd` and now works on `cpt_list` objects.
* New `triangulate` method for `cpt_list` objects.
* New argument, `evidence`, for triangulation.
* New function `jt_binary_ops` for calculating the number of binary operations needed to perform a full message passing.
* `mpd` for finding maximal prime decompositions is now included.
* It is now possible to `triangulate` a graph before compilation in order to investigate the size of the cliques etc.
* Fixed a bug in the creation of the junction tree when calling Kruskal's algorithm.
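For reference, the construction the fix concerns works like this: Kruskal's algorithm builds a maximum-weight spanning tree over the cliques, weighting each candidate edge by the size of the separator (intersection) between its two cliques. A self-contained Python sketch of that construction (illustrative only; the clique sets are made up):

```python
# Build a junction tree from cliques with Kruskal's algorithm:
# a maximum-weight spanning tree where the weight of an edge between
# two cliques is the size of their separator (intersection).
def junction_tree_edges(cliques):
    n = len(cliques)
    # Candidate edges sorted by decreasing separator size.
    candidates = sorted(
        ((i, j) for i in range(n) for j in range(i + 1, n)),
        key=lambda e: -len(cliques[e[0]] & cliques[e[1]]),
    )
    parent = list(range(n))  # union-find forest over clique indices

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    tree = []
    for i, j in candidates:
        ri, rj = find(i), find(j)
        if ri != rj:  # joining two components keeps the result a tree
            parent[ri] = rj
            tree.append((i, j))
    return tree

cliques = [{"A", "B"}, {"B", "C"}, {"C", "D"}]
print(junction_tree_edges(cliques))  # two edges connecting three cliques
```

Choosing the heaviest separators first is what makes the result a valid junction tree (the running intersection property holds) when the cliques come from a triangulated graph.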
* It is now possible to specify variables of interest in advance, such that we are guaranteed to be able to query the joint pmf of these variables.
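One standard way to obtain such a guarantee is to connect the variables of interest pairwise before triangulation, so that some clique of the triangulated graph must contain all of them. A hedged Python sketch of that trick (the `cliques_after_elimination` helper is hypothetical, not jti's implementation):

```python
from itertools import combinations

def cliques_after_elimination(adj, order):
    """Triangulate by vertex elimination and collect the cliques
    (the eliminated vertex together with its remaining neighbours)."""
    adj = {v: set(ns) for v, ns in adj.items()}
    cliques, removed = [], set()
    for v in order:
        nbrs = adj[v] - removed
        cliques.append(nbrs | {v})
        for a, b in combinations(nbrs, 2):  # add fill-in edges
            adj[a].add(b)
            adj[b].add(a)
        removed.add(v)
    return cliques

# Chain A - B - C - D; we want the joint pmf of A and D.
adj = {"A": {"B"}, "B": {"A", "C"}, "C": {"B", "D"}, "D": {"C"}}
interest = {"A", "D"}
for a, b in combinations(sorted(interest), 2):  # force an A-D edge
    adj[a].add(b)
    adj[b].add(a)

cliques = cliques_after_elimination(adj, order=["B", "C", "A", "D"])
print(any(interest <= c for c in cliques))  # some clique holds both A and D
```

Without the forced edge, A and D would end up in different cliques of the chain and their joint pmf could not be read off a single table.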
* Some refactoring makes compilation much faster. When potentials are assigned to a clique, we no longer start by creating a unity table and then multiplying; this was killing the advantage of the sparsity.
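The refactoring point can be illustrated: multiplying a sparse potential into a dense unity table materializes the full product space, whereas assigning the sparse table directly keeps only the entries that are actually present. A Python sketch with a made-up dict representation (not jti's internal code):

```python
from itertools import product

# A sparse potential over (A, B): only 2 of the 9 cells are non-zero.
sparse_pot = {("0", "0"): 0.7, ("2", "1"): 0.3}
states = ["0", "1", "2"]

# Old approach: start from a dense unity table and multiply, which
# enumerates every cell of the product space, zeros included.
dense = {key: 1.0 for key in product(states, states)}  # 9 cells of 1.0
dense = {key: dense[key] * sparse_pot.get(key, 0.0) for key in dense}

# New approach: assign the sparse potential directly.
direct = dict(sparse_pot)  # 2 cells, no dense intermediate

print(len(dense), len(direct))  # 9 stored entries vs 2
```

The dense route costs time and memory proportional to the full state space of the clique; the direct assignment costs only as much as the number of non-zero entries.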