aGrUM 2.3.2
a C++ library for (probabilistic) graphical models
gum::credal::InferenceEngine< GUM_SCALAR > Class Template Reference [abstract]

Abstract class template representing a CredalNet inference engine. More...

#include <agrum/CN/inferenceEngine.h>

Inheritance diagram for gum::credal::InferenceEngine< GUM_SCALAR >:
Collaboration diagram for gum::credal::InferenceEngine< GUM_SCALAR >:

Public Types

enum class  ApproximationSchemeSTATE : char {
  Undefined , Continue , Epsilon , Rate ,
  Limit , TimeLimit , Stopped
}
 The different states of an approximation scheme. More...

Public Member Functions

virtual void addEvidence (NodeId id, const Idx val) final
 adds a new hard evidence on node id
virtual void addEvidence (const std::string &nodeName, const Idx val) final
 adds a new hard evidence on node named nodeName
virtual void addEvidence (NodeId id, const std::string &label) final
 adds a new hard evidence on node id
virtual void addEvidence (const std::string &nodeName, const std::string &label) final
 adds a new hard evidence on node named nodeName
virtual void addEvidence (NodeId id, const std::vector< GUM_SCALAR > &vals) final
 adds a new evidence on node id (might be soft or hard)
virtual void addEvidence (const std::string &nodeName, const std::vector< GUM_SCALAR > &vals) final
 adds a new evidence on node named nodeName (might be soft or hard)
virtual void addEvidence (const Tensor< GUM_SCALAR > &pot) final
 adds a new evidence on node id (might be soft or hard)
virtual void eraseAllEvidence ()
 removes all the evidence entered into the network
Constructors / Destructors
 InferenceEngine (const CredalNet< GUM_SCALAR > &credalNet)
 Constructor.
virtual ~InferenceEngine ()
 Destructor.
Pure virtual methods
virtual void makeInference ()=0
 To be redefined by each credal net algorithm.
Getters and setters
VarMod2BNsMap< GUM_SCALAR > * getVarMod2BNsMap ()
 Get optimum IBayesNet.
const CredalNet< GUM_SCALAR > & credalNet () const
 Get this credal network.
const NodeProperty< std::vector< NodeId > > & getT0Cluster () const
 Get the t0_ cluster.
const NodeProperty< std::vector< NodeId > > & getT1Cluster () const
 Get the t1_ cluster.
void setRepetitiveInd (const bool repetitive)
void storeVertices (const bool value)
void storeBNOpt (const bool value)
bool repetitiveInd () const
 Get the current independence status.
bool storeVertices () const
 Returns true if credal set vertices are stored during inference.
bool storeBNOpt () const
 Returns true if optimal IBayesNets are stored during inference.
Pre-inference initialization methods
void insertModalsFile (const std::string &path)
 Insert variables modalities from file to compute expectations.
void insertModals (const std::map< std::string, std::vector< GUM_SCALAR > > &modals)
 Insert variables modalities from map to compute expectations.
virtual void insertEvidenceFile (const std::string &path)
 Insert evidence from file.
void insertEvidence (const std::map< std::string, std::vector< GUM_SCALAR > > &eviMap)
 Insert evidence from map.
void insertEvidence (const NodeProperty< std::vector< GUM_SCALAR > > &evidence)
 Insert evidence from Property.
void insertQueryFile (const std::string &path)
 Insert query variables states from file.
void insertQuery (const NodeProperty< std::vector< bool > > &query)
 Insert query variables and states from Property.
Post-inference methods
Tensor< GUM_SCALAR > marginalMin (const NodeId id) const
 Get the lower marginals of a given node id.
Tensor< GUM_SCALAR > marginalMax (const NodeId id) const
 Get the upper marginals of a given node id.
Tensor< GUM_SCALAR > marginalMin (const std::string &varName) const
 Get the lower marginals of a given variable name.
Tensor< GUM_SCALAR > marginalMax (const std::string &varName) const
 Get the upper marginals of a given variable name.
const GUM_SCALAR & expectationMin (const NodeId id) const
 Get the lower expectation of a given node id.
const GUM_SCALAR & expectationMax (const NodeId id) const
 Get the upper expectation of a given node id.
const GUM_SCALAR & expectationMin (const std::string &varName) const
 Get the lower expectation of a given variable name.
const GUM_SCALAR & expectationMax (const std::string &varName) const
 Get the upper expectation of a given variable name.
const std::vector< GUM_SCALAR > & dynamicExpMin (const std::string &varName) const
 Get the lower dynamic expectation of a given variable prefix (the variable name without its time step suffix, e.g. "temp" for "temp_0", "temp_1", ...).
const std::vector< GUM_SCALAR > & dynamicExpMax (const std::string &varName) const
 Get the upper dynamic expectation of a given variable prefix (the variable name without its time step suffix, e.g. "temp" for "temp_0", "temp_1", ...).
const std::vector< std::vector< GUM_SCALAR > > & vertices (const NodeId id) const
 Get the vertices of a given node id.
void saveMarginals (const std::string &path) const
 Saves marginals to file.
void saveExpectations (const std::string &path) const
 Saves expectations to file.
void saveVertices (const std::string &path) const
 Saves vertices to file.
void dynamicExpectations ()
 Compute dynamic expectations.
std::string toString () const
 Print all node marginals to standard output.
const std::string getApproximationSchemeMsg ()
 Get approximation scheme state.
Getters and setters
void setEpsilon (double eps) override
 Given that we approximate f(t), stopping criterion on |f(t+1)-f(t)|.
double epsilon () const override
 Returns the value of epsilon.
void disableEpsilon () override
 Disable stopping criterion on epsilon.
void enableEpsilon () override
 Enable stopping criterion on epsilon.
bool isEnabledEpsilon () const override
 Returns true if stopping criterion on epsilon is enabled, false otherwise.
void setMinEpsilonRate (double rate) override
 Given that we approximate f(t), stopping criterion on d/dt(|f(t+1)-f(t)|).
double minEpsilonRate () const override
 Returns the value of the minimal epsilon rate.
void disableMinEpsilonRate () override
 Disable stopping criterion on epsilon rate.
void enableMinEpsilonRate () override
 Enable stopping criterion on epsilon rate.
bool isEnabledMinEpsilonRate () const override
 Returns true if stopping criterion on epsilon rate is enabled, false otherwise.
void setMaxIter (Size max) override
 Stopping criterion on number of iterations.
Size maxIter () const override
 Returns the criterion on number of iterations.
void disableMaxIter () override
 Disable stopping criterion on max iterations.
void enableMaxIter () override
 Enable stopping criterion on max iterations.
bool isEnabledMaxIter () const override
 Returns true if stopping criterion on max iterations is enabled, false otherwise.
void setMaxTime (double timeout) override
 Stopping criterion on timeout.
double maxTime () const override
 Returns the timeout (in seconds).
double currentTime () const override
 Returns the current running time in seconds.
void disableMaxTime () override
 Disable stopping criterion on timeout.
void enableMaxTime () override
 Enable stopping criterion on timeout.
bool isEnabledMaxTime () const override
 Returns true if stopping criterion on timeout is enabled, false otherwise.
void setPeriodSize (Size p) override
 Sets how many samples are drawn between two tests of the stopping criteria.
Size periodSize () const override
 Returns the period size.
void setVerbosity (bool v) override
 Set the verbosity on (true) or off (false).
bool verbosity () const override
 Returns true if verbosity is enabled.
ApproximationSchemeSTATE stateApproximationScheme () const override
 Returns the approximation scheme state.
Size nbrIterations () const override
 Returns the number of iterations.
const std::vector< double > & history () const override
 Returns the scheme history.
void initApproximationScheme ()
 Initialise the scheme.
bool startOfPeriod () const
 Returns true if we are at the beginning of a period (compute error is mandatory).
void updateApproximationScheme (unsigned int incr=1)
 Update the scheme w.r.t. the new error and increment steps.
Size remainingBurnIn () const
 Returns the remaining burn in.
void stopApproximationScheme ()
 Stop the approximation scheme.
bool continueApproximationScheme (double error)
 Update the scheme w.r.t. the new error.
Getters and setters
std::string messageApproximationScheme () const
 Returns the approximation scheme message.
Accessors/Modifiers
virtual void setNumberOfThreads (Size nb)
 sets the maximum number of threads to be used by the class containing this ThreadNumberManager
virtual Size getNumberOfThreads () const
 returns the current max number of threads used by the class containing this ThreadNumberManager
bool isGumNumberOfThreadsOverriden () const
 indicates whether the class containing this ThreadNumberManager set its own number of threads

Public Attributes

Signaler3< Size, double, double > onProgress
 Progression, error and time.
Signaler1< const std::string & > onStop
 Criteria messageApproximationScheme.

Protected Member Functions

Protected initialization methods
void repetitiveInit_ ()
 Initialize t0_ and t1_ clusters.
void initExpectations_ ()
 Initialize lower and upper expectations before inference, with the lower expectation being initialized on the highest modality and the upper expectation being initialized on the lowest modality.
void initMarginals_ ()
 Initialize lower and upper old marginals and marginals before inference, with the lower marginal being 1 and the upper 0.
void displatchMarginalsToThreads_ ()
 computes the vector threadRanges_, which assigns a part of marginalMin_ and marginalMax_ to each thread
void initMarginalSets_ ()
 Initialize credal set vertices with empty sets.
Protected algorithms methods
virtual const GUM_SCALAR computeEpsilon_ ()
 Compute approximation scheme epsilon using the old marginals and the new ones.
void updateExpectations_ (const NodeId &id, const std::vector< GUM_SCALAR > &vertex)
 Given a node id and one of its possible vertices obtained during inference, update this node's lower and upper expectations.
void updateCredalSets_ (const NodeId &id, const std::vector< GUM_SCALAR > &vertex, const bool &elimRedund=false)
 Given a node id and one of its possible vertices, update its credal set.
Protected post-inference methods
void dynamicExpectations_ ()
 Rearrange lower and upper expectations to suit dynamic networks.

Protected Attributes

const CredalNet< GUM_SCALAR > * credalNet_
 A pointer to the Credal Net used.
margi oldMarginalMin_
 Old lower marginals used to compute epsilon.
margi oldMarginalMax_
 Old upper marginals used to compute epsilon.
margi marginalMin_
 Lower marginals.
margi marginalMax_
 Upper marginals.
credalSet marginalSets_
 Credal sets vertices, if enabled.
expe expectationMin_
 Lower expectations, if some variables modalities were inserted.
expe expectationMax_
 Upper expectations, if some variables modalities were inserted.
dynExpe dynamicExpMin_
 Lower dynamic expectations.
dynExpe dynamicExpMax_
 Upper dynamic expectations.
dynExpe modal_
 Variables modalities used to compute expectations.
margi evidence_
 Holds observed variables states.
query query_
 Holds the query nodes states.
cluster t0_
 Clusters of nodes used with dynamic networks.
cluster t1_
 Clusters of nodes used with dynamic networks.
bool storeVertices_
 True if credal sets vertices are stored, False otherwise.
bool repetitiveInd_
 True if using repetitive independence ( dynamic network only ), False otherwise.
bool storeBNOpt_
 True if optimal IBayesNets are stored during inference, False otherwise.
VarMod2BNsMap< GUM_SCALAR > dbnOpt_
 Object used to efficiently store optimal Bayes nets during inference, for some algorithms.
std::vector< std::pair< NodeId, Idx > > threadRanges_
 the ranges of elements of marginalMin_ and marginalMax_ processed by each thread
int timeSteps_
 The number of time steps of this network (only useful for dynamic networks).
Size threadMinimalNbOps_ {Size(20)}
double current_epsilon_
 Current epsilon.
double last_epsilon_
 Last epsilon value.
double current_rate_
 Current rate.
Size current_step_
 The current step.
Timer timer_
 The timer.
ApproximationSchemeSTATE current_state_
 The current state.
std::vector< double > history_
 The scheme history, used only if verbosity == true.
double eps_
 Threshold for convergence.
bool enabled_eps_
 If true, the threshold convergence is enabled.
double min_rate_eps_
 Threshold for the epsilon rate.
bool enabled_min_rate_eps_
 If true, the minimal threshold for epsilon rate is enabled.
double max_time_
 The timeout.
bool enabled_max_time_
 If true, the timeout is enabled.
Size max_iter_
 The maximum iterations.
bool enabled_max_iter_
 If true, the maximum iterations stopping criterion is enabled.
Size burn_in_
 Number of iterations before checking stopping criteria.
Size period_size_
 Checking criteria frequency.
bool verbosity_
 If true, verbosity is enabled.

Private Types

using credalSet = NodeProperty< std::vector< std::vector< GUM_SCALAR > > >
using margi = NodeProperty< std::vector< GUM_SCALAR > >
using expe = NodeProperty< GUM_SCALAR >
using dynExpe = typename gum::HashTable< std::string, std::vector< GUM_SCALAR > >
using query = NodeProperty< std::vector< bool > >
using cluster = NodeProperty< std::vector< NodeId > >

Private Member Functions

void stopScheme_ (ApproximationSchemeSTATE new_state)
 Stop the scheme given a new state.

Private Attributes

Size _nb_threads_ {0}
 the max number of threads used by the class

Detailed Description

template<typename GUM_SCALAR>
class gum::credal::InferenceEngine< GUM_SCALAR >

Abstract class template representing a CredalNet inference engine.

Used by credal network inference algorithms such as CNLoopyPropagation (inner multi-threading) or CNMonteCarloSampling (outer multi-threading).

Template Parameters
GUM_SCALAR  A floating type (float, double, long double, ...).
Author
Matthieu HOURBRACQ and Pierre-Henri WUILLEMIN(_at_LIP6)

Definition at line 72 of file inferenceEngine.h.

Member Typedef Documentation

◆ cluster

template<typename GUM_SCALAR>
using gum::credal::InferenceEngine< GUM_SCALAR >::cluster = NodeProperty< std::vector< NodeId > >
private

Definition at line 80 of file inferenceEngine.h.

◆ credalSet

template<typename GUM_SCALAR>
using gum::credal::InferenceEngine< GUM_SCALAR >::credalSet = NodeProperty< std::vector< std::vector< GUM_SCALAR > > >
private

Definition at line 73 of file inferenceEngine.h.

◆ dynExpe

template<typename GUM_SCALAR>
using gum::credal::InferenceEngine< GUM_SCALAR >::dynExpe = typename gum::HashTable< std::string, std::vector< GUM_SCALAR > >
private

Definition at line 77 of file inferenceEngine.h.

◆ expe

template<typename GUM_SCALAR>
using gum::credal::InferenceEngine< GUM_SCALAR >::expe = NodeProperty< GUM_SCALAR >
private

Definition at line 75 of file inferenceEngine.h.

◆ margi

template<typename GUM_SCALAR>
using gum::credal::InferenceEngine< GUM_SCALAR >::margi = NodeProperty< std::vector< GUM_SCALAR > >
private

Definition at line 74 of file inferenceEngine.h.

◆ query

template<typename GUM_SCALAR>
using gum::credal::InferenceEngine< GUM_SCALAR >::query = NodeProperty< std::vector< bool > >
private

Definition at line 79 of file inferenceEngine.h.

Member Enumeration Documentation

◆ ApproximationSchemeSTATE

The different states of an approximation scheme.

Enumerator
Undefined 
Continue 
Epsilon 
Rate 
Limit 
TimeLimit 
Stopped 

Definition at line 86 of file IApproximationSchemeConfiguration.h.

86 : char {
87 Undefined,
88 Continue,
89 Epsilon,
90 Rate,
91 Limit,
92 TimeLimit,
93 Stopped
94 };

Constructor & Destructor Documentation

◆ InferenceEngine()

template<typename GUM_SCALAR>
gum::credal::InferenceEngine< GUM_SCALAR >::InferenceEngine ( const CredalNet< GUM_SCALAR > & credalNet)
explicit

Constructor.

Parameters
credalNet  The credal net to be used with this inference engine.

Definition at line 64 of file inferenceEngine_tpl.h.

64 :
67
68 dbnOpt_.setCNet(credalNet);
69
71
73 }

References gum::ApproximationScheme::ApproximationScheme(), InferenceEngine(), credalNet(), credalNet_, dbnOpt_, and initMarginals_().

Referenced by gum::credal::CNLoopyPropagation< GUM_SCALAR >::CNLoopyPropagation(), InferenceEngine(), gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::MultipleInferenceEngine(), ~InferenceEngine(), and dynamicExpMax().


◆ ~InferenceEngine()

template<typename GUM_SCALAR>
gum::credal::InferenceEngine< GUM_SCALAR >::~InferenceEngine ( )
virtual

Destructor.

Definition at line 76 of file inferenceEngine_tpl.h.

References InferenceEngine().


Member Function Documentation

◆ addEvidence() [1/7]

template<typename GUM_SCALAR>
void gum::credal::InferenceEngine< GUM_SCALAR >::addEvidence ( const std::string & nodeName,
const Idx val )
finalvirtual

adds a new hard evidence on node named nodeName

Exceptions
UndefinedElement  if nodeName does not belong to the Bayesian network
InvalidArgument  if val is not a value for the node
InvalidArgument  if nodeName already has evidence

Definition at line 1211 of file inferenceEngine_tpl.h.

1211 {
1212 addEvidence(this->credalNet_->current_bn().idFromName(nodeName), val);
1213 }

References addEvidence(), and credalNet_.


◆ addEvidence() [2/7]

template<typename GUM_SCALAR>
void gum::credal::InferenceEngine< GUM_SCALAR >::addEvidence ( const std::string & nodeName,
const std::string & label )
finalvirtual

adds a new hard evidence on node named nodeName

Exceptions
UndefinedElement  if nodeName does not belong to the Bayesian network
InvalidArgument  if label is not a label of the node
InvalidArgument  if nodeName already has evidence

Definition at line 1223 of file inferenceEngine_tpl.h.

1224 {
1225 const NodeId id = this->credalNet_->current_bn().idFromName(nodeName);
1226 addEvidence(id, this->credalNet_->current_bn().variable(id)[label]);
1227 }

References addEvidence(), and credalNet_.


◆ addEvidence() [3/7]

template<typename GUM_SCALAR>
void gum::credal::InferenceEngine< GUM_SCALAR >::addEvidence ( const std::string & nodeName,
const std::vector< GUM_SCALAR > & vals )
finalvirtual

adds a new evidence on node named nodeName (might be soft or hard)

Exceptions
UndefinedElement  if nodeName does not belong to the Bayesian network
InvalidArgument  if nodeName already has evidence
FatalError  if vals = [0, 0, ..., 0]
InvalidArgument  if the size of vals differs from the domain size of node nodeName

Definition at line 1230 of file inferenceEngine_tpl.h.

1231 {
1232 addEvidence(this->credalNet_->current_bn().idFromName(nodeName), vals);
1233 }

References addEvidence(), and credalNet_.


◆ addEvidence() [4/7]

template<typename GUM_SCALAR>
void gum::credal::InferenceEngine< GUM_SCALAR >::addEvidence ( const Tensor< GUM_SCALAR > & pot)
finalvirtual

adds a new evidence on node id (might be soft or hard)

Exceptions
UndefinedElement  if the tensor is defined over several nodes
UndefinedElement  if the node on which the tensor is defined does not belong to the Bayesian network
InvalidArgument  if the node of the tensor already has evidence
FatalError  if pot = [0, 0, ..., 0]

Definition at line 1236 of file inferenceEngine_tpl.h.

1236 {
1237 const auto id = this->credalNet_->current_bn().idFromName(pot.variable(0).name());
1238 std::vector< GUM_SCALAR > vals(this->credalNet_->current_bn().variable(id).domainSize(), 0);
1239 Instantiation I(pot);
1240 for (I.setFirst(); !I.end(); I.inc()) {
1241 vals[I.val(0)] = pot[I];
1242 }
1243 addEvidence(id, vals);
1244 }

References addEvidence(), credalNet_, gum::Instantiation::end(), gum::Instantiation::inc(), gum::Instantiation::setFirst(), and gum::Instantiation::val().


◆ addEvidence() [5/7]

template<typename GUM_SCALAR>
void gum::credal::InferenceEngine< GUM_SCALAR >::addEvidence ( NodeId id,
const Idx val )
finalvirtual

adds a new hard evidence on node id

Exceptions
UndefinedElement  if id does not belong to the Bayesian network
InvalidArgument  if val is not a value for id
InvalidArgument  if id already has evidence

Definition at line 1203 of file inferenceEngine_tpl.h.

1203 {
1204 std::vector< GUM_SCALAR > vals(this->credalNet_->current_bn().variable(id).domainSize(), 0);
1205 vals[val] = 1;
1206 addEvidence(id, vals);
1207 }

References addEvidence(), and credalNet_.

Referenced by addEvidence(), addEvidence(), addEvidence(), addEvidence(), addEvidence(), and addEvidence().

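As the definition above shows, the hard-evidence overloads reduce to the soft-evidence one by building a one-hot likelihood vector over the node's domain. A minimal library-free sketch of that encoding (the free function `hardEvidence` is a hypothetical name, not part of aGrUM):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical helper mirroring what addEvidence(id, val) does internally:
// hard evidence is encoded as a one-hot likelihood over the node's domain.
std::vector<double> hardEvidence(std::size_t domainSize, std::size_t val) {
  std::vector<double> vals(domainSize, 0.0);  // every state ruled out...
  vals[val] = 1.0;                            // ...except the observed one
  return vals;
}
```

Soft evidence is the general case: any non-zero likelihood vector of the right domain size is accepted, which is why a vector of all zeros raises FatalError.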

◆ addEvidence() [6/7]

template<typename GUM_SCALAR>
void gum::credal::InferenceEngine< GUM_SCALAR >::addEvidence ( NodeId id,
const std::string & label )
finalvirtual

adds a new hard evidence on node id

Exceptions
UndefinedElement  if id does not belong to the Bayesian network
InvalidArgument  if label is not a label of node id
InvalidArgument  if id already has evidence

Definition at line 1217 of file inferenceEngine_tpl.h.

1217 {
1218 addEvidence(id, this->credalNet_->current_bn().variable(id)[label]);
1219 }

References addEvidence(), and credalNet_.


◆ addEvidence() [7/7]

template<typename GUM_SCALAR>
void gum::credal::InferenceEngine< GUM_SCALAR >::addEvidence ( NodeId id,
const std::vector< GUM_SCALAR > & vals )
finalvirtual

adds a new evidence on node id (might be soft or hard)

Exceptions
UndefinedElement  if id does not belong to the Bayesian network
InvalidArgument  if id already has evidence
FatalError  if vals = [0, 0, ..., 0]
InvalidArgument  if the size of vals differs from the domain size of node id

Definition at line 1193 of file inferenceEngine_tpl.h.

1194 {
1195 evidence_.insert(id, vals);
1196 // forces the computation of the begin iterator to avoid subsequent data races
1197 // @TODO make HashTableConstIterator constructors thread safe
1198 evidence_.begin();
1199 }

References evidence_.

◆ computeEpsilon_()

template<typename GUM_SCALAR>
const GUM_SCALAR gum::credal::InferenceEngine< GUM_SCALAR >::computeEpsilon_ ( )
inlineprotectedvirtual

Compute approximation scheme epsilon using the old marginals and the new ones.

Highest delta on either lower or upper marginal is epsilon.

Also updates oldMarginals to current marginals.

Returns
Epsilon.

Reimplemented in gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >, and gum::credal::MultipleInferenceEngine< GUM_SCALAR, LazyPropagation< GUM_SCALAR > >.

Definition at line 1018 of file inferenceEngine_tpl.h.

1018 {
1019 // compute the number of threads and prepare for the result
1021 ? this->threadRanges_.size() - 1
1022 : 1; // no nested multithreading
1024
1025 // create the function to be executed by the threads
1026 auto threadedEps = [this, &tEps](const std::size_t this_thread,
1027 const std::size_t nb_threads,
1029 auto& this_tEps = tEps[this_thread];
1031
1032 // below, we will loop over indices i and j of marginalMin_ and
1033 // marginalMax_. Index i represents nodes and j allow to parse their
1034 // domain. To parse all the domains of all the nodes, we should theorically
1035 // use 2 loops. However, here, we will use one loop: we start with node i
1036 // and parse its domain with index j. When this is done, we move to the
1037 // next node, and so on. The underlying idea is that, by doing so, we
1038 // need not parse in this function the whole domain of a node: we can start
1039 // the loop at a given value of node i and complete the loop on another
1040 // value of another node. These values are computed in Vector threadRanges_
1041 // by Method dispatchMarginalsToThreads_(), which dispatches the loops
1042 // among threads
1043 auto i = ranges[this_thread].first;
1044 auto j = ranges[this_thread].second;
1045 auto domain_size = this->marginalMax_[i].size();
1046 const auto end_i = ranges[this_thread + 1].first;
1047 auto end_j = ranges[this_thread + 1].second;
1048 const auto marginalMax_size = this->marginalMax_.size();
1049
1050 while ((i < end_i) || (j < end_j)) {
1051 // on min
1053 delta = (delta < 0) ? (-delta) : delta;
1055
1056 // on max
1058 delta = (delta < 0) ? (-delta) : delta;
1060
1063
1064 if (++j == domain_size) {
1065 j = 0;
1066 ++i;
1067 if (i < marginalMax_size) domain_size = this->marginalMax_[i].size();
1068 }
1069 }
1070 };
1071
1072 // launch the threads
1074 nb_threads,
1076 (nb_threads == 1)
1077 ? std::vector< std::pair< NodeId, Idx > >{{0, 0}, {this->marginalMin_.size(), 0}}
1078 : this->threadRanges_);
1079
1080 // aggregate all the results
1081 GUM_SCALAR eps = tEps[0];
1082 for (const auto nb: tEps)
1083 if (eps < nb) eps = nb;
1084
1085 return eps;
1086 }

References gum::threadsSTL::ThreadExecutor::execute(), marginalMax_, marginalMin_, gum::threadsSTL::ThreadExecutor::nbRunningThreadsExecutors(), oldMarginalMax_, oldMarginalMin_, and threadRanges_.

Referenced by gum::credal::CNLoopyPropagation< GUM_SCALAR >::calculateEpsilon_().

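Stripped of the multi-threaded dispatch, the epsilon computed above is simply the largest absolute change on any lower or upper marginal between two successive iterations. A single-threaded sketch (all names hypothetical):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

using Marginals = std::vector<std::vector<double>>;  // one vector per node

// Single-threaded sketch of computeEpsilon_: the highest delta on either a
// lower or an upper marginal is epsilon.
double computeEpsilonSketch(const Marginals& oldMin, const Marginals& oldMax,
                            const Marginals& curMin, const Marginals& curMax) {
  double eps = 0.0;
  for (std::size_t i = 0; i < curMin.size(); ++i)    // loop over nodes
    for (std::size_t j = 0; j < curMin[i].size(); ++j) {  // over their domain
      eps = std::max(eps, std::fabs(curMin[i][j] - oldMin[i][j]));  // on min
      eps = std::max(eps, std::fabs(curMax[i][j] - oldMax[i][j]));  // on max
    }
  return eps;
}
```

The real method additionally copies the current marginals into oldMarginalMin_ / oldMarginalMax_ and splits the flattened (i, j) loop across threads via threadRanges_; both are omitted here.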

◆ continueApproximationScheme()

INLINE bool gum::ApproximationScheme::continueApproximationScheme ( double error)
inherited

Update the scheme w.r.t. the new error.

Test the stopping criterion that are enabled.

Parameters
error  The new error value.
Returns
false if the state becomes != ApproximationSchemeSTATE::Continue
Exceptions
OperationNotAllowed  Raised if state != ApproximationSchemeSTATE::Continue.

Definition at line 229 of file approximationScheme_inl.h.

229 {
230 // For coherence, we fix the time used in the method
231
232 double timer_step = timer_.step();
233
234 if (enabled_max_time_) {
235 if (timer_step > max_time_) {
237 return false;
238 }
239 }
240
241 if (!startOfPeriod()) { return true; }
242
244 GUM_ERROR(
245 OperationNotAllowed,
246 "state of the approximation scheme is not correct : " << messageApproximationScheme());
247 }
248
249 if (verbosity()) { history_.push_back(error); }
250
251 if (enabled_max_iter_) {
252 if (current_step_ > max_iter_) {
254 return false;
255 }
256 }
257
259 current_epsilon_ = error; // eps rate isEnabled needs it so affectation was
260 // moved from eps isEnabled below
261
262 if (enabled_eps_) {
263 if (current_epsilon_ <= eps_) {
265 return false;
266 }
267 }
268
269 if (last_epsilon_ >= 0.) {
270 if (current_epsilon_ > .0) {
271 // ! current_epsilon_ can be 0. AND epsilon
272 // isEnabled can be disabled !
274 }
275 // limit with current eps ---> 0 is | 1 - ( last_eps / 0 ) | --->
276 // infinity the else means a return false if we isEnabled the rate below,
277 // as we would have returned false if epsilon isEnabled was enabled
278 else {
280 }
281
285 return false;
286 }
287 }
288 }
289
291 if (onProgress.hasListener()) {
293 }
294
295 return true;
296 } else {
297 return false;
298 }
299 }

References enabled_max_time_, and timer_.

Referenced by gum::GibbsBNdistance< GUM_SCALAR >::computeKL_(), gum::learning::GreedyHillClimbing::learnStructure(), gum::learning::LocalSearchWithTabuList::learnStructure(), gum::SamplingInference< GUM_SCALAR >::loopApproxInference_(), gum::credal::CNLoopyPropagation< GUM_SCALAR >::makeInferenceByOrderedArcs_(), gum::credal::CNLoopyPropagation< GUM_SCALAR >::makeInferenceByRandomOrder_(), and gum::credal::CNLoopyPropagation< GUM_SCALAR >::makeInferenceNodeToNeighbours_().

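Stripped of the enable/disable flags, signal emission, and period handling, the decision order above (timeout, then iteration limit, then epsilon, then epsilon rate) can be sketched in a few lines. SchemeSketch and its members are hypothetical names, and every criterion is assumed enabled:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>

// Condensed sketch of continueApproximationScheme()'s decision order.
struct SchemeSketch {
  double eps = 1e-3, minRate = 1e-4, maxTime = 3600.0;
  std::size_t maxIter = 1000, step = 0;
  double lastEps = -1.0;  // negative: no previous error recorded yet

  // returns false as soon as one stopping criterion fires
  bool continueScheme(double error, double elapsed) {
    if (elapsed > maxTime) return false;             // TimeLimit
    if (++step > maxIter) return false;              // Limit
    if (error <= eps) return false;                  // Epsilon
    if (lastEps >= 0.0 && error > 0.0) {
      double rate = std::fabs((error - lastEps) / error);
      if (rate <= minRate) return false;             // Rate
    }
    lastEps = error;
    return true;
  }
};
```

In the real class each branch also records which criterion fired (the ApproximationSchemeSTATE values Epsilon, Rate, Limit, TimeLimit) so that messageApproximationScheme() can report it afterwards.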

◆ credalNet()

template<typename GUM_SCALAR>
const CredalNet< GUM_SCALAR > & gum::credal::InferenceEngine< GUM_SCALAR >::credalNet ( ) const

Get this credal network.

Returns
A constant reference to this CredalNet.

Definition at line 81 of file inferenceEngine_tpl.h.

81 {
82 return *credalNet_;
83 }

References credalNet_.

Referenced by gum::credal::CNLoopyPropagation< GUM_SCALAR >::CNLoopyPropagation(), InferenceEngine(), and gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::MultipleInferenceEngine().


◆ currentTime()

INLINE double gum::ApproximationScheme::currentTime ( ) const
overridevirtualinherited

Returns the current running time in seconds.

Returns
The current running time in seconds.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 136 of file approximationScheme_inl.h.

136{ return timer_.step(); }

References timer_.

◆ disableEpsilon()

INLINE void gum::ApproximationScheme::disableEpsilon ( )
overridevirtualinherited

Disable stopping criterion on epsilon.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 74 of file approximationScheme_inl.h.

74{ enabled_eps_ = false; }

References enabled_eps_.

Referenced by gum::learning::EMApproximationScheme::EMApproximationScheme(), and gum::learning::EMApproximationScheme::setMinEpsilonRate().


◆ disableMaxIter()

INLINE void gum::ApproximationScheme::disableMaxIter ( )
overridevirtualinherited

Disable stopping criterion on max iterations.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 115 of file approximationScheme_inl.h.

115{ enabled_max_iter_ = false; }

References enabled_max_iter_.

Referenced by gum::learning::GreedyHillClimbing::GreedyHillClimbing().


◆ disableMaxTime()

INLINE void gum::ApproximationScheme::disableMaxTime ( )
overridevirtualinherited

Disable stopping criterion on timeout.


Implements gum::IApproximationSchemeConfiguration.

Definition at line 139 of file approximationScheme_inl.h.

139{ enabled_max_time_ = false; }

References enabled_max_time_.

Referenced by gum::learning::GreedyHillClimbing::GreedyHillClimbing().


◆ disableMinEpsilonRate()

INLINE void gum::ApproximationScheme::disableMinEpsilonRate ( )
overridevirtualinherited

Disable stopping criterion on epsilon rate.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 95 of file approximationScheme_inl.h.

95{ enabled_min_rate_eps_ = false; }

References enabled_min_rate_eps_.

Referenced by gum::learning::GreedyHillClimbing::GreedyHillClimbing(), gum::GibbsBNdistance< GUM_SCALAR >::computeKL_(), and gum::learning::EMApproximationScheme::setEpsilon().


◆ displatchMarginalsToThreads_()

template<typename GUM_SCALAR>
void gum::credal::InferenceEngine< GUM_SCALAR >::displatchMarginalsToThreads_ ( )
protected

Computes the vector threadRanges_, which assigns a part of marginalMin_ and marginalMax_ to each thread.

Definition at line 1133 of file inferenceEngine_tpl.h.

1133 {
1134 // we compute the number of elements in the 2 loops (over i,j in marginalMin_[i][j])
1135 Size nb_elements = 0;
1136 const auto marginalMin_size = this->marginalMin_.size();
1137 for (const auto& marg_i: this->marginalMin_)
1138 nb_elements += marg_i.second.size();
1139
1140 // distribute evenly the elements among the threads
1143
1144 // the result that we return is a vector of pairs (NodeId, Idx). For thread number i, the
1145 // pair at index i is the beginning of the range that the thread will have to process: this
1146 // is the part of the marginal distribution vector of node NodeId starting at index Idx.
1147 // The pair at index i+1 is the end of this range (not included)
1148 threadRanges_.clear();
1149 threadRanges_.reserve(nb_threads + 1);
1150
1151 // try to balance the number of elements among the threads
1154
1155 NodeId current_node = 0;
1157 Size current_domain_size = this->marginalMin_[0].size();
1159
1160 for (Idx i = Idx(0); i < nb_threads; ++i) {
1161 // compute the end of the threads, assuming that the current node has a domain
1162 // sufficiently large
1164 if (rest_elts != Idx(0)) {
1166 --rest_elts;
1167 }
1168
1169 // if the current node is not sufficient to hold all the elements that
1170 // the current thread should process. So we should add elements of the
1171 // next nodes
1174 ++current_node;
1178 }
1179 }
1180
1181 // now we can store the range of elements
1183
1184 // compute the next begin_node
1186 ++current_node;
1188 }
1189 }
1190 }
virtual Size getNumberOfThreads() const
returns the current max number of threads used by the class containing this ThreadNumberManager
Size Idx
Type for indexes.
Definition types.h:79

References gum::ThreadNumberManager::getNumberOfThreads(), and threadRanges_.

Referenced by initMarginals_().

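The balancing idea in the excerpt above can be sketched standalone. This is a hypothetical illustration, not aGrUM code: given each node's marginal-vector size, it splits the total number of (node, index) elements into near-equal contiguous ranges, one per thread, with a trailing sentinel pair marking the (excluded) end of the last range. nbThreads is assumed to be at least 1.

```cpp
#include <cassert>
#include <cstddef>
#include <utility>
#include <vector>

// Hypothetical sketch of the dispatching idea behind displatchMarginalsToThreads_:
// each pair (node, idx) in the result is the beginning of one thread's range;
// the pair at position i+1 is the (excluded) end of thread i's range.
std::vector<std::pair<std::size_t, std::size_t>>
dispatchRanges(const std::vector<std::size_t>& domainSizes, std::size_t nbThreads) {
  std::size_t nbElements = 0;
  for (auto s : domainSizes) nbElements += s;

  const std::size_t perThread = nbElements / nbThreads;
  std::size_t rest = nbElements % nbThreads;  // spread the remainder over the first threads

  std::vector<std::pair<std::size_t, std::size_t>> ranges;
  ranges.reserve(nbThreads + 1);

  std::size_t node = 0, idx = 0;
  for (std::size_t t = 0; t < nbThreads; ++t) {
    ranges.emplace_back(node, idx);
    std::size_t toTake = perThread + (rest > 0 ? (--rest, 1) : 0);
    // consume whole nodes while the current node fits in this thread's share
    while (toTake > 0 && node < domainSizes.size()) {
      const std::size_t avail = domainSizes[node] - idx;
      if (avail <= toTake) { toTake -= avail; ++node; idx = 0; }
      else { idx += toTake; toTake = 0; }
    }
  }
  ranges.emplace_back(node, idx);  // end sentinel: one past the last element
  return ranges;
}
```

With domain sizes {2, 3, 4} and 3 threads, each thread gets exactly 3 of the 9 elements, the second range starting inside node 1.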

◆ dynamicExpectations()

template<typename GUM_SCALAR>
void gum::credal::InferenceEngine< GUM_SCALAR >::dynamicExpectations ( )

Compute dynamic expectations.

See also
dynamicExpectations_

Only call this if an algorithm does not call it by itself.

Definition at line 739 of file inferenceEngine_tpl.h.

739 {
740 dynamicExpectations_();
741 }
void dynamicExpectations_()
Rearrange lower and upper expectations to suit dynamic networks.

References dynamicExpectations_().


◆ dynamicExpectations_()

template<typename GUM_SCALAR>
void gum::credal::InferenceEngine< GUM_SCALAR >::dynamicExpectations_ ( )
protected

Rearrange lower and upper expectations to suit dynamic networks.

Definition at line 744 of file inferenceEngine_tpl.h.

744 {
745 // no modals, no expectations computed during inference
746 if (expectationMin_.empty() || modal_.empty()) return;
747
748 // already called by the algorithm or the user
749 if (dynamicExpMax_.size() > 0 && dynamicExpMin_.size() > 0) return;
750
752
754
755
756 // if non dynamic, directly save expectationMin_ et Max (same but faster)
758
759 for (const auto& elt: expectationMin_) {
761
762 var_name = credalNet_->current_bn().variable(elt.first).name();
763 auto delim = var_name.find_first_of("_");
764 time_step = var_name.substr(delim + 1, var_name.size());
765 var_name = var_name.substr(0, delim);
766
767 // to be sure (don't store not monitored variables' expectations)
768 // although it
769 // should be taken care of before this point
770 if (!modal_.exists(var_name)) continue;
771
772 expectationsMin.getWithDefault(var_name, innerMap())
773 .getWithDefault(atoi(time_step.c_str()), 0)
774 = elt.second; // we iterate with min iterators
775 expectationsMax.getWithDefault(var_name, innerMap())
776 .getWithDefault(atoi(time_step.c_str()), 0)
777 = expectationMax_[elt.first];
778 }
779
780 for (const auto& elt: expectationsMin) {
781 typename std::vector< GUM_SCALAR > dynExp(elt.second.size());
782
783 for (const auto& elt2: elt.second)
784 dynExp[elt2.first] = elt2.second;
785
786 dynamicExpMin_.insert(elt.first, dynExp);
787 }
788
789 for (const auto& elt: expectationsMax) {
790 typename std::vector< GUM_SCALAR > dynExp(elt.second.size());
791
792 for (const auto& elt2: elt.second) {
793 dynExp[elt2.first] = elt2.second;
794 }
795
796 dynamicExpMax_.insert(elt.first, dynExp);
797 }
798 }
dynExpe dynamicExpMin_
Lower dynamic expectations.
dynExpe dynamicExpMax_
Upper dynamic expectations.
expe expectationMax_
Upper expectations, if some variables modalities were inserted.
dynExpe modal_
Variables modalities used to compute expectations.
expe expectationMin_
Lower expectations, if some variables modalities were inserted.

References credalNet_, dynamicExpMax_, dynamicExpMin_, expectationMax_, expectationMin_, and modal_.

Referenced by dynamicExpectations().

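The excerpt above relies on the dynamic-network naming convention "<prefix>_<timeStep>" (e.g. "temp_3"). A standalone sketch of that splitting step, with a hypothetical helper name:

```cpp
#include <cassert>
#include <string>
#include <utility>

// Hypothetical sketch of the name splitting done in dynamicExpectations_:
// split "temp_3" into the variable prefix "temp" and the time step 3.
// A name without '_' is treated as non-dynamic (time step 0).
std::pair<std::string, int> splitDynamicName(const std::string& varName) {
  const auto delim = varName.find_first_of('_');
  if (delim == std::string::npos) return {varName, 0};  // non-dynamic variable
  const std::string prefix   = varName.substr(0, delim);
  const int         timeStep = std::stoi(varName.substr(delim + 1));
  return {prefix, timeStep};
}
```

dynamicExpMin/dynamicExpMax are then indexed by the prefix, with one vector entry per time step.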

◆ dynamicExpMax()

template<typename GUM_SCALAR>
const std::vector< GUM_SCALAR > & gum::credal::InferenceEngine< GUM_SCALAR >::dynamicExpMax ( const std::string & varName) const

Get the upper dynamic expectation of a given variable prefix (without the time step included, i.e. call with "temp" to get "temp_0", ..., "temp_T").

Parameters
varName: The variable name prefix whose upper expectation we want.
Returns
A constant reference to the variable upper expectation over all time steps.

Definition at line 534 of file inferenceEngine_tpl.h.

534 {
535 std::string errTxt = "const std::vector< GUM_SCALAR > & InferenceEngine< "
536 "GUM_SCALAR >::dynamicExpMax ( const std::string & "
537 "varName ) const : ";
538
539 if (dynamicExpMax_.empty())
540 GUM_ERROR(OperationNotAllowed, errTxt + "_dynamicExpectations() needs to be called before")
541
542 if (!dynamicExpMax_.exists(varName) /*dynamicExpMin_.find(varName) == dynamicExpMin_.end()*/)
543 GUM_ERROR(NotFound, errTxt + "variable name not found : " << varName)
544
545 return dynamicExpMax_[varName];
546 }

References InferenceEngine(), dynamicExpMax(), and dynamicExpMax_.

Referenced by dynamicExpMax().


◆ dynamicExpMin()

template<typename GUM_SCALAR>
const std::vector< GUM_SCALAR > & gum::credal::InferenceEngine< GUM_SCALAR >::dynamicExpMin ( const std::string & varName) const

Get the lower dynamic expectation of a given variable prefix (without the time step included, i.e. call with "temp" to get "temp_0", ..., "temp_T").

Parameters
varName: The variable name prefix whose lower expectation we want.
Returns
A constant reference to the variable lower expectation over all time steps.

Definition at line 518 of file inferenceEngine_tpl.h.

518 {
519 std::string errTxt = "const std::vector< GUM_SCALAR > & InferenceEngine< "
520 "GUM_SCALAR >::dynamicExpMin ( const std::string & "
521 "varName ) const : ";
522
523 if (dynamicExpMin_.empty())
524 GUM_ERROR(OperationNotAllowed, errTxt + "_dynamicExpectations() needs to be called before")
525
526 if (!dynamicExpMin_.exists(varName) /*dynamicExpMin_.find(varName) == dynamicExpMin_.end()*/)
527 GUM_ERROR(NotFound, errTxt + "variable name not found : " << varName)
528
529 return dynamicExpMin_[varName];
530 }

◆ enableEpsilon()

INLINE void gum::ApproximationScheme::enableEpsilon ( )
overridevirtualinherited

Enable stopping criterion on epsilon.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 77 of file approximationScheme_inl.h.

77{ enabled_eps_ = true; }

References enabled_eps_.

◆ enableMaxIter()

INLINE void gum::ApproximationScheme::enableMaxIter ( )
overridevirtualinherited

Enable stopping criterion on max iterations.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 118 of file approximationScheme_inl.h.

118{ enabled_max_iter_ = true; }

References enabled_max_iter_.

◆ enableMaxTime()

INLINE void gum::ApproximationScheme::enableMaxTime ( )
overridevirtualinherited

Enable stopping criterion on timeout.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 142 of file approximationScheme_inl.h.

142{ enabled_max_time_ = true; }

References enabled_max_time_.

◆ enableMinEpsilonRate()

INLINE void gum::ApproximationScheme::enableMinEpsilonRate ( )
overridevirtualinherited

Enable stopping criterion on epsilon rate.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 98 of file approximationScheme_inl.h.

98{ enabled_min_rate_eps_ = true; }

References enabled_min_rate_eps_.

Referenced by gum::learning::EMApproximationScheme::EMApproximationScheme(), and gum::GibbsBNdistance< GUM_SCALAR >::computeKL_().


◆ epsilon()

INLINE double gum::ApproximationScheme::epsilon ( ) const
overridevirtualinherited

Returns the value of epsilon.

Returns
Returns the value of epsilon.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 71 of file approximationScheme_inl.h.

71{ return eps_; }

References eps_.

Referenced by gum::ImportanceSampling< GUM_SCALAR >::onContextualize_(), and gum::ImportanceSampling< GUM_SCALAR >::unsharpenBN_().

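The enable*/disable* pairs documented above combine into a single loop test: the scheme keeps iterating until any *enabled* criterion fires. A minimal standalone sketch, assuming a hypothetical StoppingCriteria class whose members only mirror the documented flags (enabled_eps_, enabled_max_iter_, enabled_max_time_):

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical sketch (not aGrUM code) of how the stopping-criterion flags
// interact: a disabled criterion is simply skipped when deciding whether the
// approximation scheme should continue.
struct StoppingCriteria {
  bool enabledEps = true, enabledMaxIter = true, enabledMaxTime = true;
  double      eps     = 1e-4;
  std::size_t maxIter = 100;
  double      maxTime = 10.0;  // seconds

  // returns true if the scheme should perform another iteration
  bool shouldContinue(double currentEps, std::size_t currentStep, double currentTime) const {
    if (enabledEps && currentEps < eps) return false;        // converged
    if (enabledMaxIter && currentStep >= maxIter) return false;  // iteration budget spent
    if (enabledMaxTime && currentTime >= maxTime) return false;  // timeout
    return true;
  }
};
```

Disabling a criterion (as disableEpsilon() etc. do) means the corresponding test can never stop the scheme, which is why algorithms such as GreedyHillClimbing disable the criteria they do not use.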

◆ eraseAllEvidence()

template<typename GUM_SCALAR>
void gum::credal::InferenceEngine< GUM_SCALAR >::eraseAllEvidence ( )
virtual

removes all the evidence entered into the network

Reimplemented in gum::credal::CNLoopyPropagation< GUM_SCALAR >, gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >, and gum::credal::MultipleInferenceEngine< GUM_SCALAR, LazyPropagation< GUM_SCALAR > >.

Definition at line 86 of file inferenceEngine_tpl.h.

86 {
87 evidence_.clear();
88 query_.clear();
89 /*
90 marginalMin_.clear();
91 marginalMax_.clear();
92 oldMarginalMin_.clear();
93 oldMarginalMax_.clear();
94 */
95 initMarginals_();
96 /*
97 expectationMin_.clear();
98 expectationMax_.clear();
99 */
100 initExpectations_();
101
102 // marginalSets_.clear();
103 initMarginalSets_();
104
105 dynamicExpMin_.clear();
106 dynamicExpMax_.clear();
107
108 //_modal.clear();
109
110 //_t0.clear();
111 //_t1.clear();
112 }
void initExpectations_()
Initialize lower and upper expectations before inference, with the lower expectation being initialize...
void initMarginalSets_()
Initialize credal set vertices with empty sets.
query query_
Holds the query nodes states.

References dynamicExpMax_, dynamicExpMin_, evidence_, initExpectations_(), initMarginals_(), initMarginalSets_(), and query_.

Referenced by gum::credal::CNLoopyPropagation< GUM_SCALAR >::eraseAllEvidence(), and gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::eraseAllEvidence().


◆ expectationMax() [1/2]

template<typename GUM_SCALAR>
const GUM_SCALAR & gum::credal::InferenceEngine< GUM_SCALAR >::expectationMax ( const NodeId id) const

Get the upper expectation of a given node id.

Parameters
id: The node id whose upper expectation we want.
Returns
A constant reference to this node's upper expectation.

Definition at line 510 of file inferenceEngine_tpl.h.

510 {
511 try {
512 return expectationMax_[id];
513 } catch (NotFound& err) { throw(err); }
514 }

References expectationMax_.

◆ expectationMax() [2/2]

template<typename GUM_SCALAR>
const GUM_SCALAR & gum::credal::InferenceEngine< GUM_SCALAR >::expectationMax ( const std::string & varName) const

Get the upper expectation of a given variable name.

Parameters
varName: The variable name whose upper expectation we want.
Returns
A constant reference to this variable's upper expectation.

Definition at line 496 of file inferenceEngine_tpl.h.

496 {
497 try {
498 return expectationMax_[credalNet_->current_bn().idFromName(varName)];
499 } catch (NotFound& err) { throw(err); }
500 }

References credalNet_, and expectationMax_.

◆ expectationMin() [1/2]

template<typename GUM_SCALAR>
const GUM_SCALAR & gum::credal::InferenceEngine< GUM_SCALAR >::expectationMin ( const NodeId id) const

Get the lower expectation of a given node id.

Parameters
id: The node id whose lower expectation we want.
Returns
A constant reference to this node's lower expectation.

Definition at line 503 of file inferenceEngine_tpl.h.

503 {
504 try {
505 return expectationMin_[id];
506 } catch (NotFound& err) { throw(err); }
507 }

References expectationMin_.

◆ expectationMin() [2/2]

template<typename GUM_SCALAR>
const GUM_SCALAR & gum::credal::InferenceEngine< GUM_SCALAR >::expectationMin ( const std::string & varName) const

Get the lower expectation of a given variable name.

Parameters
varName: The variable name whose lower expectation we want.
Returns
A constant reference to this variable's lower expectation.

Definition at line 488 of file inferenceEngine_tpl.h.

488 {
489 try {
490 return expectationMin_[credalNet_->current_bn().idFromName(varName)];
491 } catch (NotFound& err) { throw(err); }
492 }

References credalNet_, and expectationMin_.
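The quantity behind these four getters is the ordinary expectation of a variable under one vertex of its credal set, using the modalities inserted via insertModals(). A standalone sketch (the function name is hypothetical; aGrUM computes the min/max of this value over the credal set's vertices during inference):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical sketch: the expectation of a variable whose state k carries
// modality value modalities[k], under the probability vector p (one vertex
// of the credal set). expectationMin_/expectationMax_ hold the min/max of
// this dot product over all vertices.
double expectation(const std::vector<double>& modalities,
                   const std::vector<double>& p) {
  double e = 0.0;
  for (std::size_t i = 0; i < modalities.size(); ++i) e += modalities[i] * p[i];
  return e;
}
```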

◆ getApproximationSchemeMsg()

template<typename GUM_SCALAR>
const std::string gum::credal::InferenceEngine< GUM_SCALAR >::getApproximationSchemeMsg ( )
inline

Get approximation scheme state.

Returns
A string describing the approximation scheme state.

Definition at line 598 of file inferenceEngine.h.

598{ return this->messageApproximationScheme(); }

References gum::IApproximationSchemeConfiguration::messageApproximationScheme().


◆ getNumberOfThreads()

virtual Size gum::ThreadNumberManager::getNumberOfThreads ( ) const
virtualinherited

◆ getT0Cluster()

template<typename GUM_SCALAR>
const NodeProperty< std::vector< NodeId > > & gum::credal::InferenceEngine< GUM_SCALAR >::getT0Cluster ( ) const

Get the t0_ cluster.

Returns
A constant reference to the t0_ cluster.

Definition at line 1007 of file inferenceEngine_tpl.h.

1007 {
1008 return t0_;
1009 }
cluster t0_
Clusters of nodes used with dynamic networks.

References t0_.

◆ getT1Cluster()

template<typename GUM_SCALAR>
const NodeProperty< std::vector< NodeId > > & gum::credal::InferenceEngine< GUM_SCALAR >::getT1Cluster ( ) const

Get the t1_ cluster.

Returns
A constant reference to the t1_ cluster.

Definition at line 1013 of file inferenceEngine_tpl.h.

1013 {
1014 return t1_;
1015 }
cluster t1_
Clusters of nodes used with dynamic networks.

References t1_.

◆ getVarMod2BNsMap()

template<typename GUM_SCALAR>
VarMod2BNsMap< GUM_SCALAR > * gum::credal::InferenceEngine< GUM_SCALAR >::getVarMod2BNsMap ( )

Get optimum IBayesNet.

Returns
A pointer to the optimal net object.

Definition at line 163 of file inferenceEngine_tpl.h.

163 {
164 return &dbnOpt_;
165 }

References dbnOpt_.

◆ history()

INLINE const std::vector< double > & gum::ApproximationScheme::history ( ) const
overridevirtualinherited

Returns the scheme history.

Returns
Returns the scheme history.
Exceptions
OperationNotAllowed: Raised if the scheme has not been performed or if verbosity is set to false.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 178 of file approximationScheme_inl.h.

178 {
179 if (stateApproximationScheme() == ApproximationSchemeSTATE::Undefined) {
180 GUM_ERROR(OperationNotAllowed, "state of the approximation scheme is undefined")
181 }
182
183 if (!verbosity()) GUM_ERROR(OperationNotAllowed, "No history when verbosity=false")
184
185 return history_;
186 }

References GUM_ERROR, stateApproximationScheme(), and gum::IApproximationSchemeConfiguration::Undefined.


◆ initApproximationScheme()

INLINE void gum::ApproximationScheme::initApproximationScheme ( )
inherited

Initialise the scheme.

Definition at line 189 of file approximationScheme_inl.h.

189 {
191 current_step_ = 0;
193 history_.clear();
194 timer_.reset();
195 }

References ApproximationScheme(), gum::IApproximationSchemeConfiguration::Continue, current_epsilon_, current_rate_, current_state_, current_step_, and initApproximationScheme().

Referenced by gum::GibbsBNdistance< GUM_SCALAR >::computeKL_(), initApproximationScheme(), gum::learning::GreedyHillClimbing::learnStructure(), gum::learning::LocalSearchWithTabuList::learnStructure(), gum::SamplingInference< GUM_SCALAR >::loopApproxInference_(), gum::credal::CNLoopyPropagation< GUM_SCALAR >::makeInference(), and gum::SamplingInference< GUM_SCALAR >::onStateChanged_().


◆ initExpectations_()

template<typename GUM_SCALAR>
void gum::credal::InferenceEngine< GUM_SCALAR >::initExpectations_ ( )
protected

Initialize lower and upper expectations before inference, with the lower expectation being initialized on the highest modality and the upper expectation being initialized on the lowest modality.

Definition at line 718 of file inferenceEngine_tpl.h.

718 {
719 expectationMin_.clear();
720 expectationMax_.clear();
721
722 if (modal_.empty()) return;
723
724 for (auto node: credalNet_->current_bn().nodes()) {
726
727 var_name = credalNet_->current_bn().variable(node).name();
728 auto delim = var_name.find_first_of("_");
729 var_name = var_name.substr(0, delim);
730
731 if (!modal_.exists(var_name)) continue;
732
735 }
736 }

References credalNet_, expectationMax_, expectationMin_, and modal_.

Referenced by eraseAllEvidence().


◆ initMarginals_()

template<typename GUM_SCALAR>
void gum::credal::InferenceEngine< GUM_SCALAR >::initMarginals_ ( )
protected

Initialize lower and upper marginals (and old marginals) before inference, with the lower marginals initialized to 1 and the upper marginals to 0.

Definition at line 682 of file inferenceEngine_tpl.h.

682 {
683 marginalMin_.clear();
684 marginalMax_.clear();
685 oldMarginalMin_.clear();
686 oldMarginalMax_.clear();
687
688 for (auto node: credalNet_->current_bn().nodes()) {
689 auto dSize = credalNet_->current_bn().variable(node).domainSize();
692
695 }
696
697 // now that we know the sizes of marginalMin_ and marginalMax_, we can
698 // dispatch their processes to the threads
699 displatchMarginalsToThreads_();
700 }
void displatchMarginalsToThreads_()
computes Vector threadRanges_, that assigns some part of marginalMin_ and marginalMax_ to the threads

References credalNet_, displatchMarginalsToThreads_(), marginalMax_, marginalMin_, oldMarginalMax_, and oldMarginalMin_.

Referenced by InferenceEngine(), and eraseAllEvidence().


◆ initMarginalSets_()

template<typename GUM_SCALAR>
void gum::credal::InferenceEngine< GUM_SCALAR >::initMarginalSets_ ( )
protected

Initialize credal set vertices with empty sets.

Definition at line 703 of file inferenceEngine_tpl.h.

703 {
704 marginalSets_.clear();
705
706 if (!storeVertices_) return;
707
708 for (auto node: credalNet_->current_bn().nodes())
710 }
bool storeVertices_
True if credal sets vertices are stored, False otherwise.
credalSet marginalSets_
Credal sets vertices, if enabled.

References credalNet_, marginalSets_, and storeVertices_.

Referenced by eraseAllEvidence(), and storeVertices().


◆ insertEvidence() [1/2]

template<typename GUM_SCALAR>
void gum::credal::InferenceEngine< GUM_SCALAR >::insertEvidence ( const NodeProperty< std::vector< GUM_SCALAR > > & evidence)

Insert evidence from Property.

Parameters
evidence: The Property over nodes containing the likelihoods.

Definition at line 277 of file inferenceEngine_tpl.h.

278 {
279 if (!evidence_.empty()) evidence_.clear();
280
281 // use cbegin() to get const_iterator when available in aGrUM hashtables
282 for (const auto& elt: evidence) {
283 try {
284 credalNet_->current_bn().variable(elt.first);
285 } catch (NotFound& err) {
286 GUM_SHOWERROR(err);
287 continue;
288 }
289
290 evidence_.insert(elt.first, elt.second);
291 }
292
293 // forces the computation of the begin iterator to avoid subsequent data races
294 // @TODO make HashTableConstIterator constructors thread safe
295 evidence_.begin();
296 }
#define GUM_SHOWERROR(e)
Definition exceptions.h:85

References credalNet_, evidence_, and GUM_SHOWERROR.

◆ insertEvidence() [2/2]

template<typename GUM_SCALAR>
void gum::credal::InferenceEngine< GUM_SCALAR >::insertEvidence ( const std::map< std::string, std::vector< GUM_SCALAR > > & eviMap)

Insert evidence from map.

Parameters
eviMap: The map from variable name to likelihood.

Definition at line 251 of file inferenceEngine_tpl.h.

252 {
253 if (!evidence_.empty()) evidence_.clear();
254
255 for (auto it = eviMap.cbegin(), theEnd = eviMap.cend(); it != theEnd; ++it) {
256 NodeId id;
257
258 try {
259 id = credalNet_->current_bn().idFromName(it->first);
260 } catch (NotFound& err) {
261 GUM_SHOWERROR(err);
262 continue;
263 }
264
265 evidence_.insert(id, it->second);
266 }
267
268 // forces the computation of the begin iterator to avoid subsequent data races
269 // @TODO make HashTableConstIterator constructors thread safe
270 evidence_.begin();
271 }

References credalNet_, evidence_, and GUM_SHOWERROR.

◆ insertEvidenceFile()

template<typename GUM_SCALAR>
void gum::credal::InferenceEngine< GUM_SCALAR >::insertEvidenceFile ( const std::string & path)
virtual

Insert evidence from file.

Parameters
path: The path to the evidence file.

Reimplemented in gum::credal::CNLoopyPropagation< GUM_SCALAR >, and gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >.

Definition at line 299 of file inferenceEngine_tpl.h.

299 {
301
302 if (!evi_stream.good()) {
304 "void InferenceEngine< GUM_SCALAR "
305 ">::insertEvidence(const std::string & path) : could not "
306 "open input file : "
307 << path);
308 }
309
310 if (!evidence_.empty()) evidence_.clear();
311
313 char * cstr, *p;
314
315 while (evi_stream.good() && std::strcmp(line.c_str(), "[EVIDENCE]") != 0) {
317 }
318
319 while (evi_stream.good()) {
321
322 if (std::strcmp(line.c_str(), "[QUERY]") == 0) break;
323
324 if (line.size() == 0) continue;
325
326 cstr = new char[line.size() + 1];
327 strcpy(cstr, line.c_str());
328
329 p = strtok(cstr, " ");
330 tmp = p;
331
332 // if user input is wrong
333 NodeId node = -1;
334
335 try {
336 node = credalNet_->current_bn().idFromName(tmp);
337 } catch (NotFound& err) {
339 continue;
340 }
341
343 p = strtok(nullptr, " ");
344
345 while (p != nullptr) {
346 values.push_back(GUM_SCALAR(atof(p)));
347 p = strtok(nullptr, " ");
348 } // end of : line
349
350 evidence_.insert(node, values);
351
352 delete[] p;
353 delete[] cstr;
354 } // end of : file
355
356 evi_stream.close();
357
358 // forces the computation of the begin iterator to avoid subsequent data races
359 // @TODO make HashTableConstIterator constructors thread safe
360 evidence_.begin();
361 }

Referenced by gum::credal::CNLoopyPropagation< GUM_SCALAR >::insertEvidenceFile(), and gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::insertEvidenceFile().

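The file format consumed above (a "[EVIDENCE]" section of lines "variableName v1 v2 ...", optionally followed by a "[QUERY]" section) can be parsed more safely with streams than with the strtok calls of the excerpt. A standalone sketch; the parser name and the use of std::map are hypothetical:

```cpp
#include <cassert>
#include <istream>
#include <map>
#include <sstream>
#include <string>
#include <vector>

// Hypothetical sketch of the insertEvidenceFile format: skip everything
// before "[EVIDENCE]", then read one variable per line (name followed by
// its likelihood values) until an optional "[QUERY]" marker.
std::map<std::string, std::vector<double>> parseEvidence(std::istream& in) {
  std::map<std::string, std::vector<double>> evidence;
  std::string line;
  while (std::getline(in, line) && line != "[EVIDENCE]") {}  // skip header
  while (std::getline(in, line)) {
    if (line == "[QUERY]") break;
    if (line.empty()) continue;
    std::istringstream tokens(line);
    std::string name;
    tokens >> name;
    std::vector<double> values;
    double v;
    while (tokens >> v) values.push_back(v);  // remaining tokens are likelihoods
    evidence[name] = values;
  }
  return evidence;
}
```

Unlike the strtok version, this needs no manual buffer management and cannot leak on early exits.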

◆ insertModals()

template<typename GUM_SCALAR>
void gum::credal::InferenceEngine< GUM_SCALAR >::insertModals ( const std::map< std::string, std::vector< GUM_SCALAR > > & modals)

Insert variables modalities from map to compute expectations.

Parameters
modals: The map from variable name to modalities.

Definition at line 215 of file inferenceEngine_tpl.h.

216 {
217 if (!modal_.empty()) modal_.clear();
218
219 for (auto it = modals.cbegin(), theEnd = modals.cend(); it != theEnd; ++it) {
220 NodeId id;
221
222 try {
223 id = credalNet_->current_bn().idFromName(it->first);
224 } catch (NotFound& err) {
225 GUM_SHOWERROR(err);
226 continue;
227 }
228
229 // check that modals are net compatible
230 auto dSize = credalNet_->current_bn().variable(id).domainSize();
231
232 if (dSize != it->second.size()) continue;
233
234 // GUM_ERROR(OperationNotAllowed, "void InferenceEngine< GUM_SCALAR
235 // >::insertModals( const std::map< std::string, std::vector< GUM_SCALAR
236 // > >
237 // &modals) : modalities does not respect variable cardinality : " <<
238 // credalNet_->current_bn().variable( id ).name() << " : " << dSize << "
239 // != "
240 // << it->second.size());
241
242 modal_.insert(it->first, it->second); //[ it->first ] = it->second;
243 }
244
245 //_modal = modals;
246
248 }

References credalNet_, GUM_SHOWERROR, and modal_.

◆ insertModalsFile()

template<typename GUM_SCALAR>
void gum::credal::InferenceEngine< GUM_SCALAR >::insertModalsFile ( const std::string & path)

Insert variables modalities from file to compute expectations.

Parameters
path: The path to the modalities file.

Definition at line 168 of file inferenceEngine_tpl.h.

168 {
170
171 if (!mod_stream.good()) {
173 "void InferenceEngine< GUM_SCALAR "
174 ">::insertModals(const std::string & path) : "
175 "could not open input file : "
176 << path);
177 }
178
179 if (!modal_.empty()) modal_.clear();
180
182 char * cstr, *p;
183
184 while (mod_stream.good()) {
186
187 if (line.size() == 0) continue;
188
189 cstr = new char[line.size() + 1];
190 strcpy(cstr, line.c_str());
191
192 p = strtok(cstr, " ");
193 tmp = p;
194
196 p = strtok(nullptr, " ");
197
198 while (p != nullptr) {
199 values.push_back(GUM_SCALAR(atof(p)));
200 p = strtok(nullptr, " ");
201 } // end of : line
202
203 modal_.insert(tmp, values); //[tmp] = values;
204
205 delete[] p;
206 delete[] cstr;
207 } // end of : file
208
209 mod_stream.close();
210
212 }

References GUM_ERROR, and modal_.

◆ insertQuery()

template<typename GUM_SCALAR>
void gum::credal::InferenceEngine< GUM_SCALAR >::insertQuery ( const NodeProperty< std::vector< bool > > & query)

Insert query variables and states from Property.

Parameters
query: The Property over nodes containing the queried variables' states.

Definition at line 364 of file inferenceEngine_tpl.h.

365 {
366 if (!query_.empty()) query_.clear();
367
368 for (const auto& elt: query) {
369 try {
370 credalNet_->current_bn().variable(elt.first);
371 } catch (NotFound& err) {
373 continue;
374 }
375
376 query_.insert(elt.first, elt.second);
377 }
378 }
NodeProperty< std::vector< bool > > query

References query_.

◆ insertQueryFile()

template<typename GUM_SCALAR>
void gum::credal::InferenceEngine< GUM_SCALAR >::insertQueryFile ( const std::string & path)

Insert query variables states from file.

Parameters
path: The path to the query file.

Definition at line 381 of file inferenceEngine_tpl.h.

381 {
383
384 if (!evi_stream.good()) {
386 "void InferenceEngine< GUM_SCALAR >::insertQuery(const "
387 "std::string & path) : could not open input file : "
388 << path);
389 }
390
391 if (!query_.empty()) query_.clear();
392
394 char * cstr, *p;
395
396 while (evi_stream.good() && std::strcmp(line.c_str(), "[QUERY]") != 0) {
398 }
399
400 while (evi_stream.good()) {
402
403 if (std::strcmp(line.c_str(), "[EVIDENCE]") == 0) break;
404
405 if (line.size() == 0) continue;
406
407 cstr = new char[line.size() + 1];
408 strcpy(cstr, line.c_str());
409
410 p = strtok(cstr, " ");
411 tmp = p;
412
413 // if user input is wrong
414 NodeId node = -1;
415
416 try {
417 node = credalNet_->current_bn().idFromName(tmp);
418 } catch (NotFound& err) {
420 continue;
421 }
422
423 auto dSize = credalNet_->current_bn().variable(node).domainSize();
424
425 p = strtok(nullptr, " ");
426
427 if (p == nullptr) {
428 query_.insert(node, std::vector< bool >(dSize, true));
429 } else {
431
432 while (p != nullptr) {
433 if ((Size)atoi(p) >= dSize)
435 "void InferenceEngine< GUM_SCALAR "
436 ">::insertQuery(const std::string & path) : "
437 "query modality is higher or equal to "
438 "cardinality");
439
440 values[atoi(p)] = true;
441 p = strtok(nullptr, " ");
442 } // end of : line
443
444 query_.insert(node, values);
445 }
446
447 delete[] p;
448 delete[] cstr;
449 } // end of : file
450
451 evi_stream.close();
452 }

References GUM_ERROR.

◆ isEnabledEpsilon()

INLINE bool gum::ApproximationScheme::isEnabledEpsilon ( ) const
overridevirtualinherited

Returns true if stopping criterion on epsilon is enabled, false otherwise.

Returns
Returns true if stopping criterion on epsilon is enabled, false otherwise.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 81 of file approximationScheme_inl.h.

81{ return enabled_eps_; }

References enabled_eps_.

◆ isEnabledMaxIter()

INLINE bool gum::ApproximationScheme::isEnabledMaxIter ( ) const
overridevirtualinherited

Returns true if stopping criterion on max iterations is enabled, false otherwise.

Returns
Returns true if stopping criterion on max iterations is enabled, false otherwise.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 122 of file approximationScheme_inl.h.

122{ return enabled_max_iter_; }

References enabled_max_iter_.

◆ isEnabledMaxTime()

INLINE bool gum::ApproximationScheme::isEnabledMaxTime ( ) const
overridevirtualinherited

Returns true if stopping criterion on timeout is enabled, false otherwise.

Returns
Returns true if stopping criterion on timeout is enabled, false otherwise.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 146 of file approximationScheme_inl.h.

146{ return enabled_max_time_; }

References enabled_max_time_.

◆ isEnabledMinEpsilonRate()

INLINE bool gum::ApproximationScheme::isEnabledMinEpsilonRate ( ) const
overridevirtualinherited

Returns true if stopping criterion on epsilon rate is enabled, false otherwise.

Returns
Returns true if stopping criterion on epsilon rate is enabled, false otherwise.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 102 of file approximationScheme_inl.h.

102{ return enabled_min_rate_eps_; }

References enabled_min_rate_eps_.

Referenced by gum::GibbsBNdistance< GUM_SCALAR >::computeKL_().


◆ isGumNumberOfThreadsOverriden()

bool gum::ThreadNumberManager::isGumNumberOfThreadsOverriden ( ) const
virtualinherited

indicates whether the class containing this ThreadNumberManager set its own number of threads

Implements gum::IThreadNumberManager.

Referenced by gum::learning::IBNLearner::createParamEstimator_(), and gum::learning::IBNLearner::createScore_().


◆ makeInference()

template<typename GUM_SCALAR>
virtual void gum::credal::InferenceEngine< GUM_SCALAR >::makeInference ( )
pure virtual

To be redefined by each credal net algorithm.

◆ marginalMax() [1/2]

template<typename GUM_SCALAR>
gum::Tensor< GUM_SCALAR > gum::credal::InferenceEngine< GUM_SCALAR >::marginalMax ( const NodeId id) const

Get the upper marginals of a given node id.

Parameters
id: The node id whose upper marginals we want.
Returns
A Tensor containing this node's upper marginals.

Definition at line 477 of file inferenceEngine_tpl.h.

477 {
478 try {
480 res.add(credalNet_->current_bn().variable(id));
481 res.fillWith(marginalMax_[id]);
482 return res;
483 } catch (NotFound& err) { throw(err); }
484 }

Referenced by marginalMax().


◆ marginalMax() [2/2]

template<typename GUM_SCALAR>
INLINE Tensor< GUM_SCALAR > gum::credal::InferenceEngine< GUM_SCALAR >::marginalMax ( const std::string & varName) const

Get the upper marginals of a given variable name.

Parameters
varName: The variable name whose upper marginals we want.
Returns
A Tensor containing this variable's upper marginals.

Definition at line 462 of file inferenceEngine_tpl.h.

462 {
463 return marginalMax(credalNet_->current_bn().idFromName(varName));
464 }
Tensor< GUM_SCALAR > marginalMax(const NodeId id) const
Get the upper marginals of a given node id.

References credalNet_, and marginalMax().


◆ marginalMin() [1/2]

template<typename GUM_SCALAR>
gum::Tensor< GUM_SCALAR > gum::credal::InferenceEngine< GUM_SCALAR >::marginalMin ( const NodeId id) const

Get the lower marginals of a given node id.

Parameters
id: The node id whose lower marginals we want.
Returns
A Tensor containing this node's lower marginals.

Definition at line 467 of file inferenceEngine_tpl.h.

467 {
468 try {
469 Tensor< GUM_SCALAR > res;
470 res.add(credalNet_->current_bn().variable(id));
471 res.fillWith(marginalMin_[id]);
472 return res;
473 } catch (NotFound& err) { throw(err); }
474 }

References credalNet_, and marginalMin_.

Referenced by marginalMin().


◆ marginalMin() [2/2]

template<typename GUM_SCALAR>
INLINE Tensor< GUM_SCALAR > gum::credal::InferenceEngine< GUM_SCALAR >::marginalMin ( const std::string & varName) const

Get the lower marginals of a given variable name.

Parameters
varName: The variable name whose lower marginals we want.
Returns
A Tensor containing this variable's lower marginals.

Definition at line 456 of file inferenceEngine_tpl.h.

456 {
457 return marginalMin(credalNet_->current_bn().idFromName(varName));
458 }
Tensor< GUM_SCALAR > marginalMin(const NodeId id) const
Get the lower marginals of a given node id.

References credalNet_, and marginalMin().


◆ maxIter()

INLINE Size gum::ApproximationScheme::maxIter ( ) const
overridevirtualinherited

Returns the criterion on number of iterations.

Returns
Returns the criterion on number of iterations.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 112 of file approximationScheme_inl.h.

112{ return max_iter_; }

References max_iter_.

◆ maxTime()

INLINE double gum::ApproximationScheme::maxTime ( ) const
overridevirtualinherited

Returns the timeout (in seconds).

Returns
Returns the timeout (in seconds).

Implements gum::IApproximationSchemeConfiguration.

Definition at line 133 of file approximationScheme_inl.h.

133{ return max_time_; }

References max_time_.

◆ messageApproximationScheme()

INLINE std::string gum::IApproximationSchemeConfiguration::messageApproximationScheme ( ) const
inherited

Returns the approximation scheme message.

Returns
Returns the approximation scheme message.

Definition at line 59 of file IApproximationSchemeConfiguration_inl.h.

59 {
60 std::stringstream s;
61
62 switch (stateApproximationScheme()) {
63 case ApproximationSchemeSTATE::Continue : s << "in progress"; break;
64
65 case ApproximationSchemeSTATE::Epsilon : s << "stopped with epsilon=" << epsilon(); break;
66
67 case ApproximationSchemeSTATE::Rate : s << "stopped with rate=" << minEpsilonRate(); break;
68
69 case ApproximationSchemeSTATE::Limit : s << "stopped with max iteration=" << maxIter(); break;
70
71 case ApproximationSchemeSTATE::TimeLimit : s << "stopped with timeout=" << maxTime(); break;
72
73 case ApproximationSchemeSTATE::Stopped : s << "stopped on request"; break;
74
75 case ApproximationSchemeSTATE::Undefined : s << "undefined state"; break;
76 };
77
78 return s.str();
79 }
virtual double epsilon() const =0
Returns the value of epsilon.
virtual ApproximationSchemeSTATE stateApproximationScheme() const =0
Returns the approximation scheme state.
virtual double minEpsilonRate() const =0
Returns the value of the minimal epsilon rate.
virtual Size maxIter() const =0
Returns the criterion on number of iterations.
virtual double maxTime() const =0
Returns the timeout (in seconds).

References Continue, Epsilon, epsilon(), Limit, maxIter(), maxTime(), minEpsilonRate(), Rate, stateApproximationScheme(), Stopped, TimeLimit, and Undefined.

Referenced by gum::credal::InferenceEngine< GUM_SCALAR >::getApproximationSchemeMsg(), and gum::credal::MultipleInferenceEngine< GUM_SCALAR, LazyPropagation< GUM_SCALAR > >::stateApproximationScheme().


◆ minEpsilonRate()

INLINE double gum::ApproximationScheme::minEpsilonRate ( ) const
overridevirtualinherited

Returns the value of the minimal epsilon rate.

Returns
Returns the value of the minimal epsilon rate.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 92 of file approximationScheme_inl.h.

92{ return min_rate_eps_; }

References min_rate_eps_.

◆ nbrIterations()

INLINE Size gum::ApproximationScheme::nbrIterations ( ) const
overridevirtualinherited

Returns the number of iterations.

Returns
Returns the number of iterations.
Exceptions
OperationNotAllowed: Raised if the approximation scheme state is undefined (the scheme has not run).

Implements gum::IApproximationSchemeConfiguration.

Definition at line 169 of file approximationScheme_inl.h.

169 {
170 if (stateApproximationScheme() == ApproximationSchemeSTATE::Undefined) {
171 GUM_ERROR(OperationNotAllowed, "state of the approximation scheme is undefined")
172 }
173
174 return current_step_;
175 }

References current_step_, GUM_ERROR, stateApproximationScheme(), and gum::IApproximationSchemeConfiguration::Undefined.

Referenced by gum::GibbsBNdistance< GUM_SCALAR >::computeKL_().


◆ periodSize()

INLINE Size gum::ApproximationScheme::periodSize ( ) const
overridevirtualinherited

Returns the period size.

Returns
Returns the period size.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 155 of file approximationScheme_inl.h.

155{ return period_size_; }
Size period_size_
Checking criteria frequency.

References period_size_.

◆ remainingBurnIn()

INLINE Size gum::ApproximationScheme::remainingBurnIn ( ) const
inherited

Returns the remaining burn in.

Returns
Returns the remaining burn in.

Definition at line 212 of file approximationScheme_inl.h.

212 {
213 if (burn_in_ > current_step_) {
214 return burn_in_ - current_step_;
215 } else {
216 return 0;
217 }
218 }
Size burn_in_
Number of iterations before checking stopping criteria.

References burn_in_, and current_step_.
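The branch above guards against unsigned underflow: subtracting first would wrap around whenever current_step_ exceeds burn_in_. A minimal standalone sketch of the same rule (remainingBurnInSketch is a hypothetical helper, not part of the aGrUM API):

```cpp
#include <cassert>
#include <cstddef>

// While currentStep has not yet reached burnIn, the difference is the number
// of burn-in steps still to run; afterwards the remaining burn-in is zero.
std::size_t remainingBurnInSketch(std::size_t burnIn, std::size_t currentStep) {
  return (burnIn > currentStep) ? burnIn - currentStep : 0;
}
```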

◆ repetitiveInd()

template<typename GUM_SCALAR>
bool gum::credal::InferenceEngine< GUM_SCALAR >::repetitiveInd ( ) const

Get the current independence status.

Returns
True if repetitive, False otherwise.

Definition at line 142 of file inferenceEngine_tpl.h.

142 {
143 return repetitiveInd_;
144 }
bool repetitiveInd_
True if using repetitive independence (dynamic networks only), False otherwise.

References repetitiveInd_.

◆ repetitiveInit_()

template<typename GUM_SCALAR>
void gum::credal::InferenceEngine< GUM_SCALAR >::repetitiveInit_ ( )
protected

Initialize t0_ and t1_ clusters.

Definition at line 801 of file inferenceEngine_tpl.h.

801 {
802 timeSteps_ = 0;
803 t0_.clear();
804 t1_.clear();
805
806 // t = 0 vars belongs to t0_ as keys
807 for (auto node: credalNet_->current_bn().dag().nodes()) {
808 std::string var_name = credalNet_->current_bn().variable(node).name();
809 auto delim = var_name.find_first_of("_");
810
811 if (delim > var_name.size()) {
812 GUM_ERROR(InvalidArgument,
813 "void InferenceEngine< GUM_SCALAR "
814 ">::repetitiveInit_() : the network does not "
815 "appear to be dynamic");
816 }
817
818 std::string time_step = var_name.substr(delim + 1, 1);
819
820 if (time_step.compare("0") == 0) t0_.insert(node, std::vector< NodeId >());
821 }
822
823 // t = 1 vars belongs to either t0_ as member value or t1_ as keys
824 for (const auto& node: credalNet_->current_bn().dag().nodes()) {
825 std::string var_name = credalNet_->current_bn().variable(node).name();
826 auto delim = var_name.find_first_of("_");
827 std::string time_step = var_name.substr(delim + 1, var_name.size());
828 var_name = var_name.substr(0, delim);
829 delim = time_step.find_first_of("_");
830 time_step = time_step.substr(0, delim);
831
832 if (time_step.compare("1") == 0) {
833 bool found = false;
834
835 for (const auto& elt: t0_) {
836 std::string var_0_name = credalNet_->current_bn().variable(elt.first).name();
837 delim = var_0_name.find_first_of("_");
838 var_0_name = var_0_name.substr(0, delim);
839
840 if (var_name.compare(var_0_name) == 0) {
841 const Tensor< GUM_SCALAR >* tensor(&credalNet_->current_bn().cpt(node));
842 const Tensor< GUM_SCALAR >* tensor2(&credalNet_->current_bn().cpt(elt.first));
843
844 if (tensor->domainSize() == tensor2->domainSize()) t0_[elt.first].push_back(node);
845 else t1_.insert(node, std::vector< NodeId >());
846
847 found = true;
848 break;
849 }
850 }
851
852 if (!found) { t1_.insert(node, std::vector< NodeId >()); }
853 }
854 }
855
856 // t > 1 vars belongs to either t0_ or t1_ as member value
857 // remember timeSteps_
858 for (auto node: credalNet_->current_bn().dag().nodes()) {
859 std::string var_name = credalNet_->current_bn().variable(node).name();
860 auto delim = var_name.find_first_of("_");
861 std::string time_step = var_name.substr(delim + 1, var_name.size());
862 var_name = var_name.substr(0, delim);
863 delim = time_step.find_first_of("_");
864 time_step = time_step.substr(0, delim);
865
866 if (time_step.compare("0") != 0 && time_step.compare("1") != 0) {
867 // keep max time_step
868 if (atoi(time_step.c_str()) > timeSteps_) timeSteps_ = atoi(time_step.c_str());
869
871 bool found = false;
872
873 for (const auto& elt: t0_) {
874 std::string var_0_name = credalNet_->current_bn().variable(elt.first).name();
875 delim = var_0_name.find_first_of("_");
876 var_0_name = var_0_name.substr(0, delim);
877
878 if (var_name.compare(var_0_name) == 0) {
879 const Tensor< GUM_SCALAR >* tensor(&credalNet_->current_bn().cpt(node));
880 const Tensor< GUM_SCALAR >* tensor2(&credalNet_->current_bn().cpt(elt.first));
881
882 if (tensor->domainSize() == tensor2->domainSize()) {
883 t0_[elt.first].push_back(node);
884 found = true;
885 break;
886 }
887 }
888 }
889
890 if (!found) {
891 for (const auto& elt: t1_) {
892 std::string var_0_name = credalNet_->current_bn().variable(elt.first).name();
893 auto delim = var_0_name.find_first_of("_");
894 var_0_name = var_0_name.substr(0, delim);
895
896 if (var_name.compare(var_0_name) == 0) {
897 const Tensor< GUM_SCALAR >* tensor(&credalNet_->current_bn().cpt(node));
898 const Tensor< GUM_SCALAR >* tensor2(&credalNet_->current_bn().cpt(elt.first));
899
900 if (tensor->domainSize() == tensor2->domainSize()) {
901 t1_[elt.first].push_back(node);
902 break;
903 }
904 }
905 }
906 }
907 }
908 }
909 }
int timeSteps_
The number of time steps of this network (only useful for dynamic networks).

References credalNet_, GUM_ERROR, t0_, t1_, and timeSteps_.

Referenced by setRepetitiveInd().

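The method relies on a naming convention for dynamic networks: each variable name carries a time-step suffix after its first underscore (e.g. X_0, X_1, X_12). A standalone sketch of that parsing step (splitTimeStep is a hypothetical helper, not the aGrUM API):

```cpp
#include <cassert>
#include <string>
#include <utility>

// Split a dynamic-network variable name such as "temp_12" into its base name
// and time-step suffix, mirroring the find_first_of("_")/substr logic above.
// The time step is empty when the name contains no underscore.
std::pair<std::string, std::string> splitTimeStep(const std::string& varName) {
  auto delim = varName.find_first_of('_');
  if (delim == std::string::npos) return {varName, std::string()};
  return {varName.substr(0, delim), varName.substr(delim + 1)};
}
```

The library code additionally truncates the suffix at a second underscore when one is present; that detail is omitted here.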

◆ saveExpectations()

template<typename GUM_SCALAR>
void gum::credal::InferenceEngine< GUM_SCALAR >::saveExpectations ( const std::string & path) const

Saves expectations to file.

Parameters
path: The path to the file to be used.

Definition at line 578 of file inferenceEngine_tpl.h.

578 {
579 if (dynamicExpMin_.empty()) //_modal.empty())
580 return;
581
582 // else not here, to keep the const (natural with a saving process)
583 // else if(dynamicExpMin_.empty() || dynamicExpMax_.empty())
584 //_dynamicExpectations(); // works with or without a dynamic network
585
 586 std::ofstream m_stream(path, std::ios::out | std::ios::trunc);
 587
 588 if (!m_stream.good()) {
 589 GUM_ERROR(IOError,
 590 "void InferenceEngine< GUM_SCALAR "
 591 ">::saveExpectations(const std::string & path) : could "
 592 "not open output file : "
 593 << path);
 594 }
595
596 for (const auto& elt: dynamicExpMin_) {
597 m_stream << elt.first; // it->first;
598
599 // iterates over a vector
600 for (const auto& elt2: elt.second) {
601 m_stream << " " << elt2;
602 }
603
 604 m_stream << std::endl;
 605 }
606
607 for (const auto& elt: dynamicExpMax_) {
608 m_stream << elt.first;
609
610 // iterates over a vector
611 for (const auto& elt2: elt.second) {
612 m_stream << " " << elt2;
613 }
614
 615 m_stream << std::endl;
 616 }
617
618 m_stream.close();
619 }

References dynamicExpMin_.

◆ saveMarginals()

template<typename GUM_SCALAR>
void gum::credal::InferenceEngine< GUM_SCALAR >::saveMarginals ( const std::string & path) const

Saves marginals to file.

Parameters
path: The path to the file to be used.

Definition at line 555 of file inferenceEngine_tpl.h.

555 {
 556 std::ofstream m_stream(path, std::ios::out | std::ios::trunc);
 557
 558 if (!m_stream.good()) {
 559 GUM_ERROR(IOError,
 560 "void InferenceEngine< GUM_SCALAR >::saveMarginals(const "
 561 "std::string & path) const : could not open output file "
 562 ": " << path);
 563 }
564
565 for (const auto& elt: marginalMin_) {
566 Size esize = Size(elt.second.size());
567
568 for (Size mod = 0; mod < esize; mod++) {
569 m_stream << credalNet_->current_bn().variable(elt.first).name() << " " << mod << " "
570 << (elt.second)[mod] << " " << marginalMax_[elt.first][mod] << std::endl;
571 }
572 }
573
574 m_stream.close();
575 }
std::size_t Size
In aGrUM, hashed values are unsigned long int.
Definition types.h:74

References GUM_ERROR.

◆ saveVertices()

template<typename GUM_SCALAR>
void gum::credal::InferenceEngine< GUM_SCALAR >::saveVertices ( const std::string & path) const

Saves vertices to file.

Parameters
path: The path to the file to be used.

Definition at line 648 of file inferenceEngine_tpl.h.

648 {
 649 std::ofstream m_stream(path, std::ios::out | std::ios::trunc);
 650
 651 if (!m_stream.good()) {
 652 GUM_ERROR(IOError,
 653 "void InferenceEngine< GUM_SCALAR >::saveVertices(const "
 654 "std::string & path) : could not open output file : "
 655 << path);
 656 }
657
658 for (const auto& elt: marginalSets_) {
659 m_stream << credalNet_->current_bn().variable(elt.first).name() << std::endl;
660
661 for (const auto& elt2: elt.second) {
662 m_stream << "[";
663 bool first = true;
664
665 for (const auto& elt3: elt2) {
 666 if (!first) { m_stream << ","; }
 667 first = false;
 668
670
671 m_stream << elt3;
672 }
673
674 m_stream << "]\n";
675 }
676 }
677
678 m_stream.close();
679 }

References credalNet_, GUM_ERROR, and marginalSets_.
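Each vertex is written as a bracketed, comma-separated line. A standalone sketch of that formatting (formatVertex is a hypothetical helper, not the aGrUM API), with the first-element flag cleared unconditionally after the first iteration so the separators actually appear:

```cpp
#include <cassert>
#include <sstream>
#include <string>
#include <vector>

// Format one credal-set vertex as "[v0,v1,...]", the line layout used by
// saveVertices above.
std::string formatVertex(const std::vector<double>& vertex) {
  std::ostringstream out;
  out << '[';
  bool first = true;
  for (double v : vertex) {
    if (!first) out << ',';
    first = false;  // cleared for every element so later ones get a separator
    out << v;
  }
  out << ']';
  return out.str();
}
```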

◆ setEpsilon()

INLINE void gum::ApproximationScheme::setEpsilon ( double eps)
overridevirtualinherited

Given that we approximate f(t), stopping criterion on |f(t+1)-f(t)|.

If the criterion was disabled it will be enabled.

Parameters
eps: The new epsilon value.
Exceptions
OutOfBounds: Raised if eps < 0.

Implements gum::IApproximationSchemeConfiguration.

Reimplemented in gum::learning::EMApproximationScheme.

Definition at line 63 of file approximationScheme_inl.h.

63 {
64 if (eps < 0.) { GUM_ERROR(OutOfBounds, "eps should be >=0") }
65
66 eps_ = eps;
67 enabled_eps_ = true;
68 }

References enabled_eps_, eps_, and GUM_ERROR.

Referenced by gum::GibbsBNdistance< GUM_SCALAR >::GibbsBNdistance(), gum::GibbsBNdistance< GUM_SCALAR >::GibbsBNdistance(), gum::GibbsSampling< GUM_SCALAR >::GibbsSampling(), gum::learning::GreedyHillClimbing::GreedyHillClimbing(), gum::SamplingInference< GUM_SCALAR >::SamplingInference(), and gum::learning::EMApproximationScheme::setEpsilon().


◆ setMaxIter()

INLINE void gum::ApproximationScheme::setMaxIter ( Size max)
overridevirtualinherited

Stopping criterion on number of iterations.

If the criterion was disabled it will be enabled.

Parameters
max: The maximum number of iterations.
Exceptions
OutOfBounds: Raised if max < 1.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 105 of file approximationScheme_inl.h.

105 {
106 if (max < 1) { GUM_ERROR(OutOfBounds, "max should be >=1") }
107 max_iter_ = max;
108 enabled_max_iter_ = true;
109 }

References enabled_max_iter_, GUM_ERROR, and max_iter_.

Referenced by gum::GibbsBNdistance< GUM_SCALAR >::GibbsBNdistance(), gum::GibbsBNdistance< GUM_SCALAR >::GibbsBNdistance(), and gum::SamplingInference< GUM_SCALAR >::SamplingInference().


◆ setMaxTime()

INLINE void gum::ApproximationScheme::setMaxTime ( double timeout)
overridevirtualinherited

Stopping criterion on timeout.

If the criterion was disabled it will be enabled.

Parameters
timeout: The timeout value in seconds.
Exceptions
OutOfBounds: Raised if timeout <= 0.0.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 126 of file approximationScheme_inl.h.

126 {
127 if (timeout <= 0.) { GUM_ERROR(OutOfBounds, "timeout should be >0.") }
128 max_time_ = timeout;
129 enabled_max_time_ = true;
130 }

References enabled_max_time_, GUM_ERROR, and max_time_.

Referenced by gum::GibbsBNdistance< GUM_SCALAR >::GibbsBNdistance(), gum::GibbsBNdistance< GUM_SCALAR >::GibbsBNdistance(), and gum::SamplingInference< GUM_SCALAR >::SamplingInference().


◆ setMinEpsilonRate()

INLINE void gum::ApproximationScheme::setMinEpsilonRate ( double rate)
overridevirtualinherited

Given that we approximate f(t), stopping criterion on d/dt(|f(t+1)-f(t)|).

If the criterion was disabled it will be enabled.

Parameters
rate: The minimal epsilon rate.
Exceptions
OutOfBounds: Raised if rate < 0.

Implements gum::IApproximationSchemeConfiguration.

Reimplemented in gum::learning::EMApproximationScheme.

Definition at line 84 of file approximationScheme_inl.h.

84 {
85 if (rate < 0) { GUM_ERROR(OutOfBounds, "rate should be >=0") }
86
 87 min_rate_eps_ = rate;
 88 enabled_min_rate_eps_ = true;
 89 }

References enabled_min_rate_eps_, GUM_ERROR, and min_rate_eps_.

Referenced by gum::GibbsBNdistance< GUM_SCALAR >::GibbsBNdistance(), gum::GibbsBNdistance< GUM_SCALAR >::GibbsBNdistance(), gum::GibbsSampling< GUM_SCALAR >::GibbsSampling(), gum::SamplingInference< GUM_SCALAR >::SamplingInference(), and gum::learning::EMApproximationScheme::setMinEpsilonRate().

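The epsilon criterion stops when the error |f(t+1)-f(t)| itself is small, while the rate criterion stops when that error barely changes between checks. A standalone sketch of the two tests (hypothetical helpers, not the aGrUM API; the exact rate formula used by the library may differ):

```cpp
#include <cassert>
#include <cmath>

// Epsilon criterion: the change between two successive iterates is below eps.
bool epsilonReachedSketch(double prevValue, double value, double eps) {
  return std::fabs(value - prevValue) < eps;
}

// Rate criterion: the relative change of the error between two checks is
// below minRate (one common form of the d/dt(|f(t+1)-f(t)|) test).
bool rateReachedSketch(double prevEps, double currEps, double minRate) {
  return std::fabs((currEps - prevEps) / currEps) < minRate;
}
```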

◆ setNumberOfThreads()

virtual void gum::ThreadNumberManager::setNumberOfThreads ( Size nb)
virtualinherited

Sets the maximum number of threads to be used by the class containing this ThreadNumberManager.

Parameters
nb: The number of threads to be used. If this number is set to 0, it defaults to aGrUM's number of threads.

Implements gum::IThreadNumberManager.

Reimplemented in gum::learning::IBNLearner, gum::learning::RecordCounter, gum::ScheduledInference, and gum::SchedulerParallel.

Referenced by gum::learning::IBNLearner::setNumberOfThreads(), and gum::ScheduledInference::setNumberOfThreads().


◆ setPeriodSize()

INLINE void gum::ApproximationScheme::setPeriodSize ( Size p)
overridevirtualinherited

Sets the number of samples between two tests of the stopping criteria.

Parameters
p: The new period value.
Exceptions
OutOfBounds: Raised if p < 1.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 149 of file approximationScheme_inl.h.

149 {
150 if (p < 1) { GUM_ERROR(OutOfBounds, "p should be >=1") }
151
152 period_size_ = p;
153 }

References GUM_ERROR, and period_size_.

Referenced by gum::GibbsBNdistance< GUM_SCALAR >::GibbsBNdistance(), gum::GibbsBNdistance< GUM_SCALAR >::GibbsBNdistance(), and gum::SamplingInference< GUM_SCALAR >::SamplingInference().


◆ setRepetitiveInd()

template<typename GUM_SCALAR>
void gum::credal::InferenceEngine< GUM_SCALAR >::setRepetitiveInd ( const bool repetitive)
Parameters
repetitive: True if repetitive independence is to be used, False otherwise. Only useful with dynamic networks.

Definition at line 133 of file inferenceEngine_tpl.h.

 133 {
 134 bool oldValue = repetitiveInd_;
 135 repetitiveInd_ = repetitive;
 136
 137 // do not compute clusters more than once
 138 if (repetitiveInd_ && !oldValue) repetitiveInit_();
 139 }
void repetitiveInit_()
Initialize t0_ and t1_ clusters.

References repetitiveInd_, and repetitiveInit_().


◆ setVerbosity()

INLINE void gum::ApproximationScheme::setVerbosity ( bool v)
overridevirtualinherited

Set the verbosity on (true) or off (false).

Parameters
v: If true, then verbosity is turned on.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 158 of file approximationScheme_inl.h.

158{ verbosity_ = v; }
bool verbosity_
If true, verbosity is enabled.

Referenced by gum::GibbsBNdistance< GUM_SCALAR >::GibbsBNdistance(), gum::GibbsBNdistance< GUM_SCALAR >::GibbsBNdistance(), and gum::SamplingInference< GUM_SCALAR >::SamplingInference().


◆ startOfPeriod()

INLINE bool gum::ApproximationScheme::startOfPeriod ( ) const
inherited

Returns true if we are at the beginning of a period (compute error is mandatory).

Returns
Returns true if we are at the beginning of a period (compute error is mandatory).

Definition at line 199 of file approximationScheme_inl.h.

199 {
200 if (current_step_ < burn_in_) { return false; }
201
202 if (period_size_ == 1) { return true; }
203
204 return ((current_step_ - burn_in_) % period_size_ == 0);
205 }

References burn_in_, and current_step_.
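The test above never fires during burn-in, always fires when the period is 1, and otherwise fires every period_size_ steps after burn-in ends. A standalone sketch (startOfPeriodSketch is a hypothetical helper, not the aGrUM API):

```cpp
#include <cassert>
#include <cstddef>

// True when the current step is at the start of a checking period, mirroring
// the burn-in and modulo logic of startOfPeriod above.
bool startOfPeriodSketch(std::size_t currentStep, std::size_t burnIn,
                         std::size_t periodSize) {
  if (currentStep < burnIn) return false;   // still in burn-in: never check
  if (periodSize == 1) return true;         // check at every step
  return (currentStep - burnIn) % periodSize == 0;
}
```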

◆ stateApproximationScheme()

INLINE IApproximationSchemeConfiguration::ApproximationSchemeSTATE gum::ApproximationScheme::stateApproximationScheme ( ) const
overridevirtualinherited

Returns the approximation scheme state.

Returns
Returns the approximation scheme state.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 164 of file approximationScheme_inl.h.

164 {
165 return current_state_;
166 }

References current_state_.

Referenced by history(), and nbrIterations().


◆ stopApproximationScheme()

INLINE void gum::ApproximationScheme::stopApproximationScheme ( )
inherited

Stop the approximation scheme.

Definition at line 221 of file approximationScheme_inl.h.

Referenced by gum::learning::GreedyHillClimbing::learnStructure(), gum::learning::LocalSearchWithTabuList::learnStructure(), and gum::credal::CNLoopyPropagation< GUM_SCALAR >::makeInferenceNodeToNeighbours_().


◆ stopScheme_()

INLINE void gum::ApproximationScheme::stopScheme_ ( ApproximationSchemeSTATE new_state)
privateinherited

Stop the scheme given a new state.

Parameters
new_state: The scheme's new state.

Definition at line 301 of file approximationScheme_inl.h.

301 {
302 if (new_state == ApproximationSchemeSTATE::Continue) { return; }
303
304 if (new_state == ApproximationSchemeSTATE::Undefined) { return; }
305
306 current_state_ = new_state;
307 timer_.pause();
308
309 if (onStop.hasListener()) { GUM_EMIT1(onStop, messageApproximationScheme()); }
310 }
Signaler1< const std::string & > onStop
Criteria messageApproximationScheme.
#define GUM_EMIT1(signal, arg1)
Definition signaler1.h:61

References gum::IApproximationSchemeConfiguration::Continue, current_state_, and gum::IApproximationSchemeConfiguration::Undefined.

Referenced by gum::credal::MultipleInferenceEngine< GUM_SCALAR, LazyPropagation< GUM_SCALAR > >::disableMaxIter(), gum::credal::MultipleInferenceEngine< GUM_SCALAR, LazyPropagation< GUM_SCALAR > >::disableMaxTime(), gum::credal::MultipleInferenceEngine< GUM_SCALAR, LazyPropagation< GUM_SCALAR > >::isEnabledMaxIter(), gum::credal::MultipleInferenceEngine< GUM_SCALAR, LazyPropagation< GUM_SCALAR > >::maxTime(), and gum::credal::MultipleInferenceEngine< GUM_SCALAR, LazyPropagation< GUM_SCALAR > >::setPeriodSize().


◆ storeBNOpt() [1/2]

template<typename GUM_SCALAR>
bool gum::credal::InferenceEngine< GUM_SCALAR >::storeBNOpt ( ) const
Returns
True if optimal Bayesian networks are stored for each variable and each modality, False otherwise.

Definition at line 158 of file inferenceEngine_tpl.h.

158 {
159 return storeBNOpt_;
160 }
bool storeBNOpt_
True if optimal Bayesian networks are stored, for each variable and each modality, False otherwise.

References storeBNOpt_.

◆ storeBNOpt() [2/2]

template<typename GUM_SCALAR>
void gum::credal::InferenceEngine< GUM_SCALAR >::storeBNOpt ( const bool value)
Parameters
value: True if optimal Bayesian networks are to be stored for each variable and each modality.

Definition at line 121 of file inferenceEngine_tpl.h.

 121 {
 122 storeBNOpt_ = value;
 123 }

References storeBNOpt_.

◆ storeVertices() [1/2]

template<typename GUM_SCALAR>
bool gum::credal::InferenceEngine< GUM_SCALAR >::storeVertices ( ) const

Get the vertices storage flag.

Returns
True if vertices are stored, False otherwise.

Definition at line 153 of file inferenceEngine_tpl.h.

153 {
154 return storeVertices_;
155 }

References storeVertices_.

◆ storeVertices() [2/2]

template<typename GUM_SCALAR>
void gum::credal::InferenceEngine< GUM_SCALAR >::storeVertices ( const bool value)
Parameters
value: True if vertices are to be stored, False otherwise.

Definition at line 126 of file inferenceEngine_tpl.h.

 126 {
 127 storeVertices_ = value;
 128
 129 if (value) initMarginalSets_();
 130 }

References initMarginalSets_(), and storeVertices_.


◆ toString()

template<typename GUM_SCALAR>
std::string gum::credal::InferenceEngine< GUM_SCALAR >::toString ( ) const

Returns a string displaying all node marginals.

Definition at line 622 of file inferenceEngine_tpl.h.

622 {
624 output << std::endl;
625
626 // use cbegin() when available
627 for (const auto& elt: marginalMin_) {
628 Size esize = Size(elt.second.size());
629
630 for (Size mod = 0; mod < esize; mod++) {
631 output << "P(" << credalNet_->current_bn().variable(elt.first).name() << "=" << mod
632 << "|e) = [ ";
633 output << marginalMin_[elt.first][mod] << ", " << marginalMax_[elt.first][mod] << " ]";
634
635 if (!query_.empty())
636 if (query_.exists(elt.first) && query_[elt.first][mod]) output << " QUERY";
637
638 output << std::endl;
639 }
640
641 output << std::endl;
642 }
643
644 return output.str();
645 }

References credalNet_, marginalMax_, marginalMin_, and query_.

◆ updateApproximationScheme()

INLINE void gum::ApproximationScheme::updateApproximationScheme ( unsigned int incr = 1)
inherited

Update the scheme w.r.t. the new error and increment the step counter.

Parameters
incr: The number of steps to add.

Definition at line 208 of file approximationScheme_inl.h.

208 {
209 current_step_ += incr;
210 }

References current_step_.

Referenced by gum::GibbsBNdistance< GUM_SCALAR >::computeKL_(), gum::learning::GreedyHillClimbing::learnStructure(), gum::learning::LocalSearchWithTabuList::learnStructure(), gum::SamplingInference< GUM_SCALAR >::loopApproxInference_(), gum::credal::CNLoopyPropagation< GUM_SCALAR >::makeInferenceByOrderedArcs_(), gum::credal::CNLoopyPropagation< GUM_SCALAR >::makeInferenceByRandomOrder_(), and gum::credal::CNLoopyPropagation< GUM_SCALAR >::makeInferenceNodeToNeighbours_().


◆ updateCredalSets_()

template<typename GUM_SCALAR>
void gum::credal::InferenceEngine< GUM_SCALAR >::updateCredalSets_ ( const NodeId & id,
const std::vector< GUM_SCALAR > & vertex,
const bool & elimRedund = false )
inlineprotected

Given a node id and one of its possible vertices, update its credal set.

To maximise efficiency, do not pass a vertex known to be inside the polytope (i.e. not at an extreme value for any modality).

Parameters
id: The id of the node to be updated.
vertex: A (tensor) vertex of the node's credal set.
elimRedund: If true, remove redundant vertices (inside a facet).

Definition at line 934 of file inferenceEngine_tpl.h.

 936 {
 937 auto& nodeCredalSet = marginalSets_[id];
 938 auto dsize = vertex.size();
939
940 bool eq = true;
941
942 for (auto it = nodeCredalSet.cbegin(), itEnd = nodeCredalSet.cend(); it != itEnd; ++it) {
943 eq = true;
944
945 for (Size i = 0; i < dsize; i++) {
946 if (std::fabs(vertex[i] - (*it)[i]) > 1e-6) {
947 eq = false;
948 break;
949 }
950 }
951
952 if (eq) break;
953 }
954
 955 if (!eq || nodeCredalSet.size() == 0) {
 956 nodeCredalSet.push_back(vertex);
 957 } else return;
959
960 // because of next lambda return condition
961 if (nodeCredalSet.size() == 1) return;
962
963 // check that the point and all previously added ones are not inside the
964 // actual
965 // polytope
966 auto itEnd = std::remove_if(
967 nodeCredalSet.begin(),
968 nodeCredalSet.end(),
969 [&](const std::vector< GUM_SCALAR >& v) -> bool {
970 for (auto jt = v.cbegin(),
971 jtEnd = v.cend(),
972 minIt = marginalMin_[id].cbegin(),
973 minItEnd = marginalMin_[id].cend(),
974 maxIt = marginalMax_[id].cbegin(),
975 maxItEnd = marginalMax_[id].cend();
976 jt != jtEnd && minIt != minItEnd && maxIt != maxItEnd;
977 ++jt, ++minIt, ++maxIt) {
978 if ((std::fabs(*jt - *minIt) < 1e-6 || std::fabs(*jt - *maxIt) < 1e-6)
979 && std::fabs(*minIt - *maxIt) > 1e-6)
980 return false;
981 }
982 return true;
983 });
984
985 nodeCredalSet.erase(itEnd, nodeCredalSet.end());
986
987 // we need at least 2 points to make a convex combination
988 if (!elimRedund || nodeCredalSet.size() <= 2) return;
989
990 // there may be points not inside the polytope but on one of it's facet,
991 // meaning it's still a convex combination of vertices of this facet. Here
992 // we
993 // need lrs.
 994 LRSWrapper< GUM_SCALAR > lrsWrapper;
 995 lrsWrapper.setUpV((unsigned int)dsize, (unsigned int)(nodeCredalSet.size()));
 996
 997 for (const auto& vtx: nodeCredalSet)
 998 lrsWrapper.fillV(vtx);
 999
1000 lrsWrapper.elimRedundVrep();
1001
1002 marginalSets_[id] = lrsWrapper.getOutput();
1003 }

References marginalSets_.

Referenced by gum::credal::CNLoopyPropagation< GUM_SCALAR >::computeExpectations_(), and gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::verticesFusion_().

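The first loop above is a tolerance-based membership test: a candidate vertex counts as already present when it matches an existing vertex coordinate-wise within 1e-6. A standalone sketch of that test (containsVertexSketch is a hypothetical helper, not the aGrUM API):

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// True when `vertex` matches some vertex of `credalSet` within a 1e-6
// per-coordinate tolerance, mirroring the equality loop above.
bool containsVertexSketch(const std::vector<std::vector<double>>& credalSet,
                          const std::vector<double>& vertex) {
  for (const auto& v : credalSet) {
    bool eq = true;
    for (std::size_t i = 0; i < vertex.size(); ++i) {
      if (std::fabs(vertex[i] - v[i]) > 1e-6) {
        eq = false;
        break;
      }
    }
    if (eq) return true;
  }
  return false;
}
```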

◆ updateExpectations_()

template<typename GUM_SCALAR>
void gum::credal::InferenceEngine< GUM_SCALAR >::updateExpectations_ ( const NodeId & id,
const std::vector< GUM_SCALAR > & vertex )
inlineprotected

Given a node id and one of its possible vertices obtained during inference, update this node's lower and upper expectations.

Parameters
id: The id of the node to be updated.
vertex: A (tensor) vertex of the node's credal set.

Definition at line 912 of file inferenceEngine_tpl.h.

914 {
915 std::string var_name = credalNet_->current_bn().variable(id).name();
916 auto delim = var_name.find_first_of("_");
917
918 var_name = var_name.substr(0, delim);
919
920 if (modal_.exists(var_name) /*modal_.find(var_name) != modal_.end()*/) {
921 GUM_SCALAR exp = 0;
922 auto vsize = vertex.size();
923
 924 for (Size mod = 0; mod < vsize; mod++)
 925 exp += vertex[mod] * modal_[var_name][mod];
 926
 927 if (exp > expectationMax_[id]) expectationMax_[id] = exp;
 928
 929 if (exp < expectationMin_[id]) expectationMin_[id] = exp;
 930 }
931 }

References credalNet_, expectationMax_, expectationMin_, and modal_.

Referenced by gum::credal::CNLoopyPropagation< GUM_SCALAR >::computeExpectations_().

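For a modal variable, the expectation under one credal-set vertex is the dot product of the vertex (a probability distribution over modalities) with the modality values; the lower and upper expectations then track the min and max of this over all vertices seen. A standalone sketch of the inner computation (vertexExpectationSketch is a hypothetical helper, not the aGrUM API):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Expectation of the modality values under one vertex of the credal set:
// the sum over modalities of P(mod) * value(mod).
double vertexExpectationSketch(const std::vector<double>& vertex,
                               const std::vector<double>& modalities) {
  double exp = 0.;
  for (std::size_t mod = 0; mod < vertex.size(); ++mod)
    exp += vertex[mod] * modalities[mod];
  return exp;
}
```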

◆ verbosity()

INLINE bool gum::ApproximationScheme::verbosity ( ) const
overridevirtualinherited

Returns true if verbosity is enabled.

Returns
Returns true if verbosity is enabled.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 160 of file approximationScheme_inl.h.

160{ return verbosity_; }

References verbosity_.

Referenced by ApproximationScheme(), and gum::learning::EMApproximationScheme::EMApproximationScheme().


◆ vertices()

template<typename GUM_SCALAR>
const std::vector< std::vector< GUM_SCALAR > > & gum::credal::InferenceEngine< GUM_SCALAR >::vertices ( const NodeId id) const

Get the vertices of a given node id.

Parameters
id: The node id whose vertices we want.
Returns
A constant reference to this node's vertices.

Definition at line 550 of file inferenceEngine_tpl.h.

550 {
551 return marginalSets_[id];
552 }

References marginalSets_.

Referenced by gum::credal::CNLoopyPropagation< GUM_SCALAR >::computeExpectations_().


Member Data Documentation

◆ _nb_threads_

Size gum::ThreadNumberManager::_nb_threads_ {0}
privateinherited

the max number of threads used by the class

Definition at line 126 of file threadNumberManager.h.

126{0};

◆ burn_in_

Size gum::ApproximationScheme::burn_in_
protectedinherited

◆ credalNet_

◆ current_epsilon_

double gum::ApproximationScheme::current_epsilon_
protectedinherited

Current epsilon.

Definition at line 378 of file approximationScheme.h.

Referenced by initApproximationScheme().

◆ current_rate_

double gum::ApproximationScheme::current_rate_
protectedinherited

Current rate.

Definition at line 384 of file approximationScheme.h.

Referenced by initApproximationScheme().

◆ current_state_

ApproximationSchemeSTATE gum::ApproximationScheme::current_state_
protectedinherited

The current state.

Definition at line 393 of file approximationScheme.h.

Referenced by ApproximationScheme(), initApproximationScheme(), stateApproximationScheme(), and stopScheme_().

◆ current_step_

◆ dbnOpt_

template<typename GUM_SCALAR>
VarMod2BNsMap< GUM_SCALAR > gum::credal::InferenceEngine< GUM_SCALAR >::dbnOpt_
protected

Object used to efficiently store the optimal Bayes nets found during inference, for some algorithms.

Definition at line 158 of file inferenceEngine.h.

Referenced by InferenceEngine(), getVarMod2BNsMap(), and gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::optFusion_().

◆ dynamicExpMax_

template<typename GUM_SCALAR>
dynExpe gum::credal::InferenceEngine< GUM_SCALAR >::dynamicExpMax_
protected

Upper dynamic expectations.

If the network is not dynamic, its content is the same as expectationMax_.

Definition at line 111 of file inferenceEngine.h.

Referenced by dynamicExpectations_(), dynamicExpMax(), and eraseAllEvidence().

◆ dynamicExpMin_

template<typename GUM_SCALAR>
dynExpe gum::credal::InferenceEngine< GUM_SCALAR >::dynamicExpMin_
protected

Lower dynamic expectations.

If the network is not dynamic, its content is the same as expectationMin_.

Definition at line 108 of file inferenceEngine.h.

Referenced by dynamicExpectations_(), eraseAllEvidence(), and saveExpectations().

◆ enabled_eps_

bool gum::ApproximationScheme::enabled_eps_
protectedinherited

If true, the threshold convergence is enabled.

Definition at line 402 of file approximationScheme.h.

Referenced by ApproximationScheme(), disableEpsilon(), enableEpsilon(), isEnabledEpsilon(), and setEpsilon().

◆ enabled_max_iter_

bool gum::ApproximationScheme::enabled_max_iter_
protectedinherited

If true, the maximum iterations stopping criterion is enabled.

Definition at line 420 of file approximationScheme.h.

Referenced by ApproximationScheme(), disableMaxIter(), enableMaxIter(), isEnabledMaxIter(), and setMaxIter().

◆ enabled_max_time_

bool gum::ApproximationScheme::enabled_max_time_
protectedinherited

If true, the timeout is enabled.

Definition at line 414 of file approximationScheme.h.

Referenced by ApproximationScheme(), continueApproximationScheme(), disableMaxTime(), enableMaxTime(), isEnabledMaxTime(), and setMaxTime().

◆ enabled_min_rate_eps_

bool gum::ApproximationScheme::enabled_min_rate_eps_
protectedinherited

If true, the minimal threshold for epsilon rate is enabled.

Definition at line 408 of file approximationScheme.h.

Referenced by ApproximationScheme(), disableMinEpsilonRate(), enableMinEpsilonRate(), isEnabledMinEpsilonRate(), and setMinEpsilonRate().

◆ eps_

double gum::ApproximationScheme::eps_
protectedinherited

Threshold for convergence.

Definition at line 399 of file approximationScheme.h.

Referenced by ApproximationScheme(), epsilon(), and setEpsilon().

◆ evidence_

◆ expectationMax_

template<typename GUM_SCALAR>
expe gum::credal::InferenceEngine< GUM_SCALAR >::expectationMax_
protected

◆ expectationMin_

template<typename GUM_SCALAR>
expe gum::credal::InferenceEngine< GUM_SCALAR >::expectationMin_
protected

◆ history_

std::vector< double > gum::ApproximationScheme::history_
protectedinherited

The scheme history, used only if verbosity == true.

Definition at line 396 of file approximationScheme.h.

◆ last_epsilon_

double gum::ApproximationScheme::last_epsilon_
protectedinherited

Last epsilon value.

Definition at line 381 of file approximationScheme.h.

◆ marginalMax_

◆ marginalMin_

◆ marginalSets_

template<typename GUM_SCALAR>
credalSet gum::credal::InferenceEngine< GUM_SCALAR >::marginalSets_
protected

◆ max_iter_

Size gum::ApproximationScheme::max_iter_
protectedinherited

The maximum iterations.

Definition at line 417 of file approximationScheme.h.

Referenced by ApproximationScheme(), maxIter(), and setMaxIter().

◆ max_time_

double gum::ApproximationScheme::max_time_
protectedinherited

The timeout.

Definition at line 411 of file approximationScheme.h.

Referenced by ApproximationScheme(), maxTime(), and setMaxTime().

◆ min_rate_eps_

double gum::ApproximationScheme::min_rate_eps_
protectedinherited

Threshold for the epsilon rate.

Definition at line 405 of file approximationScheme.h.

Referenced by ApproximationScheme(), minEpsilonRate(), and setMinEpsilonRate().

◆ modal_

◆ oldMarginalMax_

◆ oldMarginalMin_

◆ onProgress

◆ onStop

Signaler1< const std::string& > gum::IApproximationSchemeConfiguration::onStop
inherited

Signal emitted with the stopping-criterion message when the approximation scheme stops.

Definition at line 83 of file IApproximationSchemeConfiguration.h.

Referenced by gum::learning::IBNLearner::distributeStop().

◆ period_size_

Size gum::ApproximationScheme::period_size_
protectedinherited

Checking criteria frequency.

Definition at line 426 of file approximationScheme.h.

Referenced by ApproximationScheme(), periodSize(), and setPeriodSize().

◆ query_

template<typename GUM_SCALAR>
query gum::credal::InferenceEngine< GUM_SCALAR >::query_
protected

Holds the query nodes states.

Definition at line 119 of file inferenceEngine.h.

Referenced by eraseAllEvidence(), insertQuery(), and toString().

◆ repetitiveInd_

◆ storeBNOpt_

◆ storeVertices_

◆ t0_

template<typename GUM_SCALAR>
cluster gum::credal::InferenceEngine< GUM_SCALAR >::t0_
protected

Clusters of nodes used with dynamic networks.

Any node key in t0_ is present at \( t=0 \), and any node belonging to this key's node set shares the same CPT as the key. Used for sampling with repetitive independence.

Definition at line 127 of file inferenceEngine.h.

Referenced by gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::_mcThreadDataCopy_(), getT0Cluster(), and repetitiveInit_().

◆ t1_

template<typename GUM_SCALAR>
cluster gum::credal::InferenceEngine< GUM_SCALAR >::t1_
protected

Clusters of nodes used with dynamic networks.

Any node key in t1_ is present at \( t=1 \), and any node belonging to this key's node set shares the same CPT as the key. Used for sampling with repetitive independence.

Definition at line 134 of file inferenceEngine.h.

Referenced by gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::_mcThreadDataCopy_(), getT1Cluster(), and repetitiveInit_().

◆ threadMinimalNbOps_

template<typename GUM_SCALAR>
Size gum::credal::InferenceEngine< GUM_SCALAR >::threadMinimalNbOps_ {Size(20)}
protected

◆ threadRanges_

template<typename GUM_SCALAR>
std::vector< std::pair< NodeId, Idx > > gum::credal::InferenceEngine< GUM_SCALAR >::threadRanges_
protected

the ranges of elements of marginalMin_ and marginalMax_ processed by each thread

these ranges are stored into a vector of pairs (NodeId, Idx). For thread number i, the pair at index i is the beginning of the range that the thread will have to process: this is the part of the marginal distribution vector of node NodeId starting at index Idx. The pair at index i+1 is the end of this range (not included).

Warning
the size of threadRanges_ is the number of threads + 1.

Definition at line 170 of file inferenceEngine.h.

Referenced by computeEpsilon_(), gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::computeEpsilon_(), displatchMarginalsToThreads_(), and gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::updateMarginals_().

◆ timer_

◆ timeSteps_

template<typename GUM_SCALAR>
int gum::credal::InferenceEngine< GUM_SCALAR >::timeSteps_
protected

The number of time steps of this network (only useful for dynamic networks).

Deprecated

Definition at line 177 of file inferenceEngine.h.

Referenced by repetitiveInit_().

◆ verbosity_

bool gum::ApproximationScheme::verbosity_
protectedinherited

If true, verbosity is enabled.

Definition at line 429 of file approximationScheme.h.

Referenced by ApproximationScheme(), and verbosity().


The documentation for this class was generated from the following files: