aGrUM 2.3.2
a C++ library for (probabilistic) graphical models
gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine > Class Template Reference (abstract)

Class template representing a CredalNet inference engine using one or more IBayesNet inference engines such as LazyPropagation. More...

#include <multipleInferenceEngine.h>


Public Types

enum class  ApproximationSchemeSTATE : char {
  Undefined , Continue , Epsilon , Rate ,
  Limit , TimeLimit , Stopped
}
 The different states of an approximation scheme. More...

Public Member Functions

virtual void addEvidence (NodeId id, const Idx val) final
 adds a new hard evidence on node id
virtual void addEvidence (const std::string &nodeName, const Idx val) final
 adds a new hard evidence on node named nodeName
virtual void addEvidence (NodeId id, const std::string &label) final
 adds a new hard evidence on node id
virtual void addEvidence (const std::string &nodeName, const std::string &label) final
 adds a new hard evidence on node named nodeName
virtual void addEvidence (NodeId id, const std::vector< GUM_SCALAR > &vals) final
 adds a new evidence on node id (might be soft or hard)
virtual void addEvidence (const std::string &nodeName, const std::vector< GUM_SCALAR > &vals) final
 adds a new evidence on node named nodeName (might be soft or hard)
virtual void addEvidence (const Tensor< GUM_SCALAR > &pot) final
 adds a new evidence on node id (might be soft or hard)
Constructors / Destructors
 MultipleInferenceEngine (const CredalNet< GUM_SCALAR > &credalNet)
 Constructor.
virtual ~MultipleInferenceEngine ()
 Destructor.
Post-inference methods
virtual void eraseAllEvidence ()
 Erase all inference-related data so that another inference can be performed.
Pure virtual methods
virtual void makeInference ()=0
 To be redefined by each credal net algorithm.
Getters and setters
VarMod2BNsMap< GUM_SCALAR > * getVarMod2BNsMap ()
 Get optimum IBayesNet.
const CredalNet< GUM_SCALAR > & credalNet () const
 Get this credal network.
const NodeProperty< std::vector< NodeId > > & getT0Cluster () const
 Get the t0_ cluster.
const NodeProperty< std::vector< NodeId > > & getT1Cluster () const
 Get the t1_ cluster.
void setRepetitiveInd (const bool repetitive)
void storeVertices (const bool value)
bool storeVertices () const
 Returns true if credal set vertices are stored during inference, false otherwise.
void storeBNOpt (const bool value)
bool storeBNOpt () const
bool repetitiveInd () const
 Get the current independence status.
Pre-inference initialization methods
void insertModalsFile (const std::string &path)
 Insert variables modalities from file to compute expectations.
void insertModals (const std::map< std::string, std::vector< GUM_SCALAR > > &modals)
 Insert variables modalities from map to compute expectations.
virtual void insertEvidenceFile (const std::string &path)
 Insert evidence from file.
void insertEvidence (const std::map< std::string, std::vector< GUM_SCALAR > > &eviMap)
 Insert evidence from map.
void insertEvidence (const NodeProperty< std::vector< GUM_SCALAR > > &evidence)
 Insert evidence from Property.
void insertQueryFile (const std::string &path)
 Insert query variables states from file.
void insertQuery (const NodeProperty< std::vector< bool > > &query)
 Insert query variables and states from Property.
Post-inference methods
Tensor< GUM_SCALAR > marginalMin (const NodeId id) const
 Get the lower marginals of a given node id.
Tensor< GUM_SCALAR > marginalMin (const std::string &varName) const
 Get the lower marginals of a given variable name.
Tensor< GUM_SCALAR > marginalMax (const NodeId id) const
 Get the upper marginals of a given node id.
Tensor< GUM_SCALAR > marginalMax (const std::string &varName) const
 Get the upper marginals of a given variable name.
const GUM_SCALAR & expectationMin (const NodeId id) const
 Get the lower expectation of a given node id.
const GUM_SCALAR & expectationMin (const std::string &varName) const
 Get the lower expectation of a given variable name.
const GUM_SCALAR & expectationMax (const NodeId id) const
 Get the upper expectation of a given node id.
const GUM_SCALAR & expectationMax (const std::string &varName) const
 Get the upper expectation of a given variable name.
const std::vector< GUM_SCALAR > & dynamicExpMin (const std::string &varName) const
 Get the lower dynamic expectation of a given variable prefix (without the time step included).
const std::vector< GUM_SCALAR > & dynamicExpMax (const std::string &varName) const
 Get the upper dynamic expectation of a given variable prefix (without the time step included).
const std::vector< std::vector< GUM_SCALAR > > & vertices (const NodeId id) const
 Get the vertices of a given node id.
void saveMarginals (const std::string &path) const
 Saves marginals to file.
void saveExpectations (const std::string &path) const
 Saves expectations to file.
void saveVertices (const std::string &path) const
 Saves vertices to file.
void dynamicExpectations ()
 Compute dynamic expectations.
std::string toString () const
 Print all nodes marginals to standard output.
const std::string getApproximationSchemeMsg ()
 Get approximation scheme state.
Getters and setters
void setEpsilon (double eps) override
 Given that we approximate f(t), stopping criterion on |f(t+1)-f(t)|.
double epsilon () const override
 Returns the value of epsilon.
void disableEpsilon () override
 Disable stopping criterion on epsilon.
void enableEpsilon () override
 Enable stopping criterion on epsilon.
bool isEnabledEpsilon () const override
 Returns true if stopping criterion on epsilon is enabled, false otherwise.
void setMinEpsilonRate (double rate) override
 Given that we approximate f(t), stopping criterion on d/dt(|f(t+1)-f(t)|).
double minEpsilonRate () const override
 Returns the value of the minimal epsilon rate.
void disableMinEpsilonRate () override
 Disable stopping criterion on epsilon rate.
void enableMinEpsilonRate () override
 Enable stopping criterion on epsilon rate.
bool isEnabledMinEpsilonRate () const override
 Returns true if stopping criterion on epsilon rate is enabled, false otherwise.
void setMaxIter (Size max) override
 Stopping criterion on number of iterations.
Size maxIter () const override
 Returns the criterion on number of iterations.
void disableMaxIter () override
 Disable stopping criterion on max iterations.
void enableMaxIter () override
 Enable stopping criterion on max iterations.
bool isEnabledMaxIter () const override
 Returns true if stopping criterion on max iterations is enabled, false otherwise.
void setMaxTime (double timeout) override
 Stopping criterion on timeout.
double maxTime () const override
 Returns the timeout (in seconds).
double currentTime () const override
 Returns the current running time in seconds.
void disableMaxTime () override
 Disable stopping criterion on timeout.
void enableMaxTime () override
 Enable stopping criterion on timeout.
bool isEnabledMaxTime () const override
 Returns true if stopping criterion on timeout is enabled, false otherwise.
void setPeriodSize (Size p) override
 Sets the number of samples between two tests of the stopping criteria.
Size periodSize () const override
 Returns the period size.
void setVerbosity (bool v) override
 Set the verbosity on (true) or off (false).
bool verbosity () const override
 Returns true if verbosity is enabled.
ApproximationSchemeSTATE stateApproximationScheme () const override
 Returns the approximation scheme state.
Size nbrIterations () const override
 Returns the number of iterations.
const std::vector< double > & history () const override
 Returns the scheme history.
void initApproximationScheme ()
 Initialise the scheme.
bool startOfPeriod () const
 Returns true if we are at the beginning of a period (compute error is mandatory).
void updateApproximationScheme (unsigned int incr=1)
 Update the scheme w.r.t. the new error and increment steps.
Size remainingBurnIn () const
 Returns the remaining burn in.
void stopApproximationScheme ()
 Stop the approximation scheme.
bool continueApproximationScheme (double error)
 Update the scheme w.r.t. the new error.
Getters and setters
std::string messageApproximationScheme () const
 Returns the approximation scheme message.
Accessors/Modifiers
virtual void setNumberOfThreads (Size nb)
 sets the maximum number of threads to be used by the class containing this ThreadNumberManager
virtual Size getNumberOfThreads () const
 returns the current max number of threads used by the class containing this ThreadNumberManager
bool isGumNumberOfThreadsOverriden () const
 indicates whether the class containing this ThreadNumberManager has set its own number of threads

Public Attributes

Signaler3< Size, double, double > onProgress
 Progression, error and time.
Signaler1< const std::string & > onStop
 Emits the stopping-criterion message (see messageApproximationScheme()).

Protected Member Functions

Protected initialization methods


void initThreadsData_ (const Size &num_threads, const bool _storeVertices_, const bool _storeBNOpt_)
 Initialize threads data.
Protected algorithms methods
bool updateThread_ (Size this_thread, const NodeId &id, const std::vector< GUM_SCALAR > &vertex, const bool &elimRedund=false)
 Update thread information (marginals, expectations, IBayesNet, vertices) for a given node id.
void updateMarginals_ ()
 Fusion of threads marginals.
const GUM_SCALAR computeEpsilon_ ()
 Compute epsilon and update old marginals.
void updateOldMarginals_ ()
 Update old marginals (from current marginals).
Protected post-inference methods
void optFusion_ ()
 Fusion of threads optimal IBayesNet.
void expFusion_ ()
 Fusion of threads expectations.
void verticesFusion_ ()
Protected initialization methods
void repetitiveInit_ ()
 Initialize t0_ and t1_ clusters.
void initExpectations_ ()
 Initialize lower and upper expectations before inference, with the lower expectation being initialized on the highest modality and the upper expectation being initialized on the lowest modality.
void initMarginals_ ()
 Initialize lower and upper old marginals and marginals before inference, with the lower marginal being 1 and the upper 0.
void displatchMarginalsToThreads_ ()
 computes the vector threadRanges_, which assigns parts of marginalMin_ and marginalMax_ to the threads
void initMarginalSets_ ()
 Initialize credal set vertices with empty sets.
Protected algorithms methods
void updateExpectations_ (const NodeId &id, const std::vector< GUM_SCALAR > &vertex)
 Given a node id and one of its vertices obtained during inference, update this node's lower and upper expectations.
void updateCredalSets_ (const NodeId &id, const std::vector< GUM_SCALAR > &vertex, const bool &elimRedund=false)
 Given a node id and one of its vertices, update its credal set.
Protected post-inference methods
void dynamicExpectations_ ()
 Rearrange lower and upper expectations to suit dynamic networks.

Protected Attributes

_margis_ l_marginalMin_
 Threads lower marginals, one per thread.
_margis_ l_marginalMax_
 Threads upper marginals, one per thread.
_expes_ l_expectationMin_
 Threads lower expectations, one per thread.
_expes_ l_expectationMax_
 Threads upper expectations, one per thread.
_modals_ l_modal_
 Threads modalities.
_credalSets_ l_marginalSets_
 Threads vertices.
_margis_ l_evidence_
 Threads evidence.
_clusters_ l_clusters_
 Threads clusters.
std::vector< _bnet_ * > workingSet_
 Threads IBayesNet.
std::vector< List< const Tensor< GUM_SCALAR > * > * > workingSetE_
 Threads evidence.
std::vector< BNInferenceEngine * > l_inferenceEngine_
 Threads BNInferenceEngine.
std::vector< VarMod2BNsMap< GUM_SCALAR > * > l_optimalNet_
 Threads optimal IBayesNet.
std::vector< std::mt19937 > generators_
 the generators used for computing random values
const CredalNet< GUM_SCALAR > * credalNet_
 A pointer to the Credal Net used.
margi oldMarginalMin_
 Old lower marginals used to compute epsilon.
margi oldMarginalMax_
 Old upper marginals used to compute epsilon.
margi marginalMin_
 Lower marginals.
margi marginalMax_
 Upper marginals.
credalSet marginalSets_
 Credal sets vertices, if enabled.
expe expectationMin_
 Lower expectations, if some variables modalities were inserted.
expe expectationMax_
 Upper expectations, if some variables modalities were inserted.
dynExpe dynamicExpMin_
 Lower dynamic expectations.
dynExpe dynamicExpMax_
 Upper dynamic expectations.
dynExpe modal_
 Variables modalities used to compute expectations.
margi evidence_
 Holds observed variables states.
query query_
 Holds the query nodes states.
cluster t0_
 Clusters of nodes used with dynamic networks.
cluster t1_
 Clusters of nodes used with dynamic networks.
bool storeVertices_
 True if credal sets vertices are stored, False otherwise.
bool repetitiveInd_
 True if using repetitive independence ( dynamic network only ), False otherwise.
bool storeBNOpt_
 True if optimal IBayesNets are stored during inference, False otherwise.
VarMod2BNsMap< GUM_SCALAR > dbnOpt_
 Object used to efficiently store optimal Bayesian networks during inference, for some algorithms.
std::vector< std::pair< NodeId, Idx > > threadRanges_
 the ranges of elements of marginalMin_ and marginalMax_ processed by each thread
int timeSteps_
 The number of time steps of this network (only useful for dynamic networks).
Size threadMinimalNbOps_ {Size(20)}
double current_epsilon_
 Current epsilon.
double last_epsilon_
 Last epsilon value.
double current_rate_
 Current rate.
Size current_step_
 The current step.
Timer timer_
 The timer.
ApproximationSchemeSTATE current_state_
 The current state.
std::vector< double > history_
 The scheme history, used only if verbosity == true.
double eps_
 Threshold for convergence.
bool enabled_eps_
 If true, the threshold convergence is enabled.
double min_rate_eps_
 Threshold for the epsilon rate.
bool enabled_min_rate_eps_
 If true, the minimal threshold for epsilon rate is enabled.
double max_time_
 The timeout.
bool enabled_max_time_
 If true, the timeout is enabled.
Size max_iter_
 The maximum iterations.
bool enabled_max_iter_
 If true, the maximum iterations stopping criterion is enabled.
Size burn_in_
 Number of iterations before checking stopping criteria.
Size period_size_
 Checking criteria frequency.
bool verbosity_
 If true, verbosity is enabled.

Private Types

using _infE_ = InferenceEngine< GUM_SCALAR >
 To easily access InferenceEngine< GUM_SCALAR > methods.
using _cluster_ = NodeProperty< std::vector< NodeId > >
using _credalSet_ = NodeProperty< std::vector< std::vector< GUM_SCALAR > > >
using _margi_ = NodeProperty< std::vector< GUM_SCALAR > >
using _expe_ = NodeProperty< GUM_SCALAR >
using _bnet_ = IBayesNet< GUM_SCALAR >
using _margis_ = std::vector< _margi_ >
using _expes_ = std::vector< _expe_ >
using _credalSets_ = std::vector< _credalSet_ >
using _clusters_ = std::vector< std::vector< _cluster_ > >
using _modals_ = std::vector< HashTable< std::string, std::vector< GUM_SCALAR > > >
using credalSet = NodeProperty< std::vector< std::vector< GUM_SCALAR > > >
using margi = NodeProperty< std::vector< GUM_SCALAR > >
using expe = NodeProperty< GUM_SCALAR >
using dynExpe = typename gum::HashTable< std::string, std::vector< GUM_SCALAR > >
using query = NodeProperty< std::vector< bool > >
using cluster = NodeProperty< std::vector< NodeId > >

Private Member Functions

void _updateThreadCredalSets_ (Size this_thread, const NodeId &id, const std::vector< GUM_SCALAR > &vertex, const bool &elimRedund)
 Ask for redundancy elimination of a node credal set of a calling thread.
void stopScheme_ (ApproximationSchemeSTATE new_state)
 Stop the scheme given a new state.

Private Attributes

Size _nb_threads_ {0}
 the max number of threads used by the class

Detailed Description

template<typename GUM_SCALAR, class BNInferenceEngine>
class gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >

Class template representing a CredalNet inference engine using one or more IBayesNet inference engines such as LazyPropagation.

Extends InferenceEngine< GUM_SCALAR >. Used for outer multi-threading such as CNMonteCarloSampling.

Template Parameters
GUM_SCALAR: A floating-point type (float, double, long double, ...).
BNInferenceEngine: An IBayesNet inference engine such as LazyPropagation.
Author
Matthieu HOURBRACQ and Pierre-Henri WUILLEMIN (at LIP6)

Definition at line 74 of file multipleInferenceEngine.h.

Member Typedef Documentation

◆ _bnet_

template<typename GUM_SCALAR, class BNInferenceEngine>
using gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_bnet_ = IBayesNet< GUM_SCALAR >
private

Definition at line 84 of file multipleInferenceEngine.h.

◆ _cluster_

template<typename GUM_SCALAR, class BNInferenceEngine>
using gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_cluster_ = NodeProperty< std::vector< NodeId > >
private

Definition at line 79 of file multipleInferenceEngine.h.

◆ _clusters_

template<typename GUM_SCALAR, class BNInferenceEngine>
using gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_clusters_ = std::vector< std::vector< _cluster_ > >
private

Definition at line 88 of file multipleInferenceEngine.h.

◆ _credalSet_

template<typename GUM_SCALAR, class BNInferenceEngine>
using gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_credalSet_ = NodeProperty< std::vector< std::vector< GUM_SCALAR > > >
private

Definition at line 80 of file multipleInferenceEngine.h.

◆ _credalSets_

template<typename GUM_SCALAR, class BNInferenceEngine>
using gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_credalSets_ = std::vector< _credalSet_ >
private

Definition at line 87 of file multipleInferenceEngine.h.

◆ _expe_

template<typename GUM_SCALAR, class BNInferenceEngine>
using gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_expe_ = NodeProperty< GUM_SCALAR >
private

Definition at line 82 of file multipleInferenceEngine.h.

◆ _expes_

template<typename GUM_SCALAR, class BNInferenceEngine>
using gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_expes_ = std::vector< _expe_ >
private

Definition at line 86 of file multipleInferenceEngine.h.

◆ _infE_

template<typename GUM_SCALAR, class BNInferenceEngine>
using gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_infE_ = InferenceEngine< GUM_SCALAR >
private

To easily access InferenceEngine< GUM_SCALAR > methods.

Definition at line 77 of file multipleInferenceEngine.h.

◆ _margi_

template<typename GUM_SCALAR, class BNInferenceEngine>
using gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_margi_ = NodeProperty< std::vector< GUM_SCALAR > >
private

Definition at line 81 of file multipleInferenceEngine.h.

◆ _margis_

template<typename GUM_SCALAR, class BNInferenceEngine>
using gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_margis_ = std::vector< _margi_ >
private

Definition at line 85 of file multipleInferenceEngine.h.

◆ _modals_

template<typename GUM_SCALAR, class BNInferenceEngine>
using gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_modals_ = std::vector< HashTable< std::string, std::vector< GUM_SCALAR > > >
private

Definition at line 90 of file multipleInferenceEngine.h.

◆ cluster

template<typename GUM_SCALAR>
using gum::credal::InferenceEngine< GUM_SCALAR >::cluster = NodeProperty< std::vector< NodeId > >
private inherited

Definition at line 80 of file inferenceEngine.h.

◆ credalSet

template<typename GUM_SCALAR>
using gum::credal::InferenceEngine< GUM_SCALAR >::credalSet = NodeProperty< std::vector< std::vector< GUM_SCALAR > > >
private inherited

Definition at line 73 of file inferenceEngine.h.

◆ dynExpe

template<typename GUM_SCALAR>
using gum::credal::InferenceEngine< GUM_SCALAR >::dynExpe = typename gum::HashTable< std::string, std::vector< GUM_SCALAR > >
private inherited

Definition at line 77 of file inferenceEngine.h.

◆ expe

template<typename GUM_SCALAR>
using gum::credal::InferenceEngine< GUM_SCALAR >::expe = NodeProperty< GUM_SCALAR >
private inherited

Definition at line 75 of file inferenceEngine.h.

◆ margi

template<typename GUM_SCALAR>
using gum::credal::InferenceEngine< GUM_SCALAR >::margi = NodeProperty< std::vector< GUM_SCALAR > >
private inherited

Definition at line 74 of file inferenceEngine.h.

◆ query

template<typename GUM_SCALAR>
using gum::credal::InferenceEngine< GUM_SCALAR >::query = NodeProperty< std::vector< bool > >
private inherited

Definition at line 79 of file inferenceEngine.h.

Member Enumeration Documentation

◆ ApproximationSchemeSTATE

The different states of an approximation scheme.

Enumerator
Undefined 
Continue 
Epsilon 
Rate 
Limit 
TimeLimit 
Stopped 

Definition at line 86 of file IApproximationSchemeConfiguration.h.

enum class ApproximationSchemeSTATE : char {
  Undefined,
  Continue,
  Epsilon,
  Rate,
  Limit,
  TimeLimit,
  Stopped
};

Constructor & Destructor Documentation

◆ MultipleInferenceEngine()

template<typename GUM_SCALAR, class BNInferenceEngine>
gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::MultipleInferenceEngine ( const CredalNet< GUM_SCALAR > & credalNet)
explicit

Constructor.

Parameters
credalNet: The CredalNet to be used.

Definition at line 50 of file multipleInferenceEngine_tpl.h.

MultipleInferenceEngine(const CredalNet< GUM_SCALAR >& credalNet) :
    InferenceEngine< GUM_SCALAR >(credalNet) {  // base-class call reconstructed from the cross-references
  // body elided in the extracted listing
}

References gum::credal::InferenceEngine< GUM_SCALAR >::InferenceEngine(), MultipleInferenceEngine(), and gum::credal::InferenceEngine< GUM_SCALAR >::credalNet().

Referenced by MultipleInferenceEngine(), and ~MultipleInferenceEngine().


◆ ~MultipleInferenceEngine()

template<typename GUM_SCALAR, class BNInferenceEngine>
gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::~MultipleInferenceEngine ( )
virtual

Destructor.

Definition at line 57 of file multipleInferenceEngine_tpl.h.

References MultipleInferenceEngine().


Member Function Documentation

◆ _updateThreadCredalSets_()

template<typename GUM_SCALAR, class BNInferenceEngine>
void gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_updateThreadCredalSets_ ( Size this_thread,
const NodeId & id,
const std::vector< GUM_SCALAR > & vertex,
const bool & elimRedund )
inline private

Ask for redundancy elimination of a node credal set of a calling thread.

Called by updateThread_ if vertices are stored.

Parameters
this_thread: the id of the thread executing this method
id: A constant reference to the node id whose credal set is to be checked for redundancy.
vertex: The vertex to add to the credal set.
elimRedund: true if redundancy elimination is to be performed, false otherwise (the default).

Definition at line 217 of file multipleInferenceEngine_tpl.h.

{
  // reconstructed: the extracted listing elides the declarations of tId and
  // nodeCredalSet; they are inferred from the surrounding uses
  const Size tId           = this_thread;
  auto&      nodeCredalSet = l_marginalSets_[tId][id];
  Size       dsize         = Size(vertex.size());

  bool eq = true;

  for (auto it = nodeCredalSet.cbegin(), itEnd = nodeCredalSet.cend(); it != itEnd; ++it) {
    eq = true;

    for (Size i = 0; i < dsize; i++) {
      if (std::fabs(vertex[i] - (*it)[i]) > 1e-6) {
        eq = false;
        break;
      }
    }

    if (eq) break;
  }

  if (!eq || nodeCredalSet.size() == 0) {
    nodeCredalSet.push_back(vertex);
    return;
  } else return;

  // (lines elided in the extracted listing)
  if (nodeCredalSet.size() == 1) return;

  // check that the point and all previously added ones are not inside the
  // actual polytope
  auto itEnd = std::remove_if(
      nodeCredalSet.begin(),
      nodeCredalSet.end(),
      [&](const std::vector< GUM_SCALAR >& v) -> bool {
        for (auto jt = v.cbegin(), jtEnd = v.cend(),
                  minIt    = l_marginalMin_[tId][id].cbegin(),
                  minItEnd = l_marginalMin_[tId][id].cend(),
                  maxIt    = l_marginalMax_[tId][id].cbegin(),
                  maxItEnd = l_marginalMax_[tId][id].cend();
             jt != jtEnd && minIt != minItEnd && maxIt != maxItEnd;
             ++jt, ++minIt, ++maxIt) {
          if ((std::fabs(*jt - *minIt) < 1e-6 || std::fabs(*jt - *maxIt) < 1e-6)
              && std::fabs(*minIt - *maxIt) > 1e-6)
            return false;
        }
        return true;
      });

  nodeCredalSet.erase(itEnd, nodeCredalSet.end());

  // we need at least 2 points to make a convex combination
  if (!elimRedund || nodeCredalSet.size() <= 2) return;

  // there may be points not inside the polytope but on one of its facets,
  // meaning it is still a convex combination of vertices of this facet. Here
  // we need lrs.
  Size setSize = Size(nodeCredalSet.size());

  LRSWrapper< GUM_SCALAR > lrsWrapper;  // reconstructed: declaration elided in the listing
  lrsWrapper.setUpV(dsize, setSize);

  for (const auto& vtx: nodeCredalSet)
    lrsWrapper.fillV(vtx);

  lrsWrapper.elimRedundVrep();

  l_marginalSets_[tId][id] = lrsWrapper.getOutput();
}

◆ addEvidence() [1/7]

template<typename GUM_SCALAR>
void gum::credal::InferenceEngine< GUM_SCALAR >::addEvidence ( const std::string & nodeName,
const Idx val )
final virtual inherited

adds a new hard evidence on node named nodeName

Exceptions
UndefinedElement: if nodeName does not belong to the Bayesian network
InvalidArgument: if val is not a value for id
InvalidArgument: if nodeName already has an evidence

Definition at line 1211 of file inferenceEngine_tpl.h.

{
  addEvidence(this->credalNet_->current_bn().idFromName(nodeName), val);
}

References addEvidence(), and credalNet_.


◆ addEvidence() [2/7]

template<typename GUM_SCALAR>
void gum::credal::InferenceEngine< GUM_SCALAR >::addEvidence ( const std::string & nodeName,
const std::string & label )
final virtual inherited

adds a new hard evidence on node named nodeName

Exceptions
UndefinedElement: if nodeName does not belong to the Bayesian network
InvalidArgument: if label is not a label of the variable nodeName
InvalidArgument: if nodeName already has an evidence

Definition at line 1223 of file inferenceEngine_tpl.h.

{
  const NodeId id = this->credalNet_->current_bn().idFromName(nodeName);
  addEvidence(id, this->credalNet_->current_bn().variable(id)[label]);
}

References addEvidence(), and credalNet_.


◆ addEvidence() [3/7]

template<typename GUM_SCALAR>
void gum::credal::InferenceEngine< GUM_SCALAR >::addEvidence ( const std::string & nodeName,
const std::vector< GUM_SCALAR > & vals )
final virtual inherited

adds a new evidence on node named nodeName (might be soft or hard)

Exceptions
UndefinedElement: if id does not belong to the Bayesian network
InvalidArgument: if nodeName already has an evidence
FatalError: if vals = [0, 0, ..., 0]
InvalidArgument: if the size of vals is different from the domain size of node nodeName

Definition at line 1230 of file inferenceEngine_tpl.h.

{
  addEvidence(this->credalNet_->current_bn().idFromName(nodeName), vals);
}

References addEvidence(), and credalNet_.


◆ addEvidence() [4/7]

template<typename GUM_SCALAR>
void gum::credal::InferenceEngine< GUM_SCALAR >::addEvidence ( const Tensor< GUM_SCALAR > & pot)
final virtual inherited

adds a new evidence on node id (might be soft or hard)

Exceptions
UndefinedElement: if the tensor is defined over several nodes
UndefinedElement: if the node on which the tensor is defined does not belong to the Bayesian network
InvalidArgument: if the node of the tensor already has an evidence
FatalError: if pot = [0, 0, ..., 0]

Definition at line 1236 of file inferenceEngine_tpl.h.

{
  const auto id = this->credalNet_->current_bn().idFromName(pot.variable(0).name());
  std::vector< GUM_SCALAR > vals(this->credalNet_->current_bn().variable(id).domainSize(), 0);
  Instantiation I(pot);  // reconstructed: this declaration is elided in the extracted listing
  for (I.setFirst(); !I.end(); I.inc()) {
    vals[I.val(0)] = pot[I];
  }
  addEvidence(id, vals);
}

References addEvidence(), credalNet_, gum::Instantiation::end(), gum::Instantiation::inc(), gum::Instantiation::setFirst(), and gum::Instantiation::val().


◆ addEvidence() [5/7]

template<typename GUM_SCALAR>
void gum::credal::InferenceEngine< GUM_SCALAR >::addEvidence ( NodeId id,
const Idx val )
final virtual inherited

adds a new hard evidence on node id

Exceptions
UndefinedElement: if id does not belong to the Bayesian network
InvalidArgument: if val is not a value for id
InvalidArgument: if id already has an evidence

Definition at line 1203 of file inferenceEngine_tpl.h.

{
  std::vector< GUM_SCALAR > vals(this->credalNet_->current_bn().variable(id).domainSize(), 0);
  vals[val] = 1;
  addEvidence(id, vals);
}

References addEvidence(), and credalNet_.

Referenced by the other addEvidence() overloads.


◆ addEvidence() [6/7]

template<typename GUM_SCALAR>
void gum::credal::InferenceEngine< GUM_SCALAR >::addEvidence ( NodeId id,
const std::string & label )
final virtual inherited

adds a new hard evidence on node id

Exceptions
UndefinedElement: if id does not belong to the Bayesian network
InvalidArgument: if label is not a label of the variable id
InvalidArgument: if id already has an evidence

Definition at line 1217 of file inferenceEngine_tpl.h.

{
  addEvidence(id, this->credalNet_->current_bn().variable(id)[label]);
}

References addEvidence(), and credalNet_.


◆ addEvidence() [7/7]

template<typename GUM_SCALAR>
void gum::credal::InferenceEngine< GUM_SCALAR >::addEvidence ( NodeId id,
const std::vector< GUM_SCALAR > & vals )
final virtual inherited

adds a new evidence on node id (might be soft or hard)

Exceptions
UndefinedElement: if id does not belong to the Bayesian network
InvalidArgument: if id already has an evidence
FatalError: if vals = [0, 0, ..., 0]
InvalidArgument: if the size of vals is different from the domain size of node id

Definition at line 1193 of file inferenceEngine_tpl.h.

{
  evidence_.insert(id, vals);
  // forces the computation of the begin iterator to avoid subsequent data races
  // @TODO make HashTableConstIterator constructors thread safe
  evidence_.begin();
}

References evidence_.

◆ computeEpsilon_()

template<typename GUM_SCALAR, class BNInferenceEngine>
const GUM_SCALAR gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::computeEpsilon_ ( )
inline protected virtual

Compute epsilon and update old marginals.

Returns
Epsilon.

Reimplemented from gum::credal::InferenceEngine< GUM_SCALAR >.

Definition at line 369 of file multipleInferenceEngine_tpl.h.

{
  // compute the number of threads (avoid nested threads)
  // reconstructed: the thread-count computation is partly elided in the listing
  const Size nb_threads = (ThreadExecutor::nbRunningThreadsExecutors() == 0)
      ? this->threadRanges_.size() - 1
      : 1;  // no nested multithreading

  std::vector< GUM_SCALAR > tEps(nb_threads, GUM_SCALAR(0));  // reconstructed

  // create the function to be executed by the threads
  auto threadedExec = [this, &tEps](const std::size_t this_thread,
                                    const std::size_t nb_threads) {  // parameter list partly elided
    auto&      this_tEps = tEps[this_thread];
    GUM_SCALAR delta     = 0;

    auto       i           = this->threadRanges_[this_thread].first;
    auto       j           = this->threadRanges_[this_thread].second;
    auto       domain_size = this->marginalMax_[i].size();
    const auto end_i       = this->threadRanges_[this_thread + 1].first;
    auto       end_j       = this->threadRanges_[this_thread + 1].second;
    const auto marginalMax_size = this->marginalMax_.size();

    while ((i < end_i) || (j < end_j)) {
      // on min
      delta = this->marginalMin_[i][j] - this->oldMarginalMin_[i][j];
      delta = (delta < 0) ? (-delta) : delta;
      if (this_tEps < delta) this_tEps = delta;  // reconstructed: update elided in the listing

      // on max
      delta = this->marginalMax_[i][j] - this->oldMarginalMax_[i][j];
      delta = (delta < 0) ? (-delta) : delta;
      if (this_tEps < delta) this_tEps = delta;  // reconstructed

      this->oldMarginalMin_[i][j] = this->marginalMin_[i][j];
      this->oldMarginalMax_[i][j] = this->marginalMax_[i][j];

      if (++j == domain_size) {
        j = 0;
        ++i;
        if (i < marginalMax_size) domain_size = this->marginalMax_[i].size();
      }
    }
  };

  // launch the threads (call partly elided in the listing)
  ThreadExecutor::execute(
      nb_threads,
      threadedExec,
      (nb_threads == 1)
          ? std::vector< std::pair< NodeId, Idx > >{{0, 0}, {this->marginalMin_.size(), 0}}
          : this->threadRanges_);

  // aggregate all the results
  GUM_SCALAR eps = tEps[0];
  for (const auto nb: tEps)
    if (eps < nb) eps = nb;

  return eps;
}

References computeEpsilon_(), gum::threadsSTL::ThreadExecutor::execute(), gum::credal::InferenceEngine< GUM_SCALAR >::marginalMax_, gum::credal::InferenceEngine< GUM_SCALAR >::marginalMin_, gum::threadsSTL::ThreadExecutor::nbRunningThreadsExecutors(), gum::credal::InferenceEngine< GUM_SCALAR >::oldMarginalMax_, gum::credal::InferenceEngine< GUM_SCALAR >::oldMarginalMin_, and gum::credal::InferenceEngine< GUM_SCALAR >::threadRanges_.

Referenced by computeEpsilon_().

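Stripped of the threading machinery, the computation above amounts to taking the largest absolute change of any lower or upper marginal since the previous iteration, overwriting the old marginals along the way. A single-threaded sketch (names and types are illustrative, not the library's):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

using Marginals = std::vector<std::vector<double>>;  // one vector of bounds per node

// Epsilon is the largest absolute change of any lower or upper marginal since
// the previous iteration; old marginals are updated to the current values.
double computeEpsilon(Marginals& oldMin, Marginals& oldMax,
                      const Marginals& newMin, const Marginals& newMax) {
  double eps = 0.0;
  for (std::size_t i = 0; i < newMin.size(); ++i) {
    for (std::size_t j = 0; j < newMin[i].size(); ++j) {
      eps = std::max(eps, std::fabs(newMin[i][j] - oldMin[i][j]));
      eps = std::max(eps, std::fabs(newMax[i][j] - oldMax[i][j]));
      oldMin[i][j] = newMin[i][j];
      oldMax[i][j] = newMax[i][j];
    }
  }
  return eps;
}
```

The multi-threaded version above computes the same quantity per thread range and then aggregates the per-thread maxima.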

◆ continueApproximationScheme()

INLINE bool gum::ApproximationScheme::continueApproximationScheme ( double error)
inherited

Updates the scheme w.r.t. the new error.

Tests the stopping criteria that are enabled.

Parameters
error  The new error value.
Returns
false if the state becomes != ApproximationSchemeSTATE::Continue.
Exceptions
OperationNotAllowed  Raised if state != ApproximationSchemeSTATE::Continue.

Definition at line 229 of file approximationScheme_inl.h.

229 {
230 // For coherence, we fix the time used in the method
231
232 double timer_step = timer_.step();
233
234 if (enabled_max_time_) {
235 if (timer_step > max_time_) {
237 return false;
238 }
239 }
240
241 if (!startOfPeriod()) { return true; }
242
244 GUM_ERROR(
245 OperationNotAllowed,
246 "state of the approximation scheme is not correct : " << messageApproximationScheme());
247 }
248
249 if (verbosity()) { history_.push_back(error); }
250
251 if (enabled_max_iter_) {
252 if (current_step_ > max_iter_) {
254 return false;
255 }
256 }
257
259 current_epsilon_ = error; // eps rate isEnabled needs it so affectation was
260 // moved from eps isEnabled below
261
262 if (enabled_eps_) {
263 if (current_epsilon_ <= eps_) {
265 return false;
266 }
267 }
268
269 if (last_epsilon_ >= 0.) {
270 if (current_epsilon_ > .0) {
271 // ! current_epsilon_ can be 0. AND epsilon
272 // isEnabled can be disabled !
274 }
275 // limit with current eps ---> 0 is | 1 - ( last_eps / 0 ) | --->
276 // infinity the else means a return false if we isEnabled the rate below,
277 // as we would have returned false if epsilon isEnabled was enabled
278 else {
280 }
281
285 return false;
286 }
287 }
288 }
289
291 if (onProgress.hasListener()) {
293 }
294
295 return true;
296 } else {
297 return false;
298 }
299 }
Size current_step_
The current step.
double current_epsilon_
Current epsilon.
double last_epsilon_
Last epsilon value.
double eps_
Threshold for convergence.
bool enabled_max_time_
If true, the timeout is enabled.
Size max_iter_
The maximum iterations.
bool enabled_eps_
If true, the threshold convergence is enabled.
ApproximationSchemeSTATE current_state_
The current state.
double min_rate_eps_
Threshold for the epsilon rate.
std::vector< double > history_
The scheme history, used only if verbosity == true.
double current_rate_
Current rate.
ApproximationSchemeSTATE stateApproximationScheme() const override
Returns the approximation scheme state.
bool startOfPeriod() const
Returns true if we are at the beginning of a period (compute error is mandatory).
bool enabled_max_iter_
If true, the maximum iterations stopping criterion is enabled.
void stopScheme_(ApproximationSchemeSTATE new_state)
Stop the scheme given a new state.
bool verbosity() const override
Returns true if verbosity is enabled.
bool enabled_min_rate_eps_
If true, the minimal threshold for epsilon rate is enabled.
std::string messageApproximationScheme() const
Returns the approximation scheme message.
Signaler3< Size, double, double > onProgress
Progression, error and time.
#define GUM_ERROR(type, msg)
Definition exceptions.h:72
#define GUM_EMIT3(signal, arg1, arg2, arg3)
Definition signaler3.h:61

References enabled_max_time_, and timer_.

Referenced by gum::GibbsBNdistance< GUM_SCALAR >::computeKL_(), gum::learning::GreedyHillClimbing::learnStructure(), gum::learning::LocalSearchWithTabuList::learnStructure(), gum::SamplingInference< GUM_SCALAR >::loopApproxInference_(), gum::credal::CNLoopyPropagation< GUM_SCALAR >::makeInferenceByOrderedArcs_(), gum::credal::CNLoopyPropagation< GUM_SCALAR >::makeInferenceByRandomOrder_(), and gum::credal::CNLoopyPropagation< GUM_SCALAR >::makeInferenceNodeToNeighbours_().

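The criteria tested above (epsilon threshold, epsilon rate, iteration limit; the time limit is omitted here for brevity) can be sketched as a plain function. Names and the exact rate formula are assumptions for illustration, not the library's code:

```cpp
#include <cassert>
#include <cstddef>

// Illustrative stopping states, mirroring ApproximationSchemeSTATE.
enum class StopReason { Continue, Epsilon, Rate, Limit };

// Checks the enabled stopping criteria for one step, assuming the rate
// criterion uses the relative change of the error between two steps.
StopReason checkStop(double lastEps, double currentEps, std::size_t step,
                     double epsThreshold, double minRate, std::size_t maxIter) {
  if (step > maxIter) return StopReason::Limit;
  if (currentEps <= epsThreshold) return StopReason::Epsilon;
  if (lastEps >= 0.0 && currentEps > 0.0) {
    double rate = (currentEps - lastEps) / lastEps;  // relative error change
    if (rate < 0) rate = -rate;
    if (rate <= minRate) return StopReason::Rate;
  }
  return StopReason::Continue;
}
```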

◆ credalNet()

template<typename GUM_SCALAR>
const CredalNet< GUM_SCALAR > & gum::credal::InferenceEngine< GUM_SCALAR >::credalNet ( ) const
inherited

Get this credal network.

Returns
A constant reference to this CredalNet.

Definition at line 81 of file inferenceEngine_tpl.h.

81 {
82 return *credalNet_;
83 }

References credalNet_.

Referenced by gum::credal::CNLoopyPropagation< GUM_SCALAR >::CNLoopyPropagation(), InferenceEngine(), and gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::MultipleInferenceEngine().


◆ currentTime()

INLINE double gum::ApproximationScheme::currentTime ( ) const
overridevirtualinherited

Returns the current running time in seconds.

Returns
Returns the current running time in seconds.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 136 of file approximationScheme_inl.h.

136{ return timer_.step(); }

References timer_.

◆ disableEpsilon()

INLINE void gum::ApproximationScheme::disableEpsilon ( )
overridevirtualinherited

Disable stopping criterion on epsilon.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 74 of file approximationScheme_inl.h.

74{ enabled_eps_ = false; }

References enabled_eps_.

Referenced by gum::learning::EMApproximationScheme::EMApproximationScheme(), and gum::learning::EMApproximationScheme::setMinEpsilonRate().


◆ disableMaxIter()

INLINE void gum::ApproximationScheme::disableMaxIter ( )
overridevirtualinherited

Disable stopping criterion on max iterations.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 115 of file approximationScheme_inl.h.

115{ enabled_max_iter_ = false; }

References enabled_max_iter_.

Referenced by gum::learning::GreedyHillClimbing::GreedyHillClimbing().


◆ disableMaxTime()

INLINE void gum::ApproximationScheme::disableMaxTime ( )
overridevirtualinherited

Disable stopping criterion on timeout.


Implements gum::IApproximationSchemeConfiguration.

Definition at line 139 of file approximationScheme_inl.h.

139{ enabled_max_time_ = false; }

References enabled_max_time_.

Referenced by gum::learning::GreedyHillClimbing::GreedyHillClimbing().


◆ disableMinEpsilonRate()

INLINE void gum::ApproximationScheme::disableMinEpsilonRate ( )
overridevirtualinherited

Disable stopping criterion on epsilon rate.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 95 of file approximationScheme_inl.h.

95{ enabled_min_rate_eps_ = false; }

References enabled_min_rate_eps_.

Referenced by gum::learning::GreedyHillClimbing::GreedyHillClimbing(), gum::GibbsBNdistance< GUM_SCALAR >::computeKL_(), and gum::learning::EMApproximationScheme::setEpsilon().


◆ displatchMarginalsToThreads_()

template<typename GUM_SCALAR>
void gum::credal::InferenceEngine< GUM_SCALAR >::displatchMarginalsToThreads_ ( )
protectedinherited

Computes the vector threadRanges_, which assigns parts of marginalMin_ and marginalMax_ to each thread.

Definition at line 1133 of file inferenceEngine_tpl.h.

1133 {
1134 // we compute the number of elements in the 2 loops (over i,j in marginalMin_[i][j])
1135 Size nb_elements = 0;
1136 const auto marginalMin_size = this->marginalMin_.size();
1137 for (const auto& marg_i: this->marginalMin_)
1138 nb_elements += marg_i.second.size();
1139
1140 // distribute evenly the elements among the threads
1143
1144 // the result that we return is a vector of pairs (NodeId, Idx). For thread number i, the
1145 // pair at index i is the beginning of the range that the thread will have to process: this
1146 // is the part of the marginal distribution vector of node NodeId starting at index Idx.
1147 // The pair at index i+1 is the end of this range (not included)
1148 threadRanges_.clear();
1149 threadRanges_.reserve(nb_threads + 1);
1150
1151 // try to balance the number of elements among the threads
1154
1155 NodeId current_node = 0;
1157 Size current_domain_size = this->marginalMin_[0].size();
1159
1160 for (Idx i = Idx(0); i < nb_threads; ++i) {
1161 // compute the end of the threads, assuming that the current node has a domain
1162 // sufficiently large
1164 if (rest_elts != Idx(0)) {
1166 --rest_elts;
1167 }
1168
1169 // if the current node is not sufficient to hold all the elements that
1170 // the current thread should process. So we should add elements of the
1171 // next nodes
1174 ++current_node;
1178 }
1179 }
1180
1181 // now we can store the range if elements
1183
1184 // compute the next begin_node
1186 ++current_node;
1188 }
1189 }
1190 }
virtual Size getNumberOfThreads() const
returns the current max number of threads used by the class containing this ThreadNumberManager
Size Idx
Type for indexes.
Definition types.h:79

References gum::ThreadNumberManager::getNumberOfThreads(), and threadRanges_.

Referenced by initMarginals_().

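The dispatching described above can be sketched as follows: given each node's domain size, build nb_threads + 1 (node, index) boundaries so that each thread processes roughly the same number of cells of the jagged marginal table. This is an illustrative reimplementation under those assumptions, not aGrUM's code:

```cpp
#include <cassert>
#include <cstddef>
#include <utility>
#include <vector>

// Returns nbThreads + 1 (node, index) boundaries; thread t processes the
// half-open range [ranges[t], ranges[t + 1]). Assumes nbThreads > 0 and that
// domainSizes lists each node's marginal vector length.
std::vector<std::pair<std::size_t, std::size_t>>
dispatchRanges(const std::vector<std::size_t>& domainSizes, std::size_t nbThreads) {
  std::size_t total = 0;
  for (auto s : domainSizes) total += s;

  std::vector<std::pair<std::size_t, std::size_t>> ranges;
  ranges.reserve(nbThreads + 1);
  ranges.emplace_back(0, 0);

  std::size_t node = 0, idx = 0;
  for (std::size_t t = 0; t < nbThreads; ++t) {
    // elements for this thread, spreading the remainder over the first threads
    std::size_t count = total / nbThreads + (t < total % nbThreads ? 1 : 0);
    while (count > 0) {
      std::size_t avail = domainSizes[node] - idx;
      if (count < avail) { idx += count; count = 0; }
      else { count -= avail; ++node; idx = 0; }
    }
    ranges.emplace_back(node, idx);
  }
  return ranges;
}
```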

◆ dynamicExpectations()

template<typename GUM_SCALAR>
void gum::credal::InferenceEngine< GUM_SCALAR >::dynamicExpectations ( )
inherited

Compute dynamic expectations.

See also
dynamicExpectations_. Only call this if an algorithm does not call it itself.

Definition at line 739 of file inferenceEngine_tpl.h.

739 {
741 }
void dynamicExpectations_()
Rearrange lower and upper expectations to suit dynamic networks.

References dynamicExpectations_().


◆ dynamicExpectations_()

template<typename GUM_SCALAR>
void gum::credal::InferenceEngine< GUM_SCALAR >::dynamicExpectations_ ( )
protectedinherited

Rearrange lower and upper expectations to suit dynamic networks.

Definition at line 744 of file inferenceEngine_tpl.h.

744 {
745 // no modals, no expectations computed during inference
746 if (expectationMin_.empty() || modal_.empty()) return;
747
748 // already called by the algorithm or the user
749 if (dynamicExpMax_.size() > 0 && dynamicExpMin_.size() > 0) return;
750
752
754
755
756 // if non dynamic, directly save expectationMin_ et Max (same but faster)
758
759 for (const auto& elt: expectationMin_) {
761
762 var_name = credalNet_->current_bn().variable(elt.first).name();
763 auto delim = var_name.find_first_of("_");
764 time_step = var_name.substr(delim + 1, var_name.size());
765 var_name = var_name.substr(0, delim);
766
767 // to be sure (don't store not monitored variables' expectations)
768 // although it
769 // should be taken care of before this point
770 if (!modal_.exists(var_name)) continue;
771
772 expectationsMin.getWithDefault(var_name, innerMap())
773 .getWithDefault(atoi(time_step.c_str()), 0)
774 = elt.second; // we iterate with min iterators
775 expectationsMax.getWithDefault(var_name, innerMap())
776 .getWithDefault(atoi(time_step.c_str()), 0)
777 = expectationMax_[elt.first];
778 }
779
780 for (const auto& elt: expectationsMin) {
781 typename std::vector< GUM_SCALAR > dynExp(elt.second.size());
782
783 for (const auto& elt2: elt.second)
784 dynExp[elt2.first] = elt2.second;
785
786 dynamicExpMin_.insert(elt.first, dynExp);
787 }
788
789 for (const auto& elt: expectationsMax) {
790 typename std::vector< GUM_SCALAR > dynExp(elt.second.size());
791
792 for (const auto& elt2: elt.second) {
793 dynExp[elt2.first] = elt2.second;
794 }
795
796 dynamicExpMax_.insert(elt.first, dynExp);
797 }
798 }
dynExpe dynamicExpMin_
Lower dynamic expectations.
dynExpe dynamicExpMax_
Upper dynamic expectations.
expe expectationMax_
Upper expectations, if some variables modalities were inserted.
dynExpe modal_
Variables modalities used to compute expectations.
expe expectationMin_
Lower expectations, if some variables modalities were inserted.

References credalNet_, dynamicExpMax_, dynamicExpMin_, expectationMax_, expectationMin_, and modal_.

Referenced by dynamicExpectations().

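The name handling above assumes dynamic-network variables are named "&lt;prefix&gt;_&lt;timestep&gt;" (e.g. "temp_0", "temp_1"). A sketch of the regrouping step with hypothetical types; like the code above, it assumes every name contains an underscore followed by an integer:

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// Regroups per-variable scalars (keyed by "<prefix>_<timestep>") into one
// vector per prefix, indexed by time step.
std::map<std::string, std::vector<double>>
regroupByPrefix(const std::map<std::string, double>& perVariable) {
  std::map<std::string, std::map<int, double>> byPrefix;
  for (const auto& [name, value] : perVariable) {
    auto delim = name.find_first_of('_');           // assumed present
    std::string prefix = name.substr(0, delim);
    int step = std::stoi(name.substr(delim + 1));   // assumed an integer
    byPrefix[prefix][step] = value;
  }
  std::map<std::string, std::vector<double>> result;
  for (const auto& [prefix, steps] : byPrefix) {
    std::vector<double> v(steps.size());
    for (const auto& [step, value] : steps)
      v[static_cast<std::size_t>(step)] = value;
    result[prefix] = v;
  }
  return result;
}
```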

◆ dynamicExpMax()

template<typename GUM_SCALAR>
const std::vector< GUM_SCALAR > & gum::credal::InferenceEngine< GUM_SCALAR >::dynamicExpMax ( const std::string & varName) const
inherited

Get the upper dynamic expectation of a given variable prefix (without the time step included, i.e. call with "temp" to get "temp_0", ..., "temp_T").

Parameters
varName  The variable name prefix whose upper expectation we want.
Returns
A constant reference to the variable upper expectation over all time steps.

Definition at line 534 of file inferenceEngine_tpl.h.

534 {
535 std::string errTxt = "const std::vector< GUM_SCALAR > & InferenceEngine< "
536 "GUM_SCALAR >::dynamicExpMax ( const std::string & "
537 "varName ) const : ";
538
539 if (dynamicExpMax_.empty())
540 GUM_ERROR(OperationNotAllowed, errTxt + "_dynamicExpectations() needs to be called before")
541
542 if (!dynamicExpMax_.exists(varName) /*dynamicExpMin_.find(varName) == dynamicExpMin_.end()*/)
543 GUM_ERROR(NotFound, errTxt + "variable name not found : " << varName)
544
546 }

References InferenceEngine(), dynamicExpMax(), and dynamicExpMax_.

Referenced by dynamicExpMax().


◆ dynamicExpMin()

template<typename GUM_SCALAR>
const std::vector< GUM_SCALAR > & gum::credal::InferenceEngine< GUM_SCALAR >::dynamicExpMin ( const std::string & varName) const
inherited

Get the lower dynamic expectation of a given variable prefix (without the time step included, i.e. call with "temp" to get "temp_0", ..., "temp_T").

Parameters
varName  The variable name prefix whose lower expectation we want.
Returns
A constant reference to the variable lower expectation over all time steps.

Definition at line 518 of file inferenceEngine_tpl.h.

518 {
519 std::string errTxt = "const std::vector< GUM_SCALAR > & InferenceEngine< "
520 "GUM_SCALAR >::dynamicExpMin ( const std::string & "
521 "varName ) const : ";
522
523 if (dynamicExpMin_.empty())
524 GUM_ERROR(OperationNotAllowed, errTxt + "_dynamicExpectations() needs to be called before")
525
526 if (!dynamicExpMin_.exists(varName) /*dynamicExpMin_.find(varName) == dynamicExpMin_.end()*/)
527 GUM_ERROR(NotFound, errTxt + "variable name not found : " << varName)
528
530 }

◆ enableEpsilon()

INLINE void gum::ApproximationScheme::enableEpsilon ( )
overridevirtualinherited

Enable stopping criterion on epsilon.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 77 of file approximationScheme_inl.h.

77{ enabled_eps_ = true; }

References enabled_eps_.

◆ enableMaxIter()

INLINE void gum::ApproximationScheme::enableMaxIter ( )
overridevirtualinherited

Enable stopping criterion on max iterations.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 118 of file approximationScheme_inl.h.

118{ enabled_max_iter_ = true; }

References enabled_max_iter_.

◆ enableMaxTime()

INLINE void gum::ApproximationScheme::enableMaxTime ( )
overridevirtualinherited

Enable stopping criterion on timeout.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 142 of file approximationScheme_inl.h.

142{ enabled_max_time_ = true; }

References enabled_max_time_.

◆ enableMinEpsilonRate()

INLINE void gum::ApproximationScheme::enableMinEpsilonRate ( )
overridevirtualinherited

Enable stopping criterion on epsilon rate.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 98 of file approximationScheme_inl.h.

98{ enabled_min_rate_eps_ = true; }

References enabled_min_rate_eps_.

Referenced by gum::learning::EMApproximationScheme::EMApproximationScheme(), and gum::GibbsBNdistance< GUM_SCALAR >::computeKL_().


◆ epsilon()

INLINE double gum::ApproximationScheme::epsilon ( ) const
overridevirtualinherited

Returns the value of epsilon.

Returns
Returns the value of epsilon.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 71 of file approximationScheme_inl.h.

71{ return eps_; }

References eps_.

Referenced by gum::ImportanceSampling< GUM_SCALAR >::onContextualize_(), and gum::ImportanceSampling< GUM_SCALAR >::unsharpenBN_().


◆ eraseAllEvidence()

template<typename GUM_SCALAR, class BNInferenceEngine>
void gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::eraseAllEvidence ( )
virtual

Erase all inference-related data so another inference can be performed.

Evidence must be entered again if needed, but modalities are kept. New modalities can be inserted with the appropriate method, which deletes the old ones.

Reimplemented from gum::credal::InferenceEngine< GUM_SCALAR >.

Definition at line 782 of file multipleInferenceEngine_tpl.h.

782 {
784 Size tsize = Size(workingSet_.size());
785
786 // delete pointers
787 for (Size bn = 0; bn < tsize; bn++) {
789
790 if (workingSet_[bn] != nullptr) delete workingSet_[bn];
791
793 if (l_inferenceEngine_[bn] != nullptr) delete l_optimalNet_[bn];
794
795 if (this->workingSetE_[bn] != nullptr) {
796 for (const auto ev: *workingSetE_[bn])
797 delete ev;
798
799 delete workingSetE_[bn];
800 }
801
802 if (l_inferenceEngine_[bn] != nullptr) delete l_inferenceEngine_[bn];
803 }
804
805 // this is important, those will be resized with the correct number of
806 // threads.
807
808 workingSet_.clear();
809 workingSetE_.clear();
810 l_inferenceEngine_.clear();
811 l_optimalNet_.clear();
812
813 l_marginalMin_.clear();
814 l_marginalMax_.clear();
815 l_expectationMin_.clear();
816 l_expectationMax_.clear();
817 l_modal_.clear();
818 l_marginalSets_.clear();
819 l_evidence_.clear();
820 l_clusters_.clear();
821 }
bool storeBNOpt_
Iterations limit stopping rule used by some algorithms such as CNMonteCarloSampling.
bool storeVertices_
True if credal sets vertices are stored, False otherwise.
virtual void eraseAllEvidence()
removes all the evidence entered into the network
_margis_ l_marginalMin_
Threads lower marginals, one per thread.
_expes_ l_expectationMax_
Threads upper expectations, one per thread.
_expes_ l_expectationMin_
Threads lower expectations, one per thread.
std::vector< _bnet_ * > workingSet_
Threads IBayesNet.
std::vector< BNInferenceEngine * > l_inferenceEngine_
Threads BNInferenceEngine.
std::vector< VarMod2BNsMap< GUM_SCALAR > * > l_optimalNet_
Threads optimal IBayesNet.
_margis_ l_marginalMax_
Threads upper marginals, one per thread.
std::vector< List< const Tensor< GUM_SCALAR > * > * > workingSetE_
Threads evidence.

References gum::credal::InferenceEngine< GUM_SCALAR >::eraseAllEvidence(), eraseAllEvidence(), l_clusters_, l_evidence_, l_expectationMax_, l_expectationMin_, l_inferenceEngine_, l_marginalMax_, l_marginalMin_, l_marginalSets_, l_modal_, l_optimalNet_, gum::credal::InferenceEngine< GUM_SCALAR >::storeBNOpt_, gum::credal::InferenceEngine< GUM_SCALAR >::storeVertices_, workingSet_, and workingSetE_.

Referenced by eraseAllEvidence().

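The cleanup above repeatedly applies a single pattern: delete every owned raw pointer (nullptr included, for which delete is a no-op) and clear the container so it can be resized with the correct number of threads on the next inference. A generic sketch of that pattern:

```cpp
#include <cassert>
#include <vector>

// Deletes each element of a vector of owned raw pointers, then clears it so
// the vector can be resized for the next run. delete on nullptr is a no-op.
template <typename T>
void deleteAndClear(std::vector<T*>& v) {
  for (T* p : v) delete p;
  v.clear();
}
```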

◆ expectationMax() [1/2]

template<typename GUM_SCALAR>
const GUM_SCALAR & gum::credal::InferenceEngine< GUM_SCALAR >::expectationMax ( const NodeId id) const
inherited

Get the upper expectation of a given node id.

Parameters
id  The node id whose upper expectation we want.
Returns
A constant reference to this node upper expectation.

Definition at line 510 of file inferenceEngine_tpl.h.

510 {
511 try {
512 return expectationMax_[id];
513 } catch (NotFound& err) { throw(err); }
514 }

References expectationMax_.

◆ expectationMax() [2/2]

template<typename GUM_SCALAR>
const GUM_SCALAR & gum::credal::InferenceEngine< GUM_SCALAR >::expectationMax ( const std::string & varName) const
inherited

Get the upper expectation of a given variable name.

Parameters
varName  The variable name whose upper expectation we want.
Returns
A constant reference to this variable upper expectation.

Definition at line 496 of file inferenceEngine_tpl.h.

496 {
497 try {
498 return expectationMax_[credalNet_->current_bn().idFromName(varName)];
499 } catch (NotFound& err) { throw(err); }
500 }

References credalNet_, and expectationMax_.

◆ expectationMin() [1/2]

template<typename GUM_SCALAR>
const GUM_SCALAR & gum::credal::InferenceEngine< GUM_SCALAR >::expectationMin ( const NodeId id) const
inherited

Get the lower expectation of a given node id.

Parameters
id  The node id whose lower expectation we want.
Returns
A constant reference to this node lower expectation.

Definition at line 503 of file inferenceEngine_tpl.h.

503 {
504 try {
505 return expectationMin_[id];
506 } catch (NotFound& err) { throw(err); }
507 }

References expectationMin_.

◆ expectationMin() [2/2]

template<typename GUM_SCALAR>
const GUM_SCALAR & gum::credal::InferenceEngine< GUM_SCALAR >::expectationMin ( const std::string & varName) const
inherited

Get the lower expectation of a given variable name.

Parameters
varName  The variable name whose lower expectation we want.
Returns
A constant reference to this variable lower expectation.

Definition at line 488 of file inferenceEngine_tpl.h.

488 {
489 try {
490 return expectationMin_[credalNet_->current_bn().idFromName(varName)];
491 } catch (NotFound& err) { throw(err); }
492 }

References credalNet_, and expectationMin_.

◆ expFusion_()

template<typename GUM_SCALAR, class BNInferenceEngine>
void gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::expFusion_ ( )
protected

Fusion of threads expectations.

Definition at line 576 of file multipleInferenceEngine_tpl.h.

576 {
577 // don't create threads if there are no modalities to compute expectations
578 if (this->modal_.empty()) return;
579
580 // compute the max number of threads to use (avoid nested threads)
583 : 1; // no nested multithreading
584
585 // we can compute expectations from vertices of the final credal set
587 // create the function to be executed by the threads
588 auto threadedExec = [this](const std::size_t this_thread,
593 std::string var_name = workingSet_[work_index]->variable(i).name();
594 auto delim = var_name.find_first_of("_");
595 var_name = var_name.substr(0, delim);
596
597 if (!l_modal_[work_index].exists(var_name)) continue;
598
599 for (const auto& vertex: _infE_::marginalSets_[i]) {
600 GUM_SCALAR exp = 0;
601 Size vsize = Size(vertex.size());
602
603 for (Size mod = 0; mod < vsize; mod++)
605
607
609 }
610 }
611 };
612
613 const Size working_size = workingSet_.size();
615 if (!this->l_modal_[work_index].empty()) {
616 // compute the ranges over which the threads will work
617 const auto nsize = workingSet_[work_index]->size();
619 const auto ranges
622 }
623 }
624
625 return;
626 }
627
628 // create the function to be executed by the threads
629 auto threadedExec = [this](const std::size_t this_thread,
634 std::string var_name = workingSet_[work_index]->variable(i).name();
635 auto delim = var_name.find_first_of("_");
636 var_name = var_name.substr(0, delim);
637
638 if (!l_modal_[work_index].exists(var_name)) continue;
639
641
642 for (Idx tId = 0; tId < tsize; tId++) {
643 if (l_expectationMax_[tId][i] > this->expectationMax_[i])
645
646 if (l_expectationMin_[tId][i] < this->expectationMin_[i])
648 } // end of : each thread
649 } // end of : each variable
650 };
651
652 const Size working_size = workingSet_.size();
654 if (!this->l_modal_[work_index].empty()) {
655 const auto nsize = Size(workingSet_[work_index]->size());
657 const auto ranges
660 }
661 }
662 }
credalSet marginalSets_
Credal sets vertices, if enabled.
std::vector< std::pair< Idx, Idx > > dispatchRangeToThreads(Idx beg, Idx end, unsigned int nb_threads)
returns a vector equally splitting elements of a range among threads
Definition threads.cpp:76

References gum::dispatchRangeToThreads(), gum::threadsSTL::ThreadExecutor::execute(), gum::credal::InferenceEngine< GUM_SCALAR >::expectationMax_, gum::credal::InferenceEngine< GUM_SCALAR >::expectationMin_, expFusion_(), gum::ThreadNumberManager::getNumberOfThreads(), l_expectationMax_, l_expectationMin_, l_modal_, gum::credal::InferenceEngine< GUM_SCALAR >::marginalSets_, gum::credal::InferenceEngine< GUM_SCALAR >::modal_, gum::threadsSTL::ThreadExecutor::nbRunningThreadsExecutors(), gum::credal::InferenceEngine< GUM_SCALAR >::storeVertices_, and workingSet_.

Referenced by expFusion_().

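Without the threading and the vertex-based branch, the fusion above reduces to: for each variable state, the global lower expectation is the minimum over all threads and the global upper expectation the maximum. A sketch with illustrative types:

```cpp
#include <cassert>
#include <vector>

// Fuses per-thread expectation bounds into global bounds: keep the smallest
// lower value and the largest upper value seen by any thread, per entry.
void fuseExpectations(const std::vector<std::vector<double>>& threadMin,
                      const std::vector<std::vector<double>>& threadMax,
                      std::vector<double>& globalMin,
                      std::vector<double>& globalMax) {
  for (std::size_t t = 0; t < threadMin.size(); ++t)
    for (std::size_t i = 0; i < threadMin[t].size(); ++i) {
      if (threadMin[t][i] < globalMin[i]) globalMin[i] = threadMin[t][i];
      if (threadMax[t][i] > globalMax[i]) globalMax[i] = threadMax[t][i];
    }
}
```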

◆ getApproximationSchemeMsg()

template<typename GUM_SCALAR>
const std::string gum::credal::InferenceEngine< GUM_SCALAR >::getApproximationSchemeMsg ( )
inlineinherited

Get approximation scheme state.

Returns
A constant string about approximation scheme state.

Definition at line 598 of file inferenceEngine.h.

598{ return this->messageApproximationScheme(); }

References gum::IApproximationSchemeConfiguration::messageApproximationScheme().


◆ getNumberOfThreads()

virtual Size gum::ThreadNumberManager::getNumberOfThreads ( ) const
virtualinherited

◆ getT0Cluster()

template<typename GUM_SCALAR>
const NodeProperty< std::vector< NodeId > > & gum::credal::InferenceEngine< GUM_SCALAR >::getT0Cluster ( ) const
inherited

Get the t0_ cluster.

Returns
A constant reference to the t0_ cluster.

Definition at line 1007 of file inferenceEngine_tpl.h.

1007 {
1008 return t0_;
1009 }
cluster t0_
Clusters of nodes used with dynamic networks.

References t0_.

◆ getT1Cluster()

template<typename GUM_SCALAR>
const NodeProperty< std::vector< NodeId > > & gum::credal::InferenceEngine< GUM_SCALAR >::getT1Cluster ( ) const
inherited

Get the t1_ cluster.

Returns
A constant reference to the t1_ cluster.

Definition at line 1013 of file inferenceEngine_tpl.h.

1013 {
1014 return t1_;
1015 }
cluster t1_
Clusters of nodes used with dynamic networks.

References t1_.

◆ getVarMod2BNsMap()

template<typename GUM_SCALAR>
VarMod2BNsMap< GUM_SCALAR > * gum::credal::InferenceEngine< GUM_SCALAR >::getVarMod2BNsMap ( )
inherited

Get the optimal IBayesNet.

Returns
A pointer to the optimal net object.

Definition at line 163 of file inferenceEngine_tpl.h.

163 {
164 return &dbnOpt_;
165 }
VarMod2BNsMap< GUM_SCALAR > dbnOpt_
Object used to efficiently store optimal bayes net during inference, for some algorithms.

References dbnOpt_.

◆ history()

INLINE const std::vector< double > & gum::ApproximationScheme::history ( ) const
overridevirtualinherited

Returns the scheme history.

Returns
Returns the scheme history.
Exceptions
OperationNotAllowed  Raised if the scheme has not been performed or if verbosity is set to false.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 178 of file approximationScheme_inl.h.

178 {
180 GUM_ERROR(OperationNotAllowed, "state of the approximation scheme is udefined")
181 }
182
183 if (!verbosity()) GUM_ERROR(OperationNotAllowed, "No history when verbosity=false")
184
185 return history_;
186 }

References GUM_ERROR, stateApproximationScheme(), and gum::IApproximationSchemeConfiguration::Undefined.


◆ initApproximationScheme()

INLINE void gum::ApproximationScheme::initApproximationScheme ( )
inherited

Initialise the scheme.

Definition at line 189 of file approximationScheme_inl.h.

189 {
191 current_step_ = 0;
193 history_.clear();
194 timer_.reset();
195 }

References ApproximationScheme(), gum::IApproximationSchemeConfiguration::Continue, current_epsilon_, current_rate_, current_state_, current_step_, and initApproximationScheme().

Referenced by gum::GibbsBNdistance< GUM_SCALAR >::computeKL_(), initApproximationScheme(), gum::learning::GreedyHillClimbing::learnStructure(), gum::learning::LocalSearchWithTabuList::learnStructure(), gum::SamplingInference< GUM_SCALAR >::loopApproxInference_(), gum::credal::CNLoopyPropagation< GUM_SCALAR >::makeInference(), and gum::SamplingInference< GUM_SCALAR >::onStateChanged_().


◆ initExpectations_()

template<typename GUM_SCALAR>
void gum::credal::InferenceEngine< GUM_SCALAR >::initExpectations_ ( )
protectedinherited

Initialize lower and upper expectations before inference, with the lower expectation being initialized on the highest modality and the upper expectation being initialized on the lowest modality.

Definition at line 718 of file inferenceEngine_tpl.h.

718 {
719 expectationMin_.clear();
720 expectationMax_.clear();
721
722 if (modal_.empty()) return;
723
724 for (auto node: credalNet_->current_bn().nodes()) {
726
727 var_name = credalNet_->current_bn().variable(node).name();
728 auto delim = var_name.find_first_of("_");
729 var_name = var_name.substr(0, delim);
730
731 if (!modal_.exists(var_name)) continue;
732
735 }
736 }

References credalNet_, expectationMax_, expectationMin_, and modal_.

Referenced by eraseAllEvidence().


◆ initMarginals_()

template<typename GUM_SCALAR>
void gum::credal::InferenceEngine< GUM_SCALAR >::initMarginals_ ( )
protectedinherited

Initialize lower and upper old marginals and marginals before inference, with the lower marginal being 1 and the upper 0.

Definition at line 682 of file inferenceEngine_tpl.h.

682 {
683 marginalMin_.clear();
684 marginalMax_.clear();
685 oldMarginalMin_.clear();
686 oldMarginalMax_.clear();
687
688 for (auto node: credalNet_->current_bn().nodes()) {
689 auto dSize = credalNet_->current_bn().variable(node).domainSize();
692
695 }
696
697 // now that we know the sizes of marginalMin_ and marginalMax_, we can
698 // dispatch their processes to the threads
700 }
void displatchMarginalsToThreads_()
Computes the vector threadRanges_, which assigns parts of marginalMin_ and marginalMax_ to each thread.

References credalNet_, displatchMarginalsToThreads_(), marginalMax_, marginalMin_, oldMarginalMax_, and oldMarginalMin_.

Referenced by InferenceEngine(), and eraseAllEvidence().

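The deliberately inverted initialization above (lower marginals at 1, upper at 0) guarantees that the first inference iteration tightens both bounds, since any computed probability lies in [0, 1]. A sketch with hypothetical container types:

```cpp
#include <cassert>
#include <map>
#include <vector>

// Initializes per-node bound vectors: lower bounds start at 1 and upper
// bounds at 0, so the first real probabilities necessarily improve both.
void initMarginals(const std::map<int, std::size_t>& domainSizes,
                   std::map<int, std::vector<double>>& marginalMin,
                   std::map<int, std::vector<double>>& marginalMax) {
  marginalMin.clear();
  marginalMax.clear();
  for (const auto& [node, size] : domainSizes) {
    marginalMin[node] = std::vector<double>(size, 1.0);
    marginalMax[node] = std::vector<double>(size, 0.0);
  }
}
```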

◆ initMarginalSets_()

template<typename GUM_SCALAR>
void gum::credal::InferenceEngine< GUM_SCALAR >::initMarginalSets_ ( )
protected, inherited

Initialize credal set vertices with empty sets.

Definition at line 703 of file inferenceEngine_tpl.h.

703 {
704 marginalSets_.clear();
705
706 if (!storeVertices_) return;
707
708 for (auto node: credalNet_->current_bn().nodes())
709 marginalSets_.insert(node, std::vector< std::vector< GUM_SCALAR > >());
710 }

References credalNet_, marginalSets_, and storeVertices_.

Referenced by eraseAllEvidence(), and storeVertices().

Here is the caller graph for this function:

◆ initThreadsData_()

template<typename GUM_SCALAR, class BNInferenceEngine>
void gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::initThreadsData_ ( const Size & num_threads,
const bool _storeVertices_,
const bool _storeBNOpt_ )
inline, protected

Initialize threads data.

Parameters
num_threads The number of threads.
_storeVertices_ True if vertices should be stored, False otherwise.
_storeBNOpt_ True if optimal IBayesNet should be stored, False otherwise.

Definition at line 62 of file multipleInferenceEngine_tpl.h.

65 {
66 workingSet_.clear();
67 workingSet_.resize(num_threads, nullptr);
68 workingSetE_.clear();
69 workingSetE_.resize(num_threads, nullptr);
70
71 l_marginalMin_.clear();
72 l_marginalMin_.resize(num_threads);
73 l_marginalMax_.clear();
74 l_marginalMax_.resize(num_threads);
75 l_expectationMin_.clear();
76 l_expectationMin_.resize(num_threads);
77 l_expectationMax_.clear();
78 l_expectationMax_.resize(num_threads);
79
80 l_clusters_.clear();
81 l_clusters_.resize(num_threads);
82
83 if (_storeVertices_) {
84 l_marginalSets_.clear();
85 l_marginalSets_.resize(num_threads);
86 }
87
88 if (_storeBNOpt_) {
89 for (Size ptr = 0; ptr < this->l_optimalNet_.size(); ptr++)
90 if (this->l_optimalNet_[ptr] != nullptr) delete l_optimalNet_[ptr];
91
92 l_optimalNet_.clear();
93 l_optimalNet_.resize(num_threads, nullptr);
94 }
95
96 l_modal_.clear();
97 l_modal_.resize(num_threads);
98
99 this->oldMarginalMin_.clear();
100 this->oldMarginalMin_ = this->marginalMin_;
101 this->oldMarginalMax_.clear();
102 this->oldMarginalMax_ = this->marginalMax_;
103
104 // init the random number generators
105 generators_.clear();
106 generators_.resize(num_threads);
107 auto seed = currentRandomGeneratorValue();   // reconstructed: initial seed from the global generator
108 for (auto& generator: generators_) {
109 generator.seed(seed);
110 seed = generator();
111 }
112 }
std::vector< std::mt19937 > generators_
the generators used for computing random values
unsigned int currentRandomGeneratorValue()
returns the current generator's value

References generators_, l_clusters_, l_expectationMax_, l_expectationMin_, l_marginalMax_, l_marginalMin_, l_marginalSets_, l_modal_, l_optimalNet_, gum::credal::InferenceEngine< GUM_SCALAR >::marginalMax_, gum::credal::InferenceEngine< GUM_SCALAR >::marginalMin_, gum::credal::InferenceEngine< GUM_SCALAR >::oldMarginalMax_, gum::credal::InferenceEngine< GUM_SCALAR >::oldMarginalMin_, workingSet_, and workingSetE_.

◆ insertEvidence() [1/2]

template<typename GUM_SCALAR>
void gum::credal::InferenceEngine< GUM_SCALAR >::insertEvidence ( const NodeProperty< std::vector< GUM_SCALAR > > & evidence)
inherited

Insert evidence from Property.

Parameters
evidence The Property on nodes containing the likelihoods.

Definition at line 277 of file inferenceEngine_tpl.h.

278 {
279 if (!evidence_.empty()) evidence_.clear();
280
281 // use cbegin() to get const_iterator when available in aGrUM hashtables
282 for (const auto& elt: evidence) {
283 try {
284 credalNet_->current_bn().variable(elt.first);
285 } catch (NotFound& err) {
286 GUM_SHOWERROR(err);
287 continue;
288 }
289
290 evidence_.insert(elt.first, elt.second);
291 }
292
293 // forces the computation of the begin iterator to avoid subsequent data races
294 // @TODO make HashTableConstIterator constructors thread safe
295 evidence_.begin();
296 }
#define GUM_SHOWERROR(e)
Definition exceptions.h:85

References credalNet_, evidence_, and GUM_SHOWERROR.

◆ insertEvidence() [2/2]

template<typename GUM_SCALAR>
void gum::credal::InferenceEngine< GUM_SCALAR >::insertEvidence ( const std::map< std::string, std::vector< GUM_SCALAR > > & eviMap)
inherited

Insert evidence from map.

Parameters
eviMap The map from variable name to likelihood.

Definition at line 251 of file inferenceEngine_tpl.h.

252 {
253 if (!evidence_.empty()) evidence_.clear();
254
255 for (auto it = eviMap.cbegin(), theEnd = eviMap.cend(); it != theEnd; ++it) {
256 NodeId id;
257
258 try {
259 id = credalNet_->current_bn().idFromName(it->first);
260 } catch (NotFound& err) {
261 GUM_SHOWERROR(err);
262 continue;
263 }
264
265 evidence_.insert(id, it->second);
266 }
267
268 // forces the computation of the begin iterator to avoid subsequent data races
269 // @TODO make HashTableConstIterator constructors thread safe
270 evidence_.begin();
271 }

References credalNet_, evidence_, and GUM_SHOWERROR.

◆ insertEvidenceFile()

template<typename GUM_SCALAR>
void gum::credal::InferenceEngine< GUM_SCALAR >::insertEvidenceFile ( const std::string & path)
virtual, inherited

Insert evidence from file.

Parameters
path The path to the evidence file.

Reimplemented in gum::credal::CNLoopyPropagation< GUM_SCALAR >, and gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >.

Definition at line 299 of file inferenceEngine_tpl.h.

299 {
300 std::ifstream evi_stream(path.c_str(), std::ios::in);
301
302 if (!evi_stream.good()) {
303 GUM_ERROR(IOError,
304 "void InferenceEngine< GUM_SCALAR "
305 ">::insertEvidence(const std::string & path) : could not "
306 "open input file : "
307 << path);
308 }
309
310 if (!evidence_.empty()) evidence_.clear();
311
312 std::string line, tmp;
313 char * cstr, *p;
314
315 while (evi_stream.good() && std::strcmp(line.c_str(), "[EVIDENCE]") != 0) {
316 getline(evi_stream, line);
317 }
318
319 while (evi_stream.good()) {
320 getline(evi_stream, line);
321
322 if (std::strcmp(line.c_str(), "[QUERY]") == 0) break;
323
324 if (line.size() == 0) continue;
325
326 cstr = new char[line.size() + 1];
327 strcpy(cstr, line.c_str());
328
329 p = strtok(cstr, " ");
330 tmp = p;
331
332 // if user input is wrong
333 NodeId node = -1;
334
335 try {
336 node = credalNet_->current_bn().idFromName(tmp);
337 } catch (NotFound& err) {
338 GUM_SHOWERROR(err);
339 continue;
340 }
341
342 std::vector< GUM_SCALAR > values;
343 p = strtok(nullptr, " ");
344
345 while (p != nullptr) {
346 values.push_back(GUM_SCALAR(atof(p)));
347 p = strtok(nullptr, " ");
348 } // end of : line
349
350 evidence_.insert(node, values);
351
352 delete[] p;
353 delete[] cstr;
354 } // end of : file
355
356 evi_stream.close();
357
358 // forces the computation of the begin iterator to avoid subsequent data races
359 // @TODO make HashTableConstIterator constructors thread safe
360 evidence_.begin();
361 }

Referenced by gum::credal::CNLoopyPropagation< GUM_SCALAR >::insertEvidenceFile(), and gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::insertEvidenceFile().

Here is the caller graph for this function:

◆ insertModals()

template<typename GUM_SCALAR>
void gum::credal::InferenceEngine< GUM_SCALAR >::insertModals ( const std::map< std::string, std::vector< GUM_SCALAR > > & modals)
inherited

Insert variables modalities from map to compute expectations.

Parameters
modals The map from variable name to modalities.

Definition at line 215 of file inferenceEngine_tpl.h.

216 {
217 if (!modal_.empty()) modal_.clear();
218
219 for (auto it = modals.cbegin(), theEnd = modals.cend(); it != theEnd; ++it) {
220 NodeId id;
221
222 try {
223 id = credalNet_->current_bn().idFromName(it->first);
224 } catch (NotFound& err) {
225 GUM_SHOWERROR(err);
226 continue;
227 }
228
229 // check that modals are net compatible
230 auto dSize = credalNet_->current_bn().variable(id).domainSize();
231
232 if (dSize != it->second.size()) continue;
233
234 // GUM_ERROR(OperationNotAllowed, "void InferenceEngine< GUM_SCALAR
235 // >::insertModals( const std::map< std::string, std::vector< GUM_SCALAR
236 // > >
237 // &modals) : modalities does not respect variable cardinality : " <<
238 // credalNet_->current_bn().variable( id ).name() << " : " << dSize << "
239 // != "
240 // << it->second.size());
241
242 modal_.insert(it->first, it->second); //[ it->first ] = it->second;
243 }
244
245 //_modal = modals;
246
247 initExpectations_();
248 }
void initExpectations_()
Initialize lower and upper expectations before inference, with the lower expectation being initialize...

References credalNet_, GUM_SHOWERROR, and modal_.

◆ insertModalsFile()

template<typename GUM_SCALAR>
void gum::credal::InferenceEngine< GUM_SCALAR >::insertModalsFile ( const std::string & path)
inherited

Insert variables modalities from file to compute expectations.

Parameters
path The path to the modalities file.

Definition at line 168 of file inferenceEngine_tpl.h.

168 {
169 std::ifstream mod_stream(path.c_str(), std::ios::in);
170
171 if (!mod_stream.good()) {
172 GUM_ERROR(IOError,
173 "void InferenceEngine< GUM_SCALAR "
174 ">::insertModals(const std::string & path) : "
175 "could not open input file : "
176 << path);
177 }
178
179 if (!modal_.empty()) modal_.clear();
180
181 std::string line, tmp;
182 char * cstr, *p;
183
184 while (mod_stream.good()) {
185 getline(mod_stream, line);
186
187 if (line.size() == 0) continue;
188
189 cstr = new char[line.size() + 1];
190 strcpy(cstr, line.c_str());
191
192 p = strtok(cstr, " ");
193 tmp = p;
194
195 std::vector< GUM_SCALAR > values;
196 p = strtok(nullptr, " ");
197
198 while (p != nullptr) {
199 values.push_back(GUM_SCALAR(atof(p)));
200 p = strtok(nullptr, " ");
201 } // end of : line
202
203 modal_.insert(tmp, values); //[tmp] = values;
204
205 delete[] p;
206 delete[] cstr;
207 } // end of : file
208
209 mod_stream.close();
210
211 initExpectations_();
212 }

References GUM_ERROR, and modal_.

◆ insertQuery()

template<typename GUM_SCALAR>
void gum::credal::InferenceEngine< GUM_SCALAR >::insertQuery ( const NodeProperty< std::vector< bool > > & query)
inherited

Insert query variables and states from Property.

Parameters
query The Property on nodes containing the queried variables' states.

Definition at line 364 of file inferenceEngine_tpl.h.

365 {
366 if (!query_.empty()) query_.clear();
367
368 for (const auto& elt: query) {
369 try {
370 credalNet_->current_bn().variable(elt.first);
371 } catch (NotFound& err) {
372 GUM_SHOWERROR(err);
373 continue;
374 }
375
376 query_.insert(elt.first, elt.second);
377 }
378 }
NodeProperty< std::vector< bool > > query_
Holds the query nodes states.

References query_.

◆ insertQueryFile()

template<typename GUM_SCALAR>
void gum::credal::InferenceEngine< GUM_SCALAR >::insertQueryFile ( const std::string & path)
inherited

Insert query variables states from file.

Parameters
path The path to the query file.

Definition at line 381 of file inferenceEngine_tpl.h.

381 {
382 std::ifstream evi_stream(path.c_str(), std::ios::in);
383
384 if (!evi_stream.good()) {
385 GUM_ERROR(IOError,
386 "void InferenceEngine< GUM_SCALAR >::insertQuery(const "
387 "std::string & path) : could not open input file : "
388 << path);
389 }
390
391 if (!query_.empty()) query_.clear();
392
393 std::string line, tmp;
394 char * cstr, *p;
395
396 while (evi_stream.good() && std::strcmp(line.c_str(), "[QUERY]") != 0) {
397 getline(evi_stream, line);
398 }
399
400 while (evi_stream.good()) {
401 getline(evi_stream, line);
402
403 if (std::strcmp(line.c_str(), "[EVIDENCE]") == 0) break;
404
405 if (line.size() == 0) continue;
406
407 cstr = new char[line.size() + 1];
408 strcpy(cstr, line.c_str());
409
410 p = strtok(cstr, " ");
411 tmp = p;
412
413 // if user input is wrong
414 NodeId node = -1;
415
416 try {
417 node = credalNet_->current_bn().idFromName(tmp);
418 } catch (NotFound& err) {
419 GUM_SHOWERROR(err);
420 continue;
421 }
422
423 auto dSize = credalNet_->current_bn().variable(node).domainSize();
424
425 p = strtok(nullptr, " ");
426
427 if (p == nullptr) {
428 query_.insert(node, std::vector< bool >(dSize, true));
429 } else {
430 std::vector< bool > values(dSize, false);
431
432 while (p != nullptr) {
433 if ((Size)atoi(p) >= dSize)
434 GUM_ERROR(OutOfBounds,
435 "void InferenceEngine< GUM_SCALAR "
436 ">::insertQuery(const std::string & path) : "
437 "query modality is higher or equal to "
438 "cardinality");
439
440 values[atoi(p)] = true;
441 p = strtok(nullptr, " ");
442 } // end of : line
443
444 query_.insert(node, values);
445 }
446
447 delete[] p;
448 delete[] cstr;
449 } // end of : file
450
451 evi_stream.close();
452 }

References GUM_ERROR.
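Read together with insertEvidenceFile() above, the file accepted by insertQueryFile() is a plain-text section format: a variable name alone queries all of its states, while trailing indices select specific ones. A plausible minimal example, assuming the network contains variables named A and B (both sections may share one file, since each reader skips to its own header):

```text
[QUERY]
A
B 0 2
[EVIDENCE]
A 0 1
```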

◆ isEnabledEpsilon()

INLINE bool gum::ApproximationScheme::isEnabledEpsilon ( ) const
override, virtual, inherited

Returns true if stopping criterion on epsilon is enabled, false otherwise.

Returns
Returns true if stopping criterion on epsilon is enabled, false otherwise.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 81 of file approximationScheme_inl.h.

81{ return enabled_eps_; }

References enabled_eps_.

◆ isEnabledMaxIter()

INLINE bool gum::ApproximationScheme::isEnabledMaxIter ( ) const
override, virtual, inherited

Returns true if stopping criterion on max iterations is enabled, false otherwise.

Returns
Returns true if stopping criterion on max iterations is enabled, false otherwise.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 122 of file approximationScheme_inl.h.

122{ return enabled_max_iter_; }

References enabled_max_iter_.

◆ isEnabledMaxTime()

INLINE bool gum::ApproximationScheme::isEnabledMaxTime ( ) const
override, virtual, inherited

Returns true if stopping criterion on timeout is enabled, false otherwise.

Returns
Returns true if stopping criterion on timeout is enabled, false otherwise.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 146 of file approximationScheme_inl.h.

146{ return enabled_max_time_; }

References enabled_max_time_.

◆ isEnabledMinEpsilonRate()

INLINE bool gum::ApproximationScheme::isEnabledMinEpsilonRate ( ) const
override, virtual, inherited

Returns true if stopping criterion on epsilon rate is enabled, false otherwise.

Returns
Returns true if stopping criterion on epsilon rate is enabled, false otherwise.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 102 of file approximationScheme_inl.h.

102{ return enabled_min_rate_eps_; }

References enabled_min_rate_eps_.

Referenced by gum::GibbsBNdistance< GUM_SCALAR >::computeKL_().

Here is the caller graph for this function:

◆ isGumNumberOfThreadsOverriden()

bool gum::ThreadNumberManager::isGumNumberOfThreadsOverriden ( ) const
virtual, inherited

Indicates whether the class containing this ThreadNumberManager sets its own number of threads.

Implements gum::IThreadNumberManager.

Referenced by gum::learning::IBNLearner::createParamEstimator_(), and gum::learning::IBNLearner::createScore_().

Here is the caller graph for this function:

◆ makeInference()

template<typename GUM_SCALAR, class BNInferenceEngine>
virtual void gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::makeInference ( )
pure virtual

To be redefined by each credal net algorithm.

Starts the inference.

Implements gum::credal::InferenceEngine< GUM_SCALAR >.

Implemented in gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >.

◆ marginalMax() [1/2]

template<typename GUM_SCALAR>
gum::Tensor< GUM_SCALAR > gum::credal::InferenceEngine< GUM_SCALAR >::marginalMax ( const NodeId id) const
inherited

Get the upper marginals of a given node id.

Parameters
id The node id whose upper marginals we want.
Returns
A Tensor containing this node's upper marginals.

Definition at line 477 of file inferenceEngine_tpl.h.

477 {
478 try {
479 Tensor< GUM_SCALAR > res;
480 res.add(credalNet_->current_bn().variable(id));
481 res.fillWith(marginalMax_[id]);
482 return res;
483 } catch (NotFound& err) { throw(err); }
484 }

Referenced by marginalMax().

Here is the caller graph for this function:

◆ marginalMax() [2/2]

template<typename GUM_SCALAR>
INLINE Tensor< GUM_SCALAR > gum::credal::InferenceEngine< GUM_SCALAR >::marginalMax ( const std::string & varName) const
inherited

Get the upper marginals of a given variable name.

Parameters
varName The variable name whose upper marginals we want.
Returns
A Tensor containing this variable's upper marginals.

Definition at line 462 of file inferenceEngine_tpl.h.

462 {
463 return marginalMax(credalNet_->current_bn().idFromName(varName));
464 }
Tensor< GUM_SCALAR > marginalMax(const NodeId id) const
Get the upper marginals of a given node id.

References credalNet_, and marginalMax().

Here is the call graph for this function:

◆ marginalMin() [1/2]

template<typename GUM_SCALAR>
gum::Tensor< GUM_SCALAR > gum::credal::InferenceEngine< GUM_SCALAR >::marginalMin ( const NodeId id) const
inherited

Get the lower marginals of a given node id.

Parameters
id The node id whose lower marginals we want.
Returns
A Tensor containing this node's lower marginals.

Definition at line 467 of file inferenceEngine_tpl.h.

467 {
468 try {
469 Tensor< GUM_SCALAR > res;
470 res.add(credalNet_->current_bn().variable(id));
471 res.fillWith(marginalMin_[id]);
472 return res;
473 } catch (NotFound& err) { throw(err); }
474 }

References credalNet_, and marginalMin_.

Referenced by marginalMin().

Here is the caller graph for this function:

◆ marginalMin() [2/2]

template<typename GUM_SCALAR>
INLINE Tensor< GUM_SCALAR > gum::credal::InferenceEngine< GUM_SCALAR >::marginalMin ( const std::string & varName) const
inherited

Get the lower marginals of a given variable name.

Parameters
varName The variable name whose lower marginals we want.
Returns
A Tensor containing this variable's lower marginals.

Definition at line 456 of file inferenceEngine_tpl.h.

456 {
457 return marginalMin(credalNet_->current_bn().idFromName(varName));
458 }
Tensor< GUM_SCALAR > marginalMin(const NodeId id) const
Get the lower marginals of a given node id.

References credalNet_, and marginalMin().

Here is the call graph for this function:

◆ maxIter()

INLINE Size gum::ApproximationScheme::maxIter ( ) const
override, virtual, inherited

Returns the criterion on number of iterations.

Returns
Returns the criterion on number of iterations.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 112 of file approximationScheme_inl.h.

112{ return max_iter_; }

References max_iter_.

◆ maxTime()

INLINE double gum::ApproximationScheme::maxTime ( ) const
override, virtual, inherited

Returns the timeout (in seconds).

Returns
Returns the timeout (in seconds).

Implements gum::IApproximationSchemeConfiguration.

Definition at line 133 of file approximationScheme_inl.h.

133{ return max_time_; }

References max_time_.

◆ messageApproximationScheme()

INLINE std::string gum::IApproximationSchemeConfiguration::messageApproximationScheme ( ) const
inherited

Returns the approximation scheme message.

Returns
Returns the approximation scheme message.

Definition at line 59 of file IApproximationSchemeConfiguration_inl.h.

59 {
60 std::stringstream s;
61
62 switch (stateApproximationScheme()) {
63 case ApproximationSchemeSTATE::Continue : s << "in progress"; break;
64
65 case ApproximationSchemeSTATE::Epsilon : s << "stopped with epsilon=" << epsilon(); break;
66
67 case ApproximationSchemeSTATE::Rate : s << "stopped with rate=" << minEpsilonRate(); break;
68
69 case ApproximationSchemeSTATE::Limit : s << "stopped with max iteration=" << maxIter(); break;
70
71 case ApproximationSchemeSTATE::TimeLimit : s << "stopped with timeout=" << maxTime(); break;
72
73 case ApproximationSchemeSTATE::Stopped : s << "stopped on request"; break;
74
75 case ApproximationSchemeSTATE::Undefined : s << "undefined state"; break;
76 };
77
78 return s.str();
79 }
virtual double epsilon() const =0
Returns the value of epsilon.
virtual ApproximationSchemeSTATE stateApproximationScheme() const =0
Returns the approximation scheme state.
virtual double minEpsilonRate() const =0
Returns the value of the minimal epsilon rate.
virtual Size maxIter() const =0
Returns the criterion on number of iterations.
virtual double maxTime() const =0
Returns the timeout (in seconds).

References Continue, Epsilon, epsilon(), Limit, maxIter(), maxTime(), minEpsilonRate(), Rate, stateApproximationScheme(), Stopped, TimeLimit, and Undefined.

Referenced by gum::credal::InferenceEngine< GUM_SCALAR >::getApproximationSchemeMsg(), and gum::credal::MultipleInferenceEngine< GUM_SCALAR, LazyPropagation< GUM_SCALAR > >::stateApproximationScheme().

Here is the call graph for this function:
Here is the caller graph for this function:

◆ minEpsilonRate()

INLINE double gum::ApproximationScheme::minEpsilonRate ( ) const
override, virtual, inherited

Returns the value of the minimal epsilon rate.

Returns
Returns the value of the minimal epsilon rate.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 92 of file approximationScheme_inl.h.

92{ return min_rate_eps_; }

References min_rate_eps_.

◆ nbrIterations()

INLINE Size gum::ApproximationScheme::nbrIterations ( ) const
override, virtual, inherited

Returns the number of iterations.

Returns
Returns the number of iterations.
Exceptions
OperationNotAllowed Raised if the scheme has not been run.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 169 of file approximationScheme_inl.h.

169 {
170 if (stateApproximationScheme() == ApproximationSchemeSTATE::Undefined) {
171 GUM_ERROR(OperationNotAllowed, "state of the approximation scheme is undefined")
172 }
173
174 return current_step_;
175 }

References current_step_, GUM_ERROR, stateApproximationScheme(), and gum::IApproximationSchemeConfiguration::Undefined.

Referenced by gum::GibbsBNdistance< GUM_SCALAR >::computeKL_().

Here is the call graph for this function:
Here is the caller graph for this function:

◆ optFusion_()

template<typename GUM_SCALAR, class BNInferenceEngine>
void gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::optFusion_ ( )
protected

Fusion of threads optimal IBayesNet.

Definition at line 735 of file multipleInferenceEngine_tpl.h.

735 {
736 using dBN = std::vector< bool >;
737
738 Size nsize = Size(workingSet_[0]->size());
739
740 // no parallel insert in hash-tables (OptBN)
741 for (Idx i = 0; i < nsize; i++) {
742 // we don't store anything for observed variables
743 if (_infE_::evidence_.exists(i)) continue;
744
745 Size dSize = Size(workingSet_[0]->variable(i).domainSize());   // reconstructed
746
747 for (Size j = 0; j < dSize; j++) {
748 // go through all threads
749 std::vector< Size > keymin(3);   // reconstructed: key = [node, modality, min(0)/max(1)]
750 keymin[0] = i;
751 keymin[1] = j;
752 keymin[2] = 0;
753 std::vector< Size > keymax(keymin);
754 keymax[2] = 1;
755
756 Size tsize = Size(l_marginalMin_.size());
757
758 for (Size tId = 0; tId < tsize; tId++) {
759 if (l_marginalMin_[tId][i][j] == this->marginalMin_[i][j]) {
760 const std::vector< dBN* >& tOpts = l_optimalNet_[tId]->getBNOptsFromKey(keymin);
761 Size osize = Size(tOpts.size());
762
763 for (Size bn = 0; bn < osize; bn++) {
764 _infE_::dbnOpt_.insert(*tOpts[bn], keymin);
765 }
766 }
767
768 if (l_marginalMax_[tId][i][j] == this->marginalMax_[i][j]) {
769 const std::vector< dBN* >& tOpts = l_optimalNet_[tId]->getBNOptsFromKey(keymax);
770 Size osize = Size(tOpts.size());
771
772 for (Size bn = 0; bn < osize; bn++) {
773 _infE_::dbnOpt_.insert(*tOpts[bn], keymax);
774 }
775 }
776 } // end of : all threads
777 } // end of : all modalities
778 } // end of : all variables
779 }

References gum::credal::InferenceEngine< GUM_SCALAR >::dbnOpt_, gum::credal::InferenceEngine< GUM_SCALAR >::evidence_, l_marginalMax_, l_marginalMin_, l_optimalNet_, gum::credal::InferenceEngine< GUM_SCALAR >::marginalMax_, gum::credal::InferenceEngine< GUM_SCALAR >::marginalMin_, optFusion_(), and workingSet_.

Referenced by optFusion_().

Here is the call graph for this function:
Here is the caller graph for this function:

◆ periodSize()

INLINE Size gum::ApproximationScheme::periodSize ( ) const
overridevirtualinherited

Returns the period size.

Returns
Returns the period size.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 155 of file approximationScheme_inl.h.

155{ return period_size_; }
Size period_size_
Checking criteria frequency.

References period_size_.

◆ remainingBurnIn()

INLINE Size gum::ApproximationScheme::remainingBurnIn ( ) const
inherited

Returns the remaining burn in.

Returns
Returns the remaining burn in.

Definition at line 212 of file approximationScheme_inl.h.

212 {
213 if (burn_in_ > current_step_) {
214 return burn_in_ - current_step_;
215 } else {
216 return 0;
217 }
218 }
Size burn_in_
Number of iterations before checking stopping criteria.

References burn_in_, and current_step_.

◆ repetitiveInd()

template<typename GUM_SCALAR>
bool gum::credal::InferenceEngine< GUM_SCALAR >::repetitiveInd ( ) const
inherited

Get the current independence status.

Returns
True if repetitive, False otherwise.

Definition at line 142 of file inferenceEngine_tpl.h.

142 {
143 return repetitiveInd_;
144 }
bool repetitiveInd_
True if using repetitive independence (dynamic networks only), False otherwise.

References repetitiveInd_.

◆ repetitiveInit_()

template<typename GUM_SCALAR>
void gum::credal::InferenceEngine< GUM_SCALAR >::repetitiveInit_ ( )
protected, inherited

Initialize t0_ and t1_ clusters.

Definition at line 801 of file inferenceEngine_tpl.h.

801 {
802 timeSteps_ = 0;
803 t0_.clear();
804 t1_.clear();
805
806 // t = 0 vars belongs to t0_ as keys
807 for (auto node: credalNet_->current_bn().dag().nodes()) {
808 std::string var_name = credalNet_->current_bn().variable(node).name();
809 auto delim = var_name.find_first_of("_");
810
811 if (delim > var_name.size()) {
812 GUM_ERROR(InvalidArgument,
813 "void InferenceEngine< GUM_SCALAR "
814 ">::repetitiveInit_() : the network does not "
815 "appear to be dynamic");
816 }
817
818 std::string time_step = var_name.substr(delim + 1, 1);
819
820 if (time_step.compare("0") == 0) t0_.insert(node, std::vector< NodeId >());
821 }
822
823 // t = 1 vars belongs to either t0_ as member value or t1_ as keys
824 for (const auto& node: credalNet_->current_bn().dag().nodes()) {
825 std::string var_name = credalNet_->current_bn().variable(node).name();
826 auto delim = var_name.find_first_of("_");
827 std::string time_step = var_name.substr(delim + 1, var_name.size());
828 var_name = var_name.substr(0, delim);
829 delim = time_step.find_first_of("_");
830 time_step = time_step.substr(0, delim);
831
832 if (time_step.compare("1") == 0) {
833 bool found = false;
834
835 for (const auto& elt: t0_) {
836 std::string var_0_name = credalNet_->current_bn().variable(elt.first).name();
837 delim = var_0_name.find_first_of("_");
838 var_0_name = var_0_name.substr(0, delim);
839
840 if (var_name.compare(var_0_name) == 0) {
841 const Tensor< GUM_SCALAR >* tensor(&credalNet_->current_bn().cpt(node));
842 const Tensor< GUM_SCALAR >* tensor2(&credalNet_->current_bn().cpt(elt.first));
843
844 if (tensor->domainSize() == tensor2->domainSize()) t0_[elt.first].push_back(node);
845 else t1_.insert(node, std::vector< NodeId >());
846
847 found = true;
848 break;
849 }
850 }
851
852 if (!found) { t1_.insert(node, std::vector< NodeId >()); }
853 }
854 }
855
856 // t > 1 vars belongs to either t0_ or t1_ as member value
857 // remember timeSteps_
858 for (auto node: credalNet_->current_bn().dag().nodes()) {
859 std::string var_name = credalNet_->current_bn().variable(node).name();
860 auto delim = var_name.find_first_of("_");
861 std::string time_step = var_name.substr(delim + 1, var_name.size());
862 var_name = var_name.substr(0, delim);
863 delim = time_step.find_first_of("_");
864 time_step = time_step.substr(0, delim);
865
866 if (time_step.compare("0") != 0 && time_step.compare("1") != 0) {
867 // keep max time_step
868 if (atoi(time_step.c_str()) > timeSteps_) timeSteps_ = atoi(time_step.c_str());
869
871 bool found = false;
872
873 for (const auto& elt: t0_) {
874 std::string var_0_name = credalNet_->current_bn().variable(elt.first).name();
875 delim = var_0_name.find_first_of("_");
876 var_0_name = var_0_name.substr(0, delim);
877
878 if (var_name.compare(var_0_name) == 0) {
879 const Tensor< GUM_SCALAR >* tensor(&credalNet_->current_bn().cpt(node));
880 const Tensor< GUM_SCALAR >* tensor2(&credalNet_->current_bn().cpt(elt.first));
881
882 if (tensor->domainSize() == tensor2->domainSize()) {
883 t0_[elt.first].push_back(node);
884 found = true;
885 break;
886 }
887 }
888 }
889
890 if (!found) {
891 for (const auto& elt: t1_) {
892 std::string var_0_name = credalNet_->current_bn().variable(elt.first).name();
893 auto delim = var_0_name.find_first_of("_");
894 var_0_name = var_0_name.substr(0, delim);
895
896 if (var_name.compare(var_0_name) == 0) {
897 const Tensor< GUM_SCALAR >* tensor(&credalNet_->current_bn().cpt(node));
898 const Tensor< GUM_SCALAR >* tensor2(&credalNet_->current_bn().cpt(elt.first));
899
900 if (tensor->domainSize() == tensor2->domainSize()) {
901 t1_[elt.first].push_back(node);
902 break;
903 }
904 }
905 }
906 }
907 }
908 }
909 }
int timeSteps_
The number of time steps of this network (only useful for dynamic networks).

References credalNet_, GUM_ERROR, t0_, t1_, and timeSteps_.

Referenced by setRepetitiveInd().

Here is the caller graph for this function:

◆ saveExpectations()

template<typename GUM_SCALAR>
void gum::credal::InferenceEngine< GUM_SCALAR >::saveExpectations ( const std::string & path) const
inherited

Saves expectations to file.

Parameters
path The path to the file to be used.

Definition at line 578 of file inferenceEngine_tpl.h.

578 {
579 if (dynamicExpMin_.empty()) //_modal.empty())
580 return;
581
582 // else not here, to keep the const (natural with a saving process)
583 // else if(dynamicExpMin_.empty() || dynamicExpMax_.empty())
584 //_dynamicExpectations(); // works with or without a dynamic network
585
586 std::ofstream m_stream(path.c_str(), std::ios::out | std::ios::trunc);
587
588 if (!m_stream.good()) {
589 GUM_ERROR(IOError,
590 "void InferenceEngine< GUM_SCALAR "
591 ">::saveExpectations(const std::string & path) : could "
592 "not open output file : "
593 << path);
594 }
595
596 for (const auto& elt: dynamicExpMin_) {
597 m_stream << elt.first; // it->first;
598
599 // iterates over a vector
600 for (const auto& elt2: elt.second) {
601 m_stream << " " << elt2;
602 }
603
604 m_stream << std::endl;
605 }
606
607 for (const auto& elt: dynamicExpMax_) {
608 m_stream << elt.first;
609
610 // iterates over a vector
611 for (const auto& elt2: elt.second) {
612 m_stream << " " << elt2;
613 }
614
615 m_stream << std::endl;
616 }
617
618 m_stream.close();
619 }

References dynamicExpMin_.

◆ saveMarginals()

template<typename GUM_SCALAR>
void gum::credal::InferenceEngine< GUM_SCALAR >::saveMarginals ( const std::string & path) const
inherited

Saves marginals to file.

Parameters
path The path to the file to be used.

Definition at line 555 of file inferenceEngine_tpl.h.

555 {
556 std::ofstream m_stream(path.c_str(), std::ios::out | std::ios::trunc);
557
558 if (!m_stream.good()) {
559 GUM_ERROR(IOError,
560 "void InferenceEngine< GUM_SCALAR >::saveMarginals(const "
561 "std::string & path) const : could not open output file "
562 ": " << path);
563 }
564
565 for (const auto& elt: marginalMin_) {
566 Size esize = Size(elt.second.size());
567
568 for (Size mod = 0; mod < esize; mod++) {
569 m_stream << credalNet_->current_bn().variable(elt.first).name() << " " << mod << " "
570 << (elt.second)[mod] << " " << marginalMax_[elt.first][mod] << std::endl;
571 }
572 }
573
574 m_stream.close();
575 }

References GUM_ERROR.

◆ saveVertices()

template<typename GUM_SCALAR>
void gum::credal::InferenceEngine< GUM_SCALAR >::saveVertices ( const std::string & path) const
inherited

Saves vertices to file.

Parameters
path The path to the file to be used.

Definition at line 648 of file inferenceEngine_tpl.h.

648 {
649 std::ofstream m_stream(path.c_str(), std::ios::out | std::ios::trunc);
650
651 if (!m_stream.good()) {
652 GUM_ERROR(IOError,
653 "void InferenceEngine< GUM_SCALAR >::saveVertices(const "
654 "std::string & path) : could not open output file : "
655 << path);
656 }
657
658 for (const auto& elt: marginalSets_) {
659 m_stream << credalNet_->current_bn().variable(elt.first).name() << std::endl;
660
661 for (const auto& elt2: elt.second) {
662 m_stream << "[";
663 bool first = true;
664
665 for (const auto& elt3: elt2) {
666 // fixed: the flag must be cleared outside the guard, otherwise no comma is ever written
667 if (!first) { m_stream << ","; }
668 first = false;
669
670
671 m_stream << elt3;
672 }
673
674 m_stream << "]\n";
675 }
676 }
677
678 m_stream.close();
679 }

References credalNet_, GUM_ERROR, and marginalSets_.

◆ setEpsilon()

INLINE void gum::ApproximationScheme::setEpsilon ( double eps)
override, virtual, inherited

Given that we approximate f(t), stopping criterion on |f(t+1)-f(t)|.

If the criterion was disabled it will be enabled.

Parameters
eps The new epsilon value.
Exceptions
OutOfBounds Raised if eps < 0.

Implements gum::IApproximationSchemeConfiguration.

Reimplemented in gum::learning::EMApproximationScheme.

Definition at line 63 of file approximationScheme_inl.h.

63 {
64 if (eps < 0.) { GUM_ERROR(OutOfBounds, "eps should be >=0") }
65
66 eps_ = eps;
67 enabled_eps_ = true;
68 }

References enabled_eps_, eps_, and GUM_ERROR.

Referenced by gum::GibbsBNdistance< GUM_SCALAR >::GibbsBNdistance(), gum::GibbsBNdistance< GUM_SCALAR >::GibbsBNdistance(), gum::GibbsSampling< GUM_SCALAR >::GibbsSampling(), gum::learning::GreedyHillClimbing::GreedyHillClimbing(), gum::SamplingInference< GUM_SCALAR >::SamplingInference(), and gum::learning::EMApproximationScheme::setEpsilon().

Here is the caller graph for this function:

◆ setMaxIter()

INLINE void gum::ApproximationScheme::setMaxIter ( Size max)
overridevirtualinherited

Stopping criterion on number of iterations.

If the criterion was disabled it will be enabled.

Parameters
max The maximum number of iterations.
Exceptions
OutOfBounds Raised if max < 1.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 105 of file approximationScheme_inl.h.

105 {
106 if (max < 1) { GUM_ERROR(OutOfBounds, "max should be >=1") }
107 max_iter_ = max;
108 enabled_max_iter_ = true;
109 }

References enabled_max_iter_, GUM_ERROR, and max_iter_.

Referenced by gum::GibbsBNdistance< GUM_SCALAR >::GibbsBNdistance(), gum::GibbsBNdistance< GUM_SCALAR >::GibbsBNdistance(), and gum::SamplingInference< GUM_SCALAR >::SamplingInference().

Here is the caller graph for this function:

◆ setMaxTime()

INLINE void gum::ApproximationScheme::setMaxTime ( double timeout)
overridevirtualinherited

Stopping criterion on timeout.

If the criterion was disabled it will be enabled.

Parameters
timeout The timeout value in seconds.
Exceptions
OutOfBounds Raised if timeout <= 0.0.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 126 of file approximationScheme_inl.h.

126 {
127 if (timeout <= 0.) { GUM_ERROR(OutOfBounds, "timeout should be >0.") }
128 max_time_ = timeout;
129 enabled_max_time_ = true;
130 }

References enabled_max_time_, GUM_ERROR, and max_time_.

Referenced by gum::GibbsBNdistance< GUM_SCALAR >::GibbsBNdistance(), gum::GibbsBNdistance< GUM_SCALAR >::GibbsBNdistance(), and gum::SamplingInference< GUM_SCALAR >::SamplingInference().

Here is the caller graph for this function:

◆ setMinEpsilonRate()

INLINE void gum::ApproximationScheme::setMinEpsilonRate ( double rate)
overridevirtualinherited

Given that we approximate f(t), stopping criterion on d/dt(|f(t+1)-f(t)|).

If the criterion was disabled it will be enabled.

Parameters
rate The minimal epsilon rate.
Exceptions
OutOfBounds Raised if rate < 0.

Implements gum::IApproximationSchemeConfiguration.

Reimplemented in gum::learning::EMApproximationScheme.

Definition at line 84 of file approximationScheme_inl.h.

84 {
85   if (rate < 0) { GUM_ERROR(OutOfBounds, "rate should be >=0") }
86
87   min_rate_eps_ = rate;
88   enabled_min_rate_eps_ = true;
89 }

References enabled_min_rate_eps_, GUM_ERROR, and min_rate_eps_.

Referenced by gum::GibbsBNdistance< GUM_SCALAR >::GibbsBNdistance(), gum::GibbsBNdistance< GUM_SCALAR >::GibbsBNdistance(), gum::GibbsSampling< GUM_SCALAR >::GibbsSampling(), gum::SamplingInference< GUM_SCALAR >::SamplingInference(), and gum::learning::EMApproximationScheme::setMinEpsilonRate().

Here is the caller graph for this function:

◆ setNumberOfThreads()

virtual void gum::ThreadNumberManager::setNumberOfThreads ( Size nb)
virtualinherited

Sets the maximum number of threads to be used by the class containing this ThreadNumberManager.

Parameters
nb The number of threads to be used. If this number is set to 0, it is defaulted to aGrUM's number of threads.

Implements gum::IThreadNumberManager.

Reimplemented in gum::learning::IBNLearner, gum::learning::RecordCounter, gum::ScheduledInference, and gum::SchedulerParallel.

Referenced by gum::learning::IBNLearner::setNumberOfThreads(), and gum::ScheduledInference::setNumberOfThreads().

Here is the caller graph for this function:

◆ setPeriodSize()

INLINE void gum::ApproximationScheme::setPeriodSize ( Size p)
overridevirtualinherited

Number of samples between two stopping tests.

Parameters
p The new period value.
Exceptions
OutOfBounds Raised if p < 1.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 149 of file approximationScheme_inl.h.

149 {
150 if (p < 1) { GUM_ERROR(OutOfBounds, "p should be >=1") }
151
152 period_size_ = p;
153 }

References GUM_ERROR, and period_size_.

Referenced by gum::GibbsBNdistance< GUM_SCALAR >::GibbsBNdistance(), gum::GibbsBNdistance< GUM_SCALAR >::GibbsBNdistance(), and gum::SamplingInference< GUM_SCALAR >::SamplingInference().

Here is the caller graph for this function:

◆ setRepetitiveInd()

template<typename GUM_SCALAR>
void gum::credal::InferenceEngine< GUM_SCALAR >::setRepetitiveInd ( const bool repetitive)
inherited
Parameters
repetitive True if repetitive independence is to be used, false otherwise. Only useful with dynamic networks.

Definition at line 133 of file inferenceEngine_tpl.h.

133 {
134   bool oldValue = repetitiveInd_;
135   repetitiveInd_ = repetitive;
136
137   // do not compute clusters more than once
138   if (repetitive && !oldValue) repetitiveInit_();
139 }
void repetitiveInit_()
Initialize t0_ and t1_ clusters.

References repetitiveInd_, and repetitiveInit_().

Here is the call graph for this function:

◆ setVerbosity()

INLINE void gum::ApproximationScheme::setVerbosity ( bool v)
overridevirtualinherited

Set the verbosity on (true) or off (false).

Parameters
v If true, verbosity is turned on.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 158 of file approximationScheme_inl.h.

158{ verbosity_ = v; }
bool verbosity_
If true, verbosity is enabled.

Referenced by gum::GibbsBNdistance< GUM_SCALAR >::GibbsBNdistance(), gum::GibbsBNdistance< GUM_SCALAR >::GibbsBNdistance(), and gum::SamplingInference< GUM_SCALAR >::SamplingInference().

Here is the caller graph for this function:

◆ startOfPeriod()

INLINE bool gum::ApproximationScheme::startOfPeriod ( ) const
inherited

Returns true if we are at the beginning of a period (compute error is mandatory).

Returns
Returns true if we are at the beginning of a period (compute error is mandatory).

Definition at line 199 of file approximationScheme_inl.h.

199 {
200 if (current_step_ < burn_in_) { return false; }
201
202 if (period_size_ == 1) { return true; }
203
204 return ((current_step_ - burn_in_) % period_size_ == 0);
205 }

References burn_in_, and current_step_.

◆ stateApproximationScheme()

INLINE IApproximationSchemeConfiguration::ApproximationSchemeSTATE gum::ApproximationScheme::stateApproximationScheme ( ) const
overridevirtualinherited

Returns the approximation scheme state.

Returns
Returns the approximation scheme state.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 164 of file approximationScheme_inl.h.

164 {
165 return current_state_;
166 }

References current_state_.

Referenced by history(), and nbrIterations().

Here is the caller graph for this function:

◆ stopApproximationScheme()

INLINE void gum::ApproximationScheme::stopApproximationScheme ( )
inherited

Stop the approximation scheme.

Definition at line 221 of file approximationScheme_inl.h.

Referenced by gum::learning::GreedyHillClimbing::learnStructure(), gum::learning::LocalSearchWithTabuList::learnStructure(), and gum::credal::CNLoopyPropagation< GUM_SCALAR >::makeInferenceNodeToNeighbours_().

Here is the caller graph for this function:

◆ stopScheme_()

INLINE void gum::ApproximationScheme::stopScheme_ ( ApproximationSchemeSTATE new_state)
privateinherited

Stop the scheme given a new state.

Parameters
new_state The scheme's new state.

Definition at line 301 of file approximationScheme_inl.h.

301 {
302 if (new_state == ApproximationSchemeSTATE::Continue) { return; }
303
304 if (new_state == ApproximationSchemeSTATE::Undefined) { return; }
305
306 current_state_ = new_state;
307 timer_.pause();
308
309 if (onStop.hasListener()) { GUM_EMIT1(onStop, messageApproximationScheme()); }
310 }
Signaler1< const std::string & > onStop
Criteria messageApproximationScheme.
#define GUM_EMIT1(signal, arg1)
Definition signaler1.h:61

References gum::IApproximationSchemeConfiguration::Continue, current_state_, and gum::IApproximationSchemeConfiguration::Undefined.

Referenced by gum::credal::MultipleInferenceEngine< GUM_SCALAR, LazyPropagation< GUM_SCALAR > >::disableMaxIter(), gum::credal::MultipleInferenceEngine< GUM_SCALAR, LazyPropagation< GUM_SCALAR > >::disableMaxTime(), gum::credal::MultipleInferenceEngine< GUM_SCALAR, LazyPropagation< GUM_SCALAR > >::isEnabledMaxIter(), gum::credal::MultipleInferenceEngine< GUM_SCALAR, LazyPropagation< GUM_SCALAR > >::maxTime(), and gum::credal::MultipleInferenceEngine< GUM_SCALAR, LazyPropagation< GUM_SCALAR > >::setPeriodSize().

Here is the caller graph for this function:

◆ storeBNOpt() [1/2]

template<typename GUM_SCALAR>
bool gum::credal::InferenceEngine< GUM_SCALAR >::storeBNOpt ( ) const
inherited
Returns
True if optimal Bayesian networks are stored for each variable and each modality, False otherwise.

Definition at line 158 of file inferenceEngine_tpl.h.

158 {
159 return storeBNOpt_;
160 }

References storeBNOpt_.

◆ storeBNOpt() [2/2]

template<typename GUM_SCALAR>
void gum::credal::InferenceEngine< GUM_SCALAR >::storeBNOpt ( const bool value)
inherited
Parameters
value True if optimal Bayesian networks are to be stored for each variable and each modality.

Definition at line 121 of file inferenceEngine_tpl.h.

121 {
122   storeBNOpt_ = value;
123 }

References storeBNOpt_.

◆ storeVertices() [1/2]

template<typename GUM_SCALAR>
bool gum::credal::InferenceEngine< GUM_SCALAR >::storeVertices ( ) const
inherited

Returns
True if vertices are stored, False otherwise.

Definition at line 153 of file inferenceEngine_tpl.h.

153 {
154 return storeVertices_;
155 }

References storeVertices_.

◆ storeVertices() [2/2]

template<typename GUM_SCALAR>
void gum::credal::InferenceEngine< GUM_SCALAR >::storeVertices ( const bool value)
inherited
Parameters
value True if vertices are to be stored, false otherwise.

Definition at line 126 of file inferenceEngine_tpl.h.

126 {
127   storeVertices_ = value;
128
129   initMarginalSets_();
130 }
void initMarginalSets_()
Initialize credal set vertices with empty sets.

References initMarginalSets_(), and storeVertices_.

Here is the call graph for this function:

◆ toString()

template<typename GUM_SCALAR>
std::string gum::credal::InferenceEngine< GUM_SCALAR >::toString ( ) const
inherited

Print all node marginals to standard output.

Definition at line 622 of file inferenceEngine_tpl.h.

622 {
623   std::stringstream output;
624   output << std::endl;
625
626 // use cbegin() when available
627 for (const auto& elt: marginalMin_) {
628 Size esize = Size(elt.second.size());
629
630 for (Size mod = 0; mod < esize; mod++) {
631 output << "P(" << credalNet_->current_bn().variable(elt.first).name() << "=" << mod
632 << "|e) = [ ";
633 output << marginalMin_[elt.first][mod] << ", " << marginalMax_[elt.first][mod] << " ]";
634
635 if (!query_.empty())
636 if (query_.exists(elt.first) && query_[elt.first][mod]) output << " QUERY";
637
638 output << std::endl;
639 }
640
641 output << std::endl;
642 }
643
644 return output.str();
645 }

References credalNet_, marginalMax_, marginalMin_, and query_.

◆ updateApproximationScheme()

INLINE void gum::ApproximationScheme::updateApproximationScheme ( unsigned int incr = 1)
inherited

Update the scheme w.r.t. the new error and increment the step count.

Parameters
incr The step increment.

Definition at line 208 of file approximationScheme_inl.h.

208 {
209 current_step_ += incr;
210 }

References current_step_.

Referenced by gum::GibbsBNdistance< GUM_SCALAR >::computeKL_(), gum::learning::GreedyHillClimbing::learnStructure(), gum::learning::LocalSearchWithTabuList::learnStructure(), gum::SamplingInference< GUM_SCALAR >::loopApproxInference_(), gum::credal::CNLoopyPropagation< GUM_SCALAR >::makeInferenceByOrderedArcs_(), gum::credal::CNLoopyPropagation< GUM_SCALAR >::makeInferenceByRandomOrder_(), and gum::credal::CNLoopyPropagation< GUM_SCALAR >::makeInferenceNodeToNeighbours_().

Here is the caller graph for this function:

◆ updateCredalSets_()

template<typename GUM_SCALAR>
void gum::credal::InferenceEngine< GUM_SCALAR >::updateCredalSets_ ( const NodeId & id,
const std::vector< GUM_SCALAR > & vertex,
const bool & elimRedund = false )
inlineprotectedinherited

Given a node id and one of its possible vertices, update its credal set.

To maximise efficiency, don't pass a vertex we know is inside the polytope (i.e. not at an extreme value for any modality).

Parameters
id The id of the node to be updated.
vertex A (tensor) vertex of the node's credal set.
elimRedund Remove redundant vertices (inside a facet).

Definition at line 934 of file inferenceEngine_tpl.h.

 936 {
 937   auto& nodeCredalSet = marginalSets_[id];
 938   auto  dsize         = vertex.size();
 939
 940   bool eq = true;
 941
 942   for (auto it = nodeCredalSet.cbegin(), itEnd = nodeCredalSet.cend(); it != itEnd; ++it) {
 943     eq = true;
 944
 945     for (Size i = 0; i < dsize; i++) {
 946       if (std::fabs(vertex[i] - (*it)[i]) > 1e-6) {
 947         eq = false;
 948         break;
 949       }
 950     }
 951
 952     if (eq) break;
 953   }
 954
 955   // add the vertex if it is not already stored; otherwise nothing to do
 956   if (!eq || nodeCredalSet.size() == 0) {
 957     nodeCredalSet.push_back(vertex);
 958   } else {
 959     return;
 960   }
 961
 962   // because of next lambda return condition
 963   if (nodeCredalSet.size() == 1) return;
 964
 965   // check that the point and all previously added ones are not inside the
 966   // actual polytope
 967   auto itEnd = std::remove_if(
 968       nodeCredalSet.begin(),
 969       nodeCredalSet.end(),
 970       [&](const std::vector< GUM_SCALAR >& v) -> bool {
 971         for (auto jt = v.cbegin(),
 972                   jtEnd = v.cend(),
 973                   minIt = marginalMin_[id].cbegin(),
 974                   minItEnd = marginalMin_[id].cend(),
 975                   maxIt = marginalMax_[id].cbegin(),
 976                   maxItEnd = marginalMax_[id].cend();
 977              jt != jtEnd && minIt != minItEnd && maxIt != maxItEnd;
 978              ++jt, ++minIt, ++maxIt) {
 979           if ((std::fabs(*jt - *minIt) < 1e-6 || std::fabs(*jt - *maxIt) < 1e-6)
 980               && std::fabs(*minIt - *maxIt) > 1e-6)
 981             return false;
 982         }
 983         return true;
 984       });
 985
 986   nodeCredalSet.erase(itEnd, nodeCredalSet.end());
 987
 988   // we need at least 2 points to make a convex combination
 989   if (!elimRedund || nodeCredalSet.size() <= 2) return;
 990
 991   // there may be points not inside the polytope but on one of its facets,
 992   // meaning they are still a convex combination of vertices of this facet;
 993   // here we need lrs
 994   LRSWrapper< GUM_SCALAR > lrsWrapper;
 995   lrsWrapper.setUpV((unsigned int)dsize, (unsigned int)(nodeCredalSet.size()));
 996
 997   for (const auto& vtx: nodeCredalSet)
 998     lrsWrapper.fillV(vtx);
 999
1000   lrsWrapper.elimRedundVrep();
1001
1002   marginalSets_[id] = lrsWrapper.getOutput();
1003 }

References marginalSets_.

Referenced by gum::credal::CNLoopyPropagation< GUM_SCALAR >::computeExpectations_(), and gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::verticesFusion_().

Here is the caller graph for this function:

◆ updateExpectations_()

template<typename GUM_SCALAR>
void gum::credal::InferenceEngine< GUM_SCALAR >::updateExpectations_ ( const NodeId & id,
const std::vector< GUM_SCALAR > & vertex )
inlineprotectedinherited

Given a node id and one of its possible vertices obtained during inference, update this node's lower and upper expectations.

Parameters
id The id of the node to be updated.
vertex A (tensor) vertex of the node's credal set.

Definition at line 912 of file inferenceEngine_tpl.h.

914 {
915   std::string var_name = credalNet_->current_bn().variable(id).name();
916   auto delim = var_name.find_first_of("_");
917
918   var_name = var_name.substr(0, delim);
919
920   if (modal_.exists(var_name)) {
921     GUM_SCALAR exp = 0;
922     auto vsize = vertex.size();
923
924     for (Size mod = 0; mod < vsize; mod++)
925       exp += vertex[mod] * modal_[var_name][mod];
926
927     if (exp > expectationMax_[id]) expectationMax_[id] = exp;
928
929     if (exp < expectationMin_[id]) expectationMin_[id] = exp;
930   }
931 }

References credalNet_, expectationMax_, expectationMin_, and modal_.

Referenced by gum::credal::CNLoopyPropagation< GUM_SCALAR >::computeExpectations_().

Here is the caller graph for this function:

◆ updateMarginals_()

template<typename GUM_SCALAR, class BNInferenceEngine>
void gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::updateMarginals_ ( )
inlineprotected

Fusion of the threads' marginals.

Definition at line 293 of file multipleInferenceEngine_tpl.h.

293 {
294   // compute the max number of threads to use (avoid nested threads)
295   const Size nb_threads = (ThreadExecutor::nbRunningThreadsExecutors() == 0)
296       ? this->threadRanges_.size() - 1
297       : 1;   // no nested multithreading
298
299   // create the function to be executed by the threads
300   auto threadedExec = [this](const std::size_t this_thread,
301                              const std::size_t nb_threads,
302                              const std::vector< std::pair< NodeId, Idx > >& ranges) {
303     auto i = ranges[this_thread].first;
304     auto j = ranges[this_thread].second;
305     auto domain_size = this->marginalMax_[i].size();
306     const auto end_i = ranges[this_thread + 1].first;
307     auto end_j = ranges[this_thread + 1].second;
308     const auto marginalMax_size = this->marginalMax_.size();
309     const auto tsize = Size(l_marginalMin_.size());
310
311     while ((i < end_i) || (j < end_j)) {
312       // go through all work indices
313       for (Idx tId = 0; tId < tsize; tId++) {
314         if (l_marginalMin_[tId][i][j] < this->marginalMin_[i][j])
315           this->marginalMin_[i][j] = l_marginalMin_[tId][i][j];
316
317         if (l_marginalMax_[tId][i][j] > this->marginalMax_[i][j])
318           this->marginalMax_[i][j] = l_marginalMax_[tId][i][j];
319       }
320
321       if (++j == domain_size) {
322         j = 0;
323         ++i;
324         if (i < marginalMax_size) domain_size = this->marginalMax_[i].size();
325       }
326     }
327   };
328
329   // launch the threads
330   ThreadExecutor::execute(
331       nb_threads,
332       threadedExec,
333       (nb_threads == 1)
334           ? std::vector< std::pair< NodeId, Idx > >{{0, 0}, {this->marginalMin_.size(), 0}}
335           : this->threadRanges_);
336 }

References gum::threadsSTL::ThreadExecutor::execute(), l_marginalMax_, l_marginalMin_, gum::credal::InferenceEngine< GUM_SCALAR >::marginalMax_, gum::credal::InferenceEngine< GUM_SCALAR >::marginalMin_, gum::threadsSTL::ThreadExecutor::nbRunningThreadsExecutors(), gum::credal::InferenceEngine< GUM_SCALAR >::threadRanges_, and updateMarginals_().

Referenced by updateMarginals_().

Here is the call graph for this function:
Here is the caller graph for this function:

◆ updateOldMarginals_()

template<typename GUM_SCALAR, class BNInferenceEngine>
void gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::updateOldMarginals_ ( )
protected

Update old marginals (from current marginals).

Call this once to initialize old marginals (after burn-in, for example) and then use computeEpsilon_, which does the same job but computes epsilon too.

Definition at line 473 of file multipleInferenceEngine_tpl.h.

473 {
474 #pragma omp parallel
475   {
476     int threadId = getThreadNumber();
477     long nsize = long(workingSet_[threadId]->size());
478
479 #pragma omp for
480
481     for (long i = 0; i < nsize; i++) {
482       Size dSize = Size(this->oldMarginalMin_[i].size());
483
484       for (Size j = 0; j < dSize; j++) {
485         Size tsize = Size(l_marginalMin_.size());
486
487         // go through all threads
488         for (Size tId = 0; tId < tsize; tId++) {
489           if (l_marginalMin_[tId][i][j] < this->oldMarginalMin_[i][j])
490             this->oldMarginalMin_[i][j] = l_marginalMin_[tId][i][j];
491
492           if (l_marginalMax_[tId][i][j] > this->oldMarginalMax_[i][j])
493             this->oldMarginalMax_[i][j] = l_marginalMax_[tId][i][j];
494         }   // end of : all threads
495       }   // end of : all modalities
496     }   // end of : all variables
497   }   // end of : parallel region
498 }
unsigned int getThreadNumber()
Get the calling thread id.

References gum::threadsOMP::getThreadNumber(), l_marginalMax_, l_marginalMin_, gum::credal::InferenceEngine< GUM_SCALAR >::oldMarginalMax_, gum::credal::InferenceEngine< GUM_SCALAR >::oldMarginalMin_, updateOldMarginals_(), and workingSet_.

Referenced by updateOldMarginals_().

Here is the call graph for this function:
Here is the caller graph for this function:

◆ updateThread_()

template<typename GUM_SCALAR, class BNInferenceEngine>
bool gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::updateThread_ ( Size this_thread,
const NodeId & id,
const std::vector< GUM_SCALAR > & vertex,
const bool & elimRedund = false )
inlineprotected

Update thread information (marginals, expectations, IBayesNet, vertices) for a given node id.

Parameters
this_thread The id of the thread executing this method.
id The id of the node to be updated.
vertex The vertex.
elimRedund True if redundancy elimination is to be performed, false otherwise (default).
Returns
True if the IBayesNet is kept (for now), False otherwise.

Definition at line 115 of file multipleInferenceEngine_tpl.h.

119 {
120   const Size tId = this_thread;
121
122   // save E(X) if we don't save vertices
123   if (!_infE_::storeVertices_) {
124     std::string var_name = workingSet_[tId]->variable(id).name();
125     auto delim = var_name.find_first_of("_");
126     var_name = var_name.substr(0, delim);
127
128     if (l_modal_[tId].exists(var_name)) {
129       GUM_SCALAR exp = 0;
130       Size vsize = Size(vertex.size());
131
132       for (Size mod = 0; mod < vsize; mod++)
133         exp += vertex[mod] * l_modal_[tId][var_name][mod];
134
135       if (exp > l_expectationMax_[tId][id]) l_expectationMax_[tId][id] = exp;
136
137       if (exp < l_expectationMin_[tId][id]) l_expectationMin_[tId][id] = exp;
138     }
139   }   // end of : if modal (map) not empty
140
141   bool newOne = false;
142   bool added = false;
143   bool result = false;
144   // for burn in, we need to keep checking on local marginals and not global
145   // ones (faster inference)
146   // we also don't want to store dbn for observed variables since there will
147   // be a huge number of them (probably all of them)
148   Size vsize = Size(vertex.size());
149
150   for (Size mod = 0; mod < vsize; mod++) {
151     if (vertex[mod] < l_marginalMin_[tId][id][mod]) {
152       l_marginalMin_[tId][id][mod] = vertex[mod];
153       newOne = true;
154
155       if (_infE_::storeBNOpt_ && !_infE_::evidence_.exists(id)) {
156         std::vector< Size > key(3);
157         key[0] = id;
158         key[1] = mod;
159         key[2] = 0;
160
161         if (l_optimalNet_[tId]->insert(key, true)) result = true;
162       }
163     }
164
165     if (vertex[mod] > l_marginalMax_[tId][id][mod]) {
166       l_marginalMax_[tId][id][mod] = vertex[mod];
167       newOne = true;
168
169       if (_infE_::storeBNOpt_ && !_infE_::evidence_.exists(id)) {
170         std::vector< Size > key(3);
171         key[0] = id;
172         key[1] = mod;
173         key[2] = 1;
174
175         if (l_optimalNet_[tId]->insert(key, true)) result = true;
176       }
177     } else if (vertex[mod] == l_marginalMin_[tId][id][mod]
178                || vertex[mod] == l_marginalMax_[tId][id][mod]) {
179       newOne = true;
180
181       if (_infE_::storeBNOpt_ && vertex[mod] == l_marginalMin_[tId][id][mod]
182           && !_infE_::evidence_.exists(id)) {
183         std::vector< Size > key(3);
184         key[0] = id;
185         key[1] = mod;
186         key[2] = 0;
187
188         if (l_optimalNet_[tId]->insert(key, false)) result = true;
189       }
190
191       if (_infE_::storeBNOpt_ && vertex[mod] == l_marginalMax_[tId][id][mod]
192           && !_infE_::evidence_.exists(id)) {
193         std::vector< Size > key(3);
194         key[0] = id;
195         key[1] = mod;
196         key[2] = 1;
197
198         if (l_optimalNet_[tId]->insert(key, false)) result = true;
199       }
200     }
201
202     // store point to compute credal set vertices.
203     // check for redundancy at each step or at the end ?
204     if (_infE_::storeVertices_ && !added && newOne) {
205       _updateThreadCredalSets_(tId, id, vertex, elimRedund);
206       added = true;
207     }
208   }
209
210   // if all variables didn't get better marginals, we will delete
211   if (_infE_::storeBNOpt_ && result) return true;
212
213   return false;
214 }
void _updateThreadCredalSets_(Size this_thread, const NodeId &id, const std::vector< GUM_SCALAR > &vertex, const bool &elimRedund)
Ask for redundancy elimination of a node credal set of a calling thread.

References gum::credal::InferenceEngine< GUM_SCALAR >::evidence_, l_expectationMax_, l_expectationMin_, l_marginalMin_, l_modal_, l_optimalNet_, gum::credal::InferenceEngine< GUM_SCALAR >::storeBNOpt_, gum::credal::InferenceEngine< GUM_SCALAR >::storeVertices_, and workingSet_.

◆ verbosity()

INLINE bool gum::ApproximationScheme::verbosity ( ) const
overridevirtualinherited

Returns true if verbosity is enabled.

Returns
Returns true if verbosity is enabled.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 160 of file approximationScheme_inl.h.

160{ return verbosity_; }

References verbosity_.

Referenced by ApproximationScheme(), and gum::learning::EMApproximationScheme::EMApproximationScheme().

Here is the caller graph for this function:

◆ vertices()

template<typename GUM_SCALAR>
const std::vector< std::vector< GUM_SCALAR > > & gum::credal::InferenceEngine< GUM_SCALAR >::vertices ( const NodeId id) const
inherited

Get the vertices of a given node id.

Parameters
id The node id whose vertices we want.
Returns
A constant reference to this node's vertices.

Definition at line 550 of file inferenceEngine_tpl.h.

550 {
551 return marginalSets_[id];
552 }

References marginalSets_.

Referenced by gum::credal::CNLoopyPropagation< GUM_SCALAR >::computeExpectations_().

Here is the caller graph for this function:

◆ verticesFusion_()

template<typename GUM_SCALAR, class BNInferenceEngine>
void gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::verticesFusion_ ( )
protected
Deprecated
Fusion of the threads' vertices.

Definition at line 501 of file multipleInferenceEngine_tpl.h.

501 {
502   // don't create threads if there are no vertices saved
503   if (!_infE_::storeVertices_) return;
504
505   // compute the max number of threads to use (avoid nested threads)
506   const Size nb_threads = (ThreadExecutor::nbRunningThreadsExecutors() == 0)
507       ? Size(this->getNumberOfThreads())
508       : 1;   // no nested multithreading
509
510   // create the function to be executed by the threads
511   Size tsize = Size(l_marginalMin_.size());
512   auto threadedExec = [this, tsize](const std::size_t this_thread,
513                                     const std::size_t nb_threads,
514                                     const std::vector< std::pair< Idx, Idx > >& ranges) {
515     for (Idx i = ranges[this_thread].first; i < ranges[this_thread].second; ++i) {
516       // go through all threads
517       for (Size tId = 0; tId < tsize; ++tId) {
518         const auto& nodeThreadCredalSet = l_marginalSets_[tId][i];
519
520         // for each vertex, if we are at any opt marginal, add it to the set
521         for (const auto& vtx: nodeThreadCredalSet) {
522           // we run redundancy elimination at each step because there could
523           // be 100000 threads and the set will be so huge...
524           // BUT not if vertices are of dimension 2 ! opt check and equality
525           // should be enough
526           _infE_::updateCredalSets_(i, vtx, (vtx.size() > 2) ? true : false);
527         }   // end of : nodeThreadCredalSet
528       }   // end of : all threads
529     }   // end of : all variables
530   };
531
532   const Size working_size = workingSet_.size();
533   for (Size work_index = 0; work_index < working_size; ++work_index) {
534     // compute the ranges over which the threads will work
535     const auto nsize = workingSet_[work_index]->size();
536     const auto real_nb_threads = std::min(nb_threads, Size(nsize));
537     const auto ranges = gum::dispatchRangeToThreads(0, nsize, (unsigned int)(real_nb_threads));
538     ThreadExecutor::execute(real_nb_threads, threadedExec, ranges);
539   }
540 }
void updateCredalSets_(const NodeId &id, const std::vector< GUM_SCALAR > &vertex, const bool &elimRedund=false)
Given a node id and one of it's possible vertex, update it's credal set.

References gum::dispatchRangeToThreads(), gum::threadsSTL::ThreadExecutor::execute(), gum::ThreadNumberManager::getNumberOfThreads(), l_marginalMin_, l_marginalSets_, gum::threadsSTL::ThreadExecutor::nbRunningThreadsExecutors(), gum::credal::InferenceEngine< GUM_SCALAR >::storeVertices_, gum::credal::InferenceEngine< GUM_SCALAR >::updateCredalSets_(), verticesFusion_(), and workingSet_.

Referenced by verticesFusion_().

Here is the call graph for this function:
Here is the caller graph for this function:

Member Data Documentation

◆ _nb_threads_

Size gum::ThreadNumberManager::_nb_threads_ {0}
privateinherited

the max number of threads used by the class

Definition at line 126 of file threadNumberManager.h.

126{0};

◆ burn_in_

Size gum::ApproximationScheme::burn_in_
protectedinherited

◆ credalNet_

◆ current_epsilon_

double gum::ApproximationScheme::current_epsilon_
protectedinherited

Current epsilon.

Definition at line 378 of file approximationScheme.h.

Referenced by initApproximationScheme().

◆ current_rate_

double gum::ApproximationScheme::current_rate_
protectedinherited

Current rate.

Definition at line 384 of file approximationScheme.h.

Referenced by initApproximationScheme().

◆ current_state_

ApproximationSchemeSTATE gum::ApproximationScheme::current_state_
protectedinherited

The current state.

Definition at line 393 of file approximationScheme.h.

Referenced by ApproximationScheme(), initApproximationScheme(), stateApproximationScheme(), and stopScheme_().

◆ current_step_

◆ dbnOpt_

template<typename GUM_SCALAR>
VarMod2BNsMap< GUM_SCALAR > gum::credal::InferenceEngine< GUM_SCALAR >::dbnOpt_
protectedinherited

Object used to efficiently store optimal Bayesian networks during inference, for some algorithms.

Definition at line 158 of file inferenceEngine.h.

Referenced by InferenceEngine(), getVarMod2BNsMap(), and gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::optFusion_().

◆ dynamicExpMax_

template<typename GUM_SCALAR>
dynExpe gum::credal::InferenceEngine< GUM_SCALAR >::dynamicExpMax_
protectedinherited

Upper dynamic expectations.

If the network is not dynamic, its content is the same as expectationMax_.

Definition at line 111 of file inferenceEngine.h.

Referenced by dynamicExpectations_(), dynamicExpMax(), and eraseAllEvidence().

◆ dynamicExpMin_

template<typename GUM_SCALAR>
dynExpe gum::credal::InferenceEngine< GUM_SCALAR >::dynamicExpMin_
protectedinherited

Lower dynamic expectations.

If the network is not dynamic, its content is the same as expectationMin_.

Definition at line 108 of file inferenceEngine.h.

Referenced by dynamicExpectations_(), eraseAllEvidence(), and saveExpectations().

◆ enabled_eps_

bool gum::ApproximationScheme::enabled_eps_
protectedinherited

If true, the threshold convergence is enabled.

Definition at line 402 of file approximationScheme.h.

Referenced by ApproximationScheme(), disableEpsilon(), enableEpsilon(), isEnabledEpsilon(), and setEpsilon().

◆ enabled_max_iter_

bool gum::ApproximationScheme::enabled_max_iter_
protectedinherited

If true, the maximum iterations stopping criterion is enabled.

Definition at line 420 of file approximationScheme.h.

Referenced by ApproximationScheme(), disableMaxIter(), enableMaxIter(), isEnabledMaxIter(), and setMaxIter().

◆ enabled_max_time_

bool gum::ApproximationScheme::enabled_max_time_
protectedinherited

If true, the timeout is enabled.

Definition at line 414 of file approximationScheme.h.

Referenced by ApproximationScheme(), continueApproximationScheme(), disableMaxTime(), enableMaxTime(), isEnabledMaxTime(), and setMaxTime().

◆ enabled_min_rate_eps_

bool gum::ApproximationScheme::enabled_min_rate_eps_
protectedinherited

If true, the minimal threshold for epsilon rate is enabled.

Definition at line 408 of file approximationScheme.h.

Referenced by ApproximationScheme(), disableMinEpsilonRate(), enableMinEpsilonRate(), isEnabledMinEpsilonRate(), and setMinEpsilonRate().

◆ eps_

double gum::ApproximationScheme::eps_
protectedinherited

Threshold for convergence.

Definition at line 399 of file approximationScheme.h.

Referenced by ApproximationScheme(), epsilon(), and setEpsilon().

◆ evidence_

◆ expectationMax_

template<typename GUM_SCALAR>
expe gum::credal::InferenceEngine< GUM_SCALAR >::expectationMax_
protectedinherited

◆ expectationMin_

template<typename GUM_SCALAR>
expe gum::credal::InferenceEngine< GUM_SCALAR >::expectationMin_
protectedinherited

◆ generators_

template<typename GUM_SCALAR, class BNInferenceEngine>
std::vector< std::mt19937 > gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::generators_
protected

the generators used for computing random values

Definition at line 141 of file multipleInferenceEngine.h.

Referenced by initThreadsData_().

◆ history_

std::vector< double > gum::ApproximationScheme::history_
protectedinherited

The scheme history, used only if verbosity == true.

Definition at line 396 of file approximationScheme.h.

◆ l_clusters_

◆ l_evidence_

template<typename GUM_SCALAR, class BNInferenceEngine>
_margis_ gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::l_evidence_
protected

Threads evidence.

Definition at line 125 of file multipleInferenceEngine.h.

Referenced by eraseAllEvidence().

◆ l_expectationMax_

template<typename GUM_SCALAR, class BNInferenceEngine>
_expes_ gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::l_expectationMax_
protected

Threads upper expectations, one per thread.

Definition at line 119 of file multipleInferenceEngine.h.

Referenced by eraseAllEvidence(), expFusion_(), initThreadsData_(), and updateThread_().

◆ l_expectationMin_

template<typename GUM_SCALAR, class BNInferenceEngine>
_expes_ gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::l_expectationMin_
protected

Threads lower expectations, one per thread.

Definition at line 117 of file multipleInferenceEngine.h.

Referenced by eraseAllEvidence(), expFusion_(), initThreadsData_(), and updateThread_().

◆ l_inferenceEngine_

template<typename GUM_SCALAR, class BNInferenceEngine>
std::vector< BNInferenceEngine* > gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::l_inferenceEngine_
protected

Threads BNInferenceEngine.

Definition at line 136 of file multipleInferenceEngine.h.

Referenced by eraseAllEvidence().

◆ l_marginalMax_

template<typename GUM_SCALAR, class BNInferenceEngine>
_margis_ gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::l_marginalMax_
protected

Threads upper marginals, one per thread.

Definition at line 115 of file multipleInferenceEngine.h.

Referenced by eraseAllEvidence(), initThreadsData_(), optFusion_(), updateMarginals_(), and updateOldMarginals_().

◆ l_marginalMin_

template<typename GUM_SCALAR, class BNInferenceEngine>
_margis_ gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::l_marginalMin_
protected

Lower marginals, one per thread.

Definition at line 113 of file multipleInferenceEngine.h.

Referenced by eraseAllEvidence(), initThreadsData_(), optFusion_(), updateMarginals_(), updateOldMarginals_(), updateThread_(), and verticesFusion_().

◆ l_marginalSets_

template<typename GUM_SCALAR, class BNInferenceEngine>
_credalSets_ gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::l_marginalSets_
protected

Credal set vertices, one set per thread.

Definition at line 123 of file multipleInferenceEngine.h.

Referenced by eraseAllEvidence(), initThreadsData_(), and verticesFusion_().

◆ l_modal_

template<typename GUM_SCALAR, class BNInferenceEngine>
_modals_ gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::l_modal_
protected

Modalities, one per thread.

Definition at line 121 of file multipleInferenceEngine.h.

Referenced by eraseAllEvidence(), expFusion_(), initThreadsData_(), and updateThread_().

◆ l_optimalNet_

template<typename GUM_SCALAR, class BNInferenceEngine>
std::vector< VarMod2BNsMap< GUM_SCALAR >* > gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::l_optimalNet_
protected

Optimal IBayesNet, one per thread.

Definition at line 138 of file multipleInferenceEngine.h.

Referenced by eraseAllEvidence(), initThreadsData_(), optFusion_(), and updateThread_().

◆ last_epsilon_

double gum::ApproximationScheme::last_epsilon_
protectedinherited

Last epsilon value.

Definition at line 381 of file approximationScheme.h.

◆ marginalMax_

◆ marginalMin_

◆ marginalSets_

template<typename GUM_SCALAR>
credalSet gum::credal::InferenceEngine< GUM_SCALAR >::marginalSets_
protectedinherited

◆ max_iter_

Size gum::ApproximationScheme::max_iter_
protectedinherited

The maximum number of iterations.

Definition at line 417 of file approximationScheme.h.

Referenced by ApproximationScheme(), maxIter(), and setMaxIter().

◆ max_time_

double gum::ApproximationScheme::max_time_
protectedinherited

The timeout.

Definition at line 411 of file approximationScheme.h.

Referenced by ApproximationScheme(), maxTime(), and setMaxTime().

◆ min_rate_eps_

double gum::ApproximationScheme::min_rate_eps_
protectedinherited

Threshold for the epsilon rate.

Definition at line 405 of file approximationScheme.h.

Referenced by ApproximationScheme(), minEpsilonRate(), and setMinEpsilonRate().

◆ modal_

◆ oldMarginalMax_

◆ oldMarginalMin_

◆ onProgress

◆ onStop

Signaler1< const std::string& > gum::IApproximationSchemeConfiguration::onStop
inherited

Signal emitted with the approximation scheme's stopping-criterion message.

Definition at line 83 of file IApproximationSchemeConfiguration.h.

Referenced by gum::learning::IBNLearner::distributeStop().

◆ period_size_

Size gum::ApproximationScheme::period_size_
protectedinherited

Frequency (in iterations) at which the stopping criteria are checked.

Definition at line 426 of file approximationScheme.h.

Referenced by ApproximationScheme(), periodSize(), and setPeriodSize().

◆ query_

template<typename GUM_SCALAR>
query gum::credal::InferenceEngine< GUM_SCALAR >::query_
protectedinherited

Holds the states of the query nodes.

Definition at line 119 of file inferenceEngine.h.

Referenced by eraseAllEvidence(), insertQuery(), and toString().

◆ repetitiveInd_

template<typename GUM_SCALAR>
bool gum::credal::InferenceEngine< GUM_SCALAR >::repetitiveInd_
protectedinherited

◆ storeBNOpt_

◆ storeVertices_

◆ t0_

template<typename GUM_SCALAR>
cluster gum::credal::InferenceEngine< GUM_SCALAR >::t0_
protectedinherited

Clusters of nodes used with dynamic networks.

Any node key in t0_ is present at \( t=0 \), and every node in the node set of this key shares the same CPT as the key. Used for sampling with repetitive independence.

Definition at line 127 of file inferenceEngine.h.

Referenced by gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::_mcThreadDataCopy_(), getT0Cluster(), and repetitiveInit_().

◆ t1_

template<typename GUM_SCALAR>
cluster gum::credal::InferenceEngine< GUM_SCALAR >::t1_
protectedinherited

Clusters of nodes used with dynamic networks.

Any node key in t1_ is present at \( t=1 \), and every node in the node set of this key shares the same CPT as the key. Used for sampling with repetitive independence.

Definition at line 134 of file inferenceEngine.h.

Referenced by gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::_mcThreadDataCopy_(), getT1Cluster(), and repetitiveInit_().

◆ threadMinimalNbOps_

template<typename GUM_SCALAR>
Size gum::credal::InferenceEngine< GUM_SCALAR >::threadMinimalNbOps_ {Size(20)}
protectedinherited

◆ threadRanges_

template<typename GUM_SCALAR>
std::vector< std::pair< NodeId, Idx > > gum::credal::InferenceEngine< GUM_SCALAR >::threadRanges_
protectedinherited

The ranges of elements of marginalMin_ and marginalMax_ processed by each thread.

These ranges are stored in a vector of pairs (NodeId, Idx). For thread number i, the pair at index i is the beginning of the range that the thread processes: the part of the marginal distribution vector of node NodeId starting at index Idx. The pair at index i+1 is the (exclusive) end of this range.

Warning
the size of threadRanges_ is the number of threads + 1.

Definition at line 170 of file inferenceEngine.h.

Referenced by computeEpsilon_(), gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::computeEpsilon_(), displatchMarginalsToThreads_(), and gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::updateMarginals_().

◆ timer_

◆ timeSteps_

template<typename GUM_SCALAR>
int gum::credal::InferenceEngine< GUM_SCALAR >::timeSteps_
protectedinherited

The number of time steps of this network (only useful for dynamic networks).

Deprecated

Definition at line 177 of file inferenceEngine.h.

Referenced by repetitiveInit_().

◆ verbosity_

bool gum::ApproximationScheme::verbosity_
protectedinherited

If true, verbosity is enabled.

Definition at line 429 of file approximationScheme.h.

Referenced by ApproximationScheme(), and verbosity().

◆ workingSet_

template<typename GUM_SCALAR, class BNInferenceEngine>
std::vector< _bnet_* > gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::workingSet_
protected

◆ workingSetE_

template<typename GUM_SCALAR, class BNInferenceEngine>
std::vector< List< const Tensor< GUM_SCALAR >* >* > gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::workingSetE_
protected

Per-thread evidence.

Definition at line 133 of file multipleInferenceEngine.h.

Referenced by eraseAllEvidence(), and initThreadsData_().


The documentation for this class was generated from the following files: