Inference by Enumeration Calculator
Model a hidden hypothesis with up to three observations. Compare priors, evidence likelihoods, and posterior shifts, then export results, inspect the underlying formulas, and validate assumptions step by step.
Calculator
Example Data Table
| Prior P(H) | E1 State | P(E1=true|H) | P(E1=true|¬H) | E2 State | P(E2=true|H) | P(E2=true|¬H) | E3 State | P(E3=true|H) | P(E3=true|¬H) | P(H|e) |
|---|---|---|---|---|---|---|---|---|---|---|
| 0.35 | true | 0.82 | 0.18 | false | 0.30 | 0.72 | true | 0.74 | 0.22 | 0.9538 |
Formula Used
Inference by enumeration computes exact posteriors by summing over every allowed hidden state. In this binary setup, the hidden variable has two states: H and ¬H.
Likelihood for H: P(e|H) = ∏ P(Ei = observed state | H)
Likelihood for ¬H: P(e|¬H) = ∏ P(Ei = observed state | ¬H)
Unnormalized scores: score(H) = P(H) × P(e|H) and score(¬H) = (1 − P(H)) × P(e|¬H)
Normalization constant: P(e) = score(H) + score(¬H)
Posterior: P(H|e) = score(H) / P(e)
Bayes factor: BF = P(e|H) / P(e|¬H)
If an evidence variable is observed false, the calculator uses 1 − P(Ei=true|state). If an evidence variable is ignored, its factor becomes 1 and it does not affect the final posterior.
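The formulas above can be sketched as a short Python function. This is a minimal illustration of the same enumeration logic, not the calculator's actual implementation; the function and parameter names are chosen for clarity here.

```python
def posterior_by_enumeration(prior, evidence):
    """Exact posterior P(H|e) by enumerating both hidden states.

    `evidence` is a list of (state, p_true_given_h, p_true_given_not_h)
    tuples, where state is True, False, or None (ignored).
    """
    def factor(state, p_true):
        if state is None:            # ignored evidence contributes a factor of 1
            return 1.0
        return p_true if state else 1.0 - p_true  # complement when observed false

    like_h = 1.0
    like_not_h = 1.0
    for state, p_h, p_not_h in evidence:
        like_h *= factor(state, p_h)          # P(e|H) as a product of factors
        like_not_h *= factor(state, p_not_h)  # P(e|¬H) likewise

    score_h = prior * like_h                  # unnormalized score(H)
    score_not_h = (1.0 - prior) * like_not_h  # unnormalized score(¬H)
    return score_h / (score_h + score_not_h)  # normalize: P(H|e)

# The example row from the table above:
p = posterior_by_enumeration(0.35, [
    (True,  0.82, 0.18),   # E1 observed true
    (False, 0.30, 0.72),   # E2 observed false, so complements 0.70 and 0.28 are used
    (True,  0.74, 0.22),   # E3 observed true
])
print(round(p, 4))  # 0.9538
```

Running this on the example row reproduces the posterior shown in the table, which is a quick way to verify a hand calculation.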
How to Use This Calculator
- Enter the prior probability for the hidden hypothesis.
- Set a decision threshold for classifying the posterior.
- Choose how many decimal places to display.
- For each evidence variable, mark it true, false, or ignored.
- Enter conditional probabilities under H and under ¬H.
- Press Calculate to view the posterior above the form.
- Review the enumeration table, graph, and diagnostic metrics.
- Use the CSV or PDF options to export results.
About Inference by Enumeration
Inference by enumeration is a classic exact reasoning method used in probabilistic AI systems. It evaluates every relevant hidden state, computes each state’s contribution to the evidence, and then normalizes those contributions to produce a posterior distribution. This calculator applies that idea to a binary hypothesis with up to three evidence variables. That keeps the workflow simple while still showing the core mechanics of exact Bayesian inference.
This page is useful for model checking, classroom demonstrations, feature reliability studies, anomaly analysis, and lightweight decision support. Because each evidence term is entered separately, you can inspect how changing one conditional probability alters the posterior. The decision threshold also helps when you want a final rule for classification instead of only a probability estimate.
In AI and machine learning, enumeration is often introduced before faster approximate techniques. It gives a transparent baseline, helps verify hand calculations, and makes debugging easier when a larger probabilistic model produces unexpected results. The enumeration table on this page exposes the full path from prior beliefs to posterior beliefs, which is valuable when interpretability matters.
FAQs
1. What does this calculator estimate?
It estimates the posterior probability of a binary hypothesis after observing up to three evidence variables. The method is exact, not approximate, because it explicitly enumerates the hidden states and normalizes their contributions.
2. Why can an evidence item be marked false?
Some observations are informative because they did not occur. When you choose false, the calculator uses the complement probability: 1 minus the entered probability that the evidence is true under that hidden state.
3. What happens when I choose ignore?
Ignored evidence does not influence the result. Its multiplicative factor becomes 1 for both states, so the posterior is driven only by the remaining active evidence variables and the prior.
4. Is this the same as full Bayesian network inference?
It demonstrates the same exact normalization logic, but in a simplified binary setting. Full Bayesian networks can contain many nodes and more hidden variables, which increases the number of terms that must be enumerated.
5. What does the Bayes factor tell me?
The Bayes factor compares how strongly the observed evidence supports H versus ¬H. Values above 1 support H, values below 1 support ¬H, and larger departures from 1 indicate stronger evidence.
6. Why are prior and posterior odds included?
Odds show how belief shifts before and after seeing evidence. They are useful when you want to interpret updating strength, compare scenarios quickly, or connect the result to the Bayes factor directly.
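The odds form of Bayes' rule makes this connection explicit: posterior odds = prior odds × Bayes factor. A minimal sketch, using illustrative names and the example row from the table above:

```python
def bayes_update_via_odds(prior, like_h, like_not_h):
    """Update a prior through odds: posterior odds = prior odds × Bayes factor."""
    prior_odds = prior / (1.0 - prior)
    bayes_factor = like_h / like_not_h        # BF = P(e|H) / P(e|¬H)
    posterior_odds = prior_odds * bayes_factor
    return posterior_odds / (1.0 + posterior_odds)  # convert odds back to probability

# Example row: P(e|H) = 0.82 * 0.70 * 0.74, P(e|¬H) = 0.18 * 0.28 * 0.22
p = bayes_update_via_odds(0.35, 0.82 * 0.70 * 0.74, 0.18 * 0.28 * 0.22)
print(round(p, 4))  # 0.9538
```

This route yields the same posterior as direct enumeration, which is why the odds view is a convenient cross-check on updating strength.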
7. When would enumeration be preferred?
Enumeration is preferred when the model is small and transparency matters. It is especially useful for learning, verification, sanity checks, and auditing exact results before using larger approximate inference methods.
8. Can I use this for machine learning diagnostics?
Yes. It works well for checking probabilistic assumptions, comparing feature reliability, validating small Bayesian rules, and illustrating how evidence combinations affect the posterior in interpretable decision workflows.