In actuarial science, every model carries two fundamental limits: the ontological limit and the epistemic limit.
The first concerns what the world is; the second concerns what we can know about the world. The distinction seems subtle, but ignoring it produces incoherent models, weak credibility, and — more dangerously — an artificial sense of robustness in the indicators used for decision-making.
An actuary must rigorously distinguish between the two. Otherwise, they risk explaining noise as structure or treating uncertainty as if it were merely a lack of data.
Ontological Constraint: when reality itself is misrepresented
The ontological limit refers to the fact that the world has its own structure, independent of our datasets. Examples:
- severity distributions have genuinely heavy tails; capping losses and then recalibrating as if additivity still held does not make those tails disappear;
- portfolio behaviour is driven by causal mechanisms, not just temporary correlations.
A model that ignores ontological limits produces elegant fictions. A common pattern: the normal distribution is chosen because it is mathematically convenient, but the world is not normal, and the simplification trims reality rather than explaining it.
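As a minimal illustration (a Python sketch with simulated lognormal severities, not any particular portfolio), fitting a normal distribution by moment matching to heavy-tailed losses understates the tail quantile that capital calculations depend on:

```python
import random
import statistics

random.seed(42)

# Simulated heavy-tailed claim severities (lognormal, sigma = 1.5).
losses = [random.lognormvariate(0.0, 1.5) for _ in range(100_000)]

# The "elegant fiction": fit a normal by matching mean and variance.
mu = statistics.fmean(losses)
sigma = statistics.stdev(losses)

# Compare 99.5th percentiles: empirical vs. the normal approximation.
empirical_995 = sorted(losses)[int(0.995 * len(losses))]
normal_995 = mu + 2.5758 * sigma  # 2.5758 = standard normal 99.5% quantile

print(f"empirical 99.5th percentile:  {empirical_995:.1f}")
print(f"normal-fit 99.5th percentile: {normal_995:.1f}")
# The normal fit reports a substantially smaller tail than the data show.
```

The normal fit matches the first two moments exactly, yet its 99.5th percentile falls far short of the empirical one: the misfit is in the shape of reality, not in the quality of the estimation.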
Epistemic Uncertainty: what can be known, not what exists
The epistemic limit concerns the available information — data quality, series length, granularity, reporting errors. This is where confusion often arises: the model is penalized for what cannot be known even in principle. A classic example:
Pricing motor liability insurance (RCA) by car colour, or setting severity by PF/PJ status (natural versus legal person), has no actuarial basis: allowing such practices would immediately strip pricing of any actuarial meaning, leaving only bureaucratic convention.
Consequences for internal models and risk assessment
Confusing the two limits systematically produces:
- Overconfident calibrations. Parameters are estimated as if the process were epistemically stable, even though it is ontologically unstable.
- Risk indicators that mislead. For instance, a Value-at-Risk that appears calm simply because the available data never exhibited the genuine jumps and tail behaviour of the underlying process.
- Artificial institutional confidence. Organizational culture conflates "we don't know" with "it does not exist", leading to fragile decisions.
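The Value-at-Risk point can be sketched concretely. Assume, purely for illustration, a P&L process that is Gaussian noise plus a rare large jump; if the observed history happens to contain no jump, a VaR calibrated to it looks calm:

```python
import random

random.seed(1)

# Hypothetical true process: Gaussian daily noise plus a 1% chance of a
# -30% jump. Under this process, 1% of the probability mass sits at
# -0.30, so the true 99.5% VaR is 0.30.
# The observed history, however, happens to contain no jump at all:
observed = [random.gauss(0.0, 0.01) for _ in range(750)]  # ~3 years

def var_995(sample):
    """Empirical 99.5% Value-at-Risk, quoted as a positive loss."""
    return -sorted(sample)[int(0.005 * len(sample))]

print(f"VaR 99.5% calibrated to observed data: {var_995(observed):.3f}")
print("VaR 99.5% of the true process:         0.300")
# The calibrated figure is roughly an order of magnitude too small: not
# because the model misreads the data, but because the data cannot
# reveal what the process can do.
```

This is the epistemic limit at work: no amount of careful estimation on the jump-free sample recovers the jump, because the information is simply not there.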
This is the structural weakness of many Solvency II models: reality is forced into a standardized frame even in areas where the phenomenon cannot legitimately be reduced to a standard parametric structure.
Ironically, the regime itself presents the standard formula as a temporary solution, acceptable only until an adequate internal model is developed — yet practice has inverted the roles, installing false security precisely where, epistemically, knowledge is insufficient.
Conclusion
Actuarial work is not just about formulas and calibrations, but about understanding the boundaries of knowledge. A model cannot exceed the world, and it cannot exceed the data — yet it can remain coherent as long as we do not mix these two limits.
The ontological limit tells us how far reality goes. The epistemic limit tells us how far scientific knowledge can reach.
Confusing the two produces some of the most refined errors in the profession — errors visible only to the actuary who is willing to expose a model’s uncomfortable weaknesses.
This mindset defines the core of Actuarial Mathematics, especially through credibility theory.
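Credibility theory makes the separation operational: it weights a contract's own experience against the collective mean according to how much the data can be trusted. A minimal Bühlmann sketch, with illustrative numbers rather than a calibrated model:

```python
def buhlmann_premium(own_mean, n, collective_mean, k):
    """Bühlmann credibility premium.

    k = s^2 / tau^2: expected process variance divided by the variance
    of the hypothetical means. A larger k means own experience is noisy
    relative to true heterogeneity, so it earns credibility more slowly.
    """
    z = n / (n + k)  # credibility factor, in [0, 1)
    return z * own_mean + (1 - z) * collective_mean

# Five years of experience against k = 20: Z = 5 / 25 = 0.2, so the
# collective mean still carries 80% of the weight.
print(buhlmann_premium(own_mean=1200.0, n=5, collective_mean=1000.0, k=20.0))
# -> 1040.0
```

The credibility factor Z is precisely an epistemic quantity: it says how much the data permit us to know about this contract, regardless of what the contract's true risk actually is.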
You don’t need to master every detail — you just need the right actuary to build the right model.