We give a principled method for decomposing the predictive uncertainty of a model into aleatoric and epistemic components, with explicit semantics relating them to the real-world data distribution. While many works in the literature have proposed such decompositions, they lack the kind of formal guarantees that we provide. Our method is based on the new notion of higher-order calibration, which generalizes ordinary calibration to the setting of higher-order predictors that predict mixtures over label distributions at every point. We show how to measure and achieve higher-order calibration using access to $k$-snapshots, namely examples where each point comes with $k$ independent conditional labels. Under higher-order calibration, the aleatoric uncertainty estimated at a point is guaranteed to match the real-world aleatoric uncertainty averaged over all points where that prediction is made. To our knowledge, this is the first formal guarantee of this kind that places no assumptions on the real-world data distribution. Importantly, higher-order calibration also applies to existing higher-order predictors, such as Bayesian and ensemble models, and provides a natural evaluation metric for such models. We demonstrate through experiments that our method produces meaningful uncertainty decompositions for image classification.
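As a minimal sketch of the stated guarantee, under notation introduced here for illustration (the symbols $\hat{P}$, $p^*$, and $H$ are ours, not fixed by the text): let $\hat{P}(x)$ denote the mixture over label distributions predicted at a point $x$, let $p^*(x)$ denote the true conditional label distribution at $x$, and measure aleatoric uncertainty by the entropy $H$. Higher-order calibration would then guarantee, for every predicted mixture $\mu$,
$$\mathbb{E}_{q \sim \mu}\bigl[H(q)\bigr] \;=\; \mathbb{E}_{x}\bigl[H\bigl(p^*(x)\bigr) \,\big|\, \hat{P}(x) = \mu\bigr],$$
i.e., the aleatoric uncertainty read off from the prediction coincides with the real-world aleatoric uncertainty averaged over all points that receive that prediction.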