Calibration is a well-studied property of predictors that ensures meaningful uncertainty estimates. Multicalibration is a related notion, originating in algorithmic fairness, which requires that predictors be calibrated simultaneously over a potentially complex and overlapping collection of protected subpopulations (such as groups defined by ethnicity, race, or income). We conducted the first comprehensive study evaluating the utility of multicalibration post-processing on a large set of tabular, image, and language datasets, for models ranging from simple decision trees to fine-tuned LLMs with 90 million parameters. Our findings can be summarized as follows: (1) models that are calibrated out of the box tend to be relatively multicalibrated without any additional post-processing; (2) multicalibration post-processing can help inherently uncalibrated models; and (3) traditional calibration measures can sometimes provide multicalibration implicitly. More generally, we also distill many independent observations that may be useful for practical and effective applications of multicalibration post-processing in real-world contexts.
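To make the distinction between calibration and multicalibration concrete, the following sketch (a toy illustration, not code from the study; the binned ECE metric and synthetic data are our own assumptions) constructs a predictor whose constant output is calibrated on average over the whole population but badly miscalibrated on each of two subgroups:

```python
import numpy as np

def ece(probs, labels, n_bins=10):
    """Binned expected calibration error: bin-mass-weighted
    |empirical accuracy - mean confidence| per confidence bin."""
    bins = np.clip((probs * n_bins).astype(int), 0, n_bins - 1)
    err = 0.0
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            err += mask.mean() * abs(labels[mask].mean() - probs[mask].mean())
    return err

# Toy data: two equally sized subgroups with base rates 0.7 and 0.3.
rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)          # subgroup indicator
probs = np.full(n, 0.5)                # constant prediction of 0.5
labels = rng.binomial(1, np.where(group == 0, 0.7, 0.3))

# Calibrated overall (base rates average to ~0.5) ...
print(f"overall ECE: {ece(probs, labels):.3f}")
# ... but miscalibrated (~0.2) on each subgroup.
for g in (0, 1):
    m = group == g
    print(f"group {g} ECE: {ece(probs[m], labels[m]):.3f}")
```

A multicalibrated predictor must drive the per-group error down as well, not just the overall error, which is precisely what multicalibration post-processing targets.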