We investigate the theoretical foundations of classifier-free guidance (CFG). CFG is the dominant method of conditional sampling for text-to-image diffusion models, but unlike other aspects of diffusion, it remains on shaky theoretical footing. In this paper, we disprove common misconceptions by showing that CFG interacts differently with DDPM and DDIM, and that neither sampler with CFG generates samples from the gamma-boosted distribution p(x|c)^γ p(x)^{1−γ}. We then clarify the behavior of CFG by showing that it is a kind of Predictor-Corrector (PC) method that alternates between denoising and sharpening, which we call Predictor-Corrector Guidance (PCG). We show that in the SDE limit, DDPM-CFG is equivalent to PCG with a DDIM predictor applied to the conditional distribution and a Langevin dynamics corrector applied to a gamma-boosted distribution (with a carefully chosen gamma). Whereas the standard PC corrector targets the conditional distribution and improves sampling accuracy, the corrector in PCG instead sharpens the distribution.
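To make the PCG structure concrete, below is a minimal sketch in NumPy. It assumes a variance-exploding parameterization (sigma_t = t) and substitutes toy closed-form Gaussian scores for learned networks; the names score_cond, score_uncond, and pcg_sample are illustrative rather than from the paper, and the step sizes are not the carefully chosen ones required for the exact DDPM-CFG equivalence. The predictor is a deterministic DDIM-style step on the conditional score, and the corrector is a Langevin dynamics step on the gamma-boosted score gamma * score_cond + (1 − gamma) * score_uncond.

```python
import numpy as np

def score_cond(x, t):
    # Toy stand-in for a learned conditional score: score of N(1, t^2).
    return (1.0 - x) / t**2

def score_uncond(x, t):
    # Toy stand-in for a learned unconditional score: score of N(0, t^2).
    return -x / t**2

def pcg_sample(x, ts, gamma, n_corrector=1, snr=0.1, rng=None):
    """Alternate a deterministic DDIM-style predictor step on the conditional
    score with Langevin corrector steps on the gamma-boosted score."""
    rng = np.random.default_rng() if rng is None else rng
    for t_cur, t_next in zip(ts[:-1], ts[1:]):
        # Predictor: Euler step of the probability-flow ODE (variance-exploding
        # parameterization, sigma_t = t), i.e. a DDIM-style step on p(x|c).
        x = x + (t_next - t_cur) * (-t_cur) * score_cond(x, t_cur)
        # Corrector: Langevin dynamics targeting the gamma-boosted score.
        step = snr * t_next**2  # step size scaled to the current noise level
        for _ in range(n_corrector):
            s_gamma = (gamma * score_cond(x, t_next)
                       + (1.0 - gamma) * score_uncond(x, t_next))
            x = x + step * s_gamma + np.sqrt(2.0 * step) * rng.standard_normal(x.shape)
    return x

# Usage: anneal from t = 1 down to t = 0.01 with guidance scale gamma = 3.
rng = np.random.default_rng(0)
x_init = rng.standard_normal(1000)
samples = pcg_sample(x_init, ts=np.linspace(1.0, 0.01, 100), gamma=3.0, rng=rng)
```

In this toy setting the predictor pulls samples toward the conditional mean while the corrector pulls them toward the sharper gamma-boosted mean, mirroring the denoise/sharpen alternation described above.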