We study the differentially private stochastic convex optimization (DP-SCO) problem with heavy-tailed gradients, where we assume a $k^{\text{th}}$-moment bound on the Lipschitz constants of sample functions rather than a uniform bound. We propose a new reduction-based approach that enables us to obtain the first optimal rates (up to logarithmic factors) in the heavy-tailed setting, achieving error $G_2 \cdot \frac{1}{\sqrt{n}} + G_k \cdot \bigl(\frac{\sqrt{d}}{n\varepsilon}\bigr)^{1 - \frac{1}{k}}$ under $(\varepsilon, \delta)$-approximate differential privacy, up to a mild $\textup{polylog}(\frac{\log n}{\delta})$ factor, where $G_2^2$ and $G_k^k$ are the $2^{\text{nd}}$ and $k^{\text{th}}$ moment bounds on sample Lipschitz constants, nearly matching a lower bound of (Lowy et al. 2023).
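For concreteness, the moment assumption and the claimed guarantee can be written in display form; the notation $L(s)$ for the Lipschitz constant of the sample function indexed by $s$ is our own shorthand and is not fixed by the abstract:
\[
\mathbb{E}_s\bigl[L(s)^2\bigr] \le G_2^2, \qquad \mathbb{E}_s\bigl[L(s)^k\bigr] \le G_k^k, \qquad \text{excess risk} \;\lesssim\; G_2 \cdot \frac{1}{\sqrt{n}} \;+\; G_k \cdot \Bigl(\frac{\sqrt{d}}{n\varepsilon}\Bigr)^{1 - \frac{1}{k}},
\]
up to the logarithmic factors noted above.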