In 1991, Brenier proved a theorem that generalizes the polar decomposition for square matrices, factored as PSD $\times$ unitary, to any vector field $F:\mathbb{R}^d \rightarrow \mathbb{R}^d$. The theorem, known as the polar factorization theorem, states that any field $F$ can be recovered as the composition of the gradient of a convex function $u$ with a measure-preserving map $M$, namely $F = \nabla u \circ M$. We propose a practical implementation of this powerful theoretical result and explore possible uses within machine learning. The theorem is closely related to optimal transport (OT) theory, and we borrow from recent advances in the field of neural optimal transport to parameterize the potential $u$ as an input convex neural network. The map $M$ can be evaluated pointwise using $u^*$, the convex conjugate of $u$, through the identity $M = \nabla u^* \circ F$, or learned as an auxiliary network. Because $M$ is, in general, non-injective, we consider the additional task of estimating the ill-posed inverse map that can approximate the pre-image measure $M^{-1}$ using a stochastic generator. We illustrate possible applications of Brenier's polar factorization to non-convex optimization problems, as well as sampling densities that are not log-concave.
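
As a concrete illustration of the identity $M = \nabla u^* \circ F$, the following is a minimal sketch, not the paper's implementation: it assumes a hypothetical toy convex potential `u` standing in for the input convex neural network and an arbitrary vector field `F`, and approximates $\nabla u^*(y) = \arg\max_x \langle x, y\rangle - u(x)$ with a few gradient-ascent steps (the objective is concave because $u$ is convex).

```python
import jax
import jax.numpy as jnp

def u(x):
    # Toy convex potential (strictly convex); in the paper this role is
    # played by a learned input convex neural network (ICNN).
    return 0.5 * jnp.sum(x ** 2) + jnp.logaddexp(x[0], x[1])

def F(x):
    # Hypothetical vector field R^2 -> R^2 to be factored as F = grad(u) o M.
    return jnp.array([x[0] + jnp.sin(x[1]), x[1] - jnp.cos(x[0])])

def grad_u_conjugate(y, n_steps=200, lr=0.1):
    # grad(u*)(y) = argmax_x <x, y> - u(x); gradient ascent converges
    # since the objective is concave in x.
    obj_grad = jax.grad(lambda x: jnp.dot(x, y) - u(x))
    x = jnp.zeros_like(y)
    for _ in range(n_steps):
        x = x + lr * obj_grad(x)
    return x

def M(x):
    # Pointwise evaluation of the measure-preserving factor: M = grad(u*) o F.
    return grad_u_conjugate(F(x))

x = jnp.array([0.3, -0.7])
# Sanity check: since grad(u)(grad(u*)(y)) = y for strictly convex u,
# grad(u)(M(x)) should recover F(x), i.e. F = grad(u) o M at this point.
print(jax.grad(u)(M(x)), F(x))
```

The final check exploits the fact that $\nabla u$ and $\nabla u^*$ are inverses of each other when $u$ is strictly convex, so recovering $F(x)$ from $\nabla u(M(x))$ confirms the factorization pointwise.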