We study private stochastic convex optimization (SCO) under user-level differential privacy (DP) constraints. In this setting, there are $n$ users, each holding $m$ data items, and we must protect the privacy of each user's entire collection of data items. Existing algorithms for user-level DP SCO are impractical in many large-scale machine learning scenarios because (i) they make restrictive assumptions on the smoothness parameter of the loss function and require the number of users to grow polynomially with the dimension of the parameter space; or (ii) they are prohibitively slow, requiring at least $(mn)^{3/2}$ gradient computations for smooth losses and $(mn)^3$ computations for non-smooth losses. To address these limitations, we provide novel user-level DP algorithms with state-of-the-art runtime and excess risk guarantees, without stringent assumptions. First, we develop a linear-time algorithm with state-of-the-art excess risk (among non-trivial linear-time algorithms) under a mild smoothness assumption. Our second algorithm applies to arbitrary smooth losses and achieves optimal excess risk in $\approx (mn)^{9/8}$ gradient computations. Third, for non-smooth loss functions, we obtain optimal excess risk in $n^{11/8} m^{5/4}$ gradient computations. Moreover, our algorithms do not require the number of users to grow polynomially with the dimension.
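To give a rough sense of how these gradient-computation bounds compare, the following sketch (not from the paper) simply evaluates each count for hypothetical dataset sizes, e.g. $n = 10^4$ users with $m = 10^2$ items each; the chosen values of $n$ and $m$ are illustrative assumptions only.

```python
# Illustrative comparison of the gradient-computation counts quoted in the abstract.
# n (users) and m (items per user) are hypothetical example values, not figures
# taken from the paper.
n, m = 10_000, 100  # assumed: 10^4 users, 10^2 data items per user

counts = {
    "prior work, smooth losses     (mn)^{3/2}": (m * n) ** 1.5,
    "prior work, non-smooth losses (mn)^3":     (m * n) ** 3,
    "linear-time algorithm         mn":         m * n,
    "smooth losses                 (mn)^{9/8}": (m * n) ** (9 / 8),
    "non-smooth losses       n^{11/8} m^{5/4}": n ** (11 / 8) * m ** (5 / 4),
}

for name, c in counts.items():
    print(f"{name:45s} ~ {c:.3g} gradient computations")
```

Under these example sizes, the $(mn)^3$ count of prior non-smooth algorithms exceeds the $n^{11/8} m^{5/4}$ count by many orders of magnitude, which is the gap the runtime improvements in the abstract refer to.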