We study differentially private stochastic convex optimization (DP-SCO) under user-level privacy, where each user may hold multiple data items. Existing approaches for user-level DP-SCO either have excessive (super-polynomial) runtime or require the number of users to grow polynomially with the dimension of the problem. We develop new algorithms for user-level DP-SCO that obtain optimal rates, run in polynomial time, and require a number of users that grows only logarithmically in the dimension. Moreover, our algorithms are the first to obtain optimal rates for non-smooth functions in polynomial time. These algorithms are based on multiple-pass DP-SGD, combined with a novel private mean estimation procedure for concentrated data, which applies an outlier-removal step before estimating the mean of the gradients.
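The high-level algorithmic template described above can be illustrated with a hedged sketch: a multiple-pass (D)P-SGD loop in which, at each step, per-user average gradients are aggregated by a mean estimator that first discards outliers far from a robust center and then adds Gaussian noise. All function names, the median-based outlier rule, and the constants below are illustrative assumptions, not the paper's actual procedure or its privacy calibration.

```python
import numpy as np

def private_mean_with_outlier_removal(grads, clip_radius, noise_std, rng):
    """Illustrative sketch: drop gradients far from the coordinatewise median,
    then release a Gaussian-noised mean of the survivors. The actual paper's
    estimator and its privacy accounting are more involved."""
    center = np.median(grads, axis=0)
    dists = np.linalg.norm(grads - center, axis=1)
    kept = grads[dists <= clip_radius]
    if len(kept) == 0:
        kept = grads  # fallback if the outlier rule removes everything
    mean = kept.mean(axis=0)
    return mean + rng.normal(0.0, noise_std, size=mean.shape)

def multipass_dp_sgd(grad_fn, w0, data_per_user, steps, lr,
                     clip_radius, noise_std, seed=0):
    """Multiple-pass SGD skeleton: each step forms one gradient per user
    (averaged over that user's items) and aggregates them privately."""
    rng = np.random.default_rng(seed)
    w = np.array(w0, dtype=float)
    for _ in range(steps):
        user_grads = np.stack([
            np.mean([grad_fn(w, x) for x in items], axis=0)
            for items in data_per_user
        ])
        g = private_mean_with_outlier_removal(user_grads, clip_radius,
                                              noise_std, rng)
        w -= lr * g
    return w
```

As a toy usage, minimizing the average of per-item losses (w - x)^2 / 2 (gradient w - x) over users whose items cluster near 1.0 drives the iterate toward 1.0 despite the added noise; because each user contributes a single averaged gradient per step, the outlier-removal stage limits how much any one anomalous user can shift the update.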