*= Equal contribution
Online prediction from experts is a fundamental problem in machine learning, and several works have studied this problem under privacy constraints. We propose and analyze new algorithms for this problem that improve over the regret bounds of the best existing algorithms for non-adaptive adversaries. For approximate differential privacy, our algorithms achieve improved regret bounds for the stochastic setting and for oblivious adversaries, with bounds stated in terms of the number of experts. For pure DP, our algorithms are the first to obtain sublinear regret for oblivious adversaries in the high-dimensional regime. Furthermore, we prove new lower bounds for adaptive adversaries. Our results imply that, unlike in the non-private setting, there is a strong separation between the optimal regret for adaptive and non-adaptive adversaries for this problem. Our lower bounds also show a separation between pure and approximate differential privacy for adaptive adversaries, where the latter is necessary to achieve the non-private regret rate.