We revisit the problem of designing scalable protocols for private statistics and private federated learning when each device holds its private data. Locally differentially private algorithms require little trust but are (provably) limited in utility. Centrally differentially private algorithms can enable significantly better utility but require a trusted curator. This gap has generated significant interest in the design and implementation of simple cryptographic primitives that can enable central-like utility guarantees without requiring trust in a central server.
Our first contribution is a new primitive that enables efficient implementation of several commonly used algorithms and supports privacy accounting close to that of the central setting, without requiring the strong trust assumptions the central setting entails. The shuffling and aggregation primitives proposed in prior work achieve this for some algorithms but have significant limitations as building blocks. We propose a Samplable Anonymous Aggregation primitive, which computes an aggregate over a random subset of the inputs, and show that it leads to improved privacy-utility trade-offs for several fundamental tasks. Our second contribution is a system architecture that implements this primitive, together with a security analysis of the proposed system. Our design combines additive secret sharing with anonymization and authentication infrastructures.
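To make the primitive's interface concrete, the following is a minimal sketch of samplable anonymous aggregation built from additive secret sharing. It is illustrative only: the modulus, the sampling rate, the two-aggregator split, and all function names are assumptions for exposition, not the protocol as specified in this paper.

```python
# Illustrative sketch of a samplable anonymous aggregation primitive.
# Assumptions (not from the paper): two non-colluding aggregators,
# arithmetic modulo a fixed prime, and per-device Bernoulli sampling.
import secrets
import random

MODULUS = 2**61 - 1  # hypothetical prime modulus for share arithmetic


def share(value: int) -> tuple[int, int]:
    """Split an integer into two additive shares modulo MODULUS.

    Each share on its own is uniformly random and reveals nothing
    about the underlying value.
    """
    r = secrets.randbelow(MODULUS)
    return r, (value - r) % MODULUS


def samplable_anonymous_aggregate(inputs: list[int], sample_rate: float) -> int:
    """Compute an aggregate over a random subset of the inputs.

    Each device is included independently with probability sample_rate.
    Sampled devices send one share anonymously to each aggregator, so
    only the sum over the sampled subset is ever reconstructed.
    """
    shares_a: list[int] = []
    shares_b: list[int] = []
    for x in inputs:
        if random.random() < sample_rate:  # device self-samples
            a, b = share(x)
            shares_a.append(a)  # sent anonymously to aggregator A
            shares_b.append(b)  # sent anonymously to aggregator B
    # Each aggregator sums the shares it holds; neither sees any input.
    sum_a = sum(shares_a) % MODULUS
    sum_b = sum(shares_b) % MODULUS
    # Combining the two partial sums reveals only the sampled aggregate.
    return (sum_a + sum_b) % MODULUS


if __name__ == "__main__":
    device_values = [3, 1, 4, 1, 5, 9, 2, 6]
    print(samplable_anonymous_aggregate(device_values, sample_rate=0.5))
```

In this sketch, sampling happens on-device and the anonymization layer (elided here) hides which devices contributed, so the output is an aggregate over a random, unidentifiable subset, matching the functionality the primitive is meant to provide.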