Much of the literature on differential privacy focuses on item-level privacy, where, loosely speaking, the goal is to provide privacy for each item or training example. However, many recent practical applications, such as federated learning, require preserving privacy for all items of a single user, which is much harder to achieve. Understanding the theoretical limits of user-level privacy therefore becomes crucial.

We study the fundamental problem of learning discrete distributions over $k$
symbols with user-level differential privacy. If each user has $m$ samples, we
show that straightforward applications of the Laplace or Gaussian mechanisms
require the number of users to be $\mathcal{O}(k/(m\alpha^2) +
k/(\epsilon\alpha))$ to achieve an $\ell_1$ distance of $\alpha$ between the true
and estimated distributions, with the privacy-induced penalty
$k/(\epsilon\alpha)$ independent of the number of samples per user $m$. Moreover,
we show that any mechanism that only operates on the final aggregate counts
requires a user complexity of the same order. We then propose a mechanism
such that the number of users scales as $\tilde{\mathcal{O}}(k/(m\alpha^2) +
k/(\sqrt{m}\epsilon\alpha))$, and hence the privacy penalty is
$\tilde{\Theta}(\sqrt{m})$ times smaller than that of the standard mechanisms in
certain settings of interest. We further show that the proposed mechanism is
nearly optimal under certain regimes.
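
To make the baseline concrete, the following is a minimal Python sketch of the straightforward aggregate-count Laplace mechanism referred to above; the $2m/\epsilon$ noise scale and the clip-and-renormalize projection are our illustrative choices here, not details taken from the paper.

```python
import numpy as np

def naive_user_level_laplace(user_samples, k, epsilon, rng=None):
    """Baseline aggregate-count Laplace mechanism (illustrative sketch only).

    user_samples: list of n arrays, each holding m integer symbols in {0, ..., k-1}.
    Replacing one user's data changes the aggregate count vector by at most 2m
    in L1 norm, so per-coordinate Laplace noise of scale 2m/epsilon gives
    user-level epsilon-DP.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = len(user_samples)
    m = len(user_samples[0])
    counts = np.zeros(k)
    for samples in user_samples:
        counts += np.bincount(np.asarray(samples, dtype=int), minlength=k)
    noisy = counts + rng.laplace(scale=2.0 * m / epsilon, size=k)
    # Normalize by the total sample count, then project back to the simplex
    # by clipping negatives and renormalizing.
    estimate = np.clip(noisy / (n * m), 0.0, None)
    total = estimate.sum()
    return estimate / total if total > 0 else np.full(k, 1.0 / k)
```

In this baseline, the expected $\ell_1$ error contributed by the noise is on the order of $k \cdot (m/\epsilon)/(nm) = k/(n\epsilon)$, so driving it below $\alpha$ already forces $n \gtrsim k/(\epsilon\alpha)$ users regardless of how large $m$ is; this is the penalty term that the proposed mechanism reduces by roughly a $\sqrt{m}$ factor.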

We also propose general techniques for obtaining lower bounds on restricted
differentially private estimators and a lower bound on the total variation
between binomial distributions, both of which might be of independent interest.
