In vertical federated learning, two-party split learning has become an
important topic and has found many applications in real business scenarios.
However, how to prevent the leakage of the participants' ground-truth labels
has not been well studied. In this paper, we consider this question in an
imbalanced binary classification setting, a common case in online business
applications. We first show that the norm attack, a simple method that uses
the norm of the gradients communicated between the parties, can largely
reveal the participants' ground-truth labels. We then discuss several
protection techniques to mitigate this issue. Among them, we design a
principled approach that directly maximizes the worst-case error of label
detection, which we prove to be more effective in countering the norm attack
and beyond. We experimentally demonstrate the competitiveness of our proposed
method against several other baselines.
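As the abstract notes, the norm attack needs nothing beyond the gradients the label party already sends back during split training: under heavy class imbalance, the cut-layer gradient of a rare positive example tends to have a much larger norm than that of a negative one. The following is a minimal sketch of that idea, not the authors' code; the simulated gradient distributions and the quantile cutoff are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate per-example cut-layer gradients for an imbalanced batch:
# 990 negatives with small-norm gradients, 10 positives with large-norm
# gradients (an assumed setup mimicking the imbalance the paper studies).
grads = np.vstack([rng.normal(0.0, 0.1, size=(990, 16)),   # negatives
                   rng.normal(0.0, 1.0, size=(10, 16))])   # positives
labels = np.concatenate([np.zeros(990), np.ones(10)])

# The non-label party sees only `grads`. It scores each example by its
# gradient norm and flags the largest-norm examples as positive, here
# guessing a positive rate of ~1% to set the cutoff.
norms = np.linalg.norm(grads, axis=1)
pred = (norms > np.quantile(norms, 0.99)).astype(int)

# Fraction of flagged examples that are truly positive.
print(f"attack precision from gradient norms alone: {labels[pred == 1].mean():.2f}")
```

Even this crude threshold separates the two classes almost perfectly in the simulated setting, which illustrates why the norm of the communicated gradients alone can leak the labels.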

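The abstract does not spell out the protection mechanism, and the sketch below is not the paper's principled approach: it shows only the simpler baseline of perturbing the communicated gradients with zero-mean Gaussian noise, with a hypothetical scale parameter `sigma`. The paper's method goes further by shaping the perturbation to directly maximize the attacker's worst-case label-detection error.

```python
import numpy as np

rng = np.random.default_rng(1)

# Same assumed imbalanced batch as in the previous sketch.
grads = np.vstack([rng.normal(0.0, 0.1, size=(990, 16)),
                   rng.normal(0.0, 1.0, size=(10, 16))])
labels = np.concatenate([np.zeros(990), np.ones(10)])

def perturb(g: np.ndarray, sigma: float) -> np.ndarray:
    """Add isotropic zero-mean Gaussian noise to per-example gradients
    before communicating them. `sigma` is a hypothetical knob trading
    leakage against model utility; it is not how the paper sets the noise."""
    return g + rng.normal(0.0, sigma, size=g.shape)

# Re-run the norm attack from the previous sketch on the noised gradients.
noisy_norms = np.linalg.norm(perturb(grads, sigma=1.0), axis=1)
pred = (noisy_norms > np.quantile(noisy_norms, 0.99)).astype(int)

print(f"attack precision after noising: {labels[pred == 1].mean():.2f}")
```

Isotropic noise blunts the attack, but it degrades every update equally; that utility trade-off is what motivates optimizing the perturbation structure rather than fixing it by hand.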