With the wide application of deep neural networks, it is important to verify a host's ownership of a deep neural network model and to protect that model. Various mechanisms have been designed to meet this goal. By embedding extra information into a network and revealing it afterward, watermarking has become a competitive candidate for proving ownership of deep learning systems. However,
existing watermarking schemes can hardly be adopted for emerging distributed learning paradigms, which raise extra requirements during ownership verification. A pioneering distributed learning paradigm is federated learning (FL), in which many parties participate in training a single model. Each author participating in FL should be able to verify its ownership independently. Moreover, this scenario introduces other potential threats and corresponding security requirements. To meet those requirements, this paper presents a watermarking protocol for protecting deep neural networks in the FL setting. By incorporating a state-of-the-art watermarking scheme and a cryptographic primitive designed for distributed storage, the protocol meets the need for ownership verification in the FL scenario without violating the privacy of any participant. This work paves the way for generalizing watermarking as a practical security mechanism for protecting deep learning models in distributed learning platforms.
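
The abstract does not name the concrete watermarking scheme or the cryptographic primitive, so the following is only a minimal sketch of how independent ownership verification could be organized. It assumes, purely for illustration, a backdoor-trigger watermark (each client keeps a secret trigger set) and a Merkle tree, a hash-based primitive commonly used to verify distributed storage; every identifier below (make_commitment, merkle_proof, the client ids) is hypothetical and not taken from the paper.

```python
import hashlib
import os
from dataclasses import dataclass


def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()


@dataclass
class WatermarkRecord:
    """A client's public commitment to its secret trigger set and verification key."""
    client_id: str
    commitment: bytes


def make_commitment(client_id: str, trigger_set: bytes, key: bytes) -> WatermarkRecord:
    # The trigger set and key never leave the client; only their hash is shared.
    return WatermarkRecord(client_id, sha256(trigger_set + key))


def merkle_root(leaves: list[bytes]) -> bytes:
    """Root of a binary Merkle tree over the leaves (last node duplicated on odd levels)."""
    level = [sha256(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]


def merkle_proof(leaves: list[bytes], index: int) -> list[tuple[bytes, bool]]:
    """Authentication path for leaves[index] as (sibling hash, sibling_is_right) pairs."""
    level = [sha256(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = index ^ 1
        proof.append((level[sibling], sibling > index))
        level = [sha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof


def verify_proof(leaf: bytes, proof: list[tuple[bytes, bool]], root: bytes) -> bool:
    node = sha256(leaf)
    for sibling, sibling_is_right in proof:
        node = sha256(node + sibling) if sibling_is_right else sha256(sibling + node)
    return node == root


if __name__ == "__main__":
    # Each client commits to a secret trigger set (random bytes stand in for
    # labelled out-of-distribution samples) and a verification key.
    clients = []
    for cid in ("alice", "bob", "carol"):
        trigger_set, key = os.urandom(32), os.urandom(16)
        clients.append((cid, trigger_set, key, make_commitment(cid, trigger_set, key)))

    leaves = [record.commitment for _, _, _, record in clients]
    root = merkle_root(leaves)  # published by the aggregator alongside the global model

    # Later, "bob" shows his commitment is bound to the published model without
    # revealing any other client's trigger set or key.
    idx = 1
    proof = merkle_proof(leaves, idx)
    assert verify_proof(leaves[idx], proof, root)
    # In a full protocol, bob would additionally show that the suspect model
    # labels his trigger set as expected (the black-box watermark check).
```

The point of the commitment tree in this sketch is that the aggregator publishes a single root, while each client later opens only its own leaf, so ownership verification stays independent per participant and no client's secret watermark material is exposed to the others.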
