In the era of big data, deep learning has become an increasingly popular
topic, with outstanding achievements in image recognition, object
detection, and natural language processing, among other fields. Deep
learning depends on extracting valuable information from large amounts of
data, which inevitably raises privacy issues that deserve attention.
Several privacy-preserving deep learning methods have been proposed, but
most of them suffer a non-negligible degradation in either efficiency or
accuracy. The negative database (NDB) is a type of data representation
that protects privacy by storing and operating on the complementary form
of the original data. In this paper, we propose a privacy-preserving deep
learning method named NegDL based on NDB.
Specifically, private data are first converted to NDB form, which serves
as the input to deep learning models, by a generation algorithm called the
QK-hidden algorithm; sketches of the NDB are then extracted for training
and inference. We show that the computational complexity of NegDL is the
same as that of the original deep learning model without privacy
protection.
Experimental results on the Breast Cancer, MNIST, and CIFAR-10 benchmark
datasets demonstrate that the accuracy of NegDL is comparable to that of
the original deep learning model in most cases, and that it outperforms a
method based on differential privacy.
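To make the NDB idea concrete, here is a minimal toy sketch in Python. It uses the classic "prefix" generation scheme for a single binary string, not the paper's QK-hidden algorithm (which is more elaborate and tunable); the function names are illustrative only. An NDB stores entries over the alphabet {0, 1, *} that together match every string *except* the hidden one:

```python
from itertools import product

def prefix_ndb(s):
    """Toy NDB generation (prefix scheme, NOT the QK-hidden algorithm):
    return entries over {'0','1','*'} whose union matches every binary
    string of len(s) except s itself, so s is stored only implicitly."""
    entries = []
    for i, bit in enumerate(s):
        flipped = '1' if bit == '0' else '0'
        # keep the first i bits of s, flip bit i, wildcard the rest
        entries.append(s[:i] + flipped + '*' * (len(s) - i - 1))
    return entries

def matches(entry, x):
    """True if x matches the entry, with '*' as a wildcard."""
    return all(e in ('*', c) for e, c in zip(entry, x))

def covered(ndb, x):
    """A string is in the NDB's 'negative image' iff some entry matches it;
    the hidden string is the unique one no entry matches."""
    return any(matches(e, x) for e in ndb)

ndb = prefix_ndb("101")
# ndb == ['0**', '11*', '100']: every 3-bit string except "101"
# matches at least one entry, so "101" is hidden.
hidden = [''.join(b) for b in product('01', repeat=3)
          if not covered(ndb, ''.join(b))]
```

Recovering the hidden string from a hard-to-reverse NDB is NP-hard in general, which is the basis of the privacy guarantee; NegDL builds on this by feeding sketches of such representations to the network instead of the raw data.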