Many adversarial attacks have been proposed to investigate the security
issues of deep neural networks. In the black-box setting, current model-stealing attacks train a substitute model to counterfeit the functionality of the target model. However, this training requires querying the target model, so the query complexity remains high and such attacks are easily defended against. This study aims to train a generalized substitute model called
“Simulator”, which can mimic the functionality of any unknown target model. To
this end, we construct the training data in the form of multiple tasks by collecting the query sequences generated during attacks on various existing networks. The learning process uses a mean squared error (MSE)-based knowledge-distillation loss within a meta-learning framework to minimize the difference
between the Simulator and the sampled networks. The meta-gradients of this loss
are then computed and accumulated from multiple tasks to update the Simulator
and thereby improve generalization. When attacking a target model unseen during training, the trained Simulator can accurately simulate the target's functionality using only its limited feedback. As a result, a large fraction of queries can be transferred to the Simulator, which reduces the query complexity.
Comprehensive experiments conducted on the CIFAR-10, CIFAR-100, and TinyImageNet datasets demonstrate that the proposed approach reduces the query complexity by several orders of magnitude compared with the
baseline method. The implementation source code is released at
https://github.com/machanic/SimulatorAttack.
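
Below is a minimal, hypothetical sketch of the meta-training loop described above, written in PyTorch. It is not the authors' released implementation (see the repository above); it assumes a first-order, Reptile-style approximation of the meta-gradient, and the names Simulator, make_task, and meta_train, as well as the toy random data standing in for collected query sequences and teacher outputs, are illustrative placeholders.

```python
# Hypothetical sketch of MSE-based knowledge-distillation meta-learning.
# Assumption: a Reptile-style first-order meta-gradient (the difference
# between the initial and task-adapted parameters) is accumulated over
# several sampled tasks and then applied to the Simulator.
import copy
import torch
import torch.nn as nn


class Simulator(nn.Module):
    """Toy substitute network; the real Simulator would be a deep CNN."""
    def __init__(self, in_dim=3 * 32 * 32, num_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, x):
        return self.net(x)


def make_task(num_queries=20, num_classes=10):
    """Stand-in for one task: a query sequence from an attack and the logits
    returned by one sampled pre-trained network (random tensors here)."""
    queries = torch.randn(num_queries, 3, 32, 32)
    teacher_logits = torch.randn(num_queries, num_classes)
    return queries, teacher_logits


def meta_train(simulator, meta_steps=100, tasks_per_step=4,
               inner_steps=5, inner_lr=1e-2, meta_lr=1e-1):
    mse = nn.MSELoss()
    for _ in range(meta_steps):
        meta_grad = [torch.zeros_like(p) for p in simulator.parameters()]
        for _ in range(tasks_per_step):
            queries, teacher_logits = make_task()
            # Inner loop: distill the sampled network into a task-specific copy
            # by minimizing the MSE between its outputs and the teacher's.
            learner = copy.deepcopy(simulator)
            opt = torch.optim.SGD(learner.parameters(), lr=inner_lr)
            for _ in range(inner_steps):
                loss = mse(learner(queries), teacher_logits)
                opt.zero_grad()
                loss.backward()
                opt.step()
            # Accumulate the (first-order) meta-gradient over tasks.
            for g, p0, p1 in zip(meta_grad, simulator.parameters(),
                                 learner.parameters()):
                g += (p0.data - p1.data) / tasks_per_step
        # Meta-update: move the Simulator toward the task-adapted parameters.
        with torch.no_grad():
            for p, g in zip(simulator.parameters(), meta_grad):
                p -= meta_lr * g


if __name__ == "__main__":
    simulator = Simulator()
    meta_train(simulator, meta_steps=5)
```

At attack time, the same distillation step would be run on the limited feedback obtained from the unseen target model, after which a large fraction of the remaining queries could be answered by the fine-tuned Simulator instead of the target.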
