Federated learning (FL) allows a set of agents to collaboratively train a
model without sharing their potentially sensitive data. This makes FL suitable
for privacy-preserving applications. At the same time, FL is susceptible to
adversarial attacks due to its reliance on decentralized and unvetted data. One
important line of attacks against FL is backdoor attacks. In a backdoor attack,
an adversary tries to embed a backdoor functionality into the model during
training that can later be activated to cause a desired misclassification. To prevent
backdoor attacks, we propose a lightweight defense that requires minimal change
to the FL protocol. At a high level, our defense is based on carefully
adjusting the aggregation server’s learning rate, per dimension and per round,
based on the sign information of agents’ updates. We first conjecture the
necessary steps to carry out a successful backdoor attack in the FL setting, and then,
explicitly formulate the defense based on our conjecture. Through experiments,
we provide empirical evidence that supports our conjecture, and we test our
defense against backdoor attacks under different settings. We observe that the
backdoor is either completely eliminated or its accuracy is significantly
reduced. Overall, our experiments suggest that our defense significantly
outperforms some of the recently proposed defenses in the literature, while
having minimal impact on the accuracy of the trained models. In addition, we
provide a convergence-rate analysis for our proposed scheme.
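
As a rough illustration of the mechanism described above, the following sketch shows one plausible per-dimension, sign-based learning-rate adjustment at the aggregation server. The agreement threshold `theta`, the function name `robust_aggregate`, and the plain averaging of agent updates are illustrative assumptions, not necessarily the exact formulation in the paper.

```python
import numpy as np

def robust_aggregate(global_params, agent_updates, server_lr=1.0, theta=4):
    """Aggregate agent updates with a per-dimension, sign-adjusted learning rate.

    For each model dimension, if fewer than `theta` agents agree on the sign
    of the update, the server's learning rate for that dimension is negated,
    moving the model away from the (potentially backdoored) direction.
    The threshold `theta` is an assumed hyperparameter for this sketch.
    """
    updates = np.stack(agent_updates)                       # (num_agents, dim)
    sign_agreement = np.abs(np.sign(updates).sum(axis=0))   # per-dimension agreement
    # +server_lr where enough agents agree on the sign, -server_lr otherwise.
    per_dim_lr = np.where(sign_agreement >= theta, server_lr, -server_lr)
    avg_update = updates.mean(axis=0)
    return global_params + per_dim_lr * avg_update

# Example: one aggregation round with 5 agents and a 3-dimensional model.
rng = np.random.default_rng(0)
params = np.zeros(3)
agent_updates = [rng.normal(size=3) for _ in range(5)]
params = robust_aggregate(params, agent_updates, server_lr=1.0, theta=4)
```

Because the adjustment amounts to a per-dimension sign flip of the learning rate, a scheme along these lines adds negligible computation at the server and changes nothing about what the agents send, which is consistent with the "minimal change to the FL protocol" claim above.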
