Training deep neural networks that are robust to norm-bounded adversarial
attacks remains an elusive problem. While exact and inexact verification-based
methods are generally too expensive to train large networks, it has been
demonstrated that bounded input intervals can be inexpensively propagated from
one layer to the next through deep networks. This interval bound propagation
(IBP) approach not only improved both robustness and certified accuracy, but
was also the first to be employed on large/deep networks. However, due to the
very loose nature of the IBP bounds, the required training procedure is complex
and involved. In this paper, we closely examine the bounds of a block of layers
composed in the form Affine-ReLU-Affine. To this end, we propose expected
tight bounds (true bounds in expectation), referred to as ETB, which are
provably tighter than IBP bounds in expectation. We then extend this result to
deeper networks through blockwise propagation and show that we can achieve
orders of magnitude tighter bounds compared to IBP. Furthermore, using a
simple standard training procedure, we achieve an impressive
robustness-accuracy trade-off on both MNIST and CIFAR10.
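For readers unfamiliar with IBP, below is a minimal NumPy sketch of how an input interval can be propagated through an Affine-ReLU-Affine block, as referenced in the abstract. It illustrates only the standard interval arithmetic that IBP relies on, not the ETB construction proposed in the paper; the function names, shapes, and epsilon value are illustrative assumptions.

```python
import numpy as np

def affine_interval(W, b, l, u):
    """Propagate the box [l, u] through an affine layer y = W x + b.

    Standard center/radius form: the center maps through W exactly,
    while the radius maps through the elementwise absolute value |W|.
    """
    mu, r = (u + l) / 2.0, (u - l) / 2.0
    mu_out = W @ mu + b
    r_out = np.abs(W) @ r
    return mu_out - r_out, mu_out + r_out

def relu_interval(l, u):
    """ReLU is monotone, so interval bounds pass through elementwise."""
    return np.maximum(l, 0.0), np.maximum(u, 0.0)

# Toy example: propagate an epsilon-ball around x through Affine-ReLU-Affine.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((8, 4)), rng.standard_normal(8)
W2, b2 = rng.standard_normal((3, 8)), rng.standard_normal(3)
x, eps = rng.standard_normal(4), 0.1

l, u = x - eps, x + eps              # input interval around x
l, u = affine_interval(W1, b1, l, u)  # first affine layer
l, u = relu_interval(l, u)            # ReLU
l, u = affine_interval(W2, b2, l, u)  # second affine layer
print("output lower bounds:", l)
print("output upper bounds:", u)
```

Because each layer's bounds are computed independently, the resulting intervals grow quickly with depth, which is the looseness that motivates the tighter blockwise ETB bounds discussed above.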
