Applications such as autonomous vehicles and medical screening use deep
learning models to localize and identify hundreds of objects in a single frame.
Prior work has shown that an attacker can fool these models by placing an
adversarial patch within a scene. However, these patches must be placed at the
target location, and they do not explicitly alter the semantics elsewhere in
the image.

In this paper, we introduce a new type of adversarial patch which alters a
model’s perception of an image’s semantics. These patches can be placed
anywhere within an image to change the classification or semantics of locations
far from the patch. We call this new class of adversarial examples "remote
adversarial patches" (RAPs).
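
The abstract does not describe IPatch's actual training procedure, so the
following is only a minimal sketch of the general idea behind a remote
adversarial patch, assuming a standard PyTorch setup: the patch is pasted at
one fixed location while the loss is computed over a different, distant region
of a segmentation model's output. The model choice (DeepLabV3-ResNet50), patch
size, coordinates, target class, and random stand-in image are illustrative
assumptions, not details from the paper.

```python
# Sketch: optimize a patch pasted in one corner so that a *remote* region of
# the segmentation map flips to a chosen target class. Not the authors' code.
import torch
import torch.nn.functional as F
from torchvision.models.segmentation import deeplabv3_resnet50

device = "cuda" if torch.cuda.is_available() else "cpu"

# Frozen pretrained segmentation model (21 VOC-style classes).
model = deeplabv3_resnet50(weights="DEFAULT").to(device).eval()
for p in model.parameters():
    p.requires_grad_(False)

image = torch.rand(1, 3, 512, 512, device=device)   # stand-in for a real scene
patch = torch.rand(3, 64, 64, device=device, requires_grad=True)

patch_y, patch_x = 400, 400                          # patch location (bottom-right)
target_region = (slice(50, 150), slice(50, 150))     # remote region (top-left)
target_class = 12                                    # class to force in that region

optimizer = torch.optim.Adam([patch], lr=0.05)
for step in range(200):
    # Paste the patch into the image; gradients flow back into the patch pixels.
    x = image.clone()
    x[:, :, patch_y:patch_y + 64, patch_x:patch_x + 64] = patch.clamp(0, 1)

    logits = model(x)["out"]                         # (1, num_classes, H, W)
    region_logits = logits[:, :, target_region[0], target_region[1]]

    # Push every pixel of the remote region toward the target class.
    target = torch.full(region_logits.shape[-2:], target_class,
                        dtype=torch.long, device=device).unsqueeze(0)
    loss = F.cross_entropy(region_logits, target)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    with torch.no_grad():                            # keep the patch a valid image
        patch.clamp_(0.0, 1.0)
```

A full attack would also randomize the patch location and apply standard
robustness tricks (e.g., expectation over transformations), but the core point
is visible above: the loss never touches the pixels under the patch, only a
region far away from it.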

We implement our own RAP, called IPatch, and perform an in-depth analysis of
RAP attacks on image segmentation using five state-of-the-art architectures
with eight different encoders on the CamVid street view dataset. Moreover, we
demonstrate that the attack can be extended to object recognition models, with
preliminary results on the popular YOLOv3 model. We find that the patch can
change the classification of a remote target region with a success rate of up
to 93% on average.
