Recent studies have revealed the vulnerability of graph convolutional
networks (GCNs) to edge-perturbing attacks, such as maliciously inserting or
deleting graph edges. However, a theoretical proof of such vulnerability has
remained elusive, and effective defense schemes are still an open problem.
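For context, in the standard spectral formulation a GCN layer propagates node features through the normalized adjacency matrix,

\[ H^{(l+1)} = \sigma\!\big(\tilde{D}^{-1/2}\,\tilde{A}\,\tilde{D}^{-1/2}\,H^{(l)}\,W^{(l)}\big), \qquad \tilde{A} = A + I_N, \]

so every inserted or deleted edge changes \tilde{A} and hence the representations of all affected nodes. This is the widely used propagation rule of Kipf and Welling, not notation taken from this paper, but it illustrates why edge perturbations form a direct attack surface.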
In this paper, we first generalize the formulation of edge-perturbing attacks
and strictly prove the vulnerability of GCNs to such attacks in node
classification tasks. Building on this result, we propose an anonymous graph
convolutional network, named AN-GCN, to counter edge-perturbing attacks.
Specifically, we present a node localization theorem to demonstrate how the GCN
locates nodes during its training phase. In addition, we design a node position
generator based on staggered Gaussian noise and devise a discriminator based on
spectral graph convolution to detect the generated node positions. Further, we
derive the optimization of this generator and discriminator.
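The abstract does not give implementation details, but the following minimal PyTorch sketch illustrates the kind of generator-discriminator pair described above; all class names, layer sizes, and the particular staggering scheme are assumptions made for illustration only.

```python
# Illustrative sketch only; names, dimensions, and the staggering scheme are
# hypothetical assumptions, not the paper's released implementation.
import torch
import torch.nn as nn

class PositionGenerator(nn.Module):
    """Maps staggered Gaussian noise to per-node position embeddings."""
    def __init__(self, num_nodes, noise_dim, pos_dim):
        super().__init__()
        # Stagger the noise: each node's sample is shifted by a node-specific offset.
        self.register_buffer("offsets", torch.linspace(-1.0, 1.0, num_nodes).unsqueeze(1))
        self.net = nn.Sequential(nn.Linear(noise_dim, 64), nn.ReLU(),
                                 nn.Linear(64, pos_dim))

    def forward(self, noise):                     # noise: (num_nodes, noise_dim)
        return self.net(noise + self.offsets)     # (num_nodes, pos_dim)

class SpectralDiscriminator(nn.Module):
    """Scores candidate node positions with one spectral graph convolution."""
    def __init__(self, norm_adj, pos_dim):
        super().__init__()
        self.norm_adj = norm_adj                  # normalized adjacency, held by the defender only
        self.score = nn.Linear(pos_dim, 1)

    def forward(self, pos):                       # pos: (num_nodes, pos_dim)
        return torch.sigmoid(self.score(self.norm_adj @ pos))  # per-node real/fake probability

# Example: generate and score positions for a toy 4-node graph.
num_nodes, noise_dim, pos_dim = 4, 16, 8
adj = torch.eye(num_nodes)                        # placeholder normalized adjacency
gen = PositionGenerator(num_nodes, noise_dim, pos_dim)
disc = SpectralDiscriminator(adj, pos_dim)
scores = disc(gen(torch.randn(num_nodes, noise_dim)))
```

Training would then alternate standard adversarial updates: the discriminator learns to distinguish generated positions from reference ones, while the generator learns to produce positions that the discriminator accepts.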
As a result, AN-GCN classifies nodes without taking their positions as input.
We show that AN-GCN is secure against edge-perturbing attacks in node
classification tasks: since it classifies nodes without edge information,
attackers have no edges left to perturb. Extensive evaluations demonstrate the
effectiveness of the general edge-perturbing attack model in manipulating the
classification results of target nodes.
More importantly, the proposed AN-GCN achieves 82.7% node classification
accuracy without edge-reading permission, outperforming the state-of-the-art
GCN.
