Iterative Counter-Example Guided Robustness Verification for Neural Networks
Karthik Hanumanthaiah, Samik Basu
Abstract
We investigate the problem of efficiency in robustness verification of neural networks. The challenge of attaining efficiency is typically addressed by applying abstraction techniques that target the non-linearity in the network's computation. We propose a complementary approach that abstracts the neural network by discarding neurons, where the abstraction is guided by the robustness property being verified. We then apply an existing robustness verification technique to the abstract network and develop an iterative algorithm akin to the widely used counter-example guided abstraction-refinement (CEGAR) verification strategy. We demonstrate the viability of our strategy by deploying the Marabou verification framework and applying it to verify robustness properties of the neural networks used in the next-generation airborne collision avoidance system for unmanned aircraft (ACAS Xu).
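The abstract outlines an iterative, counter-example guided loop: abstract the network by discarding neurons, verify the abstraction, and refine whenever a spurious counter-example is found. The sketch below illustrates the general shape of such a loop under stated assumptions; the helpers `abstract_network`, `refine`, `is_spurious`, and the generic `verify` callable (standing in for a Marabou query) are hypothetical names introduced for illustration, not the paper's implementation or Marabou's API.

```python
# Hypothetical sketch of a counter-example guided abstraction-refinement (CEGAR)
# loop for robustness verification. All helper functions are assumptions made
# for illustration; they are not part of the paper's artifact or of Marabou's API.

def verify_robustness(network, prop, abstract_network, verify, is_spurious, refine):
    """Iteratively verify `prop` on abstractions of `network`.

    abstract_network(network, prop) -> smaller network with some neurons discarded
    verify(net, prop)               -> (status, counterexample or None), e.g. a verifier query
    is_spurious(network, prop, cex) -> True if the counter-example fails on the full network
    refine(abs_net, network, cex)   -> abstraction with neurons restored to exclude `cex`
    """
    abs_net = abstract_network(network, prop)
    while True:
        status, cex = verify(abs_net, prop)
        if status == "unsat":
            # In a CEGAR-style scheme the abstraction over-approximates the network,
            # so absence of a counter-example on the abstraction implies the
            # property holds on the original network.
            return "robust", None
        if not is_spurious(network, prop, cex):
            # Counter-example replays on the concrete network: genuine violation.
            return "not robust", cex
        # Spurious counter-example: restore neurons to rule it out and retry.
        abs_net = refine(abs_net, network, cex)
```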