AI2 is a sound and scalable analysis framework for proving the robustness of deep neural networks. It certifies that a network's classification does not change under a given set of input perturbations, and it supports networks with convolutional, max-pooling, and fully-connected layers. The figure below gives an overview of the analysis.

[Figure: AI2 overview]
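The core idea of the analysis can be illustrated with the simplest abstract domain, intervals (boxes): propagate lower and upper bounds on each neuron through the network and check that the correct class provably dominates all others. The sketch below is a minimal illustration of this idea, not AI2's actual implementation (which uses richer domains such as zonotopes); the toy network weights and the helper names (`affine_bounds`, `certify_robust`) are assumptions for the example.

```python
import numpy as np

def affine_bounds(lo, hi, W, b):
    # Sound interval propagation through y = W x + b:
    # positive weights take the matching bound, negative weights the opposite one.
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

def relu_bounds(lo, hi):
    # ReLU is monotone, so bounds pass through elementwise.
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

def certify_robust(x, eps, layers, label):
    """Return True if every input in the L-infinity ball of radius eps
    around x is classified as `label` under the interval abstraction."""
    lo, hi = x - eps, x + eps
    for i, (W, b) in enumerate(layers):
        lo, hi = affine_bounds(lo, hi, W, b)
        if i < len(layers) - 1:  # no ReLU after the output layer
            lo, hi = relu_bounds(lo, hi)
    # Robust if the correct logit's lower bound beats every other
    # logit's upper bound.
    return all(lo[label] > hi[j] for j in range(len(lo)) if j != label)

# Toy 2-layer ReLU network (weights invented for illustration).
layers = [
    (np.eye(2), np.zeros(2)),
    (np.array([[1.0, -1.0], [-1.0, 1.0]]), np.zeros(2)),
]
x = np.array([1.0, 0.0])
```

For this toy network, `certify_robust(x, 0.1, layers, 0)` succeeds, while a large radius such as `1.0` makes the bounds too loose to certify anything. Because the interval domain over-approximates, a `False` result never means the network is actually unsafe, only that this abstraction could not prove safety.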

Publications

AI2: Safety and Robustness Certification of Neural Networks with Abstract Interpretation, IEEE S&P 2018
Timon Gehr, Matthew Mirman, Dana Drachsler-Cohen, Petar Tsankov, Swarat Chaudhuri, Martin Vechev

    The AI2 framework is developed by the Software Reliability Lab, Department of Computer Science, ETH Zurich