Neural Network Reachability

We present a method for computing exact reachable sets for deep neural networks with rectified linear unit (ReLU) activations. Our method is well suited for rigorous safety analysis of robotic perception and control systems with deep neural network components. Our algorithm can compute both forward and backward reachable sets for a ReLU network iterated over multiple time steps, as would be found in a perception-action loop in a robotic system. Our algorithm is unique in that it builds the reachable sets by expanding a front of polyhedral cells in the input space, rather than iterating layer by layer through the network as in other methods. If an unsafe cell is found, our algorithm can return this result without completing the full reachability computation, giving an anytime property that accelerates safety verification. Source code for our algorithm can be found here.
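
The polyhedral cells referred to above arise because a ReLU network is piecewise affine: fixing which hidden units are active defines a linear region of the input space. The sketch below, which is only an illustration and not the authors' implementation (the network weights and the helper name local_cell are made up for this example), shows how one such cell and its local affine map can be recovered from an activation pattern for a small one-hidden-layer network.

    import numpy as np

    # Hypothetical 1-hidden-layer ReLU network: x -> relu(W1 x + b1) -> W2 h + b2
    rng = np.random.default_rng(0)
    W1, b1 = rng.standard_normal((4, 2)), rng.standard_normal(4)
    W2, b2 = rng.standard_normal((1, 4)), rng.standard_normal(1)

    def local_cell(x):
        """Return (A, c, H, h): the affine map y = A x + c that the network
        realizes on the polyhedral cell {x : H x <= h} containing the query x."""
        pre = W1 @ x + b1                  # hidden-layer pre-activations
        pattern = (pre > 0).astype(float)  # activation pattern fixes the cell
        D = np.diag(pattern)               # zeroes out inactive units
        # On this cell the network is exactly affine: y = W2 D (W1 x + b1) + b2
        A = W2 @ D @ W1
        c = W2 @ (D @ b1) + b2
        # Cell constraints: active units keep W1_i x + b1_i >= 0, inactive <= 0,
        # rewritten in H x <= h form by flipping the sign of active rows.
        signs = np.where(pattern > 0, -1.0, 1.0)
        H = signs[:, None] * W1
        h = -signs * b1
        return A, c, H, h

    x0 = np.array([0.3, -0.7])
    A, c, H, h = local_cell(x0)
    # Sanity checks: the affine map matches the network output at x0,
    # and x0 lies inside its own cell.
    y_net = W2 @ np.maximum(W1 @ x0 + b1, 0.0) + b2
    assert np.allclose(A @ x0 + c, y_net)
    assert np.all(H @ x0 <= h + 1e-9)

Propagating a cell's vertices (or its halfspace description) through the local affine map gives that cell's exact contribution to the reachable set; expanding a front of neighboring cells, as described above, repeats this until the input region of interest is covered or an unsafe cell is found.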


Related Publications:


Related Videos: