
Incremental Neural Network Verification via Learned Conflicts

Raya Elsaleh, Liam Davis, Haoze Wu, Guy Katz
arXiv (2026)
Reasoning

📝 Paper Summary

Neural Network Verification · Formal Methods for ML
This paper accelerates neural network verification by recording infeasible search paths (conflicts) from one query and reusing them to prune the search space of subsequent, refined queries.
Core Problem
Verification tools typically solve sequences of closely related queries independently, discarding valuable information about infeasible regions and leading to redundant computational effort.
Why it matters:
  • Safety-critical applications like autonomous driving require repeated verification queries (e.g., robustness radius) that are computationally expensive
  • Current solvers restart from scratch for every query, failing to exploit the structural similarities in iterative analysis workflows
  • Neural network verification is NP-complete, so every efficiency gain matters for scaling to larger networks
Concrete Example: In robustness radius computation, a verifier might repeatedly check whether a network is safe within radius ε, then ε/2, and so on. A standard verifier re-discovers for ε/2 the same infeasible activation patterns it already found for ε, wasting time.
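The shrinking-radius workflow can be sketched as a loop that carries learned conflicts forward. This is a minimal illustration, not the paper's implementation: the `verify` callable and its `(is_safe, new_conflicts)` interface are hypothetical stand-ins for a branch-and-bound verifier that prunes using the inherited conflicts.

```python
def shrink_radius(verify, eps: float, steps: int = 5):
    """Repeatedly halve the robustness radius, reusing conflicts
    learned on broader queries. `verify` is a hypothetical stand-in
    for a BaB verifier: it takes (radius, inherited_conflicts) and
    returns (is_safe, new_conflicts)."""
    conflicts = set()
    for _ in range(steps):
        safe, new = verify(eps, conflicts)
        # Conflicts found at radius eps stay valid for every smaller
        # radius: the input region only shrinks (the refinement relation).
        conflicts |= new
        if safe:
            return eps
        eps /= 2
    return None
```

Because each query's input region is contained in the previous one, the conflict set only grows, and later (stricter) queries start with a pre-pruned search space.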
Key Novelty
Incremental Conflict Reuse for Branch-and-Bound Verifiers
  • Records 'conflicts' (sets of neural activation decisions that lead to contradictions) during the verification of a base query
  • Formalizes a 'refinement' relation ensuring that conflicts learned on a broader query remain valid for stricter subsequent queries
  • Uses a SAT solver as a side-module to manage these inherited conflicts, checking partial assignments against them to prune the search tree early
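The pruning check at the heart of this scheme amounts to asking whether a partial assignment already contains every decision of some recorded conflict. A minimal Python sketch (the data types and class are illustrative assumptions, and a real SAT-based side-module would index conflicts far more efficiently):

```python
from typing import FrozenSet, Set, Tuple

# A decision fixes one ReLU neuron to its active (True) or inactive (False) phase.
Decision = Tuple[str, bool]
Conflict = FrozenSet[Decision]

class ConflictStore:
    """Records sets of activation decisions proven jointly infeasible,
    so later queries can prune matching branches early."""

    def __init__(self) -> None:
        self.conflicts: Set[Conflict] = set()

    def record(self, decisions: Set[Decision]) -> None:
        self.conflicts.add(frozenset(decisions))

    def is_pruned(self, partial_assignment: Set[Decision]) -> bool:
        # A branch is infeasible if it already commits to every
        # decision of some recorded conflict (subset check).
        return any(c <= partial_assignment for c in self.conflicts)
```

In a branch-and-bound loop, `is_pruned` would be called before exploring each case split, closing the branch immediately on a match instead of re-deriving the contradiction.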
Evaluation Highlights
  • Achieves speedups of up to 1.9x compared to a non-incremental baseline on Marabou
  • Demonstrates consistent runtime reductions across three distinct tasks: local robustness radius, input splitting, and minimal sufficient feature set extraction
  • Validates that overhead from managing learned conflicts is outweighed by the reduction in search space exploration
Breakthrough Assessment
7/10
Provides a solid, logically sound method for accelerating iterative verification, a common practical use case. While not a fundamental shift in how networks are verified, it significantly optimizes the verification workflow.