Preprints
-
Syndicate: Synergistic Synthesis of Ranking Function and Invariants for Termination Analysis
Yasmin Chandini Sarita, Avaljot Singh, Shaurya Gomber, Gagandeep Singh, Mahesh Viswanathan
Several techniques have been developed to prove the termination of programs, and finding ranking functions is one of the most common approaches. A ranking function must be bounded and must decrease at every iteration for all reachable program states. Since the set of reachable states is often unknown, invariants serve as an over-approximation. Further, in the case of nested loops, the initial set of program states for the inner loop can be determined from the invariant of the outer loop. So, invariants play an important role in proving the validity of a ranking function in the absence of the exact reachable states. However, in existing techniques, the invariants are either synthesized independently of the ranking function or folded together with ranking function synthesis into a single query, both of which are inefficient. We observe that guiding the searches for invariants and ranking functions by each other has benefits both in the number of programs that can be proved to terminate and in the time needed to find a proof of termination. So, in this work, we develop Syndicate, a novel framework that synergistically guides the search for both the ranking function and an invariant that together constitute a proof of termination. Owing to this synergistic approach, Syndicate not only proves the termination of more benchmarks but also achieves a reduction ranging from 17% to 70% in average runtime compared to existing state-of-the-art termination analysis tools. We also prove that Syndicate is relatively complete: if there exist a ranking function and an invariant in their respective templates that can be used to prove the termination of a program, and the template-specific procedures in our framework are complete, then Syndicate is guaranteed to find such a proof.
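For context, here is a minimal sketch of the standard validity conditions the abstract alludes to; the notation (guard G, transition relation T, invariant Inv, ranking function f) is an assumption for illustration, not the paper's exact formulation. A ranking function f proves termination of a loop if it is bounded below and strictly decreasing on all states allowed by the invariant:

\[
\begin{aligned}
\text{Bounded:} \quad & \forall s.\; \mathit{Inv}(s) \land G(s) \Rightarrow f(s) \ge 0 \\
\text{Decreasing:} \quad & \forall s, s'.\; \mathit{Inv}(s) \land G(s) \land T(s, s') \Rightarrow f(s') \le f(s) - 1
\end{aligned}
\]

Because Inv over-approximates the reachable states, any f satisfying these conditions under Inv is also valid on the true reachable states; conversely, an unnecessarily weak Inv can rule out ranking functions that would otherwise work, which is why searching for the two together can pay off.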
@misc{syndicate_arxiv,
title={Syndicate: Synergistic Synthesis of Ranking Function and Invariants for Termination Analysis},
author={Yasmin Sarita and Avaljot Singh and Shaurya Gomber and Gagandeep Singh and Mahesh Viswanathan},
year={2024},
eprint={2404.05951},
archivePrefix={arXiv},
primaryClass={cs.LO},
url={https://arxiv.org/abs/2404.05951},
}
Workshops
-
Neural Abstract Interpretation
Shaurya Gomber, Gagandeep Singh
VerifAI @ ICLR 2025 • SRC @ PLDI 2024
Abstract interpretation is a widely used method for the formal analysis and verification of programs and neural networks. However, designing efficient abstract transformers for expressive relational domains such as Octagon and Polyhedra is challenging, as one needs to carefully balance the fundamental trade-off between the cost, soundness, and precision of the transformer for downstream tasks. Further, scalable implementations involve intricate performance optimizations such as Octagon and Polyhedra decomposition. Given the inherent complexity of abstract transformers and the proven capability of neural networks to approximate complex functions effectively, we envision and propose the concept of Neural Abstract Transformers: neural networks that serve as abstract transformers. The proposed Neural Abstract Interpretation (NAI) framework provides supervised and unsupervised methods to learn efficient neural transformers automatically, which reduces development costs. We instantiate the NAI framework for two widely used numerical domains: Interval and Octagon. Evaluations on these domains demonstrate the effectiveness of the NAI framework in learning sound and precise neural transformers. An added advantage of our technique is that neural transformers are differentiable, unlike their hand-crafted counterparts. As an example, we showcase how this differentiability allows invariant generation to be framed as a learning problem, enabling neural transformers to generate valid octagonal invariants for a program.
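To make "abstract transformer" concrete, below is a minimal hand-written Interval-domain transformer for addition and multiplication; the names (Interval, interval_add, interval_mul, is_sound) are illustrative assumptions, not the NAI implementation. A neural abstract transformer in the NAI framework would be trained to approximate maps like these while remaining sound (the output interval must contain every concrete result) and as precise (tight) as possible.

# Illustrative sketch, not the NAI implementation: a hand-crafted
# abstract transformer for the Interval domain. A neural abstract
# transformer would be a network trained to approximate maps like
# interval_add / interval_mul while staying sound.
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float  # lower bound
    hi: float  # upper bound

def interval_add(a: Interval, b: Interval) -> Interval:
    # Sound and exact transformer for x + y on intervals.
    return Interval(a.lo + b.lo, a.hi + b.hi)

def interval_mul(a: Interval, b: Interval) -> Interval:
    # Sound and exact transformer for x * y: the extrema are among
    # the four corner products.
    corners = [a.lo * b.lo, a.lo * b.hi, a.hi * b.lo, a.hi * b.hi]
    return Interval(min(corners), max(corners))

def is_sound(out: Interval, concrete_results: list) -> bool:
    # Soundness check used when evaluating a (possibly learned)
    # transformer: every concrete result must lie in the output interval.
    return all(out.lo <= r <= out.hi for r in concrete_results)

if __name__ == "__main__":
    x, y = Interval(-1.0, 2.0), Interval(3.0, 4.0)
    print(interval_add(x, y))  # Interval(lo=2.0, hi=6.0)
    print(interval_mul(x, y))  # Interval(lo=-4.0, hi=8.0)

Because such a neural surrogate is differentiable, one can backpropagate through repeated transformer applications, which is what lets invariant generation be posed as a learning problem in the abstract above.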
Thesis
-
Neural Abstract Interpretation: Leveraging neural networks for automated, efficient and differentiable abstract interpretation
Shaurya Gomber
MS Thesis '24 (UIUC)
🏆 David J. Kuck Outstanding Master’s Thesis Award 2024
@mastersthesis{nai_gomber_2024,
title={Neural abstract interpretation: Leveraging neural networks for automated,
efficient and differentiable abstract interpretation},
author={Gomber, Shaurya},
year={2024},
school={University of Illinois at Urbana-Champaign},
url={https://www.ideals.illinois.edu/items/131524}
}