Preprints
-
Universal Synthesis of Differentiably Tunable Numerical Abstract Transformers
Numerical abstract interpretation is a widely used framework for the static analysis of numerical programs. However, existing numerical abstract interpreters rely on hand-crafted, instruction-specific transformers tailored to each domain, with no general algorithm for handling common operations across domains. This limits extensibility, prevents precise compositional reasoning over instruction sequences, and forces all downstream tasks to use the same fixed transformer regardless of their precision, efficiency, or task-specific requirements. To address these limitations, we propose a universal transformer synthesis algorithm that constructs a parametric family of sound abstract transformers for any given polyhedral numerical domain and any concrete operator from the class of Quadratic-Bounded Guarded Operators (QGO), which includes both individual instructions and structured sequences. Each instantiation in this family is sound by construction, enabling downstream analyses to adapt the transformer to their particular needs. The space of transformers is differentiable but complex. To explore it efficiently, we introduce the Adaptive Gradient Guidance (AGG) procedure, a gradient-guided search strategy that steers the search based on downstream analysis objectives and runtime constraints. We implement these ideas in the USTAD framework and evaluate their effectiveness across three numerical abstract domains: Zones, Octagons, and Polyhedra. Our results demonstrate that the universal synthesis algorithm successfully constructs sound families of transformers across domains, and that USTAD achieves significant, tunable precision gains over baselines by leveraging compositional reasoning and efficient gradient-guided traversal of the transformer space.
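The idea of a parametric family of sound transformers can be illustrated with a toy example (a sketch for intuition, not USTAD's actual algorithm): for the quadratic operator y = x², every tangent line of the parabola is a sound lower bound, because x² − (2λx − λ²) = (x − λ)² ≥ 0. The tangent point λ thus indexes a family of transformers, each sound by construction, from which a downstream analysis can pick the most useful member.

```python
def tangent_lower_bound(lam):
    """Tangent of y = x^2 at x = lam: y >= 2*lam*x - lam**2 holds for all x,
    since x^2 - (2*lam*x - lam**2) = (x - lam)**2 >= 0."""
    return 2 * lam, -lam * lam  # (slope, intercept)

def is_sound(slope, intercept, xs):
    """Check the linear lower bound against concrete executions of y = x^2."""
    return all(x * x >= slope * x + intercept for x in xs)

samples = [x / 10 for x in range(-50, 51)]
# Every choice of the parameter lam yields a sound (if differently precise) bound.
assert all(is_sound(*tangent_lower_bound(lam), samples)
           for lam in (-2.0, 0.0, 1.5, 3.0))
```

Since the bound is a smooth function of λ, precision objectives over such a family can be optimized by gradient methods, which is the kind of structure AGG exploits at scale.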
@misc{ustad_arxiv, title={Universal Synthesis of Differentiably Tunable Numerical Abstract Transformers}, author={Shaurya Gomber and Debangshu Banerjee and Gagandeep Singh}, year={2025}, eprint={2507.11827}, archivePrefix={arXiv}, primaryClass={cs.PL}, url={https://arxiv.org/abs/2507.11827}}
-
Efficient Ranking Function-Based Termination Analysis with Bi-Directional Feedback
Several techniques have been developed to prove the termination of programs. Finding a ranking function is one of the most common approaches. A ranking function must be bounded and must decrease at every iteration for all reachable program states. Since the set of reachable states is often unknown, invariants serve as an over-approximation. Further, in the case of nested loops, the initial set of program states for the nested loop can be determined by the invariant of the outer loop. So, invariants play an important role in proving the validity of a ranking function in the absence of the exact reachable states. However, existing techniques either synthesize invariants independently of the ranking function or combine both searches into a single query, and both approaches are inefficient. We observe that a guided search for invariants and ranking functions improves both the number of programs that can be proved terminating and the time needed to find a proof of termination. So, in this work, we develop Syndicate, a novel framework that synergistically guides the search for both the ranking function and an invariant that together constitute a proof of termination. Owing to our synergistic approach, Syndicate can not only prove the termination of more benchmarks but also achieves a reduction ranging from 17% to 70% in the average runtime as compared to existing state-of-the-art termination analysis tools. We also prove that Syndicate is relatively complete, i.e., if there exists a ranking function and an invariant in their respective templates that can be used to prove the termination of a program, then Syndicate will always find them, provided complete procedures exist for the template-specific functions in our framework.
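The two obligations a ranking function must satisfy can be sketched concretely (a finite sanity check over sampled states, not a proof and not Syndicate's implementation): for the loop `while x > 0: x = x - 1`, the candidate f(x) = x is bounded below and strictly decreases on every state satisfying the invariant x ≥ 0 and the loop guard.

```python
def guard(x):  # loop condition of: while x > 0: x = x - 1
    return x > 0

def step(x):   # one execution of the loop body
    return x - 1

def inv(x):    # invariant over-approximating the reachable states
    return x >= 0

def rank(x):   # candidate ranking function f(x) = x
    return x

def valid_ranking(states):
    """Check the two proof obligations on sampled states: wherever the
    invariant and the guard hold, the ranking function is bounded below
    and decreases across one iteration of the loop."""
    return all(rank(x) >= 0 and rank(step(x)) < rank(x)
               for x in states if inv(x) and guard(x))

assert valid_ranking(range(-5, 100))
```

Note the role the invariant plays: without `inv` restricting attention to x ≥ 0 (an over-approximation of the reachable states), nothing here would be affected for this loop, but for loops whose guard alone does not bound the state, the obligations only hold relative to such an invariant, which is exactly why the two searches benefit from guiding each other.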
@misc{syndicate_arxiv, title={Efficient Ranking Function-Based Termination Analysis with Bi-Directional Feedback}, author={Yasmin Sarita and Avaljot Singh and Shaurya Gomber and Gagandeep Singh and Mahesh Viswanathan}, year={2024}, eprint={2404.05951}, archivePrefix={arXiv}, primaryClass={cs.LO}, url={https://arxiv.org/abs/2404.05951}}
Workshops
-
Neural Abstract Interpretation
Abstract interpretation is a widely used method for the formal analysis and verification of programs and neural networks. However, designing efficient abstract transformers for expressive relational domains such as Octagon and Polyhedra is challenging, as one needs to carefully balance the fundamental trade-off between the cost, soundness, and precision of the transformer for downstream tasks. Further, scalable implementations involve intricate performance optimizations like Octagon and Polyhedra decomposition. Given the inherent complexity of abstract transformers and the proven capability of neural networks to effectively approximate complex functions, we envision and propose the concept of Neural Abstract Transformers: neural networks that serve as abstract transformers. The proposed Neural Abstract Interpretation (NAI) framework provides supervised and unsupervised methods to learn efficient neural transformers automatically, which reduces development costs. We instantiate the NAI framework for two widely used numerical domains: Interval and Octagon. Evaluations on these domains demonstrate that the NAI framework learns sound and precise neural transformers. An added advantage of our technique is that neural transformers are differentiable, unlike hand-crafted alternatives. As an example, we showcase how this differentiability allows framing invariant generation as a learning problem, enabling neural transformers to generate valid octagonal invariants for a program.
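For reference, this is the kind of hand-crafted transformer a neural transformer is trained to imitate (a minimal sketch in the Interval domain, not the NAI framework itself): the transformer for z = x + y maps input intervals to an output interval, and soundness means the output covers every concrete result of the operation.

```python
import operator

def add_transformer(a, b):
    """Hand-crafted Interval-domain transformer for z = x + y,
    where an interval is a (lower, upper) pair."""
    return (a[0] + b[0], a[1] + b[1])

def sound(transformer, op, a, b):
    """Soundness check: the abstract output must contain op(x, y)
    for every concrete (x, y) drawn from the input intervals."""
    lo, hi = transformer(a, b)
    return all(lo <= op(x, y) <= hi
               for x in range(a[0], a[1] + 1)
               for y in range(b[0], b[1] + 1))

assert add_transformer((-3, 4), (0, 7)) == (-3, 11)
assert sound(add_transformer, operator.add, (-3, 4), (0, 7))
```

A learned transformer would replace `add_transformer` with a network mapping the four interval bounds to two output bounds; the same containment condition then serves as the soundness criterion (and, since the network is differentiable, as a trainable objective).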
@inproceedings{gomber2025neural, title={Neural Abstract Interpretation}, author={Shaurya Gomber and Gagandeep Singh}, booktitle={ICLR 2025 Workshop: VerifAI: AI Verification in the Wild}, year={2025}, url={https://openreview.net/forum?id=WTyyhWhp4m} }
Thesis
-
Neural Abstract Interpretation: Leveraging neural networks for automated, efficient and differentiable abstract interpretation
🏆 David J. Kuck Outstanding Master's Thesis Award, 2024
@mastersthesis{nai_gomber_2024, title={Neural abstract interpretation: Leveraging neural networks for automated, efficient and differentiable abstract interpretation}, author={Gomber, Shaurya}, year={2024}, school={University of Illinois at Urbana-Champaign}, url={https://www.ideals.illinois.edu/items/131524}}