
He Zhu

Assistant Professor
Computer Science Department
Rutgers University, New Brunswick
Email: hz375@cs.rutgers.edu

[Research Statement] [Awards] [Service] [Teaching] [Publications]
I am looking for motivated PhD students!
Please drop me an email with your CV if you are interested.

I am an assistant professor in the Department of Computer Science at Rutgers University-New Brunswick. My research focuses on neurosymbolic programming, which lies at the intersection of automated programming and deep learning and spans programming languages, formal methods, and machine learning. I investigate program synthesis and formal program reasoning techniques to make machine learning systems more reliable and trustworthy. My long-term research goal is to build intelligent and interpretable AI systems that tightly integrate deep learning and symbolic reasoning and that can be certified robust and reliable.

I obtained my Ph.D. from Purdue CS, advised by Suresh Jagannathan.

Research Summary

An overview of my work over the past five years. [Research Statement]


  • Neurosymbolic Program Synthesis
    • (1) Differentiable Synthesis: Unlike training deep neural networks, synthesizing programs requires optimizing over a combinatorial, non-differentiable, and rapidly exploding space of program structures. My research asks: can we jointly search over discrete program structures and program parameters using efficient optimization-based techniques?
      [dPads] (NeurIPS'21), [pi-PRL] (ICLR'22)
    • (2) Reward-guided Synthesis: Robotic tasks typically involve long horizons, extending over thousands of control steps, and are characterized by sparse rewards. My research investigates: can we use inductive biases in language constructs, such as state-conditioned loops, to effectively compress the exploration space for long-horizon, multi-stage, and procedural robot-control tasks?
      [ReGuS] (PLDI'24)
  • Symbol Discovery via Abstraction Refinement for Neurosymbolic Programs
    • (1) Comparative Abstraction Refinement: Automatically learning state and action abstractions has been a key problem in RL and robotics. While prior work has explored learning relational abstractions from task demonstrations, existing approaches assume access to either readily available low-level controllers or predefined predicates, limiting their applicability. Discovering both simultaneously (as the symbolic vocabulary for neurosymbolic programming) remains an open challenge. My research asks: can we develop comparative abstraction refinement techniques that automatically infer state and action abstractions by contrasting expert demonstrations with learned policy behavior, revealing key subgoals and the underlying subtask structure?
      [RoboScribe] (2025)
    • (2) Learning from Exploration: A major limitation of comparative abstraction refinement is its reliance on manually provided task demonstrations. My research explores: can environment abstraction be fully automated by letting an agent explore and collect successful trajectories on its own?
      [CE2] (NeurIPS'24), [MUN] (NeurIPS'24), [Cago] (2025)
  • Neurosymbolic Program Verification: A central thrust of my research is integrating learning and verification to ensure that neurosymbolic programs satisfy rigorous correctness specifications. My core algorithmic insight is to exploit the hierarchical structure of neurosymbolic programs to enable (i) deductive verification of the high-level symbolic program, with neural components abstracted over abstract state spaces, and (ii) modular verification of the low-level neural components over their concrete state spaces. This separation verifies the program’s logical structure efficiently, without being hindered by the complexity of the neural components, which are then verified compositionally within their concrete domains. [VEL] (TACAS'23), [VELM] (CAV'24), [VEL-∞] (CAV'25)
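The hierarchical verification idea above can be illustrated with a toy sketch. Everything here — the mode names, the interval contracts, and the grid-based contract check — is an illustrative assumption of mine, not the VEL/VELM implementation: the symbolic layer is checked over interval-abstracted states using only each controller's output contract, and each low-level controller is then checked against its contract separately on its concrete region.

```python
from dataclasses import dataclass

# Toy sketch of hierarchical neurosymbolic verification (illustrative
# assumptions throughout; not the VEL/VELM implementation).  A high-level
# symbolic program switches between control modes; each mode's low-level
# controller is summarized by an output contract.

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float
    def add(self, other: "Interval") -> "Interval":
        return Interval(self.lo + other.lo, self.hi + other.hi)

# Assumed contract per mode: on its state region, the mode's low-level
# controller always produces an action inside this interval.
CONTRACTS = {"cruise": Interval(-1.0, 1.0), "brake": Interval(-3.0, -1.0)}

def verify_symbolic(init: Interval, bound: float, steps: int) -> bool:
    """(i) Deductive check of the symbolic layer over abstract states:
    with actions abstracted by their contracts, the state's upper bound
    never exceeds `bound` within `steps` steps."""
    s = init
    for _ in range(steps):
        mode = "brake" if s.hi > bound - 3.0 else "cruise"
        s = s.add(CONTRACTS[mode])  # abstract transformer: state += action
        if s.hi > bound:
            return False
    return True

def neural_controller(x: float) -> float:
    # Stand-in for a trained network controlling the "cruise" mode.
    return max(-1.0, min(1.0, 0.5 * x))

def check_contract(f, region: Interval, contract: Interval, n: int = 1000) -> bool:
    """(ii) Modular check of one low-level component on its concrete
    region (dense grid testing here; a real pipeline would discharge
    this obligation with a neural-network verifier)."""
    for i in range(n + 1):
        x = region.lo + (region.hi - region.lo) * i / n
        if not (contract.lo <= f(x) <= contract.hi):
            return False
    return True
```

If both checks pass, the safety argument composes: the symbolic proof never inspects network weights, so retraining a controller only re-triggers its local contract check.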

Publications

    Deductive Synthesis of Reinforcement Learning Agents for Infinite-Horizon Tasks [pdf]
    Yuning Wang and He Zhu.
    Proceedings of the 37th International Conference on Computer Aided Verification (CAV), 2025

    Exploring the Edges of Latent State Clusters for Goal-Conditioned Reinforcement Learning [pdf]
    Yuanlin Duan, Guofeng Cui and He Zhu.
    Neural Information Processing Systems (NeurIPS), 2024

    Learning World Models for Unconstrained Goal Navigation [pdf]
    Yuanlin Duan, Wensen Mao and He Zhu.
    Neural Information Processing Systems (NeurIPS), 2024

    Safe Exploration in Reinforcement Learning by Reachability Analysis over Learned Models [pdf]
    Yuning Wang and He Zhu.
    Proceedings of the 36th International Conference on Computer Aided Verification (CAV), 2024

    Reward-guided Synthesis of Intelligent Agents with Control Structures [pdf]
    Guofeng Cui, Yuning Wang, Wenjie Qiu and He Zhu.
    Proceedings of the 45th ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI), 2024

    Instructing Goal-Conditioned Reinforcement Learning Agents with Temporal Logic Objectives [pdf]
    Wenjie Qiu, Wensen Mao and He Zhu.
    Neural Information Processing Systems (NeurIPS), 2023

    Verification-guided Programmatic Controller Synthesis [pdf]
    Yuning Wang and He Zhu.
    29th International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS), 2023

    ReLAX: Reinforcement Learning Agent Explainer for Arbitrary Predictive Models [pdf]
    Ziheng Chen, Fabrizio Silvestri, Jia Wang, He Zhu, Hongshik Ahn and Gabriele Tolomei.
    Proceedings of the 31st ACM International Conference on Information and Knowledge Management (CIKM), 2022

    Learn Basic Skills and Reuse: Modularized Adaptive Neural Architecture Search (MANAS) [pdf]
    Hanxiong Chen, Yunqi Li, Jia Wang, He Zhu and Yongfeng Zhang.
    Proceedings of the 31st ACM International Conference on Information and Knowledge Management (CIKM), 2022

    Defending Observation Attacks in Deep Reinforcement Learning via Detection and Denoising [pdf]
    Zikang Xiong, Joe Eappen, He Zhu and Suresh Jagannathan.
    European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD), 2022

    Programmatic Reinforcement Learning without Oracles [pdf]
    Spotlight
    Wenjie Qiu and He Zhu.
    International Conference on Learning Representations (ICLR), 2022

    Graph Collaborative Reasoning [pdf]
    Hanxiong Chen, Yunqi Li, Shaoyun Shi, Shuchang Liu, He Zhu and Yongfeng Zhang.
    Proceedings of the 15th ACM International Conference on Web Search and Data Mining (WSDM), 2022

    Differentiable Synthesis of Program Architectures [pdf]
    Guofeng Cui and He Zhu.
    Neural Information Processing Systems (NeurIPS), 2021

    ART: Abstraction Refinement-Guided Training for Provably Correct Neural Networks [pdf]
    Xuankang Lin, He Zhu, Roopsha Samanta and Suresh Jagannathan.
    Formal Methods in Computer-Aided Design (FMCAD), 2020

    An Inductive Synthesis Framework for Verifiable Reinforcement Learning [pdf][code]
    ACM SIGPLAN Distinguished Paper Award
    He Zhu, Zikang Xiong, Stephen Magill and Suresh Jagannathan.
    Proceedings of the 40th ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI), 2019

    A Data-Driven CHC Solver [pdf][code]
    ACM SIGPLAN Distinguished Paper Award
    He Zhu, Stephen Magill and Suresh Jagannathan.
    Proceedings of the 39th ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI), 2018

    Automatically Learning Shape Specifications [pdf][code]
    He Zhu, Gustavo Petri and Suresh Jagannathan.
    Proceedings of the 37th ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI), 2016

    Learning Refinement Types [pdf][code]
    He Zhu, Aditya V. Nori and Suresh Jagannathan.
    Proceedings of the 20th ACM SIGPLAN International Conference on Functional Programming (ICFP), 2015

    Poling: SMT Aided Linearizability Proofs [pdf]
    He Zhu, Gustavo Petri and Suresh Jagannathan.
    Proceedings of the 27th International Conference on Computer Aided Verification (CAV), 2015

    Dependent Array Type Inference from Tests [pdf]
    He Zhu, Aditya V. Nori and Suresh Jagannathan.
    Proceedings of the 16th International Conference on Verification, Model Checking, and Abstract Interpretation (VMCAI), 2015

    Compositional and Lightweight Dependent Type Inference for ML [pdf]
    He Zhu and Suresh Jagannathan.
    Proceedings of the 14th International Conference on Verification, Model Checking, and Abstract Interpretation (VMCAI), 2013


Students

Guofeng Cui, Wenjie Qiu, Yuning Wang, Wensen Mao, Zining Fan, Yuanlin Duan.

Acknowledgement

My research is supported by:
  • NSF Grant (2021): Synthesis and Verification for Programmatic Reinforcement Learning
  • NSF Grant (2020): Formal Symbolic Reasoning of Deep Reinforcement Learning Systems
  • DARPA Grant (2020): Symbiotic Design for Cyber Physical Systems

Teaching

  • CS 314. Principles of Programming Languages [F25] [S25] [F23] [S22] [S21] [S20]
  • CS 515. Programming Languages and Compilers I [F24] [F22] [F21] [F20] [F19]
  • CS 515. Programming Languages and Compilers II [S24]

Service

  • Area Chair: ICML (2025), NeurIPS (2025, 2024)
  • AEC Chair: CAV (2020)
  • Program Committee: CAV (2025, 2020), PLDI (2023, 2021), OOPSLA (2022), HCVS (2021, 2019)
  • NSF Panelist: 2025, 2024, 2023, 2022, 2020
  • Reviewer: ICLR (2025, 2024), ICML (2024, 2022), NeurIPS (2023), FMSD (2022), TSE (2021, 2020, 2019)

Awards

  • ACM SIGPLAN PLDI 2019 Distinguished Paper Award.
  • ACM SIGPLAN PLDI 2018 Distinguished Paper Award.
  • Maurice H. Halstead Memorial Award (Purdue University).