Trustworthy Reasoning and Intelligence

My research group investigates the theoretical foundations and practical applications of trustworthy artificial intelligence. We develop mathematically rigorous methods to ensure that learning-enabled and autonomous systems operate safely, fairly, and accountably. Our work spans formal verification, explainability, and the integration of logic-based reasoning with data-driven approaches. We are part of the CU Programming Languages and Verification (CUPLV) group at CU Boulder.

Research Themes

Our key research themes include:

Formal Methods for Reinforcement Learning

We investigate the foundational and algorithmic integration of formal methods with reinforcement learning to enable reliable and structured decision-making in complex environments. Our research spans:

  • Translating logical specifications into learnable reward structures,
  • Symbolic and automata-theoretic representations such as reward machines (sketched in code below) and regular-language-based state abstractions,
  • Reinforcement learning for recursive and branching MDPs,
  • Continuous-time and continuous-space RL grounded in physical models,
  • Novel frameworks such as asymmetrically-discounted and past-discounted objectives.

By unifying techniques from logic, automata, control theory, and learning, we aim to build safe, interpretable, and specification-aware RL systems.
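
As a small, hedged illustration of the reward machines mentioned above, the sketch below encodes a two-state machine for the task "reach a, then b" and uses it to supply rewards during interaction. The labels, reward values, and class names are hypothetical placeholders for exposition, not an implementation from our papers.

    # Minimal reward-machine sketch (hypothetical task: observe "a", then "b").
    # States u0 -> u1 -> u_acc; each transition is triggered by an event label
    # emitted by the environment and carries a scalar reward.

    class RewardMachine:
        def __init__(self):
            # delta[(state, label)] = (next_state, reward); unlisted pairs self-loop with reward 0.
            self.delta = {
                ("u0", "a"): ("u1", 0.0),    # first subgoal reached
                ("u1", "b"): ("u_acc", 1.0)  # task completed
            }
            self.state = "u0"

        def step(self, label):
            next_state, reward = self.delta.get((self.state, label), (self.state, 0.0))
            self.state = next_state
            return reward

        def done(self):
            return self.state == "u_acc"

    # Hypothetical usage inside an RL loop: the agent's effective state is the
    # pair (environment observation, machine state), and the machine supplies
    # the reward signal derived from the logical specification.
    rm = RewardMachine()
    for label in ["", "a", "", "b"]:   # labels produced by a labeling function
        reward = rm.step(label)
    print(rm.done())                   # True: the specification was satisfied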

Trustworthy Reasoning with LLMs

We develop methods that enable large language models (LLMs) to reason in trustworthy, auditable, and verifiable ways. Our work integrates LLMs with symbolic tools, such as formal logic, automata, and SMT solvers, so that their explanations and decisions are understandable, auditable, and backed by formal reasoning.

We design modular, multi-agent architectures that separate planning, reasoning, and explanation, allowing LLMs to:

  • Generate formal proofs and structured reasoning steps,
  • Translate formal artifacts into human-understandable explanations,
  • Critique and verify outputs for logical consistency and regulatory compliance.

This enables applications in explainable formal methods, legally accountable AI, and safe decision-making in high-stakes domains.
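
As one hedged illustration of pairing an LLM with an SMT solver, the sketch below checks whether a conclusion proposed by a model actually follows from its stated premises using the z3-solver Python bindings; the propositional encoding is a hypothetical stand-in for the structured output such a pipeline would produce.

    # Checking an LLM-proposed inference step with an SMT solver (z3-solver package).
    # Entailment check: premises |= conclusion iff premises AND NOT(conclusion) is unsatisfiable.
    from z3 import Bool, Implies, And, Not, Solver, unsat

    filed_late = Bool("filed_late")
    owes_penalty = Bool("owes_penalty")

    # Hypothetical premises extracted from a model's reasoning trace.
    premises = And(Implies(filed_late, owes_penalty), filed_late)
    conclusion = owes_penalty  # the claim the model wants to assert

    s = Solver()
    s.add(premises, Not(conclusion))
    if s.check() == unsat:
        print("conclusion is entailed by the premises")  # the reasoning step is sound
    else:
        print("counterexample:", s.model())              # flag the step for review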

AI, Software, and Accountability

We study the interplay between algorithmic decision-making, software systems, and legal and ethical accountability. Our research develops formal and data-driven methods for:

  • Detecting and mitigating algorithmic unfairness,
  • Ensuring regulatory and legal compliance,
  • Auditing, certifying, and explaining software behavior in high-stakes contexts.

We focus on domains such as tax preparation software, juvenile justice, and public services, where fairness, transparency, and legal compliance are critical.
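
As a minimal, hedged example of the kind of quantity involved in detecting algorithmic unfairness, the sketch below computes the demographic parity difference of binary decisions across two groups; the decision vectors and group labels are synthetic placeholders, and real audits involve richer statistical, causal, and legal notions of fairness.

    # Demographic parity difference: |P(decision=1 | group A) - P(decision=1 | group B)|.
    # Synthetic data, for illustration only.
    import numpy as np

    decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0])    # 1 = favorable outcome
    groups    = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

    rate_a = decisions[groups == "A"].mean()
    rate_b = decisions[groups == "B"].mean()
    dp_gap = abs(rate_a - rate_b)

    print(f"selection rates: A={rate_a:.2f}, B={rate_b:.2f}, gap={dp_gap:.2f}")
    # A large gap suggests the decision rule treats the two groups very differently
    # and warrants a closer audit of the underlying software.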

Secure and Safe CPS

We design provably correct and resilient controllers for safety-critical cyber-physical systems (CPS). Our research combines control theory, formal verification, and symbolic reasoning to ensure both safety and security in dynamic, uncertain environments.

We develop tools and techniques for:

  • Synthesizing trustworthy controllers using control barrier functions (see the sketch below), Lyapunov methods, and temporal logic,
  • Ensuring security and robustness against adversarial or uncertain inputs,
  • Integrating reinforcement learning with formal guarantees for continuous and hybrid systems.
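
As a hedged, minimal sketch of a control-barrier-function safety filter, the example below keeps a one-dimensional single integrator (x_dot = u) inside the safe set {x >= 0} using the barrier h(x) = x; in this scalar case the usual quadratic-program filter reduces to the closed form shown. The dynamics, gain, and nominal controller are illustrative placeholders, not a system from our work.

    # Control-barrier-function (CBF) safety filter for x_dot = u with h(x) = x >= 0.
    # CBF condition: h_dot(x, u) >= -alpha * h(x), i.e. u >= -alpha * x.
    # The min-norm correction of a nominal input has the closed form u = max(u_nom, -alpha * x).

    def safety_filter(x, u_nom, alpha=1.0):
        return max(u_nom, -alpha * x)

    # Simulate: the nominal controller drives toward x = -1 (unsafe); the filter intervenes.
    x, dt = 1.0, 0.01
    for _ in range(500):
        u_nom = -2.0 * (x + 1.0)        # hypothetical nominal controller
        u = safety_filter(x, u_nom)
        x += dt * u
    print(x >= 0.0)                      # True: the state never leaves the safe set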

Current Group Members

Postdoctoral Associates
  • Shadi Tasdighi-Kalat
  • John Komp
PhD Students
  • David Baines (since Fall 2022) – Reinforcement Learning for Finance
  • Alireza Nadali (since Fall 2022) – Transfer Learning for Control
  • Amin Falah (since Spring 2023) – Reinforcement Learning for Continuous-Time MDPs
  • Lekai Chen (co-advised with Alvaro Velasquez, since Fall 2023) – Large Language Models and Formal Languages
Undergraduate / Master's Students
  • Miles Zheng

Past Members

PhD Alumni
  1. Saeid Tizpaz-Niari (PhD 2020, Differential Performance Debugging and its Application to Side-Channel Analysis) – Faculty at University of Illinois Chicago
  2. Devendra Bhave (PhD 2020, Perfect Subclasses of Real-Time Recursive Systems) – Principal Software Engineer at MathWorks Inc.
  3. Tianhan Lu (PhD 2023, Selectively-Amortized Resource Bounding) – Research Scientist at Meta, Inc.
  4. Taylor Dohmen (PhD 2024, Regular Transformations in Sequential Optimization & Reinforcement Learning) – AI Research Scientist at 3M
  5. John Komp (PhD 2024, From Text to Life: Optimized Pacemaker Therapy Through Formal Methods and Safe Reinforcement Learning)
  6. Vishnu Murali (PhD 2024, Inductive Functional Proofs Beyond Barrier Certificates) – Postdoctoral Researcher at CU Boulder
  7. Mateo Perez (PhD 2025, Formal Languages and Automata for Reinforcement Learning) – CEO and Founder of Jazzberry
  8. Shadi Tasdighi-Kalat (PhD 2025, Eliciting Decision Proxies in Multi-Agent Games) – Postdoctoral Researcher at CU Boulder
MS/BS Thesis Advisees (CU Boulder)
  • Saksham Srivastava – “LLMs and Program Synthesis”
  • Jordan Perr-Sauer (MS, Summer–Fall 2022) – “Verification of Neural Networks”
  • Zachary Mckevitt (BS, Fall 2020–Spring 2022, co-advised with Tamara Lehman) – “RNNs for Automatic Transient Execution Attack Detection”
  • Vishnu Murali (MS, 2019–2020) – “Optimal Repair for Omega-Regular Properties” (switched to PhD)
MS/BS Research Advisees (CU Boulder)
  • Aaptha Boggaram – “Artificial Cardiac Pacemakers and AI”
  • Varsha Dewangan – “Technical Challenges in Maintaining Tax Prep Software with LLMs”
  • Ali Almutawa, Jr. (BS, Summer 2022, co-advised with Fabio Somenzi)
  • Adam Adl (BS, Summer 2022, co-advised with Fabio Somenzi)
  • Ian Mckibben (BS, Summer 2021, co-advised with Fabio Somenzi)
  • Emily Millican (BS, Spring 2020) – Now Embedded Software Engineer at Ball Aerospace
  • John Paul Martin (MS, 2018–2020, co-advised with Evan Chang)
  • Juraj Culak (MS, 2017–2019)
  • Mateo Perez (BS, Summer 2018–Spring 2020, co-advised with Fabio Somenzi) – CEO and Founder of Jazzberry
  • Shemal Somil Lalaji (MS, Fall 2019) – Independent Study Project; now at Visa
  • Capstone Team “Love Bugs” (ECEE, Self-Driving Car Project, co-advised with Fabio Somenzi): Mohammed Al Hasani, George Matthew Helmick, Rodolfo Gonzalez Hill V, Myungshin Im, Michael Shea Oliver, and Mateo Perez
  • Aniruddha Phatak (MS/PhD, 2019–2020, co-advised with Pavol Cerny)
  • Aniket Lata (MS, 2015–2016, co-advised with Evan Chang) – Now at Qualcomm
  • Ram Das Diwakaran (MS, 2016–2017, co-advised with Sriram Sankaranarayanan) – Now at Generac

Resources

  • CUPLV GitHub Organization – Open-source tools and benchmarks developed by our group.
  • arXiv Preprints – Recent and past preprints of our research papers.
  • Slides from my previous talks are available upon request.
  • Course materials—including slides, lecture notes, and assignments—for selected graduate and undergraduate courses are available upon request.

Apply Now

If you are interested in joining the group as a PhD student, please read my research interests and send a concise email outlining your background and fit.