We investigate the foundational and algorithmic integration of formal methods with reinforcement learning to enable reliable and structured decision-making in complex environments. Our research spans the theory and algorithms needed for this integration: by unifying techniques from logic, automata, control theory, and learning, we aim to build safe, interpretable, and specification-aware RL systems.
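As a minimal sketch of what specification-aware learning can look like (the propositions, automaton, and reward values below are ours for illustration, not a fixed methodology), a requirement such as "avoid hazards until the goal is reached" can be tracked by a small automaton that runs alongside the learner and shapes its reward:

```python
# Illustrative only: a hand-written monitor for the reach-avoid requirement
# "avoid 'hazard' until 'goal' is reached", used to shape an RL agent's reward.
# The propositions, trajectory, and reward values are invented for this sketch.

ACCEPT, REJECT, TRYING = "accept", "reject", "trying"

def monitor_step(state, labels):
    """Advance the three-state automaton on the set of propositions that hold."""
    if state in (ACCEPT, REJECT):
        return state          # accepting/rejecting states are absorbing
    if "hazard" in labels:
        return REJECT
    if "goal" in labels:
        return ACCEPT
    return TRYING

class SpecMonitor:
    """Runs alongside the environment and converts monitor progress into reward."""
    def __init__(self):
        self.state = TRYING

    def shaped_reward(self, labels):
        prev, self.state = self.state, monitor_step(self.state, labels)
        if prev != ACCEPT and self.state == ACCEPT:
            return 1.0        # specification satisfied
        if prev != REJECT and self.state == REJECT:
            return -1.0       # specification violated
        return 0.0            # no change in specification status

if __name__ == "__main__":
    monitor = SpecMonitor()
    # A toy label trace that a (hypothetical) labelled environment might emit.
    for labels in [set(), {"wet_floor"}, set(), {"goal"}]:
        r = monitor.shaped_reward(labels)
        print(sorted(labels), "-> state:", monitor.state, "reward:", r)
```

In practice, the monitoring automaton would typically be compiled automatically from a temporal-logic specification rather than written by hand.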
We develop methods that enable large language models (LLMs) to reason in trustworthy, auditable, and verifiable ways. Our work integrates LLMs with symbolic tools such as formal logic, automata, and SMT solvers, so that their explanations and decisions are auditable and backed by formal reasoning.
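As one minimal sketch of this kind of neuro-symbolic check (the scheduling constraints are invented and the LLM call is stubbed out; the check uses the Z3 SMT solver's Python bindings), a model-proposed answer is accepted only when the solver finds a model witnessing the formal requirements:

```python
# Illustrative only: accept a (stubbed) LLM-proposed schedule just in case the
# Z3 SMT solver can witness that it meets the stated constraints, so the final
# answer is backed by a machine-checked model rather than free-form text.
# Requires the `z3-solver` package; the constraints below are invented.
from z3 import Ints, Solver, And, sat

def llm_propose_schedule():
    """Stand-in for an LLM call that proposes start hours for two tasks."""
    return {"task_a": 9, "task_b": 11}

def check_schedule(proposal):
    a, b = Ints("a b")
    solver = Solver()
    solver.add(a == proposal["task_a"], b == proposal["task_b"])
    # Formal requirements: work hours 9-17, task_a (2h) must finish before task_b (1h).
    solver.add(And(a >= 9, b >= 9, a + 2 <= b, b + 1 <= 17))
    if solver.check() == sat:
        return True, solver.model()   # auditable witness for the answer
    return False, None                # proposal violates the requirements

if __name__ == "__main__":
    ok, witness = check_schedule(llm_propose_schedule())
    print("verified" if ok else "rejected", witness)
```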
We design modular, multi-agent architectures that separate planning, reasoning, and explanation, allowing LLMs to take on each of these roles in a structured and verifiable way.
This enables applications in explainable formal methods, legally accountable AI, and safe decision-making in high-stakes domains.
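A minimal sketch of such a separation of concerns, using illustrative role interfaces of our own naming (Planner, Reasoner, Explainer) rather than any published API; in a real system the planner and explainer would be LLM-backed and the reasoner would call a formal tool:

```python
# Illustrative only: a modular architecture that keeps planning, reasoning, and
# explanation in separate components. The interfaces and stubs below are ours;
# a real system would back Planner/Explainer with an LLM and Reasoner with a
# formal tool such as an SMT solver or model checker.
from dataclasses import dataclass
from typing import List, Protocol

@dataclass
class Plan:
    steps: List[str]

class Planner(Protocol):
    def propose(self, goal: str) -> Plan: ...

class Reasoner(Protocol):
    def verify(self, plan: Plan) -> bool: ...

class Explainer(Protocol):
    def explain(self, plan: Plan, verified: bool) -> str: ...

def decide(goal: str, planner: Planner, reasoner: Reasoner, explainer: Explainer) -> str:
    """Only verified plans are surfaced, and every answer carries an explanation."""
    plan = planner.propose(goal)
    return explainer.explain(plan, reasoner.verify(plan))

if __name__ == "__main__":
    class StubPlanner:
        def propose(self, goal):
            return Plan(steps=[f"do({goal})"])

    class StubReasoner:
        def verify(self, plan):
            return len(plan.steps) > 0   # placeholder for a formal check

    class StubExplainer:
        def explain(self, plan, verified):
            status = "verified" if verified else "rejected"
            return f"{status} plan: {plan.steps}"

    print(decide("file_report", StubPlanner(), StubReasoner(), StubExplainer()))
```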
We study the interplay between algorithmic decision-making, software systems, and legal and ethical accountability. Our research develops formal and data-driven methods for domains such as tax preparation software, juvenile justice, and public services, where fairness, transparency, and legal compliance are critical.
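As a toy illustration of how a legal-style requirement might be made executable and checked against software (the tax bracket, rates, and property below are entirely invented), consider testing a tax computation for the property that earning more pre-tax never reduces post-tax income:

```python
# Illustrative only: the bracket, rates, and property below are invented.
# The idea is to phrase a legal-style requirement as an executable property
# and test a tax-computation routine against it.

def tax_owed(income: float) -> float:
    """Stand-in for a tax-software computation under audit (two toy brackets)."""
    return 0.10 * min(income, 10_000) + 0.20 * max(income - 10_000, 0.0)

def post_tax_monotone(tax_fn, incomes) -> bool:
    """Property: earning more pre-tax never reduces post-tax income."""
    post_tax = [x - tax_fn(x) for x in sorted(incomes)]
    return all(later >= earlier for earlier, later in zip(post_tax, post_tax[1:]))

if __name__ == "__main__":
    samples = [0, 5_000, 10_000, 10_001, 50_000, 100_000]
    print("property holds on samples:", post_tax_monotone(tax_owed, samples))
```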
We design provably correct and resilient controllers for safety-critical cyber-physical systems (CPS). Our research combines control theory, formal verification, and symbolic reasoning to ensure both safety and security in dynamic, uncertain environments.
We develop tools and techniques for the design and verification of such controllers.
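A minimal sketch of one ingredient of such a toolbox, a runtime safety shield for a one-dimensional vehicle model (the dynamics, bounds, and nominal policy are illustrative stand-ins, not a verified design):

```python
# Illustrative only: a runtime safety shield for a 1-D vehicle (position x,
# velocity v) approaching a wall. The nominal controller is a placeholder for
# a learned or hand-designed policy; the shield overrides a command only when
# it would make stopping before the wall impossible under full braking.

DT, A_MAX, WALL = 0.1, 2.0, 10.0      # time step [s], max |accel|, wall position

def nominal_controller(x, v):
    """Placeholder policy: always accelerate toward the wall."""
    return A_MAX

def shielded(x, v, a):
    """Permit `a` only if, after one step, full braking still stops before WALL."""
    v_next = max(v + a * DT, 0.0)
    x_next = x + v_next * DT          # matches the simulation's update order
    if x_next + v_next ** 2 / (2 * A_MAX) <= WALL:
        return a                      # command keeps a recoverable braking margin
    return -A_MAX                     # otherwise fall back to hard braking

if __name__ == "__main__":
    x, v = 0.0, 0.0
    for _ in range(100):
        a = shielded(x, v, nominal_controller(x, v))
        v = max(v + a * DT, 0.0)      # no reversing in this toy model
        x = x + v * DT
    print(f"final position {x:.2f} (wall at {WALL}), final velocity {v:.2f}")
```

The shield maintains the invariant that a full-braking maneuver can always stop the system before the wall, so the nominal command is overridden only when it would break that guarantee.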
If you are interested in joining the group as a PhD student, please read my research interests and send a concise email outlining your background and fit.