[Figure: Performance of Contractive Diffusion Policies vs baseline diffusion policies]

Contractive Diffusion Policies: Robust Action Diffusion via Contractive Score-Based Sampling with Differential Equations

CDPs add a simple contraction regularizer to diffusion policies, pulling nearby sampling trajectories together to suppress solver and score-matching errors. This yields more robust action generation in offline RL and imitation learning, especially in low-data regimes.

September 2025 · Amin Abyaneh, Charlotte Morissette, Mohamad Danesh, Anas Houssaini, David Meger, Gregory Dudek, Hsiu-Chin Lin
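The contraction idea above can be illustrated with a toy regularizer: penalize any denoising step that maps two nearby sample points further apart than a contraction rate gamma < 1 allows. This is a minimal numerical sketch, not the paper's actual training objective; the function names, the finite-difference probing, and the hinge form are all assumptions.

```python
import numpy as np

def contraction_penalty(denoise_step, x, eps=1e-3, gamma=0.95, n_pairs=8, rng=None):
    """Hypothetical contraction regularizer: average hinge loss on how much a
    sampling step expands random perturbations beyond the rate gamma."""
    rng = np.random.default_rng(rng)
    total = 0.0
    for _ in range(n_pairs):
        delta = eps * rng.standard_normal(x.shape)
        d_in = np.linalg.norm(delta)
        d_out = np.linalg.norm(denoise_step(x + delta) - denoise_step(x))
        total += max(0.0, d_out - gamma * d_in)  # positive only on contraction violations
    return total / n_pairs

# A linear step with gain 0.5 contracts all perturbations, so the penalty is zero.
step = lambda z: 0.5 * z
x0 = np.zeros(4)
print(contraction_penalty(step, x0))  # 0.0
```

An expansive step (e.g. `lambda z: 2.0 * z`) would incur a strictly positive penalty, which is the signal a training loss could use to pull nearby sampling trajectories together.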
[Figure: Overview of VOCALoco framework showing skill prediction and selection]

VOCALoco: Viability-Optimized Cost-aware Adaptive Locomotion

VOCALoco predicts the viability and cost of transport for several pretrained locomotion skills from local heightmaps, then executes the safest, most efficient one. It improves robustness on stairs and transfers zero-shot to real hardware.

June 2025 · Stanley Wu, Mohamad H. Danesh, Simon Li, Hanna Yurchyk, Amin Abyaneh, Anas El Houssaini, David Meger, Hsiu-Chin Lin
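The selection rule described above can be sketched as a viability-gated argmin over predicted cost of transport. This is a hypothetical illustration of the decision logic only; the function name, the tuple layout, and the threshold value are assumptions, not the paper's API.

```python
# Hypothetical sketch of VOCALoco-style skill selection: keep skills predicted
# viable from the local heightmap, then execute the cheapest viable one.
def select_skill(predictions, viability_threshold=0.5):
    """predictions: list of (skill_name, p_viable, cost_of_transport) tuples."""
    viable = [p for p in predictions if p[1] >= viability_threshold]
    if not viable:
        return None  # no safe skill; a real system would fall back to a recovery behavior
    return min(viable, key=lambda p: p[2])[0]

preds = [("walk", 0.9, 1.2), ("climb", 0.8, 0.9), ("jump", 0.3, 0.5)]
print(select_skill(preds))  # "climb": cheapest among the viable skills
```

Note that "jump" is cheapest overall but is filtered out by the viability gate, which is the safety-before-efficiency ordering the summary describes.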
[Figure: Learning contractive dynamical systems with neural ODEs for imitation]

Contractive Dynamical Imitation Policies for Efficient Out-of-Sample Recovery

Summer at NCCR and EPFL

This work is the result of an amazing collaboration with EPFL and NCCR Automation. The project's contributions owe much to the dedicated work and ideas of senior PhD candidate Mahrokh G. Boroujeni and Prof. Giancarlo Ferrari-Trecate, both from EPFL's Laboratoire d'Automatique. Special thanks to NCCR's Visiting Researcher Fellowship and to EPFL's hospitality for providing a productive work environment throughout my stay.

Summary of the work

Imitation learning is a data-driven approach to learning policies from expert behavior, but it is prone to unreliable outcomes in out-of-sample (OOS) regions. While previous research relying on stable dynamical systems guarantees convergence to a desired state, it often overlooks transient behavior. We propose a framework for learning policies modeled by contractive dynamical systems, ensuring that all policy rollouts converge regardless of perturbations and, in turn, enabling efficient OOS recovery. By leveraging recurrent equilibrium networks and coupling layers, the policy structure guarantees contractivity for any parameter choice, which enables unconstrained optimization. We also provide theoretical upper bounds on the worst-case and expected loss to rigorously establish the reliability of our method in deployment. Empirically, we demonstrate substantial OOS performance improvements on simulated robotic manipulation and navigation tasks. ...

January 2025 · Amin Abyaneh, Mahrokh Boroujeni, Hsiu-Chin Lin, Giancarlo Ferrari-Trecate
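The recovery property claimed above can be demonstrated numerically: in a contractive system, any two rollouts pull together exponentially, so a perturbed (out-of-sample) start converges back to the nominal trajectory. The linear system below is an illustrative stand-in, not the paper's REN-based policy.

```python
import numpy as np

def rollout(f, x0, dt=0.01, steps=500):
    """Forward-Euler rollout of the dynamical system x' = f(x)."""
    xs = [np.asarray(x0, dtype=float)]
    for _ in range(steps):
        xs.append(xs[-1] + dt * f(xs[-1]))
    return np.stack(xs)

# x' = -A x with A symmetric positive definite is contractive: all trajectories
# converge to each other (and to the equilibrium at the origin).
A = np.array([[2.0, 0.5], [0.5, 1.0]])
f = lambda x: -A @ x

nominal = rollout(f, [1.0, 0.0])
perturbed = rollout(f, [1.5, 0.8])  # out-of-sample initial state
gaps = np.linalg.norm(nominal - perturbed, axis=1)
print(gaps[0], gaps[-1])  # the gap shrinks monotonically toward zero
```

The same qualitative behavior is what the contractive policy structure guarantees for every parameter choice, which is why rollouts recover from perturbations without any constrained optimization.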
[Figure: Learning polynomial dynamical systems with parametric ODEs for imitation]

Learning Lyapunov-Stable Polynomial Dynamical Systems through Imitation

Missing my first conference

Unfortunately, I couldn't attend my first conference paper presentation at CoRL'23 due to US visa processing delays. I applied for a visa well in advance, but faced significant backlog issues that persisted beyond the conference date. As of now (several months later), I'm still awaiting visa approval, which has been pending since October 2023.

Summary of the approach

Imitation learning is a paradigm for addressing complex motion planning problems by learning a policy that imitates an expert's behavior. However, relying solely on the expert's data can lead to unsafe actions when the robot deviates from the demonstrated trajectories. Stability guarantees have previously been provided using nonlinear dynamical systems, acting as high-level motion planners, in conjunction with the Lyapunov stability theorem. Yet these methods are prone to inaccurate policies, high computational cost, sample inefficiency, or quasi-stability when replicating complex and highly nonlinear trajectories. To mitigate these problems, we present an approach for learning a globally stable nonlinear dynamical system as a motion planning policy. We model the nonlinear dynamical system as a parametric polynomial and learn the polynomial's coefficients jointly with a Lyapunov candidate. To showcase its success, we compare our method against the state of the art in simulation and conduct real-world experiments with the Kinova Gen3 Lite manipulator arm. Our experiments demonstrate the sample efficiency and reproduction accuracy of our method across various expert trajectories, while remaining stable in the face of perturbations. ...

October 2023 · Amin Abyaneh, Hsiu-Chin Lin
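The stability condition behind this approach can be sketched concretely: pair a polynomial system f(x) with a quadratic Lyapunov candidate V(x) = xᵀPx, and require V̇(x) = 2 xᵀP f(x) < 0 away from the target. The polynomial and P below are hand-picked for illustration, not learned as in the paper.

```python
import numpy as np

# Quadratic Lyapunov candidate V(x) = x^T P x with P positive definite.
P = np.diag([1.0, 2.0])

def f(x):
    # Example cubic polynomial dynamical system with equilibrium f(0) = 0.
    return -x - 0.1 * x**3

def v_dot(x):
    # Time derivative of V along the flow: V_dot = 2 x^T P f(x).
    return 2.0 * x @ (P @ f(x))

# Sample states around the target and check V_dot is strictly negative there,
# which certifies that V decreases along every sampled trajectory.
rng = np.random.default_rng(0)
samples = rng.uniform(-2, 2, size=(1000, 2))
worst = max(v_dot(x) for x in samples if np.linalg.norm(x) > 1e-6)
print(worst)  # negative: V strictly decreases at every sampled state
```

In the paper this check is made global rather than sample-based by learning the polynomial coefficients jointly with the Lyapunov candidate so the condition holds by construction.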