Concept: contraction in diffusion sampling

Contractive Diffusion Policies

ICLR 2026

Contractive diffusion policies (CDPs) add a simple contraction regularizer to diffusion policies, pulling nearby sampling trajectories together to suppress solver and score-matching errors. This yields more robust action generation in offline RL and imitation learning, especially in low-data regimes.

2 min · Amin Abyaneh, Charlotte Morissette, Mohamad Danesh, Anas Houssaini, David Meger, Gregory Dudek, Hsiu-Chin Lin
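The contraction regularizer described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: `contraction_penalty`, `rho`, and the toy denoiser are hypothetical names, and the penalty shown (hinge on the ratio of output to input distances for two nearby states) is one plausible way to pull neighboring sampling trajectories together.

```python
import numpy as np

def contraction_penalty(denoise_step, x, noise_scale=1e-2, rho=0.9, rng=None):
    """Hypothetical contraction regularizer: penalize a one-step denoiser
    when two nearby inputs are mapped further apart than rho times their
    original distance, i.e., when the step fails to contract."""
    rng = np.random.default_rng(rng)
    x_perturbed = x + noise_scale * rng.standard_normal(x.shape)
    d_in = np.linalg.norm(x - x_perturbed)
    d_out = np.linalg.norm(denoise_step(x) - denoise_step(x_perturbed))
    # Zero when the step contracts by at least factor rho; positive otherwise.
    return max(0.0, d_out - rho * d_in)

# Toy denoiser that shrinks its input toward zero -> no penalty.
shrink = lambda x: 0.5 * x
print(contraction_penalty(shrink, np.ones(4), rng=0))  # -> 0.0
```

In training, a term like this would be added to the usual score-matching loss, so errors introduced by the solver or an imperfect score estimate shrink along the sampling trajectory instead of compounding.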
SCDS design overview

Contractive Dynamical Imitation Policies for Efficient Out-of-Sample Recovery

ICLR 2025

Imitation learning policies often fail when robots drift out-of-sample. We propose SCDS, a framework that models policies as contractive dynamical systems, ensuring that all rollouts converge back to the demonstrated behavior regardless of perturbations. The architecture guarantees contractivity by construction, enabling unconstrained optimization. We also provide formal bounds on worst-case and expected loss, and demonstrate strong out-of-sample recovery on manipulation and navigation tasks.

3 min · Amin Abyaneh*, Mahrokh Boroujeni*, Hsiu-Chin Lin, Giancarlo Ferrari-Trecate
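The "contractive by construction" idea above can be illustrated with a standard toy construction (not the SCDS architecture itself): parametrize the dynamics matrix as A = -(W Wᵀ + μI), which is negative definite for any unconstrained W, so the system contracts at rate at least μ and every rollout converges to the same point no matter where it starts. The function names and the forward-Euler rollout below are assumptions for illustration.

```python
import numpy as np

def contractive_dynamics(W, mu=0.5):
    """Build f(x) = A x with A = -(W W^T + mu I). A is negative definite
    for ANY real W, so contractivity holds by construction and W can be
    optimized without constraints."""
    A = -(W @ W.T + mu * np.eye(W.shape[0]))
    return lambda x: A @ x

def rollout(f, x0, dt=0.01, steps=2000):
    """Integrate x' = f(x) with forward Euler from initial state x0."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = x + dt * f(x)
    return x

rng = np.random.default_rng(0)
f = contractive_dynamics(rng.standard_normal((3, 3)))
# Two very different initial states end up at the same equilibrium:
a = rollout(f, [5.0, -3.0, 2.0])
b = rollout(f, [-4.0, 6.0, 1.0])
```

Because contractivity is structural rather than enforced by a penalty, a perturbed rollout cannot diverge; it is pulled back toward the nominal trajectory, which is the out-of-sample recovery property the abstract refers to.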