About Me

I received my B.E. in Automobile Engineering from Beihang University in 2009 and my M.E. in Mechatronics Engineering from Shanghai Jiao Tong University in 2012. From March 2012 to July 2013, I worked as a high-precision motion control engineer at Shanghai Micro Electronics Equipment Co., China. I received my Ph.D. in Chemical Engineering (Process Control) from the University of New South Wales (UNSW) in 2017, supervised by Prof. Jie Bao. From April 2017 to September 2018, I was a postdoctoral researcher in the School of Chemical Engineering at UNSW.

News

08/05/25
Our paper "Robustly Invertible Nonlinear Dynamics and the BiLipREN: Contracting Neural Models with Contracting Inverses" has been posted on arXiv (joint work with Yurui Zhang and Ian R. Manchester). In this paper, We propose a new subclass of REN models with robust invertibility guarantees, i.e., the input signal to the neural networks can be robustly recovered from the output signals.
28/04/25
Our paper "Negative Imaginary Neural ODEs: Learning to Control Mechanical Systems with Stability Guarantees" (arXiv, joint work with Kanghong Shi and Ian R. Manchester) has been accepted to NOLCOS 2025. In this paper, We propose a new class of Neural ODEs with negative imaginary guarantees, which can be applied to learn stabilizing controllers for many mechanical systems.
03/04/25
Our paper "R2DN: Scalable Parameterization of Contracting and Lipschitz Recurrent Deep Networks" has been posted on arXiv (joint work with Nicholas H. Barbara and Ian R. Manchester). In this paper, we introduce a new contracting and Lipschitz neural dynamical model classes, called Robust Recurrent Deep Network (R2DN), which shares the similar structure and guarantees as our previous REN model but is much faster. Code can be found in here.
03/03/25
I joined the Reliable and Secure AI team at ServiceNow Research as a visiting researcher.
31/01/25
Our paper "Norm-Bounded Low-Rank Adaptation" has been posted on arXiv (joint work with Krishnamurthy Dvijotham and Ian R. Manchester). In this paper, we present two complete parameterizations of norm-bounded low-rank adaptation, i.e., they cover all matrices satisfying the prescribed rank and unitarily-invariant norm bound. Experiments on vision fine-tuning benchmarks show that it can achieve good adaptation performance while avoiding model catastrophic forgetting and also substantially improve robustness to a wide range of hyper-parameters, including rank, learning rate and number of training epochs.
29/10/24
Our paper "LipKernel: Lipschitz-Bounded Convolutional Neural Networks via Dissipative Layers" has been posted on arXiv (joint work with Patricia Pauli, Ian R. Manchester, and Frank Allgöwer). In this paper, we propose a novel layer-wise parameterization for convolutional neural networks that includes builtin robustness guarantees by enforcing a prescribed Lipschitz bound. Numerical experiments show that the run-time using our method is orders of magnitude faster than state-of-the-art Lipschitz-bounded networks that parameterize convolutions in the Frourier domain, making our approach attractive for robust learning-based real-time perception in robotics.

Selected Publications

Google Scholar
Monotone, Bi-Lipschitz, and Polyak-Łojasiewicz Networks
International Conference on Machine Learning (ICML), 2024
Ruigang Wang, Krishnamurthy Dvijotham, Ian R. Manchester

Recurrent Equilibrium Networks: Flexible Dynamic Models with Guaranteed Stability and Robustness
IEEE Transactions on Automatic Control (TAC, full paper), 2024
Max Revay, Ruigang Wang, Ian R. Manchester

Direct parameterization of Lipschitz-bounded deep networks
(Oral) International Conference on Machine Learning (ICML), 2023
Ruigang Wang, Ian R. Manchester

Reduced-order nonlinear observers via contraction analysis and convex optimization
IEEE Transactions on Automatic Control (TAC, full paper), 2022
Bowen Yi, Ruigang Wang, Ian R. Manchester

On Robust Reinforcement Learning with Lipschitz-Bounded Policy Networks
Systems Theory in Data and Optimization: Proceedings of SysDO 2024
Nicholas H. Barbara, Ruigang Wang, Ian R. Manchester

Learning stable and passive neural differential equations
63rd IEEE Conference on Decision and Control (CDC), 2024
Jing Cheng, Ruigang Wang, Ian R. Manchester

Learning Over All Stabilizing Nonlinear Controllers for a Partially-Observed Linear System
IEEE Control Systems Letters, 2023
Ruigang Wang, Nicholas H. Barbara, Max Revay, Ian R. Manchester

Learning stable and robust linear parameter-varying state-space models
62nd IEEE Conference on Decision and Control (CDC), 2023
Chris Verhoek, Ruigang Wang, Roland Toth

Lipschitz-bounded 1D convolutional neural networks using the Cayley transform and the controllability Gramian
62nd IEEE Conference on Decision and Control (CDC), 2023
Patricia Pauli, Ruigang Wang, Ian R. Manchester, Frank Allgöwer

Learning Over Contracting and Lipschitz Closed-Loops for Partially-Observed Nonlinear Systems
62nd IEEE Conference on Decision and Control (CDC), 2023
Nicholas H. Barbara, Ruigang Wang, Ian R. Manchester