Kushal Chakrabarti

Consultant
Data and Decision Sciences
Tata Consultancy Services Research
Tata Consultancy Services Ltd.
Olympus - A, Opp. Rodas Enclave, Hiranandani Estate
Ghodbunder Road, Patlipada, Thane (W) 400607
Maharashtra, India
Email: kchak@umd.edu

Bio

I received my Ph.D. degree in Electrical Engineering from the University of Maryland, College Park in 2022, my M.Tech degree in Control and Automation from the Indian Institute of Technology Delhi in 2016, and my B.E. degree in Electronics and Telecommunication Engineering from Jadavpur University in 2014. I am currently a postdoctoral researcher in the Data and Decision Sciences division at Tata Consultancy Services Research, Mumbai. During 2016-2017, I was an Assistant Professor in the Department of Electronics and Communication Engineering at Siksha 'O' Anusandhan, India. My primary research lies at the intersection of optimization and control theory: I develop novel tools for solving optimization problems, aiming to balance performance, efficiency, and reliability.

Research Interests

Distributed optimization

With recent data-driven technological advances, optimization has become ubiquitous across applications. In many contemporary applications, the data points are dispersed over several sources due to restrictions such as industrial competition, administrative regulations, and user privacy. The traditional gradient-descent algorithm can solve such optimization problems when the cost functions are differentiable. However, the convergence speed and the robustness against noise of the gradient-descent method and its accelerated variants are highly influenced by the conditioning of the optimization problem being solved. With Nirupam Gupta and Nikhil Chopra, I developed an iteratively pre-conditioned gradient-descent (IPG) technique for distributed optimization and a local pre-conditioning technique for decentralized optimization. IPG's robustness against noise has proven impactful in specific problems such as beamforming, observer design, localization, and quantum circuit optimization. IPG also has potential applications in federated learning, which I plan to investigate in the future. Moreover, the IPG algorithm is being implemented in PyTorch and will be available as a callable routine.
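To make the idea concrete, below is a minimal single-machine NumPy sketch of iterative pre-conditioning for the linear least-squares problem min_x ||Ax - b||^2. The step sizes, the regularization parameter beta, and the centralized setting are illustrative assumptions, not the exact distributed protocol of the papers listed below.

  import numpy as np

  def ipg_least_squares(A, b, delta=1.0, beta=0.1, iters=500):
      """Sketch of iteratively pre-conditioned gradient descent (IPG) for
      min_x ||Ax - b||^2. The pre-conditioner K is refined alongside x and
      converges to (A^T A + beta*I)^{-1}, so the descent step becomes
      insensitive to the conditioning of A; the fixed point of the x-update
      is the ordinary least-squares solution."""
      m, n = A.shape
      H = A.T @ A + beta * np.eye(n)           # regularized Gram matrix
      alpha = 1.0 / np.linalg.norm(H, 2)       # keeps the K-update stable
      x, K = np.zeros(n), np.zeros((n, n))
      for _ in range(iters):
          K = K - alpha * (H @ K - np.eye(n))  # iterative pre-conditioner update
          grad = A.T @ (A @ x - b)             # gradient of the least-squares cost
          x = x - delta * K @ grad             # pre-conditioned descent step
      return x

  # Ill-conditioned toy problem (illustrative); compare with a direct solver
  rng = np.random.default_rng(0)
  A = rng.standard_normal((100, 5)) @ np.diag([1.0, 1.0, 1.0, 1.0, 100.0])
  b = rng.standard_normal(100)
  x_star = np.linalg.lstsq(A, b, rcond=None)[0]
  print(np.linalg.norm(ipg_least_squares(A, b) - x_star))  # small

Because K is updated by a contraction rather than by inverting the Hessian, each iteration stays as cheap as a gradient step while the effective conditioning of the problem improves over time.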

Peer-reviewed
  • Kushal Chakrabarti, and Nikhil Chopra. “A Newton-Type Observer Robust to Measurement Noise”. 2023 American Control Conference (ACC). IEEE. May 2023. (accepted)
  • Kushal Chakrabarti, Nirupam Gupta, and Nikhil Chopra. “On Pre-Conditioning of Decentralized Gradient-Descent when Solving a System of Linear Equations”. IEEE Transactions on Control of Network Systems. vol.9, no.2, pp.811-822, Apr 2022. doi:10.1109/TCNS.2022.3165089.
  • Kushal Chakrabarti, Nirupam Gupta, and Nikhil Chopra. “Iterative Preconditioning for Expediting the Distributed Gradient-Descent Method: The case of Linear Least-Squares Problem”. Automatica. vol.137, art. no. 110095, Mar 2022. doi:10.1016/j.automatica.2021.110095.
  • Kushal Chakrabarti, Amrit S. Bedi, Fikadu T. Dagefu, Jeffrey N. Twigg, and Nikhil Chopra. “Fast Distributed Beamforming without Receiver Feedback”. Fifty-Sixth Asilomar Conference on Signals, Systems, and Computers. IEEE. pp.1408-1412, Nov 2022.
  • Kushal Chakrabarti, Nirupam Gupta, and Nikhil Chopra. “Accelerating Distributed SGD for Linear Regression using Iterative Pre-Conditioning”. Learning for Dynamics and Control. PMLR. vol.144, pp.447-458, Jun 2021.
  • Kushal Chakrabarti, Nirupam Gupta, and Nikhil Chopra. “Robustness of Iteratively Pre-Conditioned Gradient-Descent Method: The Case of Distributed Linear Regression Problem”. IEEE Control Systems Letters. vol.5, no.6, pp.2180-2185, Dec 2020. doi:10.1109/LCSYS.2020.3045533.
  • Kushal Chakrabarti, Nirupam Gupta, and Nikhil Chopra. “Iterative Pre-Conditioning to Expedite the Gradient-Descent Method”. 2020 American Control Conference (ACC). IEEE. pp.3977-3982, Jul 2020. doi:10.23919/ACC45564.2020.9147603.
  • Kushal Chakrabarti, Nirupam Gupta, and Nikhil Chopra. “On Distributed Solution of Ill-Conditioned System of Linear Equations under Communication Delays”. 2019 Sixth Indian Control Conference (ICC). IEEE. pp.413-418, Dec 2019. doi:10.1109/ICC47138.2019.9123154.
arXiv Preprints
  • Kushal Chakrabarti, Nirupam Gupta, and Nikhil Chopra. “On Accelerating Distributed Convex Optimizations”. arXiv preprint arXiv:2108.08670. Aug 2021.

Control-theoretic perspective of optimization algorithms

While machine learning is pushing technology forward, classical methods that can support it are still worth exploring. Along this line of research, another work of mine models optimization algorithms for non-convex optimization as closed-loop continuous-time dynamical systems. With Nikhil Chopra, I developed a unified state-space framework for adaptive gradient methods, which lets us apply control-theoretic methodology to analyze the prominent adaptive gradient methods used to train deep neural networks. From a synthesis point of view, we utilized the classical transfer-function paradigm to propose new variants of Adam. Experiments on benchmark machine learning tasks demonstrate the efficiency of our proposed algorithms compared to state-of-the-art optimizers for deep neural networks, such as Adam and several of its variants. Our findings motivate further exploration of existing control-theory tools in complex machine learning problems.
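As a concrete illustration of the state-space view, the sketch below writes the standard Adam optimizer as a discrete-time closed-loop system: the optimizer states (m, v) and the iterate x evolve jointly, with the gradient acting as the feedback signal. The toy quadratic objective and the hyperparameter values are illustrative assumptions.

  import numpy as np

  def adam_state_space(grad, x0, lr=0.001, b1=0.9, b2=0.999, eps=1e-8, iters=500):
      """Adam viewed as a closed-loop dynamical system: the controller
      states (m, v) and the plant state x evolve together, driven by the
      gradient g(x) as the feedback input."""
      x = np.asarray(x0, dtype=float)
      m = np.zeros_like(x)   # first-moment state (momentum dynamics)
      v = np.zeros_like(x)   # second-moment state (adaptive scaling dynamics)
      for t in range(1, iters + 1):
          g = grad(x)                        # feedback: gradient at current state
          m = b1 * m + (1 - b1) * g          # momentum state update
          v = b2 * v + (1 - b2) * g**2       # scaling state update
          mhat = m / (1 - b1**t)             # bias-corrected outputs
          vhat = v / (1 - b2**t)
          x = x - lr * mhat / (np.sqrt(vhat) + eps)   # plant (iterate) update
      return x

  # Toy ill-conditioned quadratic f(x) = 0.5 * (x1^2 + 50 x2^2) (illustrative)
  grad = lambda x: np.array([1.0, 50.0]) * x
  print(adam_state_space(grad, x0=[1.0, 1.0], lr=0.05))  # driven near the origin

Writing the update this way exposes the optimizer as an interconnection of linear moment dynamics and a static nonlinearity, which is the structure the state-space analysis exploits.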

Peer-reviewed
  • Kushal Chakrabarti, and Nikhil Chopra. “A State-Space Perspective on the Expedited Gradient Methods: Nadam, RAdam, and Rescaled Gradient Flow”. 2022 Eighth Indian Control Conference (ICC). IEEE. pp.31-36, Dec 2022.
  • Kushal Chakrabarti, and Nikhil Chopra. “Analysis and Synthesis of Adaptive Gradient Algorithms in Machine Learning: The Case of AdaBound and MAdamSSM”. 2022 61st IEEE Conference on Decision and Control (CDC). IEEE. pp.795-800, Dec 2022. doi:10.1109/CDC51059.2022.9992512.
  • Kushal Chakrabarti, and Nikhil Chopra. “Generalized AdaGrad (G-AdaGrad) and Adam: A State-Space Perspective”. 2021 60th IEEE Conference on Decision and Control (CDC). IEEE. pp.1496-1501, Dec 2021. doi:10.1109/CDC45484.2021.9682994.
arXiv Preprints
  • Kushal Chakrabarti, and Nikhil Chopra. “A Control Theoretic Framework for Adaptive Gradient Optimizers in Machine Learning”. arXiv preprint arXiv:2206.02034. Jun 2022. (submitted to Automatica)

Parameter estimation in biomolecular systems

Determining model parameters is an essential and challenging task in systems biology. The challenges arise from factors such as inherent non-Gaussian noise and unmodeled dynamics; moreover, in such systems the noise itself depends on the parameters. Kalman filtering has been used extensively to estimate parameters in biomolecular systems. However, the process noise covariance, which may be used to estimate parameters in Kalman filtering, itself depends on the unknown parameters. With Abhishek Dey and Shaunak Sen, we formulated an estimate-dependent expression for the unknown process noise covariance based on the chemical Langevin equation, which is updated at every iteration. We found that this approach can give reasonably good parameter estimates for biomolecular systems and other systems with parameter-dependent noise.
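The sketch below illustrates the idea on a scalar birth-death process: by the chemical Langevin equation, the process noise covariance depends on the state and the unknown decay rate, so the extended Kalman filter rebuilds Q from its current estimates at every iteration while tracking the parameter through an augmented state. The specific model, noise levels, and augmented-state formulation are illustrative assumptions, not the exact setup of the paper.

  import numpy as np

  rng = np.random.default_rng(1)
  dt, k, gamma_true, R = 0.01, 10.0, 0.5, 0.25  # step, birth rate, decay rate, meas. var.

  # Simulate the birth-death process via the chemical Langevin equation:
  # dx = (k - gamma*x) dt + sqrt(k + gamma*x) dW
  x, ys = 5.0, []
  for _ in range(5000):
      x += (k - gamma_true * x) * dt \
           + np.sqrt((k + gamma_true * x) * dt) * rng.standard_normal()
      ys.append(x + np.sqrt(R) * rng.standard_normal())  # noisy measurement of x

  # EKF on the augmented state z = [x, gamma]; the process noise covariance Q
  # is recomputed from the current estimates instead of being fixed a priori.
  z = np.array([1.0, 1.0])       # initial guesses for x and gamma
  P = np.eye(2)
  H = np.array([[1.0, 0.0]])     # we measure x only
  for y in ys:
      # Predict: Euler step of the drift; gamma is modeled as constant
      z_pred = np.array([z[0] + (k - z[1] * z[0]) * dt, z[1]])
      F = np.array([[1.0 - z[1] * dt, -z[0] * dt], [0.0, 1.0]])
      Q = np.diag([(k + z[1] * z[0]) * dt, 1e-6])  # estimate-dependent CLE covariance
      P = F @ P @ F.T + Q
      # Update with the scalar measurement
      S = (H @ P @ H.T + R).item()
      K = (P @ H.T) / S
      z = z_pred + (K * (y - z_pred[0])).ravel()
      P = (np.eye(2) - K @ H) @ P

  print("estimated decay rate:", z[1])  # should approach gamma_true = 0.5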

Peer-reviewed
  • Abhishek Dey, Kushal Chakrabarti, Krishan Kumar Gola, and Shaunak Sen. “A Kalman Filter Approach for Biomolecular Systems with Noise Covariance Updating”. 2019 Sixth Indian Control Conference (ICC). IEEE. pp.262-267, Dec 2019. doi:10.1109/ICC47138.2019.9123219.