Bio
I received my Ph.D. degree in Electrical Engineering from the University of Maryland, College Park in 2022, my M.Tech degree in Control and Automation from the Indian Institute of Technology Delhi in 2016, and my B.E. degree in Electronics and Telecommunication Engineering from Jadavpur University in 2014. I am currently a post-doctoral researcher in the Decision and Data Sciences division at Tata Consultancy Services Research, Mumbai. During 2016-2017, I was an Assistant Professor in the Department of Electronics and Communication Engineering at Siksha `O' Anusandhan, India. My primary research lies at the intersection of optimization and control theory, focused on developing novel tools for solving optimization problems while balancing the decisive triad of performance, efficiency, and reliability.
Consultant
Data and Decision Sciences
Tata Consultancy Services Research
Tata Consultancy Services Ltd.
Olympus - A, Opp. Rodas Enclave, Hiranandani Estate
Ghodbunder Road, Patlipada, Thane (W): 400607
Maharashtra, India
Email: kchak@umd.edu
Research Interests
With recent data-driven technological advances, optimization has become ubiquitous across applications. In many contemporary applications, the data points are dispersed over several sources owing to restrictions such as industrial competition, administrative regulations, and user privacy. The traditional gradient-descent algorithm can solve such optimization problems with differentiable cost functions. However, the convergence speed and noise-robustness of the gradient-descent method and its accelerated variants are highly influenced by the conditioning of the optimization problem being solved. With Nirupam Gupta and Nikhil Chopra, I developed an iterative pre-conditioning technique (IPG) for distributed optimization and a local pre-conditioning technique for decentralized optimization. IPG's robustness against noise has proven impactful in specific problems such as beamforming, observer design, localization, and quantum circuit optimization. IPG also has potential applications in federated learning, which I plan to investigate in the future. Moreover, the IPG algorithm is being implemented in PyTorch and will be available as a callable routine.
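The core idea of iterative pre-conditioning can be illustrated on a least-squares problem: instead of fixing a pre-conditioner up front, a matrix K is refined alongside the iterates so that it approaches the inverse Hessian, making the effective problem well-conditioned. The sketch below is illustrative only; the step sizes and the Richardson-type update K ← K - β(HK - I) are simplifying assumptions, not the published IPG update rule.

```python
import numpy as np

# Illustrative sketch: pre-conditioned gradient descent for
# min_x ||A x - b||^2, whose Hessian is H = 2 A^T A. The pre-conditioner
# K is updated iteratively toward H^{-1} while x descends, so the
# effective condition number of the problem shrinks over time.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
H = 2 * A.T @ A

x = np.zeros(5)
K = np.eye(5) / np.linalg.norm(H)        # conservative initial pre-conditioner
alpha = 0.5                              # step size for x
beta = 1.0 / np.linalg.norm(H, 2)        # step size for the K update

for _ in range(1000):
    grad = 2 * A.T @ (A @ x - b)
    K = K - beta * (H @ K - np.eye(5))   # drive K toward H^{-1}
    x = x - alpha * K @ grad             # pre-conditioned gradient step

# x should recover the least-squares solution
x_star = np.linalg.lstsq(A, b, rcond=None)[0]
```

As K approaches H⁻¹, each gradient step behaves like a damped Newton step, so the asymptotic convergence rate no longer depends on the conditioning of A.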
While machine learning is advancing our technology, some classical methods that could support machine learning are worth exploring. Along this line of research, another of my works models optimization algorithms for non-convex optimization as closed-loop continuous-time dynamical systems. With Nikhil Chopra, I developed a unified state-space framework for adaptive gradient methods, allowing us to apply control-theoretic methodology to the analysis of prominent adaptive gradient methods used to train deep neural networks. From a synthesis point of view, we utilized the classical transfer-function paradigm to propose new variants of Adam. Applications to benchmark machine learning tasks demonstrate our proposed algorithms' efficiency compared with state-of-the-art optimizers for deep neural networks, such as Adam and its several variants. Our findings suggest further exploring existing control-theory tools in complex machine learning problems.
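To make the state-space view concrete, standard Adam can be read as a discrete-time dynamical system: the optimizer states (m, v) evolve driven by the "input" gradient, and the parameter x is the output. The snippet below is a minimal sketch of this viewpoint using the textbook Adam update (hyperparameter values and the toy objective are illustrative choices, not taken from our papers).

```python
import numpy as np

def adam_step(x, m, v, g, t, lr=0.01, b1=0.9, b2=0.999, eps=1e-8):
    """One step of Adam, written as a state-space update:
    states (m, v) are driven by the input g; x is the output."""
    m = b1 * m + (1 - b1) * g            # first-moment state equation
    v = b2 * v + (1 - b2) * g * g        # second-moment state equation
    m_hat = m / (1 - b1 ** t)            # bias correction
    v_hat = v / (1 - b2 ** t)
    x = x - lr * m_hat / (np.sqrt(v_hat) + eps)   # output map
    return x, m, v

# Toy objective f(x) = (x - 3)^2, gradient g = 2 (x - 3)
x, m, v = 0.0, 0.0, 0.0
for t in range(1, 2001):
    g = 2 * (x - 3)
    x, m, v = adam_step(x, m, v, g, t)
```

Writing the update this way exposes the transfer function from gradient input to parameter output, which is the handle used for control-theoretic analysis and for synthesizing new variants.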
Determining model parameters is an essential and challenging task in systems biology. The challenges arise from factors such as inherent non-Gaussian noise and unmodeled dynamics; moreover, in such systems the noise itself depends on the parameters. Kalman filtering has been used extensively to estimate parameters in biomolecular systems. However, the process noise covariance, which is required for parameter estimation via Kalman filtering, itself depends on the unknown parameters. With Abhishek Dey and Shaunak Sen, I formulated an estimate-dependent expression for the unknown process noise covariance based on the chemical Langevin equation, which is updated at each iteration. We found that this yields reasonably good parameter estimates for biomolecular systems and other systems with parameter-dependent noise.
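The flavor of the approach can be sketched on a hypothetical birth-death process dx = (k - γx) dt + √(k + γx) dW, where the chemical Langevin equation makes the process-noise variance depend on the unknown rate k. Below, a Kalman filter on the augmented state [x, k] recomputes Q from the current estimate at every step; this is a simplified illustration under assumed dynamics and hyperparameters, not the exact formulation in our work.

```python
import numpy as np

# Simulate a birth-death process with production rate k_true and decay g.
rng = np.random.default_rng(1)
dt, g, k_true, T = 0.01, 1.0, 50.0, 20000
x = k_true / g
ys = []
for _ in range(T):
    x += (k_true - g * x) * dt \
         + np.sqrt(max(k_true + g * x, 0.0) * dt) * rng.standard_normal()
    ys.append(x + rng.standard_normal())        # noisy observation, R = 1

# Kalman filter on augmented state z = [x, k]; k modeled as constant.
z = np.array([ys[0], 10.0])                     # poor initial guess for k
P = np.diag([1.0, 100.0])
Hm = np.array([[1.0, 0.0]])
R = np.array([[1.0]])
for y in ys:
    F = np.array([[1.0 - g * dt, dt],
                  [0.0,          1.0]])
    # estimate-dependent process noise from the CLE, using the CURRENT z
    q = max(z[1] + g * z[0], 0.0) * dt
    Q = np.array([[q, 0.0], [0.0, 0.0]])
    z = np.array([z[0] + (z[1] - g * z[0]) * dt, z[1]])   # predict
    P = F @ P @ F.T + Q
    S = Hm @ P @ Hm.T + R                                 # update
    Kg = P @ Hm.T @ np.linalg.inv(S)
    z = z + (Kg @ (np.array([y]) - Hm @ z)).ravel()
    P = (np.eye(2) - Kg @ Hm) @ P
```

Because Q is rebuilt from the latest estimate rather than fixed a priori, the filter remains consistent even though the true noise intensity depends on the very parameter being estimated.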