Research Overview
We are interested in stochastic modeling and analysis that provide sharp insights into the design of big data infrastructure. Our vision is to democratize machine learning by transcending the need for big compute and big data. We design scalable and sample-efficient algorithms that enable users to perform accurate training and inference even with limited computational capabilities and small amounts of data.
Tools and Approach
We use a diverse set of analytical tools drawn from probability, queueing, coding theory, and statistical learning. Our approach involves the following steps, often traversed in an iterative manner:
- Choosing a model that captures the essential elements of the system, while being simple and tractable.
- Designing algorithms that are provably optimal within the model framework.
- Evaluating the proposed algorithms on the real system, going beyond the assumptions of the model.
Selected Projects and Talk Videos
Computational Heterogeneity in Federated Learning
The emerging area of federated learning pursues this goal of democratized learning by orchestrating distributed model training across a large number of resource-constrained mobile devices that collect data from their environment. Due to limited communication capabilities as well as privacy concerns, the data collected by these devices cannot be sent to the cloud for centralized processing. Instead, the nodes perform local training updates and only send the resulting model to the cloud. In this project, we tackle computational heterogeneity in federated optimization, for example, heterogeneous local updates made by the edge clients and intermittent client availability; a toy sketch of such a training round appears after the publications below.
- Cooperative SGD: A Unified Framework for the Analysis of Local-Update SGD
- Jianyu Wang, Gauri Joshi
- Journal of Machine Learning Research (JMLR), 2021
- Tackling the Objective Inconsistency Problem in Heterogeneous Federated Optimization
- Jianyu Wang, Qinghua Liu, Hao Liang, Gauri Joshi, H. Vincent Poor
- Neural Information Processing Systems (NeurIPS), Dec 2020
- Client Selection in Federated Learning: Convergence Analysis and Power-of-Choice Selection Strategies
- Yae Jee Cho, Jianyu Wang, Gauri Joshi
- preprint
- Local Adaptivity in Federated Learning: Convergence and Consistency
- Jianyu Wang, Zheng Xu, Zachary Garrett, Zachary Charles, Luyang Liu, Gauri Joshi
- preprint
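To make the setting concrete, here is a minimal sketch of a federated training round with heterogeneous local updates and intermittent client participation, written in plain NumPy. The synthetic linear-regression clients, step counts, and learning rate are illustrative assumptions, not the algorithms or settings from the papers above (which, for example, analyze and correct the objective inconsistency that this naive weighted averaging can introduce).

```python
# Toy sketch: FedAvg-style rounds with heterogeneous local steps and
# intermittent client availability on synthetic linear-regression clients.
# All quantities below are illustrative assumptions for this sketch.
import numpy as np

rng = np.random.default_rng(0)
d, num_clients = 5, 8
w_true = rng.normal(size=d)

# Each client holds a different amount of local data (data heterogeneity).
clients = []
for _ in range(num_clients):
    n_i = rng.integers(20, 200)
    X = rng.normal(size=(n_i, d))
    y = X @ w_true + 0.1 * rng.normal(size=n_i)
    clients.append((X, y))

def local_sgd(w, X, y, num_steps, lr=0.01, batch=8):
    """Run `num_steps` mini-batch SGD steps on one client's least-squares loss."""
    w = w.copy()
    for _ in range(num_steps):
        idx = rng.integers(0, len(y), size=batch)
        grad = X[idx].T @ (X[idx] @ w - y[idx]) / batch
        w -= lr * grad
    return w

w_global = np.zeros(d)
for rnd in range(50):
    # Intermittent availability: only a random subset of clients participates.
    active = rng.choice(num_clients, size=4, replace=False)
    updates, weights = [], []
    for i in active:
        X, y = clients[i]
        # Computational heterogeneity: each client does a different number of steps.
        tau_i = rng.integers(1, 10)
        updates.append(local_sgd(w_global, X, y, tau_i))
        weights.append(len(y))
    # Server aggregates the returned local models, weighted by local dataset size.
    weights = np.array(weights, dtype=float) / sum(weights)
    w_global = sum(wgt * upd for wgt, upd in zip(weights, updates))

print("distance to w_true:", np.linalg.norm(w_global - w_true))
```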
Distributed Machine Learning
In large-scale machine learning, training is performed by running stochastic gradient descent (SGD) in a distributed fashion using multiple worker nodes. Synchronization delays due to straggling workers and communication delays in aggregating updates from worker nodes can significantly increase the run-time of the training job. Our goal is to design system-aware distributed SGD algorithms that strike the best trade-off between the training time and the error in the trained model; a toy sketch of this trade-off appears after the publications below.
- Overlap Local-SGD: An Algorithmic Approach to Hide Communication Delays in Distributed SGD
- Jianyu Wang, Hao Liang, Gauri Joshi
- International Conference on Acoustics, Speech, and Signal Processing (ICASSP) 2020
- MATCHA: Speeding Up Decentralized SGD via Matching Decomposition Sampling
- Jianyu Wang, Anit Sahu, Gauri Joshi, Soummya Kar
- NeurIPS Workshop on Federated Learning for Data Privacy and Confidentiality, Dec 2019
- Distinguished Student Paper Award
- Adaptive Communication Strategies to Achieve the Best Error-Runtime Trade-off in Local-Update SGD
- Jianyu Wang, Gauri Joshi
- SysML Conference, March 2019
- Slow and Stale Gradients Can Win the Race: Error-Runtime Trade-offs in Distributed SGD
- Sanghamitra Dutta, Gauri Joshi, Soumyadip Ghosh, Parijat Dube, Priya Nagpurkar
- International Conference on Artificial Intelligence and Statistics (AISTATS), Apr 2018
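As an illustration of the error-runtime trade-off, the following sketch simulates local-update (periodic-averaging) SGD on a synthetic least-squares problem: each worker takes `tau` local steps between synchronizations, so larger `tau` means fewer communication rounds but typically a larger error. The data, step size, and values of `tau` are illustrative assumptions, not the configurations analyzed in the papers above.

```python
# Toy sketch: local-update SGD where the communication period `tau` controls
# how often the workers' models are averaged. Problem sizes and step sizes
# below are illustrative assumptions for this sketch.
import numpy as np

rng = np.random.default_rng(1)
d, num_workers, n_per_worker = 10, 4, 500
w_true = rng.normal(size=d)

# Each worker holds its own shard of the data.
shards = []
for _ in range(num_workers):
    X = rng.normal(size=(n_per_worker, d))
    y = X @ w_true + 0.1 * rng.normal(size=n_per_worker)
    shards.append((X, y))

def sgd_step(w, X, y, lr=0.02, batch=16):
    idx = rng.integers(0, len(y), size=batch)
    grad = X[idx].T @ (X[idx] @ w - y[idx]) / batch
    return w - lr * grad

def local_update_sgd(tau, num_rounds=100):
    """Workers run `tau` local SGD steps between each model-averaging step."""
    workers = [np.zeros(d) for _ in range(num_workers)]
    for _ in range(num_rounds):
        for i, (X, y) in enumerate(shards):
            for _ in range(tau):
                workers[i] = sgd_step(workers[i], X, y)
        # Synchronization: average the worker models (one communication round).
        avg = np.mean(workers, axis=0)
        workers = [avg.copy() for _ in range(num_workers)]
    return np.mean(workers, axis=0)

for tau in (1, 4, 16):
    w = local_update_sgd(tau)
    print(f"tau={tau:2d}  communication rounds=100  "
          f"error={np.linalg.norm(w - w_true):.4f}")
```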
Correlated Multi-armed Bandits
Multi-armed bandit algorithms have numerous data-driven decision-making applications, including advertising, A/B testing, and hyper-parameter tuning. Unlike classic works that assume the rewards observed by sampling different arms are independent, we consider correlated rewards. This correlation can be exploited to identify sub-optimal or "non-competitive" arms and avoid exploring them, which reduces a K-armed bandit problem to a C-armed bandit problem, where C is the number of "competitive" arms; a toy sketch of this pruning idea appears after the publications below.
- Best-arm Identification in Correlated Multi-armed Bandits
- Samarth Gupta, Gauri Joshi, Osman Yagan
- IEEE Journal on Selected Areas in Information Theory (JSAIT), Special Issue on Sequential, Active, and Reinforcement Learning, 2021
- Multi-armed Bandits with Correlated Arms
- Samarth Gupta, Shreyas Chaudhari, Gauri Joshi, Osman Yagan
- IEEE Transactions on Information Theory, May 2021
- A Unified Approach to Translate Classic Bandit Algorithms to the Structured Bandit Setting
- Samarth Gupta, Shreyas Chaudhari, Subhojyoti Mukherjee, Gauri Joshi, Osman Yagan
- IEEE Journal on Selected Areas in Information Theory (JSAIT), Special Issue on Estimation and Inference, 2021
- Correlated Multi-armed Bandits with a Latent Random Source
- Samarth Gupta, Gauri Joshi, Osman Yagan
- International Conference on Acoustics, Speech, and Signal Processing (ICASSP) 2020
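The following is a simplified sketch of the pruning idea: pseudo-reward bounds (assumed-known upper bounds on one arm's conditional mean reward given another arm's observed reward) are used to discard "non-competitive" arms before an ordinary UCB step. The reward model, the pseudo-reward functions, and the pruning rule below are illustrative assumptions chosen for this sketch; the precise competitive-set definitions and guarantees are in the papers above.

```python
# Toy sketch of a correlated-bandit UCB: prune non-competitive arms using
# empirical pseudo-rewards, then run UCB on the remaining (competitive) arms.
# The latent-variable reward model and pruning rule are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
K, horizon = 3, 20000

def pull(k):
    """Rewards of all arms are driven by a common latent X ~ Uniform[0, 1]."""
    x = rng.uniform()
    return [x, 0.8 * x, 0.6 * (1.0 - x)][k]

def pseudo(l, k, r):
    """Assumed-known upper bound on E[reward of arm l | arm k returned r]."""
    x = {0: r, 1: r / 0.8, 2: 1.0 - r / 0.6}[k]   # invert arm k's reward map
    return [x, 0.8 * x, 0.6 * (1.0 - x)][l]

counts = np.zeros(K)
sums = np.zeros(K)
phi_sums = np.zeros((K, K))        # phi_sums[l, k]: running sum of pseudo(l, k, r)

for t in range(1, horizon + 1):
    if t <= K:                      # pull each arm once to initialize
        arm = t - 1
    else:
        mu_hat = sums / counts
        significant = counts >= (t - 1) / (2 * K)
        k_emp = int(np.argmax(np.where(significant, mu_hat, -np.inf)))
        phi_hat = phi_sums / counts[None, :]
        # An arm stays competitive only if, for every significantly pulled other
        # arm k, its empirical pseudo-reward is at least the best empirical mean.
        competitive = [
            l for l in range(K)
            if l == k_emp or all(
                phi_hat[l, k] >= mu_hat[k_emp]
                for k in range(K) if significant[k] and k != l
            )
        ]
        ucb = mu_hat + np.sqrt(2 * np.log(t) / counts)
        arm = max(competitive, key=lambda l: ucb[l])
    r = pull(arm)
    counts[arm] += 1
    sums[arm] += r
    for l in range(K):
        phi_sums[l, arm] += pseudo(l, arm, r)

print("pulls per arm:", counts.astype(int))   # most pulls should go to arm 0
```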
Redundancy in Parallel Computing
In large-scale computing, where a job is divided into parallel tasks that run on different computing nodes, slow or straggling nodes can become the bottleneck. We use redundancy in the form of replication or erasure coding to design robust distributed computing algorithms that seamlessly adjust to varying node computation speeds. Applications include distributed matrix computations, data analytics, and inference. A toy sketch of erasure-coded matrix-vector multiplication appears after the publications below.
- Rateless Codes for Near-Perfect Load Balancing in Distributed Matrix-vector Multiplication
- Ankur Mallick, Malhar Chaudhari, Ganesh Palanikumar, Utsav Sheth, Gauri Joshi
- ACM SIGMETRICS, June 2020
- Best Paper Award
- Efficient Redundancy Techniques for Latency Reduction in Cloud Systems
- Gauri Joshi, Emina Soljanin, Gregory Wornell
- ACM Transactions on Modeling and Performance Evaluation of Computing Systems, 2017
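Here is a minimal sketch of erasure-coded distributed matrix-vector multiplication: the rows of A are split into k blocks, encoded into n > k coded blocks, and the product A @ x is recovered from the k fastest of n simulated workers. The random real-valued generator matrix and the exponential delay model are illustrative assumptions (a simple stand-in for an MDS code), not the rateless construction from the SIGMETRICS paper above.

```python
# Toy sketch: coded matrix-vector multiplication that tolerates n - k stragglers.
# Sizes, the Gaussian generator matrix, and the delay model are illustrative.
import numpy as np

rng = np.random.default_rng(4)
rows, cols, k, n = 12, 6, 3, 5           # split A into k blocks, use n workers

A = rng.normal(size=(rows, cols))
x = rng.normal(size=cols)
blocks = np.split(A, k)                   # k row blocks, each (rows/k) x cols

# Encode: each of the n workers gets a random linear combination of the blocks.
# With a Gaussian generator, any k of the n coded blocks are decodable
# (every k x k submatrix is invertible with probability 1).
G = rng.normal(size=(n, k))
coded_blocks = [sum(G[i, j] * blocks[j] for j in range(k)) for i in range(n)]

# Each worker computes its coded block times x; straggling is simulated with
# random exponential delays, and we wait only for the k fastest workers.
delays = rng.exponential(scale=1.0, size=n)
fastest = np.argsort(delays)[:k]
partial = np.stack([coded_blocks[i] @ x for i in fastest])    # shape (k, rows/k)

# Decode: invert the k x k submatrix of G to recover each block's product.
decoded = np.linalg.solve(G[fastest], partial)                # row j = blocks[j] @ x
result = decoded.reshape(-1)

print("max error vs. direct A @ x:", np.abs(result - A @ x).max())
print(f"waited for workers {sorted(map(int, fastest))} out of {n}")
```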