Meta, AI Research Intern. Topic: Privacy-preserving synthetic data generation using large language models (LLMs). Redmond, WA, Summer 2023
Meta, AI Research Intern. Topic: Federated learning for large language models (LLMs). Redmond, WA, Fall 2022
Amazon, Applied Science Intern. Topic: Self-supervised learning for learning-to-rank (LTR). Palo Alto, CA, Summer 2022
Amazon, Applied Science Intern. Topic: Reinforcement learning for sub-same-day delivery optimization. Seattle, WA, Summer 2021
Uber, Research Intern. Topic: Deep radar simulation. San Francisco, CA, Summer 2019
PrE-Text: Training Language Models on Private Federated Data in the Age of LLMs. Charlie Hou, Akshat Shrivastava, Hongyuan Zhan, Rylan Conway, Trang Le, Adithya Sagar, Giulia Fanti, Daniel Lazar. ICML 2024 (Oral); ICLR 2024 PrivML workshop (Honorable Mention for Best Paper, Oral)
On the Convergence of Differentially-Private Fine-tuning: To Linearly Probe or to Fully Fine-tune? Shuqi Ke, Charlie Hou, Giulia Fanti, Sewoong Oh. Under submission
Pretrained deep models outperform GBDTs in Learning-To-Rank under label scarcity. Charlie Hou, Kiran Koshy Thekumparampil, Michael Shavlovsky, Giulia Fanti, Yesh Dattatreya, Sujay Sanghavi. ICML 2023 MFPL workshop (Oral)
Privately Customizing Prefinetuning to Better Match User Data in Federated Learning. Charlie Hou, Hongyuan Zhan, Akshat Shrivastava, Sid Wang, Sasha Livshits, Giulia Fanti, Daniel Lazar. ICLR 2023 TrustML workshop
FedChain: Chained Algorithms for Near-Optimal Communication Cost in Federated Learning. Charlie Hou, Kiran K. Thekumparampil, Giulia Fanti, Sewoong Oh. ICLR 2022; ICML 2021 FL workshop (Oral)
Efficient Algorithms for Federated Saddle Point Optimization. Charlie Hou, Kiran K. Thekumparampil, Giulia Fanti, Sewoong Oh. Preprint
SquirRL: Automating Attack Analysis on Blockchain Incentive Mechanisms with Deep Reinforcement Learning. Charlie Hou*, Mingxun Zhou*, Yan Ji, Phil Daian, Florian Tramer, Giulia Fanti, Ari Juels (*equal contribution). NDSS 2021
Honorable Mention for Best Paper Award at ICLR 2024 PrivML Workshop |
Google Collabs Research Award ($80k grant and $20k in GCP credits), 2022. With Giulia Fanti and Sewoong Oh
Tiger Chef Champion, 2018 |
Reviewer, NeurIPS 2023 |
Reviewer, ICLR 2023 |