Charlie Hou

I am a fifth-year Ph.D. student at Carnegie Mellon University, advised by Giulia Fanti.

I work on large language models (LLMs), learning-to-rank (LTR), and privacy. In the past, I also worked on security.

Before I started at CMU, I did my undergrad at Princeton University, where I worked with Yuxin Chen and Miklos Racz.

Google Scholar, GitHub, LinkedIn, CV

Contact me at [hou.charlie2 at gmail dot com].



Work Experience

Meta, AI Research Intern.
Topic: Privacy-preserving synthetic data generation using large language models (LLMs).
Redmond, WA, Summer 2023
Meta, AI Research Intern.
Topic: Federated learning for LLMs.
Redmond, WA, Fall 2022
Amazon, Applied Science Intern.
Topic: Self-supervised learning for learning-to-rank (LTR).
Palo Alto, CA, Summer 2022
Amazon, Applied Science Intern.
Topic: Reinforcement learning for sub-same-day delivery optimization.
Seattle, WA, Summer 2021
Uber, Research Intern.
Topic: Deep radar simulation.
San Francisco, CA, Summer 2019



Research

arXiv
PrE-Text: Training Language Models on Private Federated Data in the Age of LLMs
Charlie Hou, Akshat Shrivastava, Hongyuan Zhan, Rylan Conway, Trang Le, Adithya Sagar, Giulia Fanti, Daniel Lazar
ICML 2024 (Oral), ICLR 2024 PrivML Workshop (Honorable Mention for Best Paper, Oral)
arXiv
On the Convergence of Differentially-Private Fine-tuning: To Linearly Probe or to Fully Fine-tune?
Shuqi Ke, Charlie Hou, Giulia Fanti, Sewoong Oh
Under review.
arXiv
Pretrained deep models outperform GBDTs in Learning-To-Rank under label scarcity
Charlie Hou, Kiran Koshy Thekumparampil, Michael Shavlovsky, Giulia Fanti, Yesh Dattatreya, Sujay Sanghavi
ICML 2023 MFPL Workshop (Oral)
arXiv
Privately Customizing Prefinetuning to Better Match User Data in Federated Learning
Charlie Hou, Hongyuan Zhan, Akshat Shrivastava, Sid Wang, Sasha Livshits, Giulia Fanti, Daniel Lazar
ICLR 2023 TrustML Workshop
arXiv
FedChain: Chained Algorithms for Near-Optimal Communication Cost in Federated Learning
Charlie Hou, Kiran K. Thekumparampil, Giulia Fanti, Sewoong Oh
ICLR 2022, ICML 2021 FL Workshop (Oral)
arXiv
Efficient Algorithms for Federated Saddle Point Optimization
Charlie Hou, Kiran K. Thekumparampil, Giulia Fanti, Sewoong Oh
Preprint
arXiv
SquirRL: Automating Attack Analysis on Blockchain Incentive Mechanisms with Deep Reinforcement Learning
Charlie Hou*, Mingxun Zhou*, Yan Ji, Phil Daian, Florian Tramer, Giulia Fanti, Ari Juels (*equal contribution)
NDSS 2021



Awards

Honorable Mention for Best Paper Award at ICLR 2024 PrivML Workshop
Google Collabs Research Award ($80k grant and $20k in GCP credits), 2022
With Giulia Fanti and Sewoong Oh
Tiger Chef Champion, 2018



Professional Service

Reviewer, NeurIPS 2023
Reviewer, ICLR 2023