Hong Shen

Assistant Research Professor
Human-Computer Interaction Institute
School of Computer Science
Carnegie Mellon University

Office: 2602D Newell-Simon Hall
Admin assistant: Becky Wang



I am an Assistant Research Professor in the Human-Computer Interaction Institute at Carnegie Mellon University, where I direct the CARE (Collective AI Research and Evaluation) Lab. I received my PhD from the University of Illinois at Urbana-Champaign.

I'm an interdisciplinary scholar working at the intersection of human-computer interaction, communication, and public policy. Broadly, I study the social, ethical, and policy implications of digital platforms and algorithmic systems, with a strong emphasis on bias, fairness, social justice, and power relations in Artificial Intelligence and Machine Learning. My work has been generously supported by the National Science Foundation, Amazon Research, Cisco Research, Google Research, Microsoft Research, the Public Interest Technology University Network, the Block Center for Technology and Society, CyLab, and more.

RECENT NEWS
  • 10/2024: We received a planning grant from the National Science Foundation to develop participatory, community-centered AI solutions for strengthening peer-run mental health services for marginalized populations.


GROUP

In the CARE (Collective AI Research and Evaluation) Lab, we develop innovative tools, methods, and processes that empower impacted communities, everyday users, and the general public to collectively evaluate and mitigate harmful machine behaviors across digital platforms and algorithmic systems. I'm very fortunate to advise and work with the following inspiring PhD students and postdocs:

PhD students

  • Jini Kim (co-advised with Jodi Forlizzi)

Postdoctoral researcher


SELECTED PUBLICATIONS (BY TOPIC)
A complete list of my publications can be found on my Google Scholar page.

User-driven AI auditing, red teaming and value alignment

  • Fan, X., Xiao, Q., Zhou, X., Pei, J., Sap, M., Lu, Z., Shen, H. (2024) User-Driven Value Alignment: Understanding Users’ Perceptions and Strategies for Addressing Biased and Discriminatory Statements in AI Companions. [PDF]
  • Kingsley, S.†, Zhi, J.†, Deng, W. H., Lee, J., Zhang, S., Eslami, M.‡, Holstein, K.‡, Hong, J.I.‡, Li, T.‡, Shen, H.‡ (2024). Investigating What Factors Influence Users’ Rating of Harmful Algorithmic Bias and Discrimination. In Proceedings of the 12th AAAI Conference on Human Computation and Crowdsourcing (HCOMP’24). [PDF] Best Paper Award 🏆
  • Shen, H.†, DeVos, A.†, Eslami, M.‡, and Holstein, K.‡ (2021). Everyday Algorithm Auditing: Understanding the Power of Everyday Users in Surfacing Harmful Algorithmic Behaviors. Proc. ACM Hum.-Comput. Interact. 5, CSCW2, Article 433 (October 2021). [PDF]

Participatory, community-centered AI design

  • Tang, N., Zhi, J., Kuo, T., Kainaroi, C., Northup, J., Holstein, K., Zhu, H., Heidari, H., Shen, H. (2024). AI Failure Cards: Understanding and Supporting Grassroots Efforts to Mitigate AI Failures in Homeless Services. In Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency (FAccT'24). [PDF]
  • Kuo, T.†, Shen, H.†, Geum, J. S., Jones, N., Hong, J. I., Zhu, H.‡, Holstein, K.‡ (2023). Understanding Frontline Workers’ and Unhoused Populations’ Perspectives on AI Used in Homeless Services. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI’23). [PDF] Best Paper Award 🏆
  • Shen, H., Wang, L., Deng, W., Ciell, Velgersdijk, R., and Zhu, H. (2022). The Model Card Authoring Toolkit: Toward Community-centered, Deliberation-driven AI Design. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT’22). [PDF]

Responsible AI (RAI) tools, methods and processes

  • Kapania, S.†, Wang, R.†, Li, T., Li, T., Shen, H. (2024). “I'm Categorizing LLM as a Productivity Tool”: Examining Ethics of LLM Use in HCI Research Practices. To appear in Proc. ACM Hum.-Comput. Interact. (CSCW). [PDF]
  • Shen, H., Deng, W., Chattopadhyay, A., Wu, Z. S., Wang, X., and Zhu, H. (2021). Value Cards: An Educational Toolkit for Teaching Social Impacts of Machine Learning through Deliberation. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT'21). [PDF]
  • Shen, H., Jin, H., Cabrera, A., Perer, A., Zhu, H., and Hong, J. I. (2020). Designing Alternative Representations of Confusion Matrices to Support Non-Expert Public Understanding of Algorithm Performance. Proc. ACM Hum.-Comput. Interact. 4, CSCW2, Article 153 (October 2020). [PDF]

TEACHING
  • 05-410/05-610: User-Centered Research and Evaluation (UCRE), School of Computer Science.
  • 05-499/05-899: Fairness, Accountability, Transparency, Ethics (FATE) in Sociotechnical Systems, School of Computer Science.
  • 90-769/90-442: Critical AI Studies for Public Policy, School of Public Policy and Management.