Projects
Our lab members are leading many exciting projects, and we always welcome new ideas.
Privacy Notices in GAI Chatbot Ecosystems
As Generative AI (GAI) Chatbots evolve to interpret input, make predictions, and generate responses, they operate within complex ecosystems that raise new privacy challenges. While system-level solutions address some risks, users still struggle to manage personal data across dynamic interactions. To investigate these challenges, we conducted a mixed-methods study examining the factors that influence users’ perceptions of privacy control, changes in trust, and willingness to engage with GAI Chatbot ecosystems in both personalization and external data-sharing scenarios. We also explored users’ needs and expectations for privacy design in contexts where GAI Chatbot ecosystems access data for personalization or share data across systems.
This is a collaborative project with Google, supported by a Google Faculty Award.
Privacy Sandbox for Informal Learning Spaces
Privacy education for children often remains abstract, leaving them unsure how to apply concepts in real-world digital interactions. This project presents an experiential learning game that helps children actively practice making privacy decisions in realistic, everyday online scenarios. By engaging in contextual and embodied learning, children can learn to identify sensitive information, weigh the risks of sharing, and exercise autonomy over their privacy choices. Rather than teaching static rules, the game empowers children to build practical privacy skills they can apply across digital contexts, bridging the gap between theory and practice.
This is a collaborative project with Chaoran Chen, Julia Qian, and Dr. Toby Li from the University of Notre Dame as part of our NSF project.
Empathy-based Privacy Sandbox for Mobile
This project introduces a sandbox-based system that simulates diverse user behaviors to investigate how mobile applications adapt to inferred contexts. By spoofing a wide range of device signals—including motion and ambient sensors, system time, location, Google Ad ID, calendar data, and more—the system emulates lifestyle-based personas to reveal dynamic app behavior. Our toolkit coordinates these spoofed signals across multiple apps and captures visual changes through interface screenshots, which are then analyzed using GPT-4 Vision to summarize and compare app responses. The goal is to surface hidden personalization mechanisms and provide a controlled, risk-free platform for studying opaque data-driven behaviors in mobile ecosystems. This work lays the foundation for transparency tools that empower users to observe, question, and better understand how behavioral signals influence their digital experiences.
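To make the coordination concrete, here is a minimal, hypothetical sketch of how a lifestyle persona could be bundled and replayed against a stock Android emulator via adb. The Persona fields, apply_persona, and capture_screenshot are illustrative stand-ins, not the toolkit's actual API; only the adb emulator-console and screencap commands shown are standard.

    # Illustrative sketch only; names are hypothetical, not the project's API.
    import subprocess
    from dataclasses import dataclass, field

    @dataclass
    class Persona:
        """One lifestyle-based persona: a bundle of signals to spoof together."""
        name: str
        latitude: float
        longitude: float
        ad_id: str                      # spoofed Google Ad ID
        calendar_events: list[str] = field(default_factory=list)

    def apply_persona(persona: Persona) -> None:
        # On a stock Android emulator, location can be injected with `adb emu`.
        # Other signals (motion/ambient sensors, system time, Ad ID, calendar)
        # need custom hooks, which are only stubbed out here.
        subprocess.run(
            ["adb", "emu", "geo", "fix",
             str(persona.longitude), str(persona.latitude)],
            check=True,
        )
        # ... stubs for sensor, time, Ad ID, and calendar spoofing ...

    def capture_screenshot() -> bytes:
        # `adb exec-out screencap -p` streams a PNG of the current screen;
        # screenshots can later be passed to GPT-4 Vision for comparison.
        result = subprocess.run(
            ["adb", "exec-out", "screencap", "-p"],
            capture_output=True, check=True,
        )
        return result.stdout

    commuter = Persona(
        name="urban_commuter",
        latitude=41.8781, longitude=-87.6298,
        ad_id="00000000-0000-0000-0000-000000000000",
        calendar_events=["Standup 9am", "Gym 6pm"],
    )
    apply_persona(commuter)
    png = capture_screenshot()

Applying the same persona across several apps and diffing the captured screenshots is what lets the toolkit surface personalization that would otherwise stay invisible to a single user.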
This is a collaborative project with Chaoran Chen and Dr. Toby Li from the University of Notre Dame, and Dr. Tianshi Li from Northeastern University as part of our NSF project.
Inclusive Avatar for Social Virtual Reality
The purpose of this study is to understand how people with disabilities (PWD) experience and perceive disability-related harassment in social virtual reality (VR) environments, and to design effective protection mechanisms tailored to their needs. By examining how different types of disabilities shape individuals’ experiences and perceptions of harm, safety, and vulnerability in these environments, the project aims to construct a harm and harassment model specific to PWD. The final contribution will provide a deeper understanding of the diverse challenges faced by disabled users in social VR and offer a foundation for designing more inclusive and protective virtual experiences.
This is a collaborative project with Kexin Zhang and Dr. Yuhang Zhao from the University of Wisconsin-Madison as part of our NSF project.