About Me

I am an Associate Professor in the Information School at the University of Washington, an affiliate faculty member at the Paul G. Allen School of Computer Science & Engineering, and the Founding Co-Director of RAISE, a Center for Responsibility in AI Systems and Experiences.

My research interests are in Social Computing and Responsible AI, and my work sits at the intersection of Computer Science and Social Science. As an interdisciplinary scholar, I draw on a range of methods from human-computer interaction, large-scale data analytics, machine learning, and AI to understand human behavior online and to better support human-human and human-machine communication.

From August 2017 to 2020, I was an Assistant Professor in the Department of Computer Science at Virginia Tech. Before that, I received my PhD in Computer Science from Georgia Tech.

Current Research Topics


Generative and Responsible AI

Generative AI is rapidly transforming our information space. It has enabled the mass production of biased, problematic, and value-misaligned information at unprecedented scale and speed, raising new trust and safety issues and posing new challenges for our online information ecosystem. How do we ensure that genAI tools produce socio-demographically diverse, value-aligned, and responsible data? How has AI changed what kinds of biased and problematic information spread online? What new opportunities and epistemic challenges does generative AI pose in domains such as health, journalism, and fact-checking? We are currently exploring answers to these questions, some in collaboration with external stakeholders.

Auditing and Contesting Algorithmic Decisions

There is a growing concern that biased and problematic content online is often amplified by the very algorithms that drive online platforms. Yet their opacity makes it impossible to determine how and when algorithms amplify such content. How do we advance responsibility in algorithm design? My lab conducts computational audits to determine the adverse effects of algorithms. We have also audited large language models to unravel the systemic biases they perpetuate, often at disproportionate scale in non-Western contexts. We are also interested in questions around contestability in large-scale online systems.

Designing to Defend Against Problematic Information

Our lab is also interested in designing systems to counter problematic information. Check out our work on OtherTube and NudgeCred. NudgeCred is a socio-technical intervention powered by the idea of nudges: choice-preserving architectures that steer people in a particular direction while still allowing them to go their own way. We have also begun delving into questions around designing trustworthy journalism. How can news organizations effectively demonstrate to the public the key aspects of good journalism, the primary features that make a story trustworthy, and the core processes that govern how a news story is produced and reported?

Recent Publications (all) — check my Scholar page for an up-to-date list