About Me
I am an assistant professor in the Information School at the University of Washington. My research interests are in Social Computing, where I combine ideas from both computer science and social science to uncover insights about social life online via large datasets. Currently, one major focus of my research is understanding and designing defenses against problematic information on online social platforms. My work employs a range of interdisciplinary methods from the fields of human-computer interaction, data mining, machine learning, and natural language processing.
From August 2017 to 2020, I was an assistant professor in the Department of Computer Science at Virginia Tech. Before that, I received my PhD in Computer Science from Georgia Tech. Currently, I am also an adjunct affiliate of UW CSE, an affiliate faculty member of the Center for an Informed Public, and a co-founding director of RAISE, a Center for Responsibility in AI Systems and Experiences.
Current Research Topics
Generative and Responsible AI
Generative AI is rapidly transforming our information space. It has made it possible to mass-produce biased and problematic information at unprecedented scale and speed, raising new trust and safety issues and posing new challenges for the guardians of our online information space, e.g., fact-checkers. How do we ensure that genAI tools produce socio-demographically diverse and responsible data? How has AI changed what kind of misinformation or disinformation is spread online? What new opportunities and challenges does generative AI pose for fact-checkers? We are currently exploring answers to these questions, some in collaboration with external stakeholders.
Auditing and Contesting Algorithmic Decisions
There is a growing concern that problematic content online is often amplified by the very algorithms driving online platforms. Yet their opacity makes it nearly impossible to determine how and when algorithms amplify such content. How do we advance responsibility in algorithm design? My lab is conducting computational audits to determine the adverse effects of algorithms. We conducted the first systematic misinformation audit of YouTube to empirically establish the “misinformation filter bubble effect,” followed by an audit of an e-commerce platform revealing how algorithms amplify vaccine misinformation. We are also interested in questions around contestability in large-scale online systems.
Understanding Problematic Phenomena
Today, online social systems have become integral to our daily lives. Yet these systems are now rife with problematic content, whether harmful misinformation, damaging conspiracy theories, or violent extremist propaganda. Left unchecked, these problems can negatively impact our democracy and society at large. My lab has been investigating what makes people join online conspiratorial communities, how dramatic events affect conspiratorial discussions, and what narrative motifs characterize these discussions. We have also studied online extremism, answering questions ranging from how hate groups frame their hateful agenda to what roles they play.
Designing to Defend Against Problematic Information
Our lab is also interested in designing systems to counter problematic information. Check out our work on OtherTube and NudgeCred. NudgeCred is a socio-technical intervention powered by the idea of nudges: choice-preserving architectures that steer people in a particular direction while still allowing them to go their own way. We have also just started delving into questions around designing trustworthy journalism. How can news organizations effectively demonstrate to the public the key aspects of good journalism, the primary features that make a story trustworthy, and the core aspects that govern the production and reporting of a news story?
Understanding Misinformation in the Global South
Misinformation research has primarily focused on the Global North, largely ignoring the rest of the world. In the next several years, we strive to go beyond the current US/Euro-centric focus of misinformation research. Most of this work will be pursued through two grant initiatives: a Fact-Checking Innovation grant, which has enabled work with fact-checkers from Kenya, and an ONR-YIP early career award, which seeks to obtain a holistic understanding of adversarial online influence in the Indian Ocean Region (IOR).
Recent Publications (all)
- Characterizing Political Campaigning with Lexical Mutants on Indian Social Media
  S. Phadke, T. Mitra | ICWSM 2024 | paper
- Assessing Enactment of Content Regulation Policies: A Post Hoc Crowd-Sourced Audit of Election Misinformation on YouTube
  P. Juneja, M. Bhuiyan, T. Mitra | CHI 2023 | paper | doi
- NewsComp: Facilitating Diverse News Reading through Comparative Annotation
  M. Bhuiyan, S. Lee, N. Goyal, T. Mitra | CHI 2023 | paper | doi
- Mixed Multi-Model Semantic Interaction for Graph-based Narrative Visualizations
  B. Keith, T. Mitra, C. North | IUI 2023 | paper | doi
- A Survey on Event-based News Narrative Extraction
  B. Keith, T. Mitra, C. North | ACM Computing Surveys 2023 | paper | doi