Aylin Caliskan (my first name is pronounced "Eileen")
Assistant Professor
I have formal training in computer science, information systems engineering, and robotics. My research interests lie in artificial intelligence (AI) ethics, AI bias, computer vision, natural language processing, and machine learning. Bias and ethics in natural language processing and machine learning have been my primary focus since 2016. I analyze the mechanisms that underpin the transfer of information between human society and AI. To investigate the reasoning behind AI representations and decisions, I develop evaluation methods and transparency-enhancing approaches that detect, quantify, and characterize the human-like associations and biases learned by machines. I study how machines that automatically learn implicit associations impact humans and society. As AI co-evolves with society, my goal is to ensure that AI is developed and deployed responsibly, with consideration given to its societal implications.
News
- Aylin Caliskan, Joanna J. Bryson, and Arvind Narayanan. A Story of Discrimination and Unfairness: Implicit Bias Embedded in Language Models. 9th Hot Topics in Privacy Enhancing Technologies (HotPETs 2016). Accepted on 5/20/2016. Source code available.
Research
- Artificial Intelligence, Bias, and Ethics. The 32nd International Joint Conference on Artificial Intelligence (IJCAI 2023), IJCAI Early Career Spotlight Paper.
- ChatGPT Perpetuates Gender Bias in Machine Translation and Ignores Non-Gendered Pronouns: Findings across Bengali and Five other Low-Resource Languages. AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (AAAI/ACM AIES 2023).
- Evaluating Biased Attitude Associations of Language Models in an Intersectional Context. AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (AAAI/ACM AIES 2023).
- Contrastive Language-Vision AI Models Pretrained on Web-Scraped Multimodal Data Exhibit Sexual Objectification Bias. The 2023 ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT 2023).
- Easily Accessible Text-to-Image Generation Amplifies Demographic Stereotypes at Large Scale. The 2023 ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT 2023).
- Bias Against 93 Stigmatized Groups in Masked Language Models and Downstream Sentiment Classification Tasks. The 2023 ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT 2023).
- Envisioning Legal Mitigations for Intentional and Unintentional Harms Associated with Large Language Models (Extended Abstract). Fortieth International Conference on Machine Learning Workshop on Generative AI and Law (ICML GenLaw 2023).
- Regularizing Model Gradients with Concepts to Improve Robustness to Spurious Correlations (Poster). Fortieth International Conference on Machine Learning Workshop on Spurious Correlations, Invariance, and Stability (ICML SCIS 2023).
- Managing the risks of inevitably biased visual artificial intelligence systems. Brookings, 2022.
- Historical Representations of Social Groups Across 200 Years of Word Embeddings from Google Books. Proceedings of the National Academy of Sciences (PNAS 2022).
- Gender Bias in Word Embeddings: A Comprehensive Analysis of Frequency, Syntax, and Semantics. AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (AAAI/ACM AIES 2022).
- American == White in Multimodal Language-and-Image AI. AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (AAAI/ACM AIES 2022).
- Measuring Gender Bias in Word Embeddings of Gendered Languages Requires Disentangling Grammatical Gender Signals. AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (AAAI/ACM AIES 2022).
- Evidence for Hypodescent in Visual Semantic AI. The 2022 ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT 2022).
- Markedness in Visual Semantic AI. The 2022 ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT 2022).
- Contrastive Visual Semantic Pretraining Magnifies the Semantics of Natural Language Representations. 60th Annual Meeting of the Association for Computational Linguistics (ACL 2022).
- VAST: The Valence-Assessing Semantics Test for Contextualizing Language Models. Thirty-Sixth AAAI Conference on Artificial Intelligence (AAAI 2022).
- Detecting Emerging Associations and Behaviors With Regional and Diachronic Word Embeddings. 16th IEEE International Conference on Semantic Computing (ICSC 2022).
- Learning to Behave: Improving Covert Channel Security with Behavior-Based Designs. Privacy Enhancing Technologies Symposium (PETS 2022).
- Low Frequency Names Exhibit Bias and Overfitting in Contextualizing Language Models. Empirical Methods in Natural Language Processing (EMNLP 2021).
- ValNorm Quantifies Semantics to Reveal Consistent Valence Biases Across Languages and Over Centuries. Empirical Methods in Natural Language Processing (EMNLP 2021).
- Social biases in word embeddings and their relation to human cognition. Book chapter in The Handbook of Language Analysis in Psychology, Guilford Press, 2021. Editors: Morteza Dehghani and Ryan Boyd.
- Detecting and mitigating bias in natural language processing. Brookings, 2021.
- Disparate Impact of Artificial Intelligence Bias in Ridehailing Economy's Price Discrimination Algorithms. AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (AAAI/ACM AIES 2021).
- Detecting Emergent Intersectional Biases: Contextualized Word Embeddings Contain a Distribution of Human-like Biases. AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (AAAI/ACM AIES 2021).
- Image Representations Learned With Unsupervised Pre-Training Contain Human-like Biases. The 2021 ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT 2021).
- A Set of Distinct Facial Traits Learned by Machines Is Not Predictive of Appearance Bias in the Wild. AI and Ethics, 2021.
- Automatically Characterizing Targeted Information Operations Through Biases Present in Discourse on Twitter. 15th IEEE International Conference on Semantic Computing (ICSC 2021).