• Gandalf Nicolas and Aylin Caliskan
    Directionality and Representativeness are Differentiable Components of Stereotypes in Large Language Models
    PNAS Nexus, 2024
  • Gandalf Nicolas and Aylin Caliskan
    A Taxonomy of Stereotype Content in Large Language Models
    arXiv 2024
  • Kyra Wilson and Aylin Caliskan
    Gender, Race, and Intersectional Bias in Resume Screening via Language Model Retrieval
    AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (AAAI/ACM AIES 2024)
  • Sourojit Ghosh*, Nina Lutz*, and Aylin Caliskan (*denotes equal contributions)
    "I don't see myself represented here at all": User Experiences of Stable Diffusion Outputs Containing Representational Harms across Gender Identities and Nationalities
    AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (AAAI/ACM AIES 2024)
  • Sourojit Ghosh*, Pranav Narayanan Venkit*, Sanjana Gautam*, Shomir Wilson, and Aylin Caliskan (*denotes equal contributions)
    Do Generative AI Models Output Harm while Representing Non-Western Cultures: Evidence from A Community-Centered Approach
    AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (AAAI/ACM AIES 2024)
  • Chahat Raj, Anjishnu Mukherjee, Aylin Caliskan, Antonios Anastasopoulos, and Ziwei Zhu
    Breaking Bias, Building Bridges: Evaluation and Mitigation of Social Biases in LLMs via Contact Hypothesis
    AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (AAAI/ACM AIES 2024)
  • Aylin Caliskan and Kristian Lum
    Effective AI regulation requires understanding general-purpose AI
    Brookings 2024
  • Steven A. Lehr, Aylin Caliskan, Suneragiri Liyanage, and Mahzarin R. Banaji
    ChatGPT as Research Scientist: Probing GPT's Capabilities as a Research Librarian, Research Ethicist, Data Generator and Data Predictor
    Proceedings of the National Academy of Sciences (PNAS 2024)
  • Chahat Raj, Anjishnu Mukherjee, Aylin Caliskan, Antonios Anastasopoulos, and Ziwei Zhu
    BiasDora: Exploring Hidden Biased Associations in Vision-Language Models
    In Findings of the Association for Computational Linguistics: EMNLP 2024
  • Amanda Alvarez, Aylin Caliskan, M. J. Crockett, Shirley S. Ho, Lisa Messeri, and Jevin West (alphabetical order)
    Science communication with generative AI
    Nature Human Behaviour, 2024
  • Tessa Elizabeth Sadie Charlesworth, Kshitish Ghate, Aylin Caliskan, and Mahzarin R. Banaji
    Extracting intersectional stereotypes from embeddings: Developing and validating the Flexible Intersectional Stereotype Extraction procedure
    PNAS Nexus, 2024
  • Anjishnu Mukherjee, Aylin Caliskan, Ziwei Zhu, and Antonios Anastasopoulos
    Global Gallery: The Fine Art of Painting Culture Portraits through Multilingual Instruction Tuning
    North American Chapter of the Association for Computational Linguistics (NAACL 2024)
  • Inyoung Cheong, Aylin Caliskan, and Tadayoshi Kohno
    Safeguarding Human Values: Rethinking US Law for Generative AI's Societal Impacts
    AI and Ethics 2024
  • Yiwei Yang, Anthony Zhe Liu, Robert Wolfe, Aylin Caliskan, and Bill Howe
    Label-Efficient Group Robustness via Out-of-Distribution Concept Curation
    Conference on Computer Vision and Pattern Recognition (CVPR 2024)
  • Aylin Caliskan
    Artificial Intelligence, Bias, and Ethics
    The 32nd International Joint Conference on Artificial Intelligence (IJCAI 2023)
    IJCAI Early Career Spotlight Paper
  • Federico Bianchi, Pratyusha Kalluri, Esin Durmus, Faisal Ladhak, Myra Cheng, Debora Nozza, Tatsunori Hashimoto, Dan Jurafsky, James Zou, and Aylin Caliskan
    Demographic Stereotypes in Text-to-Image Generation
    Stanford University Human-Centered Artificial Intelligence Policy Brief 2023
  • Isaac Slaughter, Craig Greenberg, Reva Schwartz, and Aylin Caliskan
    Pre-trained Speech Processing Models Contain Human-Like Biases that Propagate to Speech Emotion Recognition
    In Findings of the Association for Computational Linguistics: EMNLP 2023
  • Sourojit Ghosh and Aylin Caliskan
    'Person' == Light-skinned, Western Man, and Sexualization of Women of Color: Stereotypes in Stable Diffusion
    In Findings of the Association for Computational Linguistics: EMNLP 2023
  • Sourojit Ghosh and Aylin Caliskan
    ChatGPT Perpetuates Gender Bias in Machine Translation and Ignores Non-Gendered Pronouns: Findings across Bengali and Five other Low-Resource Languages
    AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (AAAI/ACM AIES 2023)
  • Shiva Omrani Sabbaghi, Robert Wolfe, and Aylin Caliskan
    Evaluating Biased Attitude Associations of Language Models in an Intersectional Context
    AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (AAAI/ACM AIES 2023)
  • Robert Wolfe, Yiwei Yang, Bill Howe, and Aylin Caliskan
    Contrastive Language-Vision AI Models Pretrained on Web-Scraped Multimodal Data Exhibit Sexual Objectification Bias
    The 2023 ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT 2023)
  • Federico Bianchi, Pratyusha Kalluri, Esin Durmus, Faisal Ladhak, Myra Cheng, Debora Nozza, Tatsunori Hashimoto, Dan Jurafsky, James Zou, and Aylin Caliskan
    Easily Accessible Text-to-Image Generation Amplifies Demographic Stereotypes at Large Scale
    The 2023 ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT 2023)
  • Katelyn X. Mei, Sonia Fereidooni, and Aylin Caliskan
    Bias Against 93 Stigmatized Groups in Masked Language Models and Downstream Sentiment Classification Tasks
    The 2023 ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT 2023)
  • Inyoung Cheong, Aylin Caliskan, and Tadayoshi Kohno
    Envisioning Legal Mitigations for Intentional and Unintentional Harms Associated with Large Language Models (Extended Abstract)
    Fortieth International Conference on Machine Learning Workshop on Generative AI and Law (ICML GenLaw 2023)
  • Yiwei Yang, Anthony Zhe Liu, Robert Wolfe, Aylin Caliskan, and Bill Howe
    Regularizing Model Gradients with Concepts to Improve Robustness to Spurious Correlations (Poster)
    Fortieth International Conference on Machine Learning Workshop on Spurious Correlations, Invariance, and Stability (ICML SCIS 2023)
  • Aylin Caliskan and Ryan Steed
    Managing the risks of inevitably biased visual artificial intelligence systems
    Brookings 2022
  • Tessa Charlesworth, Aylin Caliskan, and Mahzarin R. Banaji
    Historical Representations of Social Groups Across 200 Years of Word Embeddings from Google Books
    Proceedings of the National Academy of Sciences (PNAS 2022)
  • Aylin Caliskan, Pimparkar Parth Ajay, Tessa Charlesworth, Robert Wolfe, and Mahzarin R. Banaji
    Gender Bias in Word Embeddings: A Comprehensive Analysis of Frequency, Syntax, and Semantics
    AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (AAAI/ACM AIES 2022)
  • Robert Wolfe and Aylin Caliskan
    American == White in Multimodal Language-and-Image AI
    AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (AAAI/ACM AIES 2022)
  • Shiva Omrani Sabbaghi and Aylin Caliskan
    Measuring Gender Bias in Word Embeddings of Gendered Languages Requires Disentangling Grammatical Gender Signals
    AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (AAAI/ACM AIES 2022)
  • Robert Wolfe, Mahzarin R. Banaji, and Aylin Caliskan
    Evidence for Hypodescent in Visual Semantic AI
    The 2022 ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT 2022)
  • Robert Wolfe and Aylin Caliskan
    Markedness in Visual Semantic AI
    The 2022 ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT 2022)
  • Robert Wolfe and Aylin Caliskan
    Contrastive Visual Semantic Pretraining Magnifies the Semantics of Natural Language Representations
    60th Annual Meeting of the Association for Computational Linguistics (ACL 2022)
  • Robert Wolfe and Aylin Caliskan
    VAST: The Valence-Assessing Semantics Test for Contextualizing Language Models
    Thirty-Sixth AAAI Conference on Artificial Intelligence (AAAI 2022)
  • Robert Wolfe and Aylin Caliskan
    Detecting Emerging Associations and Behaviors With Regional and Diachronic Word Embeddings
    16th IEEE International Conference on Semantic Computing (ICSC 2022)
  • Ryan Wails, Andrew Stange, Eliana Troper, Aylin Caliskan, Roger Dingledine, Rob Jansen, and Micah Sherr
    Learning to Behave: Improving Covert Channel Security with Behavior-Based Designs
    Privacy Enhancing Technologies Symposium (PETS 2022)
  • Robert Wolfe and Aylin Caliskan
    Low Frequency Names Exhibit Bias and Overfitting in Contextualizing Language Models
    Empirical Methods in Natural Language Processing (EMNLP 2021)
  • Autumn Toney-Wails and Aylin Caliskan
    ValNorm Quantifies Semantics to Reveal Consistent Valence Biases Across Languages and Over Centuries
    Empirical Methods in Natural Language Processing (EMNLP 2021)
  • Aylin Caliskan
    Detecting and mitigating bias in natural language processing
    Brookings 2021
  • Akshat Pandey and Aylin Caliskan
    Disparate Impact of Artificial Intelligence Bias in Ridehailing Economy's Price Discrimination Algorithms
    AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (AAAI/ACM AIES 2021)
  • Wei Guo and Aylin Caliskan
    Detecting Emergent Intersectional Biases: Contextualized Word Embeddings Contain a Distribution of Human-like Biases
    AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (AAAI/ACM AIES 2021)
  • Ryan Steed and Aylin Caliskan
    Image Representations Learned With Unsupervised Pre-Training Contain Human-like Biases
    The 2021 ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT 2021)
  • Ryan Steed and Aylin Caliskan
    A Set of Distinct Facial Traits Learned by Machines Is Not Predictive of Appearance Bias in the Wild
    AI and Ethics, 2021
  • Aylin Caliskan and Molly Lewis
    Social biases in word embeddings and their relation to human cognition
    Book Chapter in The Handbook of Language Analysis in Psychology. Guilford Press, 2021
    Editors: Morteza Dehghani and Ryan Boyd
  • Autumn Toney, Akshat Pandey, Wei Guo, David Broniatowski, and Aylin Caliskan
    Automatically Characterizing Targeted Information Operations Through Biases Present in Discourse on Twitter
    15th IEEE International Conference on Semantic Computing (ICSC 2021)
  • Aylin Caliskan and Begum Kaplan
    If I Tap It, Will They Come? An Introductory Analysis of Fairness in a Large-Scale Ride Hailing Dataset
    Academy of Marketing Science 44th Annual Conference (AMS 2020)
  • Edwin Dauber, Aylin Caliskan, Richard Harang, Gregory Shearer, Michael Weisman, Frederica Nelson, and Rachel Greenstadt.
    Git Blame Who?: Stylistic Authorship Attribution of Small, Incomplete Source Code Fragments
    The 19th Privacy Enhancing Technologies Symposium (PETS 2019)
  • Aylin Caliskan, Fabian Yamaguchi, Edwin Dauber, Richard Harang, Konrad Rieck, Rachel Greenstadt, and Arvind Narayanan.
    When Coding Style Survives Compilation: De-anonymizing Programmers from Executable Binaries
    The Network and Distributed System Security Symposium (NDSS 2018) - Source code
  • Aylin Caliskan, Joanna J. Bryson, and Arvind Narayanan.
    Semantics derived automatically from language corpora contain human-like biases
    Science 2017 - Source code and data
  • Aylin Caliskan, Joanna J. Bryson, and Arvind Narayanan.
    A Story of Discrimination and Unfairness: Implicit Bias Embedded in Language Models.
    9th Hot Topics in Privacy Enhancing Technologies (HotPETS 2016)
    Accepted on 5/20/2016
    • Best Talk Award
  • Aylin Caliskan, Richard Harang, Andrew Liu, Arvind Narayanan, Clare Voss, Fabian Yamaguchi, and Rachel Greenstadt.
    De-anonymizing Programmers via Code Stylometry
    24th USENIX Security Symposium (USENIX Security 2015) - Source code
  • Aylin Caliskan.
    How do we decide how much to reveal? (Hint: Our privacy behavior might be socially constructed.)
    Special Issue on Security, Privacy, and Human Behavior, ACM Computers and Society, February 2015
  • Aylin Caliskan, Jonathan Walsh, and Rachel Greenstadt.
    Privacy Detective: Detecting Private Information and Collective Privacy Behavior in a Large Social Network
    Workshop on Privacy in the Electronic Society (WPES 2014)
  • Sadia Afroz, Aylin Caliskan, Ariel Stolerman, Rachel Greenstadt, and Damon McCoy.
    Doppelgänger Finder: Taking Stylometry To The Underground
    35th IEEE Symposium on Security and Privacy (Oakland SP 2014)
  • Alex Kantchelian, Sadia Afroz, Ling Huang, Aylin Caliskan, Brad Miller, Michael Carl Tschantz, Rachel Greenstadt, Anthony Joseph, and J.D. Tygar.
    Approaches to Adversarial Drift
    6th ACM Workshop on Artificial Intelligence and Security (AISec 2013)
  • Sadia Afroz, Aylin Caliskan, Jordan Santell, Aaron Chapin, and Rachel Greenstadt.
    How Privacy Flaws Affect Consumer Perception
    The 3rd workshop on Socio-Technical Aspects in Security and Trust (STAST 2013)
  • Ariel Stolerman, Aylin Caliskan, and Rachel Greenstadt.
    From Language to Family and Back: Native Language and Language Family Identification from English Text
    The 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Student Research Workshop, NAACL HLT SRW 2013
  • Andrew McDonald, Sadia Afroz, Aylin Caliskan, Ariel Stolerman, and Rachel Greenstadt.
    Use Fewer Instances of the Letter "i": Toward Writing Style Anonymization
    The 12th Privacy Enhancing Technologies Symposium (PETS 2012)
    • Andreas Pfitzmann PETS Best Student Paper Award 2012.
  • Aylin Caliskan and Rachel Greenstadt.
    Translate once, translate twice, translate thrice and attribute: Identifying authors and machine translation tools in translated text
    6th IEEE International Conference on Semantic Computing (ICSC 2012)
Other Publications

  • Clara Berridge, Aylin Caliskan, Ryan Calo, Mary D. Fan, Alexis Hiniker, Tadayoshi Kohno, Franziska Roesner
    Comments to the Federal Trade Commission re: Commercial Surveillance ANPR, R111004
    Federal Trade Commission, November 2022.
  • Aylin Caliskan along with the American Psychological Association (APA)
    Comments in response to the Request for Information (RFI) on an Implementation Plan for a National Artificial Intelligence Research Resource (NAIRR)
    The National Science Foundation (NSF) and the White House Office of Science and Technology Policy (OSTP), October 2021.
  • Aylin Caliskan, Bernease Herman, and Emily M. Bender
    Comments in Response to 'A Proposal for Identifying and Managing Bias in Artificial Intelligence' from the National Institute of Standards and Technology [Docket Number 190312229-9229-01]
    National Institute of Standards and Technology (NIST) Draft NIST Special Publication 1270, September 2021.
  • David Broniatowski, Aylin Caliskan, Valerie Reyna, and Reva Schwartz
    Comments in response to the National Institute of Standards and Technology Request for Information on Developing a Federal AI Standards Engagement Plan [Docket Number 190312229-9229-01]
    National Institute of Standards and Technology (NIST) White Paper, June 2019.
  • Technical Report: Arunkumar Byravan, Aylin Caliskan, Jonas Cleveland, Daniel Gilles, Jaimeen Kapadia, Theparit Peerasathien, Bharath Sankaran, Alex Tozzo.
    ENVOY: Exploration and Navigation Vehicle for geolOgY
    NASA/NIA RASC-AL Space Exploration Competition, 2011 - Innovation in robotics to operate on the Moon, Mars, and beyond.