• Note: This is not a complete list; I rarely update this page.
  • Coverage below is based on an Altmetric summary (2020) from an exact-title search for "Semantics derived automatically from language corpora contain human-like biases."

  • GeekWire, AI imaging software generates a gallery of stereotypes, say Univ. of Washington researchers. by Lisa Stiffler. 11.28.2023
  • Washington Post, This is how AI image generators see the world: How AI is crafting a world where our worst stereotypes are realized. AI image generators like Stable Diffusion and DALL-E amplify bias in gender, race and beyond, despite efforts to detoxify the data fueling these results. by Nitasha Tiku, Kevin Schaul and Szu Yu Chen. 11.1.2023
  • AIhub, Interview with Aylin Caliskan: AI, bias, and ethics. Andrea Rafai. 10.18.2023
  • MIT Tech Review on Artificial Intelligence, What if we could just ask AI to be less biased? Plus: ChatGPT is about to revolutionize the economy. We need to decide what that looks like by Melissa Heikkilä. 3.28.2023
  • MIT Tech Review on Artificial Intelligence, These new tools let you see for yourself how biased AI image models are - Bias and stereotyping are still huge problems for systems like DALL-E 2 and Stable Diffusion, despite companies’ attempts to fix it by Melissa Heikkilä. 3.22.2023
  • New York Times, Can ChatGPT Plan Your Vacation? Here’s What to Know About A.I. and Travel by Julie Weed 3.16.2023
  • MIT Tech Review on Artificial Intelligence, How it feels to be sexually objectified by an AI by Melissa Heikkilä. 12.13.2022
  • Wall Street Journal, ‘Magic’ AI Avatars Are Already Losing Their Charm by Sara Ashley O'Brien. 12.2022
  • MIT Tech Review on Artificial Intelligence, on biased image generation by the viral AI avatar app Lensa, by Melissa Heikkilä. 12.12.2022
  • MIT Tech Review on Pricing Algorithms, In Machines We Trust by Jennifer Strong. 10.2021
  • MIT Sloan Management Review on Intersectionality and AI by Ayanna Howard. 8.2021
  • MIT Tech Review on Bias in Visual Representations by Karen Hao. 1.2021
  • Uber Faces Civil Rights Lawsuit Alleging ‘Racially Biased’ Driver Ratings by Rachel Sandler. Forbes, 10.26.2020.
  • Was your Uber, Lyft fare high because of algorithm bias? by Coral Murphy. USA TODAY, 7.22.2020.
  • Researchers find racial discrimination in 'dynamic pricing' algorithms used by Uber, Lyft, and others by Kyle Wiggers. VentureBeat, 6.12.2020.
  • AI analyzed 3.3 million scientific abstracts and discovered possible new materials by Karen Hao. MIT Technology Review, 07.09.2019.
  • AI Voice Assistants Reinforce Gender Biases, U.N. Report Says by Mahita Gajanan. Time, 05.22.2019.
  • Why It's Dangerous For AI To Regulate Itself by Kayvan Alikhani. Forbes, 05.22.2019.
  • Developing a moral compass from human texts by Patrick Bal. Technische Universität Darmstadt, 02.07.2019.
  • Yes, artificial intelligence can be racist by Brian Resnick. Vox, 01.24.2019.
  • Mental health and artificial intelligence: losing your voice. While we still can, let us ask, "Will AI exacerbate discrimination?" by Dan McQuillan. Open Democracy, 11.12.2018.
  • Alexa and Google Home are no threat to regional accents – here’s why by Erin Carrie. The Conversation, 08.21.2018.
  • Stylistic analysis can de-anonymize code, even compiled code by Cory Doctorow. Boing Boing, 08.10.2018.
  • Machine Learning Can Identify the Authors of Anonymous Code: Even Anonymous Coders Leave Fingerprints by Louise Matsakis. Wired, 08.10.2018.
  • AI Without Borders: How To Create Universally Moral Machines by Abinash Tripathy. Forbes, 04.11.2018.
  • Princeton researchers discover why AI become racist and sexist by Annalee Newitz. Ars Technica, 04.18.2017.
  • Training AI robots to act 'human' makes them sexist and racist by Mike Wehner. New York Post, 04.17.2017.
  • How artificial intelligence learns how to be racist. Simple: It's mimicking us. by Brian Resnick. Vox, 04.17.2017.
  • When Artificial Intelligence = Not Enough Intelligence. by Michael Eric Ross. Omni, 04.16.2017.
  • L'intelligence artificielle reproduit aussi le sexisme et le racisme des humains ("Artificial intelligence also reproduces human sexism and racism"). by Morgane Tual. Le Monde, 04.15.2017.
  • What would make a computer biased? Learning a language spoken by humans. by Melissa Healy. Los Angeles Times, 04.14.2017.
  • A.I. Is Just as Sexist and Racist as We Are, and It's All Our Fault. by Peter Hess. Inverse, 04.14.2017.
  • Podcast: Watching shoes untie, Cassini's last dive through the breath of a cryovolcano, and how human bias influences machine learning.
    by Sarah Crespi, David Grimm. Science Magazine, 04.13.2017.
  • Even artificial intelligence can acquire biases against race and gender. by Matthew Hutson. Science Magazine, 04.13.2017.
  • AI programs exhibit racial and gender biases, research reveals. by Hannah Devlin. The Guardian, 04.13.2017.
  • AI Learns Gender and Racial Biases from Language. by Jeremy Hsu. IEEE Spectrum, 04.13.2017.
  • Computers, Artificial Intelligence Show Bias and Prejudice, Too. by Maggie Fox. NBC News, 04.13.2017.
  • AI robots learning racism, sexism and other prejudices from humans, study finds. by Ian Johnston. The Independent, 04.13.2017.
  • Robots are learning to be racist AND sexist: Scientists reveal how AI programs exhibit human-like biases.
    by Stacy Liberatore. Daily Mail, 04.13.2017.
  • Just like humans, artificial intelligence can be sexist and racist. by Matthew Burgess. Wired UK, 04.13.2017.
  • Scientists Taught A Robot Language. It Immediately Turned Racist. by Nidhi Subbaraman. BuzzFeed, 04.13.2017.
  • AI picks up racial and gender biases when learning from what humans write. by Angela Chen. The Verge, 04.13.2017.
  • Bad News: Artificial Intelligence Is Racist, Too. by Stephanie Pappas. Live Science, 04.13.2017.
  • Surprise! AI can learn gender and race stereotypes, just like us. by Rebecca Ruiz. Mashable, 04.13.2017.
  • Was passiert, wenn KIs unsere schlechten Eigenschaften übernehmen? ("What happens when AIs take on our bad traits?") by Anna Schughart. Wired, 02.24.2017.
  • How to Fix Silicon Valley's Sexist Algorithms by Will Knight. MIT Technology Review, 11.23.2016.
  • Bias in the machine: Internet algorithms reinforce harmful stereotypes by Bennett McIntosh. Discovery: Research at Princeton, 11.15.2016.
  • Bath Researcher Shows Machines Can Be Prejudiced Too by Nick Flaherty. ACM TechNews, 10.21.2016.
  • Artificial Intelligence Will Be as Biased and Prejudiced as Its Human Creators by Nathan Collins. Pacific Standard, 09.01.2016.
  • It's Our Fault That AI Thinks White Names Are More 'Pleasant' Than Black Names by Jordan Pearson. Motherboard, 08.26.2016.
  • CSI: Cyber-Attack Scene Investigation--a Malware Whodunit by Larry Greenemeier. Scientific American, 01.28.2016.
  • Malicious coders will lose anonymity as identity-finding research matures by Joyce P. Brayboy. U.S. Army Research Laboratory and
    Communications of the ACM.
  • 32C3: Nicht nur der Quellcode verrät seine Schöpfer ("32C3: Not only the source code betrays its creators") by Kristian Kißling. Linux Magazin, 01.06.2016.
  • De-Anonymizing Users from their Coding Styles by Bruce Schneier. Schneier on Security, 01.04.2016.
  • Forget anonymity, we can remember you wholesale with machine intel, hackers warned by Alexander J. Martin. The Register, 12.31.2015.
  • This Drexel researcher can identify you based on how you write code by Juliana Reyes. Technically Philly, 03.24.2015.
  • Drexel researchers track 'cyber fingerprints' to identify programmers by Jessica McDonald. Newsworks, and two versions of interviews
    aired on WHYY Radio (NPR), 03.09.2015.
  • Doctoral candidate uses coding style to identify hackers by Eric Birkhead. The Triangle, 02.28.2015.
  • Dusting for Cyber Fingerprints: Computer Scientists Use Coding Style to Identify Anonymous Programmers
    by Britt Faulstick. Drexel Now, 02.26.2015.
  • CSI Computer Science: Your coding style can give you away by Phil Johnson. ITWorld, 01.28.2015.
  • Your anonymous code contributions probably aren't: boffins by Richard Chirgwin. The Register, 01.22.2015.
  • Open Campus initiative brings natural language processing to cyber research by United States Army Research Laboratory, 09.30.2014.
  • Stylometric analysis to track anonymous users in the underground by Pierluigi Paganini. Security Affairs blog, 01.10.2013.
  • Linguistics identifies anonymous users by Darren Pauli. SC Magazine (Australian edition), 01.09.2013.
  • Students release stylometry tools by Helen Nowotnik. The Triangle, 01.13.2012.
  • Software Helps Identify Anonymous Writers or Helps Them Stay That Way by Nicole Perlroth.
    New York Times Bits Blog. 01.03.2012. (An overview of Anonymouth and JStylo tool releases)
  • Wer Hemingway imitiert, schreibt anonym ("Whoever imitates Hemingway writes anonymously")
    by Philipp Elsbrock. Der Spiegel. 12.30.2011.
  • State of Adversarial Stylometry: can you change your prose-style?
    by Cory Doctorow. Boing Boing. 12.29.2011. (An overview of our talk at 28C3, Anonymouth, and using it in fiction writing)
  • Authorship recognition software from Drexel University lab to be released December by Christopher Wink.
    Technically Philly, 11.15.2011. (Announcing our upcoming release of JStylo & Anonymouth)
  • RoboCup teams worldwide strive to bend it like C3PO by Tom Avril. The Philadelphia Inquirer, 06.10.2010.