About Me
Hi!👋 My name is Alina Leidinger and I am a 4th-year PhD candidate at the University of Amsterdam’s Institute for Logic, Language and Computation (ILLC), advised by Katia Shutova and Robert van Rooij. In my PhD, I work on bias, stereotypes and robustness in large language models. My research lies at the intersection of Natural Language Processing and Ethical AI. Previously, I obtained a BSc and an MSc in Mathematics from Imperial College London and the Technical University of Munich.
I am also a co-organiser of the Computational Linguistics Seminar at the ILLC and part of the Center for Explainable, Responsible and Theory-driven Artificial Intelligence.
CV
News
- Aug 2024: I’m attending ACL🇹🇭🐘 to present our paper on robust evaluation of reasoning!
- July 2024: Two papers accepted at ACM/AAAI AI, Ethics, and Society! 🥳 Check them out if you’re curious about stereotyping or cultural values!
- June 2024: Our work on CIVICS has been covered by TechCrunch (en) and Les Echos (fr)!
- May 2024: Our paper ‘Are LLMs classical or nonmonotonic reasoners? Lessons from generics’ has been accepted at ACL (main)! 🎉
- Dec 2023: Attending EMNLP🇸🇬 to present our paper on robustness in prompting at the GenBench workshop and in the main conference.
- Nov 2023: I’m giving a talk at the Comète (Inria, École Polytechnique) workshop on Ethical AI.
- Oct 2023: Our paper “The Language of Prompting: What linguistic properties make a prompt successful?” got accepted to EMNLP findings! [ArXiv]
- Sept 2023: Starting a research visit in Hinrich Schütze’s lab at LMU, Munich!
- Sept 2023: I’m giving a talk at the Workshop on generative AI and search engines at HAW, Hamburg.
- Aug 2023: I gave an interview on the Tech Policy Press podcast, check it out!
- April 2023: Our paper “Which Stereotypes Are Moderated and Under-Moderated in Search Engine Autocompletion?” is accepted to FAccT!
- Feb 2023: I’m giving an invited talk at Civic AI Lab, UvA.
- Feb 2023: I’m giving a flash talk for the deans at UvA.
- Sept 2022: I’m presenting at ELLIS PhD Symposium at the University of Alicante.
Publications
Peer-reviewed
- Giada Pistilli*, Alina Leidinger*, Yacine Jernite, Atoosa Kasirzadeh, Alexandra Sasha Luccioni, Margaret Mitchell. 2024. CIVICS: Building a Dataset for Examining Culturally-Informed Values in Large Language Models. [preprint] ACM/AAAI AI, Ethics, and Society.
- Alina Leidinger and Richard Rogers. 2024. How are LLMs mitigating stereotyping harms? Learning from search engine studies. [preprint] ACM/AAAI AI, Ethics, and Society.
- Irene Solaiman*, Zeerak Talat*, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Canyu Chen, Hal Daumé III, Jesse Dodge, Isabella Duan, Ellie Evans, Felix Friedrich, Avijit Ghosh, Usman Gohar, Sara Hooker, Yacine Jernite, Ria Kalluri, Alberto Lusoli, Alina Leidinger, Michelle Lin, Xiuzhu Lin, Sasha Luccioni, Jennifer Mickel, Margaret Mitchell, Jessica Newman, Anaelia Ovalle, Marie-Therese Png, Shubham Singh, Andrew Strait, Lukas Struppek, Arjun Subramonian. 2024. Evaluating the Social Impact of Generative AI Systems in Systems and Society. [book chapter] To appear in Hacker, Engel, Hammer, Mittelstadt (eds), Oxford Handbook on the Foundations and Regulation of Generative AI. Oxford University Press.
- Alina Leidinger, Robert van Rooij, Ekaterina Shutova. 2024. Are LLMs classical or nonmonotonic reasoners? Lessons from generics. [paper] ACL 2024 (main)
- Alina Leidinger, Robert van Rooij, Ekaterina Shutova. 2023. The Language of Prompting: What linguistic properties make a prompt successful? [paper] Findings of EMNLP 2023
- Giulio Starace, Konstantinos Papakostas, Rochelle Choenni, Apostolos Panagiotopoulos, Matteo Rosati, Alina Leidinger, Ekaterina Shutova. 2023. Probing LLMs for Joint Encoding of Linguistic Categories. [paper] Findings of EMNLP 2023
- Alina Leidinger and Richard Rogers. 2023. Which Stereotypes Are Moderated and Under-Moderated in Search Engine Autocompletion? [paper] ACM FAccT 2023
- Oskar van der Wal*, Dominik Bachmann*, Alina Leidinger, Leendert van Maanen, Jelle Zuidema, Katrin Schulz. 2024. Undesirable biases in NLP: Averting a crisis of measurement. [paper] JAIR
- Christos Sagonas, Yannis Panagakis, Alina Leidinger, Stefanos Zafeiriou. 2017. Robust Joint and Individual Variance Explained. [paper] CVPR 2017