About Me
Hi! My name is Alina Leidinger and I am a 4th-year PhD candidate at the University of Amsterdam's Institute for Logic, Language and Computation (ILLC). I'm advised by Katia Shutova and Robert van Rooij. In my PhD, I work on bias, stereotypes and robustness in large language models. My research lies at the intersection of Natural Language Processing and ethical AI. Previously, I obtained a BSc and an MSc in Mathematics from Imperial College London and the Technical University of Munich.
I am also a co-organiser of the Computational Linguistics Seminar at the ILLC and part of the Center for Explainable, Responsible and Theory-driven Artificial Intelligence.
CV
News
- Aug 2024: I'm attending ACL 🇹🇭 to present our paper on robust evaluation in reasoning!
- July 2024: Two papers accepted at ACM/AAAI AI, Ethics, and Society! 🥳 Check them out if you're curious about stereotyping or cultural values!
- June 2024: Our work on CIVICS has been covered by TechCrunch (en) and Les Echos (fr)!
- May 2024: Our paper "Are LLMs classical or nonmonotonic reasoners? Lessons from generics" has been accepted at ACL (main)! 🎉
- Dec 2023: Attending EMNLP 🇸🇬 to present our paper on robustness in prompting at the GenBench workshop and during the main conference.
- Nov 2023: I'm giving a talk at the Comète Inria Polytechnique workshop on Ethical AI.
- Oct 2023: Our paper "The Language of Prompting: What linguistic properties make a prompt successful?" has been accepted to Findings of EMNLP! [ArXiv]
- Sept 2023: Starting a research visit in Hinrich Schütze's lab at LMU Munich!
- Sept 2023: I'm giving a talk at the Workshop on Generative AI and Search Engines at HAW Hamburg.
- Aug 2023: I gave an interview on the Tech Policy Press podcast, check it out!
- April 2023: Our paper "Which Stereotypes Are Moderated and Under-Moderated in Search Engine Autocompletion?" is accepted to FAccT!
- Feb 2023: I'm giving an invited talk at the Civic AI Lab, UvA.
- Feb 2023: I'm giving a flash talk for the deans at UvA.
- Sept 2022: I'm presenting at the ELLIS PhD Symposium at the University of Alicante.
Publications
Preprints
- Marion Thaler, Abdullatif Köksal, Alina Leidinger, Anna Korhonen, Hinrich Schütze. 2024. Bias Propagation in LLMs: Tracing Gender Bias from Pre-training Data to Alignment. Under review.
Peer-reviewed
- Giada Pistilli*, Alina Leidinger*, Yacine Jernite, Atoosa Kasirzadeh, Alexandra Sasha Luccioni, Margaret Mitchell. 2024. CIVICS: Building a Dataset for Examining Culturally-Informed Values in Large Language Models. [preprint] ACM/AAAI AI, Ethics, and Society.
- Alina Leidinger and Richard Rogers. 2024. How are LLMs mitigating stereotyping harms? Learning from search engine studies. [preprint] ACM/AAAI AI, Ethics, and Society.
- Irene Solaiman*, Zeerak Talat*, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Canyu Chen, Hal Daumé III, Jesse Dodge, Isabella Duan, Ellie Evans, Felix Friedrich, Avijit Ghosh, Usman Gohar, Sara Hooker, Yacine Jernite, Ria Kalluri, Alberto Lusoli, Alina Leidinger, Michelle Lin, Xiuzhu Lin, Sasha Luccioni, Jennifer Mickel, Margaret Mitchell, Jessica Newman, Anaelia Ovalle, Marie-Therese Png, Shubham Singh, Andrew Strait, Lukas Struppek, Arjun Subramonian. 2024. Evaluating the Social Impact of Generative AI Systems in Systems and Society. [book chapter] To appear in Hacker, Engel, Hammer, Mittelstadt (eds), Oxford Handbook on the Foundations and Regulation of Generative AI. Oxford University Press.
- Alina Leidinger, Robert van Rooij, Ekaterina Shutova. 2024. Are LLMs classical or nonmonotonic reasoners? Lessons from generics. [paper] ACL 2024 (main)
- Alina Leidinger, Robert van Rooij, Ekaterina Shutova. 2023. The Language of Prompting: What linguistic properties make a prompt successful? [paper] Findings of EMNLP 2023
- Giulio Starace, Konstantinos Papakostas, Rochelle Choenni, Apostolos Panagiotopoulos, Matteo Rosati, Alina Leidinger, Ekaterina Shutova. 2023. Probing LLMs for Joint Encoding of Linguistic Categories. [paper] Findings of EMNLP 2023
- Alina Leidinger and Richard Rogers. 2023. Which Stereotypes Are Moderated and Under-Moderated in Search Engine Autocompletion? [paper] ACM FAccT 2023
- Oskar van der Wal*, Dominik Bachmann*, Alina Leidinger, Leendert van Maanen, Jelle Zuidema, Katrin Schulz. Undesirable biases in NLP: Averting a crisis of measurement. [paper] JAIR
- Christos Sagonas, Yannis Panagakis, Alina Leidinger, Stefanos Zafeiriou. 2017. Robust Joint and Individual Variance Explained. [paper] CVPR 2017