Hi, my name is Alina Leidinger and I am a PhD candidate at the Institute for Logic, Language and Computation (ILLC) at the University of Amsterdam, supervised by Katia Shutova and Robert van Rooij. In my PhD, I work on implicit bias and stereotypes in large language models. My research lies at the intersection of Natural Language Processing, Ethical AI and Explainability. Previously, I obtained an MSc in Mathematics in Data Science from the Technical University of Munich and a BSc in Mathematics from Imperial College London.
I am also a co-organiser of the Computational Linguistics Seminar at the ILLC.
- Nov 2024: I’m giving a talk at the Comète team’s (Inria, École Polytechnique) workshop on Ethical AI.
- Oct 2023: Our paper “The Language of Prompting: What linguistic properties make a prompt successful?” was accepted to Findings of EMNLP! [ArXiv]
- Sept 2023: Starting a research visit in Hinrich Schütze’s lab at LMU Munich!
- Sept 2023: I’m giving a talk at the Workshop on Generative AI and Search Engines at HAW Hamburg.
- Aug 2023: I gave an interview on the Tech Policy Press podcast — check it out!
- April 2023: Our paper “Which Stereotypes Are Moderated and Under-Moderated in Search Engine Autocompletion?” is accepted to FAccT!
- Feb 2023: I’m giving an invited talk at Civic AI Lab, UvA.
- Feb 2023: I’m giving a flash talk for the deans at UvA.
- Sept 2022: I’m presenting at ELLIS PhD Symposium at the University of Alicante.
- A. Leidinger, R. van Rooij, E. Shutova. 2023. The Language of Prompting: What linguistic properties make a prompt successful? [ArXiv] Accepted to Findings of EMNLP 2023
- G. Starace, K. Papakostas, R. Choenni, A. Panagiotopoulos, M. Rosati, A. Leidinger, E. Shutova. 2023. Probing LLMs for Joint Encoding of Linguistic Categories. [ArXiv] Accepted to Findings of EMNLP 2023
- Alina Leidinger and Richard Rogers. 2023. Which Stereotypes Are Moderated and Under-Moderated in Search Engine Autocompletion?. In 2023 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’23), June 12–15, 2023, Chicago, IL, USA. ACM, New York, NY, USA, 13 pages. [https://doi.org/10.1145/3593013.3594062]
- O. van der Wal, D. Bachmann, A. Leidinger, L. van Maanen, J. Zuidema, K. Schulz. Undesirable biases in NLP: Averting a crisis of measurement. [ArXiv] Accepted to JAIR.
- C. Sagonas, Y. Panagakis, A. Leidinger, S. Zafeiriou. 2017. Robust Joint and Individual Variance Explained. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5739–5748. IEEE Computer Society. [paper]