My main research area is Computational Linguistics (aka Natural Language Processing). I am interested in how humans convey meaning through language. I work with computational models (distributional semantics, neural networks)
that induce rich, flexible linguistic representations directly from examples of how people use language.
I am currently working on reference, or how we use language to talk about the world: I need only utter a few words,
for instance "the smart woman we met the other day at the meeting", and my interlocutor will be able to identify the person
I mean. I am building computational models that can do this, as humans do.
I am funded by an ERC Starting Grant.
May 2020: Co-organizing the GeCKo symposium on Integrating Generic and Contextual Knowledge, online, May 18.
May 2020: Two posters accepted at CogSci 2020: Modeling word interpretation with deep language models: The interaction between expectations and lexical information (Aina, Brochhagen, Boleda) and Deep daxes: Mutual exclusivity arises through both learning biases and pragmatic strategies in neural networks (Gulordava, Brochhagen, Boleda).