This repository contains the code and data for the paper "Do Large Language Models Understand Word Senses?" accepted at the EMNLP 2025 main conference.
In our paper, we investigate whether Large Language Models truly understand word senses in context. We evaluate a wide range of models on classic Word Sense Disambiguation benchmarks and novel generative settings, showing that top LLMs match state-of-the-art systems in WSD and achieve up to 98% accuracy in free-form sense explanation tasks.
If you find our paper, code, or framework useful, please cite this work:
@inproceedings{meconi-etal-2025-large,
    title = "Do Large Language Models Understand Word Senses?",
    author = "Meconi, Domenico and
      Stirpe, Simone and
      Martelli, Federico and
      Lavalle, Leonardo and
      Navigli, Roberto",
    editor = "Christodoulopoulos, Christos and
      Chakraborty, Tanmoy and
      Rose, Carolyn and
      Peng, Violet",
    booktitle = "Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2025",
    address = "Suzhou, China",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2025.emnlp-main.1720/",
    pages = "33885--33904",
    ISBN = "979-8-89176-332-6"
}
This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) license.
We welcome contributions! Please feel free to:
- Report bugs and issues
- Suggest new features or improvements
For major changes, please open an issue first to discuss what you would like to change.
For questions about this research, please contact:
- Domenico Meconi: meconi@babelscape.com
- Roberto Navigli: navigli@babelscape.com
