ABSTRACT: Word Sense Disambiguation (WSD) is a critical task in Natural Language Processing (NLP) that focuses on identifying the correct meaning of an ambiguous word in a given context. This paper presents a comprehensive survey and comparative analysis of traditional and deep learning approaches to WSD. Traditional methods, including knowledge-based and statistical models, are evaluated alongside deep learning techniques, such as neural networks and transformers, using performance metrics like accuracy, precision, and recall on established datasets. Additionally, this study reviews existing evaluation metrics and discusses how well they capture variation across WSD systems. The findings aim to deepen the understanding of WSD techniques and their implications for advancing NLP applications.
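The evaluation metrics named above can be made concrete with a minimal sketch. The function below (a hypothetical helper, not from any system surveyed here) assumes gold and predicted sense labels are given as aligned lists, and computes accuracy together with macro-averaged precision and recall over the sense inventory:

```python
def wsd_scores(gold, pred):
    """Return (accuracy, macro_precision, macro_recall) for aligned
    lists of gold and predicted sense labels."""
    assert len(gold) == len(pred) and gold, "lists must be aligned and non-empty"
    # Accuracy: fraction of instances whose predicted sense matches the gold sense.
    accuracy = sum(g == p for g, p in zip(gold, pred)) / len(gold)

    labels = set(gold) | set(pred)
    precisions, recalls = [], []
    for label in labels:
        tp = sum(1 for g, p in zip(gold, pred) if g == label and p == label)
        pred_n = sum(1 for p in pred if p == label)   # times this sense was predicted
        gold_n = sum(1 for g in gold if g == label)   # times this sense is correct
        precisions.append(tp / pred_n if pred_n else 0.0)
        recalls.append(tp / gold_n if gold_n else 0.0)

    # Macro-averaging weights each sense equally, exposing performance
    # on rare senses that instance-level accuracy can hide.
    macro_p = sum(precisions) / len(labels)
    macro_r = sum(recalls) / len(labels)
    return accuracy, macro_p, macro_r


# Example: three occurrences of "bank"; the system always predicts the
# majority (finance) sense, so the rare river sense drags down the macro scores.
gold = ["bank%finance", "bank%river", "bank%finance"]
pred = ["bank%finance", "bank%finance", "bank%finance"]
acc, p, r = wsd_scores(gold, pred)
```

This illustrates why the survey considers precision and recall alongside accuracy: a system biased toward frequent senses can score well on accuracy while failing on minority senses.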
[1]
Agirre, Eneko, and Philip Edmonds. Word Sense Disambiguation: Algorithms and Applications. Springer, 2006.
[2]
Navigli, Roberto. "Word Sense Disambiguation: A Survey." ACM Computing Surveys, vol. 41, no. 2, 2009, pp. 10:1-10:69, https://doi.org/10.1145/1459352.1459355.
[3]
Mikolov, Tomas, Kai Chen, Greg Corrado, and Jeffrey Dean. "Efficient Estimation of Word Representations in Vector Space." arXiv preprint, arXiv:1301.3781, 2013.
[4]
Devlin, Jacob, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding." arXiv preprint, arXiv:1810.04805, 2018.
[5]
Jurafsky, Daniel, and James H. Martin. Speech and Language Processing. 3rd ed., draft online version, https://web.stanford.edu/~jurafsky/slp3/.