Please use this identifier to cite or link to this item: https://ptsldigital.ukm.my/jspui/handle/123456789/476655
Full metadata record
DC Field / Value / Language
dc.contributor.advisorSabrina Tiun, Dr.
dc.contributor.authorWafaa Riyadh Musa (P75877)
dc.date.accessioned2023-10-06T09:23:22Z-
dc.date.available2023-10-06T09:23:22Z-
dc.date.issued2018-06-19
dc.identifier.otherukmvital:123713
dc.identifier.urihttps://ptsldigital.ukm.my/jspui/handle/123456789/476655-
dc.descriptionWord sense disambiguation (WSD) is the task of assigning the most appropriate sense to a word used in a sentence when the word has multiple meanings. WSD relies on the context of the target word to identify the suitable sense. Selecting a semantic evaluator through an optimization strategy is the intermediate objective in identifying the set of suitable senses. Optimization methods are based on either a population of solutions or a single solution; however, achieving an effective balance between exploration and exploitation is a challenging task in the optimization process. Hence, this study aims to improve partial disambiguation of a sentence and to find the global meaning of a given text. To that end, it hybridizes a population-based algorithm, Particle Swarm Optimization (PSO), with a local search algorithm, Simulated Annealing (SA). PSO provides a global search of the problem space that can find various solutions of differing quality, while the local search algorithm intensifies the search locally: a promising solution is improved by searching its neighborhood. The hybridized method evaluates solutions based on the semantic relations among the words. In this study, two semantic relatedness and similarity measures, the Extended Lesk algorithm (e-Lesk) and the Jiang-Conrath algorithm (JCN), are combined. The designed model was evaluated on the semantic concordance corpus (SemCor); specifically, 19 files from this dataset, which have been used in related works, served as the benchmark. Some related works reported results for the noun part-of-speech only, so one comparison in this study covers nouns only, while another covers all parts of speech. The proposed method outperformed the other methods on the noun part-of-speech with an F-measure of 73.36% (a 0.24% increase).
Across all parts of speech, the proposed method led only on the precision metric, with the highest result of 67.44% (a 0.41% improvement). Hence, it can be concluded that the proposed method provides a good WSD solution, especially for the noun part-of-speech, as well as for other parts of speech when not all words are required to be disambiguated. Certification of Master's / Doctoral Thesis is not available.
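The abstract's hybrid search, global PSO exploration followed by SA refinement of the best solution, can be sketched as follows. This is a minimal illustration, not the thesis implementation: the gloss-overlap `relatedness` function is only a toy stand-in for the e-Lesk/JCN combination, the discrete particle update is one common adaptation of PSO to index vectors, and all names and parameter values are assumptions.

```python
import math
import random

# Toy "relatedness": gloss word overlap (a stand-in for e-Lesk/JCN scores).
def relatedness(gloss_a, gloss_b):
    return len(set(gloss_a.split()) & set(gloss_b.split()))

# Fitness of a sense assignment: sum of pairwise relatedness of chosen glosses.
def fitness(assign, senses):
    chosen = [senses[i][assign[i]] for i in range(len(assign))]
    return sum(relatedness(chosen[i], chosen[j])
               for i in range(len(chosen))
               for j in range(i + 1, len(chosen)))

def sa_refine(assign, senses, rng, steps=200, t0=2.0, cooling=0.95):
    # Simulated annealing: perturb one word's sense; accept worse moves
    # with probability exp(delta / T), cooling T each step.
    cur, cur_f, t = list(assign), fitness(assign, senses), t0
    for _ in range(steps):
        cand = list(cur)
        i = rng.randrange(len(cand))
        cand[i] = rng.randrange(len(senses[i]))
        f = fitness(cand, senses)
        if f >= cur_f or rng.random() < math.exp((f - cur_f) / t):
            cur, cur_f = cand, f
        t *= cooling
    return cur, cur_f

def pso_sa(senses, rng, particles=10, iters=30):
    # Discrete PSO: each particle is a vector of sense indices; "velocity" is
    # approximated by copying components from personal/global bests.
    n = len(senses)
    swarm = [[rng.randrange(len(s)) for s in senses] for _ in range(particles)]
    pbest = [list(p) for p in swarm]
    pbest_f = [fitness(p, senses) for p in swarm]
    g = max(range(particles), key=lambda k: pbest_f[k])
    gbest, gbest_f = list(pbest[g]), pbest_f[g]
    for _ in range(iters):
        for k, p in enumerate(swarm):
            for i in range(n):
                r = rng.random()
                if r < 0.4:
                    p[i] = pbest[k][i]          # pull toward personal best
                elif r < 0.8:
                    p[i] = gbest[i]             # pull toward global best
                else:
                    p[i] = rng.randrange(len(senses[i]))  # random exploration
            f = fitness(p, senses)
            if f > pbest_f[k]:
                pbest[k], pbest_f[k] = list(p), f
                if f > gbest_f:
                    gbest, gbest_f = list(p), f
    # The hybrid step: local SA refinement of the PSO global best.
    return sa_refine(gbest, senses, rng)

# Toy three-word "sentence", two candidate glosses per word.
senses = [
    ["financial institution money", "river land edge"],
    ["money account deposit", "fish water"],
    ["institution deposit money", "slope river"],
]
rng = random.Random(0)
best, best_f = pso_sa(senses, rng)
```

On this toy input the coherent "bank/finance" senses (index 0 for every word) maximize the pairwise overlap, so the hybrid search converges to that assignment; the SA pass exists to escape the local optima that a pure swarm update can settle into.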
dc.language.isoeng
dc.publisherUKM, Bangi
dc.relationFaculty of Information Science and Technology / Fakulti Teknologi dan Sains Maklumat
dc.rightsUKM
dc.subjectUniversiti Kebangsaan Malaysia -- Dissertations
dc.subjectDissertations, Academic -- Malaysia
dc.subjectWord sense disambiguation
dc.subjectOptimization methods
dc.subjectAlgorithm
dc.titleHybridizing particle swarm optimization with simulated annealing for word sense disambiguation
dc.typetheses
dc.format.pages73
dc.identifier.barcode005771(2021)(PL2)
Appears in Collections:Faculty of Information Science and Technology / Fakulti Teknologi dan Sains Maklumat

Files in This Item:
File: ukmvital_123713+SOURCE1+SOURCE1.0.PDF (Restricted Access, 350.9 kB, Adobe PDF)


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.