Please use this identifier to cite or link to this item: https://ptsldigital.ukm.my/jspui/handle/123456789/772456
Full metadata record
DC Field | Value | Language
dc.contributor.advisor | Abdul Razak Hamdan, Prof. Dr. | en_US
dc.contributor.author | Madhoushi, Zohreh (P74491) | en_US
dc.date.accessioned | 2024-01-18T09:40:36Z | -
dc.date.available | 2024-01-18T09:40:36Z | -
dc.date.issued | 2021-11-03 | -
dc.identifier.uri | https://ptsldigital.ukm.my/jspui/handle/123456789/772456 | -
dc.description | Full-text | en_US
dc.description.abstract | Online information and web content are an important source of safety and stability for the economy and society. Online reviews play a key role in a brand's reputation and conversion outcomes. Aspect-based Sentiment Analysis (ABSA) extracts the major aspects of an item or product from customer reviews and then predicts the sentiment for each aspect. Previous methods extract aspect terms and then categorize the terms in two distinct steps; they also require manually set model threshold values and seed words for aspect categories. Domain-specific models are often not practical for both tasks, and previous studies lack work on detecting implicit sentiment with small amounts of labeled data. Deep learning techniques automate the process of representation learning, and various deep language models (LMs) have been developed, such as Word2Vec and recurrent-based LMs. This study focuses on LMs for the tasks of aspect category detection and sentiment detection. The first objective of this research is to propose a mechanism that reduces the need for massive labeled data by experimentally determining the best combinatory architectures of recurrent-based LMs and the best semantic similarity measures for building a new aspect category detection model. The second objective is to propose a new model that addresses the drawbacks of previous aspect category detection models by detecting categories implicitly from online reviews and by performing aspect category detection in one step rather than via an intermediate step. The last objective is to propose semi-supervised ABSA models for online reviews that predict explicit and implicit sentiment in three domains. This study developed a similarity method for the aspect category detection phase and a semi-supervised deep learning model for the sentiment detection phase; both models work in three domains: laptop, restaurant and hotel. The datasets of this study, S1 and S2, are from the standard SemEval online competition. The developed models outperform previous baseline models in terms of F1-score for aspect category detection and accuracy for sentiment detection, finding more relevant aspects and more accurate sentiment through more stable and robust models. The F1-score of the best aspect category detection model is 79.03% in the restaurant domain (S1 dataset), and 72.65% in the laptop domain and 75.11% in the restaurant domain (S2 dataset). The accuracy of sentiment detection is 84.87% in the restaurant domain on the first dataset; on the second dataset it is 84.43% in the laptop domain, 85.21% in the restaurant domain and 85.57% in the hotel domain. | en_US
dc.language.iso | en | en_US
dc.publisher | UKM, Bangi | en_US
dc.relation | Faculty of Information Science and Technology / Fakulti Teknologi dan Sains Maklumat | en_US
dc.rights | UKM | en_US
dc.subject | Universiti Kebangsaan Malaysia -- Dissertations | en_US
dc.subject | Dissertations, Academic -- Malaysia | en_US
dc.subject | Information storage and retrieval systems | en_US
dc.title | A similarity auto-score model for aspect category detection and semi-supervised model for sentiment detection | en_US
dc.type | Theses | en_US
dc.format.pages | 226 | en_US
dc.identifier.barcode | 005969(2021)(PL2) | en_US
dc.format.degree | Ph.D | en_US
Appears in Collections:Faculty of Information Science and Technology / Fakulti Teknologi dan Sains Maklumat

Files in This Item:
File | Size | Format | Access
A similarity auto-score model for aspect category detection and semi-supervised model for sentiment detection.pdf | 14.26 MB | Adobe PDF | Restricted Access


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.