Please use this identifier to cite or link to this item: https://ptsldigital.ukm.my/jspui/handle/123456789/772455
Full metadata record
DC Field | Value | Language
dc.contributor.advisor | Siti Norul Huda Sheikh Abdullah, Assoc. Prof. Dr. | en_US
dc.contributor.advisor | Masri Ayob, Prof. Dr. | en_US
dc.contributor.author | Ghazvini, Anahita (P86698) | en_US
dc.date.accessioned | 2024-01-18T09:38:21Z | -
dc.date.available | 2024-01-18T09:38:21Z | -
dc.date.issued | 2022-04-06 | -
dc.identifier.uri | https://ptsldigital.ukm.my/jspui/handle/123456789/772455 | -
dc.description | Full-text | en_US
dc.description.abstract | DeepNet techniques such as Convolutional Neural Networks (CNNs) and Generative Adversarial Networks (GANs) have shown remarkable performance in many imaging applications, such as deepfake detection. Some piecewise activation functions are non-differentiable at specific points, which affects the deep network's convergence rate and computational time. Reformulating the absolute value equation (AVE) through a parametrized single smooth function can resolve this. However, a single smoothing function is still less effective at producing a better curve at the breaking points. The S-shaped Rectified Linear Unit (SReLU) is superior at learning convex and non-convex functions and overcomes the vanishing-gradient difficulty because it is piecewise by nature. However, this function is not continuously differentiable, which hinders the updating of weights and biases during backpropagation training and degrades its classification efficacy. In generating more realistic images, the BigGAN architecture tackles the vanishing gradient in the generator by employing spectral normalization (SN). SN constrains the Lipschitz constant of the discriminator by restricting the spectral norm of each layer, which affects its overall efficacy. To overcome these issues, the objectives of this thesis are: (1) to formulate a new smoothing function by combining the non-singular Aggregation (AGG) and Fischer-Burmeister (FB) functions (AFB) in a piecewise function to overcome the non-differentiability issue; (2) to propose a Regulated AFB SReLU (ReAFBSReLU) by applying the regulation of the proposed AFB smoothing function in both training and testing of the CNN model to avoid vanishing gradients and increase accuracy; and (3) to design a SmoothBigGAN system based on the modified ReAFBSReLU activation function and the proposed AFBHinge loss function to retain the gradient norm during network training and thereby tackle exploding gradients. To untangle the AVE terms of the piecewise function, this research performed one run for each initial value and thirty runs for the time evaluation. The experimental results verified that the proposed AFB function outperformed the other recently tested functions, producing an error rate of −3.7E-17 and a computational time of 9.2E-05. This is due to its use of the natural logarithm, exponential, and square root; hence it yields a promising smooth approximation of the AVE with less computational time. For the second objective, the proposed method was evaluated with three well-known CNN architectures: NIN, LeNet, and SqueezeNet. The results demonstrate that the proposed ReAFBSReLU outperforms the other state-of-the-art functions, yielding accuracies of 97.79%, 95.91%, 53.32%, and 47.77% on the MNIST, CIFAR10 (with and without data augmentation), and CIFAR100 datasets respectively. This indicates that the proposed ReAFBSReLU improves the update of the CNN's training parameters and subsequently its accuracy. Finally, this study evaluates the third objective on the CIFAR10 dataset with the ResNet architecture using two scores: the Inception Score (IS) and the Fréchet Inception Distance (FID). The results show that the proposed SmoothBigGAN outperforms the other tested methods, producing IS and FID scores of 6.40 and 49.1 respectively. Consequently, the proposed method provides more stability, which leads to more realistic images with higher diversity and quality. | en_US
dc.language.iso | en | en_US
dc.publisher | UKM, Bangi | en_US
dc.relation | Faculty of Information Science and Technology / Fakulti Teknologi dan Sains Maklumat | en_US
dc.rights | UKM | en_US
dc.subject | Universiti Kebangsaan Malaysia -- Dissertations | en_US
dc.subject | Dissertations, Academic -- Malaysia | en_US
dc.subject | Digital images | en_US
dc.subject | Electronic information resources | en_US
dc.title | Piecewise smoothing and loss functions based on convolutional neural network for image classification and generation | en_US
dc.type | Theses | en_US
dc.format.pages | 314 | en_US
dc.identifier.barcode | 005968(2021)(PL2) | en_US
dc.format.degree | Ph.D | en_US
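
The abstract refers to the non-differentiability of the absolute value |x| that appears in the absolute value equation (AVE) and in piecewise activations such as SReLU, and to replacing it with a parametrized smooth surrogate so that gradients exist everywhere. As a minimal sketch of that general idea only, the snippet below compares two standard smoothings of |x|: a square-root (Fischer-Burmeister-style) form and an aggregation (log-sum-exp) form, both of which tend to |x| as the smoothing parameter mu tends to 0. The function names and formulas here are illustrative assumptions; the thesis's actual AFB function combines the AGG and FB families in a way that is not reproduced in this record.

import numpy as np

def abs_sqrt_smooth(x, mu=0.1):
    """Square-root (Fischer-Burmeister-style) smoothing of |x|.

    Smooth everywhere, including x = 0, and approaches |x| as mu -> 0.
    Illustrative only; not the thesis's exact AFB formulation.
    """
    return np.sqrt(x * x + mu * mu)

def abs_agg_smooth(x, mu=0.1):
    """Aggregation (log-sum-exp / entropy) smoothing of |x| = max(x, -x).

    Also smooth at x = 0 and approaches |x| as mu -> 0.
    """
    # logaddexp keeps the computation numerically stable for large |x| / mu
    return mu * np.logaddexp(x / mu, -x / mu)

if __name__ == "__main__":
    x = np.linspace(-1.0, 1.0, 5)
    for mu in (0.5, 0.1, 0.01):
        err_sqrt = np.max(np.abs(abs_sqrt_smooth(x, mu) - np.abs(x)))
        err_agg = np.max(np.abs(abs_agg_smooth(x, mu) - np.abs(x)))
        print(f"mu={mu}: sqrt-smoothing error {err_sqrt:.3e}, "
              f"aggregation-smoothing error {err_agg:.3e}")

Both surrogates trade a small approximation error near x = 0 (on the order of mu) for differentiability at the breaking point, which is the property a smoothed piecewise activation relies on during backpropagation.
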
Appears in Collections: Faculty of Information Science and Technology / Fakulti Teknologi dan Sains Maklumat

Files in This Item:
File | Size | Format
PIECEWISE SMOOTHING AND LOSS FUNCTIONS BASED ON.pdf (Restricted Access) | 5.79 MB | Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.