Please use this identifier to cite or link to this item: https://ptsldigital.ukm.my/jspui/handle/123456789/513264
Title: Camera calibration and video stabilization models based on type-2 fuzzy logic for robot localization
Authors: Farshid Pirahansiah (P61111)
Supervisor: Siti Norul Huda Sheikh Abdullah, Assoc. Prof. Dr.
Keywords: Robot localization
Fuzzy logic
Camera calibration
Video stabilization
Robots
Issue Date: Nov-2016
Description: Two major issues in robot localization are camera calibration (CC) and video stabilization (VS). The effectiveness of CC depends heavily on the adjustment settings, image quality, and image gradient. Recent methods employ a fixed threshold with preset variables to compute pixel differences between frames and neglect slope information, which causes a blurring effect during image frame selection in the CC phase. In addition, contemporary optical flow requires expert manual setting of the Gaussian pyramid parameters, such as sigma, down-scale factor, and number of levels, which consumes considerable time and effort to train and measure. Apart from that, the key localization challenges in humanoid stereo vision are large motion, motion blur, and defocus blur. Although state-of-the-art approaches use landmark recognition and probabilistic models to overcome these issues, localization accuracy remains poor owing to image distortion. Therefore, this work proposes CC and VS (via optical flow) models based on Type-2 fuzzy logic, together with a robot localization framework built on the two proposed methods, called Fuzzy CC (FCC) and Fuzzy Optical Flow (FOF). FCC employs Type-2 fuzzy modeling to select suitable images using an image quality assessment function and optimal slope recognition. Next, an adaptive setting of the Gaussian pyramid parameters is proposed using Type-2 fuzzy modeling. In the final step, the proposed robot localization framework combines the two methods above with the triangulation concept. The proposed FCC method achieved a better re-projection error than Zhang's method, about 0.85 versus 2.62 respectively, on a self-collected dataset, whereas FCC versus Ferstl scored approximately 0.21 and 0.24 respectively on a time-of-flight camera dataset. The proposed FOF method ranked second against state-of-the-art methods such as Farneback, Brox (GPU), LK (GPU), Farneback (GPU), Dual_TVL1, and SimpleFlow on the SINTEL benchmark datasets. Finally, the proposed stereo vision localization framework also outperformed the mono vision method, with distance errors of about 4.07 cm and 61.07 cm respectively. The "Certification of Master's/Doctoral Thesis" is not available.
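As an illustration of the Gaussian pyramid parameters named in the abstract (sigma, down-scale factor, and number of levels), the following minimal Python/OpenCV sketch computes dense optical flow with those parameters fixed by hand; the thesis's FOF method instead adapts them with a Type-2 fuzzy model. The file names and fixed values below are illustrative assumptions, not taken from the thesis.

    # Minimal sketch (not the thesis code): pyramid-based dense optical flow
    # whose pyramid parameters the FOF method would tune adaptively.
    import cv2
    import numpy as np

    prev_frame = cv2.imread("frame_0.png", cv2.IMREAD_GRAYSCALE)  # assumed file names
    next_frame = cv2.imread("frame_1.png", cv2.IMREAD_GRAYSCALE)

    pyr_scale = 0.5    # down-scale factor between pyramid levels (assumed value)
    levels = 3         # number of pyramid levels (assumed value)
    poly_sigma = 1.2   # Gaussian sigma for derivative smoothing (assumed value)

    # Farneback dense optical flow (OpenCV); positional arguments are:
    # prev, next, flow, pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags
    flow = cv2.calcOpticalFlowFarneback(
        prev_frame, next_frame, None,
        pyr_scale, levels, 15, 3, 5, poly_sigma, 0)

    # Mean flow magnitude: a simple proxy for the amount of inter-frame motion.
    magnitude = np.linalg.norm(flow, axis=2)
    print("mean flow magnitude (pixels):", float(magnitude.mean()))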
Pages: 171
Call Number: TJ211.419.P537 2016 3 tesis
Publisher: UKM, Bangi
Appears in Collections: Faculty of Information Science and Technology / Fakulti Teknologi dan Sains Maklumat

Files in This Item:
File: ukmvital_96640+SOURCE1+SOURCE1.0.PDF (Restricted Access)
Size: 505.37 kB
Format: Adobe PDF

