Please use this identifier to cite or link to this item: https://ptsldigital.ukm.my/jspui/handle/123456789/772410
Title: Autonomous driving system based on deep recurrent Q-Network and spatial CNN in uncertain environment
Authors: Monirul Islam Pavel, P104619
Supervisor: Tan Siok Yee, Dr.
Keywords: Universiti Kebangsaan Malaysia -- Dissertations
Dissertations, Academic -- Malaysia
Automobiles
Intelligent control systems
Issue Date: 29-Sep-2022
Abstract: Autonomous vehicle systems (AVS) are an active research area in science and technology, with significant implications for social and economic advancement, road safety, and the future of transportation. Recent advances in technology, together with progress on safe driving, reliability, and security, are bringing autonomous vehicles closer to reality. Despite the growing technological focus on AVS in the era of IR 4.0, a fully automated (Level 5) system has not yet been reached, owing to the lack of human-like decision-making capability in uncertain driving situations and to dependence on expensive sensors. Moreover, existing decision-making algorithms based on data-driven policies cannot be guaranteed to be safe, because changing landscapes and unexpected vehicle behaviour can lead to accidents. In this dissertation, a completely driverless driving simulation system is proposed that applies a deep recurrent Q-network (DRQN) algorithm with a spatial convolutional neural network (SCNN) to build a custom model and weights from an uncertain environment for reliable decision-making. The integrated SCNN agent improves on a standard CNN by enabling explicit and effective spatial information propagation between neurons in the same layer, which is especially effective for long, continuous, structured objects with strong spatial relationships. DRQN, in turn, is a reinforcement learning algorithm that replaces the last fully connected layer of a deep Q-network with a recurrent LSTM layer of the same size, combined with a replay memory; the LSTM outputs become the Q-values after passing through a final fully connected layer. The SCNN extracts spatial features and deeper representations through an attention filter that directs the DRQN's convolution kernels to the regions of interest, which reduces the dimensionality of the visual frames as well as the computational cost. The proposed model, concatenating the SCNN agent with the DRQN algorithm, is introduced in the CARLA simulation environment to overcome the limitations of traditional deep learning algorithms, which are inefficient in unknown driving environments and in practical driving decision making. The integrated SCNN agent improves the DRQN architecture with less parameter tuning and an enhanced long field of view, in conjunction with stored experience and an optimised reward function whose action and state-space analysis of earlier failures improves safety and tackles collision rates without depending on a pre-trained dataset. The proposed architecture obtained 92.30% average accuracy and 97.14% collision-free episodes, surpassing the NoCrash benchmark with a driving score of 65 and stable reward values in the best-case scenario within 5 million steps. The proposed DRQN-SCNN hybrid model therefore has the potential to serve as a baseline for driving decision making that reduces collision rates across the four driving scenarios addressed by the research gaps (acceleration, lane departure, turning and intersection, roundabout) without sensor dependency, especially in an uncertain driving simulation environment.
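Note: the DRQN head described in the abstract (a recurrent LSTM layer replacing the last fully connected layer of a deep Q-network, with a final fully connected layer producing the Q-values) can be illustrated with a minimal PyTorch sketch. This is not the dissertation's code; the feature dimension, hidden size, and number of driving actions are illustrative assumptions, and a dummy tensor stands in for the SCNN feature extractor.

import torch
import torch.nn as nn

class DRQNHead(nn.Module):
    """Recurrent Q-value head: an LSTM in place of the last fully connected layer."""
    def __init__(self, feature_dim=512, hidden_dim=512, num_actions=9):
        super().__init__()
        # LSTM of the same size as the fully connected layer it replaces
        self.lstm = nn.LSTM(feature_dim, hidden_dim, batch_first=True)
        # Final fully connected layer maps LSTM outputs to Q-values
        self.q = nn.Linear(hidden_dim, num_actions)

    def forward(self, features, hidden=None):
        # features: (batch, time, feature_dim) from the SCNN/CNN feature extractor
        out, hidden = self.lstm(features, hidden)
        return self.q(out), hidden

# Dummy spatial features standing in for the SCNN output (batch=2, 8 time steps)
q_values, state = DRQNHead()(torch.randn(2, 8, 512))
print(q_values.shape)  # torch.Size([2, 8, 9])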
Description: Partial
Pages: 151
Publisher: UKM, Bangi
Appears in Collections:Faculty of Information Science and Technology / Fakulti Teknologi dan Sains Maklumat

Files in This Item:
File: MONIRUL ISLAM.pdf (Restricted Access), 912.99 kB, Adobe PDF

