
Pre-training with asynchronous supervised learning for reinforcement learning based autonomous driving


Authors: Yunpeng WANG; Kunxian ZHENG; Daxin TIAN; Xuting DUAN; Jianshan ZHOU

Affiliation: Beijing Advanced Innovation Center for Big Data and Brain Computing, School of Transportation Science and Engineering, Beihang University, Beijing 100191, China

Publication: Frontiers of Information Technology & Electronic Engineering

Year/Volume/Issue: 2021, Vol. 22, No. 5

Pages: 673-686


Subject classification: 12 [Management]; 1201 [Management: Management Science and Engineering (degrees awarded in Management or Engineering)]; 081104 [Engineering: Pattern Recognition and Intelligent Systems]; 08 [Engineering]; 0838 [Engineering: Public Security Technology]; 0835 [Engineering: Software Engineering]; 0811 [Engineering: Control Science and Engineering]; 0812 [Engineering: Computer Science and Technology (degrees awarded in Engineering or Science)]

Funding: Project supported by the National Natural Science Foundation of China (Nos. 61672082 and 61822101), the Beijing Municipal Natural Science Foundation, China (No. 4181002), and the Beihang University Innovation and Practice Fund for Graduate, China (No. YCSJ-02-2018-05)

Keywords: Self-driving; Autonomous vehicles; Reinforcement learning; Supervised learning

Abstract: Rule-based autonomous driving systems may suffer from increased complexity with large-scale intercoupled rules, so many researchers are exploring learning-based methods. Reinforcement learning (RL) has been applied in designing autonomous driving systems because of its outstanding performance on a wide variety of sequential control problems. However, poor initial performance is a major challenge to the practical implementation of an RL-based autonomous driving system. RL training requires extensive training data before the model achieves reasonable performance, making an RL-based model inapplicable in a real-world setting, particularly when data are expensive. We propose an asynchronous supervised learning (ASL) method for the RL-based end-to-end autonomous driving model to address the problem of poor initial performance before training this RL-based model in real-world settings. Specifically, prior knowledge is introduced in the ASL pre-training stage by asynchronously executing multiple supervised learning processes in parallel, on multiple driving demonstration data sets. After pre-training, the model is deployed on a real vehicle to be further trained by RL to adapt to the real environment and continuously break the performance limit. The presented pre-training method is evaluated on the race car simulator, TORCS (The Open Racing Car Simulator), to verify that it can be sufficiently reliable in improving the initial performance and convergence speed of an end-to-end autonomous driving model in the RL training stage. In addition, a real-vehicle verification system is built to verify the feasibility of the proposed pre-training method in a real-vehicle environment. The results show that using some demonstrations during a supervised pre-training stage allows significant improvements in initial performance and convergence speed in the RL training stage.
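
The abstract describes the core mechanism: several supervised learning processes run asynchronously in parallel, each on its own driving demonstration data set, and together they pre-train one shared policy that is later handed over to RL. The sketch below is a minimal, hypothetical Python illustration of that idea only; the linear policy, the synthetic demonstration sets, the thread-and-lock update scheme, and all hyperparameters are assumptions made for illustration, not the authors' end-to-end network, TORCS setup, or real-vehicle pipeline.

# Minimal sketch (assumption-laden) of asynchronous supervised pre-training:
# several workers, each holding its own demonstration data set, asynchronously
# update one shared policy by behaviour cloning before any RL fine-tuning.
import threading
import numpy as np

STATE_DIM, ACTION_DIM = 8, 2        # hypothetical sensor features -> (steering, throttle)
policy = {"W": np.zeros((STATE_DIM, ACTION_DIM))}   # shared policy parameters
lock = threading.Lock()             # serializes updates to the shared policy

# A fixed "expert driver" used only to generate synthetic demonstrations.
EXPERT_W = np.random.default_rng(0).normal(size=(STATE_DIM, ACTION_DIM))

def make_demo_set(seed, n=512):
    """One synthetic driving-demonstration data set: states and expert actions."""
    rng = np.random.default_rng(seed)
    states = rng.normal(size=(n, STATE_DIM))
    actions = states @ EXPERT_W + 0.05 * rng.normal(size=(n, ACTION_DIM))
    return states, actions

def sl_worker(states, actions, epochs=30, lr=0.05, batch=32):
    """One supervised learning process: mini-batch MSE regression (behaviour cloning)."""
    n = len(states)
    for _ in range(epochs):
        order = np.random.permutation(n)
        for start in range(0, n, batch):
            b = order[start:start + batch]
            with lock:                                   # asynchronous shared update
                pred = states[b] @ policy["W"]
                grad = states[b].T @ (pred - actions[b]) / len(b)
                policy["W"] -= lr * grad

if __name__ == "__main__":
    demo_sets = [make_demo_set(seed) for seed in (1, 2, 3, 4)]   # multiple demo sets
    workers = [threading.Thread(target=sl_worker, args=ds) for ds in demo_sets]
    for w in workers:
        w.start()
    for w in workers:
        w.join()

    # Check the pre-trained policy on a held-out demonstration set.
    s_test, a_test = make_demo_set(seed=99)
    mse = float(np.mean((s_test @ policy["W"] - a_test) ** 2))
    print(f"pre-trained policy MSE on held-out demonstrations: {mse:.4f}")
    # In the paper's pipeline, this pre-trained policy would now be trained
    # further with RL (in TORCS or on a real vehicle); that stage is omitted here.

In this sketch the lock merely interleaves the workers' updates on one shared parameter set; how the paper actually coordinates its asynchronous supervised learners and what network architecture it pre-trains are detailed in the paper itself.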
