彭云建, 梁进. 基于探索-利用权衡优化的 Q 学习路径规划[J]. 计算机技术与发展, 2022, 32(04): 1-7. DOI: 10.3969/j.issn.1673-629X.2022.04.001
PENG Yun-jian, LIANG Jin. Q-learning Path Planning Based on Exploration/Exploitation Tradeoff Optimization[J]. Computer Technology and Development, 2022, 32(04): 1-7. DOI: 10.3969/j.issn.1673-629X.2022.04.001

基于探索-利用权衡优化的 Q 学习路径规划 (Q-learning Path Planning Based on Exploration/Exploitation Tradeoff Optimization)

《计算机技术与发展》(Computer Technology and Development) [ISSN: 1006-6977 / CN: 61-1281/TN]

Volume: 32
Issue: 2022, No. 04
Pages: 1-7
Column: Artificial Intelligence
Publication Date: 2022-04-10

Article Info

Title:
Q-learning Path Planning Based on Exploration / Exploitation Tradeoff Optimization
Article No.:
1673-629X(2022)04-0001-07
Author(s):
PENG Yun-jian (彭云建), LIANG Jin (梁进)
School of Automation Science and Engineering, South China University of Technology, Guangzhou 510640, China
Keywords:
reinforcement learning; Q-learning; exploration/exploitation; path planning; unknown environment
CLC Number:
TP391
DOI:
10.3969/j.issn.1673-629X.2022.04.001
Abstract:
To address the path planning problem of a mobile agent in an unknown environment, a Q-learning path planning method based on exploration/exploitation tradeoff optimization is proposed. For the exploration/exploitation tradeoff inherent in reinforcement learning, two techniques are introduced: the εDBE (ε-decreasing based episodes) method, in which the exploration greedy coefficient ε decays smoothly with the number of learning episodes, and the AεBS (adaptive ε based state) method, which judges how unfamiliar or familiar the reached state is from its state-action values in the Q-table and chooses to explore or exploit accordingly. This improvement determines when exploration or exploitation is triggered, avoids both over-exploration and over-exploitation, and speeds up finding the optimal path. In an unknown environment, the proposed method is compared in simulation with classical Q-learning path planning. The results show that the agent using the proposed method learns and adapts quickly in an environment with unknown obstacles, the number of steps of the optimal path converges faster, and path planning is achieved more efficiently, which verifies the feasibility and efficiency of the method.
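The abstract only sketches the two mechanisms, so the Python snippet below is a minimal sketch of how εDBE-style episode-based ε decay and an AεBS-style state-familiarity check could be combined in tabular Q-learning. The exponential decay schedule, the all-near-zero "unfamiliar state" test, and the names and constants (eps_start, decay, unfamiliar_tol, the toy transition) are assumptions for illustration, not the authors' exact formulas.

```python
import numpy as np


def epsilon_dbe(episode, eps_start=0.9, eps_end=0.05, decay=0.01):
    """εDBE idea: exploration rate decays smoothly with the episode index.
    The exponential schedule and constants are illustrative assumptions."""
    return eps_end + (eps_start - eps_end) * np.exp(-decay * episode)


def select_action(Q, state, episode, rng, unfamiliar_tol=1e-6):
    """AεBS idea: if the reached state looks unfamiliar (its Q-table row is
    still all near zero, i.e. never meaningfully updated), force exploration;
    otherwise fall back to εDBE-scheduled ε-greedy selection."""
    q_row = Q[state]
    n_actions = q_row.shape[0]
    if np.all(np.abs(q_row) < unfamiliar_tol):        # unfamiliar state -> explore
        return int(rng.integers(n_actions))
    if rng.random() < epsilon_dbe(episode):           # familiar state, occasional exploration
        return int(rng.integers(n_actions))
    return int(np.argmax(q_row))                      # familiar state -> exploit


# Minimal usage with a tabular Q-learning update (toy sizes, placeholder transition):
rng = np.random.default_rng(0)
n_states, n_actions = 25, 4
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.9                               # learning rate, discount factor

state, episode = 0, 3
action = select_action(Q, state, episode, rng)
next_state, reward = 1, -1.0                          # placeholder environment step
Q[state, action] += alpha * (reward + gamma * np.max(Q[next_state]) - Q[state, action])
```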

Last Update: 2022-04-10