Enhancing Navigation Efficiency in Robotics with PRM-DDPG
Received: 29 March 2025 | Revised: 18 April 2025 | Accepted: 24 April 2025 | Online: 17 May 2025
Corresponding author: Abbas Nadhim Kadhim
Abstract
This study presents a new path-planning approach for mobile robots. It combines the large-scale, global planning power of Probabilistic Roadmaps (PRM) with the local, flexible decision-making power of Deep Reinforcement Learning (DRL): PRM handles waypoint generation, while Deep Deterministic Policy Gradient (DDPG) handles real-time obstacle avoidance. By integrating these two approaches, the proposed PRM-DDPG algorithm significantly enhances the robot's navigation capabilities, allowing it to handle both structured and complex environments effectively. In the performed simulations, PRM-DDPG outperforms sampling-based methods such as PRM and RRT in terms of path length, time efficiency, and obstacle avoidance, especially in difficult environments. In addition, the PRM-DDPG algorithm produced the shortest path of 27.0182 m with only six corners, whereas methods such as ID3QN and the Genetic Algorithm (GA) produced longer paths with more corners; fewer corners indicate a smoother and more direct path. The results show that combining PRM and DDPG yields paths that are faster and smoother than those produced by classical or purely learning-based methods alone. The proposed PRM-DDPG algorithm is expected to advance mobile robotics by enabling smarter, more flexible, and more effective autonomous navigation systems for real-world applications.
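As an illustration of the two-layer structure described in the abstract, the following minimal Python sketch builds a PRM over a toy 2D map with disc obstacles, extracts global waypoints with a shortest-path search, and tracks them with a simple proportional controller standing in for the trained DDPG actor. The map size, obstacle layout, sample count, and controller gain are illustrative assumptions, not the settings used in the paper.

```python
# Minimal sketch of the PRM-DDPG structure: a PRM layer supplies global
# waypoints, and a local controller (here a simple proportional rule
# standing in for a trained DDPG actor) tracks them. All values below
# (map size, obstacles, sample count, gains) are illustrative assumptions.
import numpy as np
import networkx as nx

RNG = np.random.default_rng(0)
OBSTACLES = [((4.0, 4.0), 1.5), ((7.0, 2.0), 1.0)]      # disc obstacles: (center, radius)
START, GOAL = np.array([0.5, 0.5]), np.array([9.5, 9.5])

def in_collision(x):
    return any(np.linalg.norm(x - np.array(c)) <= r for c, r in OBSTACLES)

def edge_free(p, q, step=0.05):
    """Check the straight segment p -> q against the disc obstacles."""
    return all(not in_collision(p + t * (q - p)) for t in np.arange(0.0, 1.0 + step, step))

def build_prm(n_samples=200, k=8):
    """Sample collision-free configurations and connect k nearest neighbours."""
    nodes = [START, GOAL]
    while len(nodes) < n_samples:
        x = RNG.uniform(0.0, 10.0, size=2)
        if not in_collision(x):
            nodes.append(x)
    graph = nx.Graph()
    for i, p in enumerate(nodes):
        dists = [float(np.linalg.norm(p - q)) for q in nodes]
        for j in np.argsort(dists)[1 : k + 1]:           # skip index 0 (the node itself)
            if edge_free(p, nodes[j]):
                graph.add_edge(i, int(j), weight=dists[j])
    return nodes, graph

def local_policy(pos, waypoint, v_max=0.3):
    """Stand-in for the trained DDPG actor: in the actual method a neural
    network maps local observations to a continuous velocity command."""
    d = waypoint - pos
    dist = np.linalg.norm(d)
    return v_max * d / dist if dist > 1e-6 else np.zeros(2)

nodes, graph = build_prm()
waypoint_ids = nx.shortest_path(graph, source=0, target=1, weight="weight")
waypoints = [nodes[i] for i in waypoint_ids]             # global plan from the PRM layer

pos = START.copy()
for wp in waypoints[1:]:                                 # local layer tracks each waypoint
    while np.linalg.norm(wp - pos) > 0.2:
        pos = pos + local_policy(pos, wp)
print("reached goal region near", pos.round(2))
```

In the actual method, a trained DDPG actor would replace local_policy and react to obstacles sensed between waypoints, which is what gives the hybrid its local flexibility on top of the PRM's global plan.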
Keywords:
path planning, mobile robots, PRM, DDPG, DRL, PRM-DDPG
License
Copyright (c) 2025 Abbas Nadhim Kadhim, Muhammed Sabri Salim

This work is licensed under a Creative Commons Attribution 4.0 International License.