Model-Free Swing-Up and Balance Control of a Rotary Inverted Pendulum using the TD3 Algorithm: Simulation and Experiments

Authors

  • Trong-Nguyen Ho Faculty of Electrical and Electronics Engineering, Ho Chi Minh City University of Technology and Education, Ho Chi Minh City, Vietnam
  • Van-Dong-Hai Nguyen Department of Automation and Control, Faculty of Electrical and Electronics Engineering, Ho Chi Minh City University of Technology and Education, Ho Chi Minh City, Vietnam
Volume: 15 | Issue: 1 | Pages: 19316-19323 | February 2025 | https://doi.org/10.48084/etasr.9335

Abstract

The Rotary Inverted Pendulum (RIP) system is a highly nonlinear and under-actuated mechanical system, which presents significant challenges for traditional control techniques. In recent years, Reinforcement Learning (RL) has emerged as a prominent nonlinear control technique, demonstrating efficacy in regulating systems with intricate dynamics and pronounced nonlinearity. This research presents a novel approach to the swing-up and balance control of the RIP system, employing an RL algorithm, Twin Delayed Deep Deterministic Policy Gradient (TD3), obviating the need for a predefined mathematical model. The physical model of the RIP was designed in SolidWorks and subsequently transferred to MATLAB Simscape and Simulink for training the RL agent. The system was successfully trained to perform both swing-up and balance control with a single algorithm, a significant innovation that eliminates the need for two or more separate controllers. Additionally, the trained agent was successfully deployed onto an experimental model, with the results demonstrating the feasibility and effectiveness of the model-free TD3 approach in controlling under-actuated mechanical systems with complex dynamics, such as the RIP. Furthermore, the results highlight the sim-to-real transfer capability of this method.
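The two mechanisms that distinguish TD3 from DDPG, clipped double-Q learning and target policy smoothing, can be illustrated compactly. The following is a minimal NumPy sketch of how a TD3 bootstrap target is formed, not the authors' MATLAB/Simscape implementation; the linear critics and the `tanh` stand-in for the target actor are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny critics: linear in (state, action), purely for illustration.
w1 = rng.normal(size=3)
w2 = rng.normal(size=3)

def q1(s, a):
    # First critic's value estimate for state s (2-dim) and scalar action a.
    return w1 @ np.array([s[0], s[1], a])

def q2(s, a):
    # Second (twin) critic's value estimate.
    return w2 @ np.array([s[0], s[1], a])

def target_action(s, noise_std=0.2, noise_clip=0.5, a_max=1.0):
    # Target policy smoothing: add clipped Gaussian noise to the
    # target actor's action, then clip to the actuator limits.
    a = np.tanh(s.sum())  # stand-in for the target actor network
    eps = np.clip(rng.normal(scale=noise_std), -noise_clip, noise_clip)
    return np.clip(a + eps, -a_max, a_max)

def td3_target(s_next, r, done, gamma=0.99):
    # Clipped double-Q target: bootstrap from the MINIMUM of the two
    # critics, which counteracts the overestimation bias of DDPG.
    a_next = target_action(s_next)
    q_min = min(q1(s_next, a_next), q2(s_next, a_next))
    return r + gamma * (1.0 - done) * q_min
```

The third TD3 ingredient, delayed policy updates, simply means the actor (and the target networks) are updated once for every few critic updates, which is a training-loop scheduling choice rather than a formula.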

Keywords:

Rotary Inverted Pendulum (RIP), Reinforcement Learning (RL), Twin Delayed Deep Deterministic Policy Gradient (TD3), model-free control, swing-up control, balance control, SolidWorks, MATLAB, Simscape, Simulink

References

B. S. Cazzolato and Z. Prime, "On the Dynamics of the Furuta Pendulum," Journal of Control Science and Engineering, vol. 2011, no. 1, 2011, Art. no. 528341.

Z.-P. Jiang, "Controlling Underactuated Mechanical Systems: A Review and Open Problems," in Advances in the Theory of Control, Signals and Systems with Physical Modeling, J. Lévine and P. Müllhaupt, Eds. Berlin, Heidelberg: Springer, 2011, pp. 77–88.

C. Sravan Bharadwaj, T. Sudhakar Babu, and N. Rajasekar, "Tuning PID Controller for Inverted Pendulum Using Genetic Algorithm," in Advances in Systems, Control and Automation: ETAEERE-2016, A. Konkani, R. Bera, and S. Paul, Eds. Singapore: Springer, 2018, pp. 395–404.

K. Chhabra and Mohd. Rihan, "Design of linear quadratic regulator for rotary inverted pendulum using LabVIEW," in 2016 International Conference on Advances in Computing, Communication, & Automation (ICACCA) (Spring), Dehradun, India, Apr. 2016, pp. 1–5.

G. Rigatos, P. Siano, M. Abbaszadeh, and S. Ademi, "Nonlinear H-infinity control for the rotary pendulum," in 2017 11th International Workshop on Robot Motion and Control (RoMoCo), Wasowo Palace, Poland, Jul. 2017, pp. 217–222.

I. S. Trenev and D. D. Devyatkin, "Feedback Linearization Control of Nonlinear System," Engineering Proceedings, vol. 33, no. 1, 2023, Art. no. 36.

T. V. A. Nguyen and N. H. Tran, "A T-S Fuzzy Approach with Extended LMI Conditions for Inverted Pendulum on a Cart," Engineering, Technology & Applied Science Research, vol. 14, no. 1, pp. 12670–12675, Feb. 2024.

P. Dev, S. Jain, P. Kumar Arora, and H. Kumar, "Machine learning and its impact on control systems: A review," Materials Today: Proceedings, vol. 47, pp. 3744–3749, Jan. 2021.

M. Jin and J. Lavaei, "Stability-certified reinforcement learning: A control-theoretic perspective." arXiv, Oct. 26, 2018.

F. L. Lewis and D. Vrabie, "Reinforcement learning and adaptive dynamic programming for feedback control," IEEE Circuits and Systems Magazine, vol. 9, no. 3, pp. 32–50, 2009.

J. Moos, K. Hansel, H. Abdulsamad, S. Stark, D. Clever, and J. Peters, "Robust Reinforcement Learning: A Review of Foundations and Recent Advances," Machine Learning and Knowledge Extraction, vol. 4, no. 1, pp. 276–315, Mar. 2022.

S. Mosharafian, S. Afzali, Y. Bao, and J. M. Velni, "A Deep Reinforcement Learning-based Sliding Mode Control Design for Partially-known Nonlinear Systems," in 2022 European Control Conference (ECC), London, UK, Jul. 2022, pp. 2241–2246.

M. Ran, J. Li, and L. Xie, "Reinforcement-Learning-Based Disturbance Rejection Control for Uncertain Nonlinear Systems," IEEE Transactions on Cybernetics, vol. 52, no. 9, pp. 9621–9633, Sep. 2022.

A. F. ud Din et al., "Deep Reinforcement Learning for Integrated Non-Linear Control of Autonomous UAVs," Processes, vol. 10, no. 7, Jul. 2022, Art. no. 1307.

Z. Zhang, Z. Mo, Y. Chen, and J. Huang, "Reinforcement Learning Behavioral Control for Nonlinear Autonomous System," IEEE/CAA Journal of Automatica Sinica, vol. 9, no. 9, pp. 1561–1573, Sep. 2022.

R. Özalp, N. K. Varol, B. Taşci, and A. Uçar, "A Review of Deep Reinforcement Learning Algorithms and Comparative Results on Inverted Pendulum System," in Machine Learning Paradigms: Advances in Deep Learning-based Technological Applications, G. A. Tsihrintzis and L. C. Jain, Eds. Cham, Switzerland: Springer International Publishing, 2020, pp. 237–256.

N. Mellatshahi, S. Mozaffari, M. Saif, and S. Alirezaee, "Inverted Pendulum Control with a Robotic Arm using Deep Reinforcement Learning," in 2021 International Symposium on Signals, Circuits and Systems (ISSCS), Iasi, Romania, Jul. 2021, pp. 1–6.

Z. Ben Hazem, "Study of Q-learning and deep Q-network learning control for a rotary inverted pendulum system," Discover Applied Sciences, vol. 6, no. 2, Feb. 2024, Art. no. 49.

R. S. Bhourji, S. Mozaffari, and S. Alirezaee, "Reinforcement Learning DDPG–PPO Agent-Based Control System for Rotary Inverted Pendulum," Arabian Journal for Science and Engineering, vol. 49, no. 2, pp. 1683–1696, Feb. 2024.

M. Safeea and P. Neto, "A Q-learning approach to the continuous control problem of robot inverted pendulum balancing," Intelligent Systems with Applications, vol. 21, Mar. 2024, Art. no. 200313.

R. S. Sutton and A. G. Barto, Reinforcement Learning: An Introduction. Cambridge, MA: MIT Press, 1998.

S. Fujimoto, H. van Hoof, and D. Meger, "Addressing Function Approximation Error in Actor-Critic Methods." arXiv, Oct. 22, 2018.

T. P. Lillicrap et al., "Continuous control with deep reinforcement learning." arXiv, Jul. 05, 2019.

How to Cite

[1]
Ho, T.-N. and Nguyen, V.-D.-H. 2025. Model-Free Swing-Up and Balance Control of a Rotary Inverted Pendulum using the TD3 Algorithm: Simulation and Experiments. Engineering, Technology & Applied Science Research. 15, 1 (Feb. 2025), 19316–19323. DOI:https://doi.org/10.48084/etasr.9335.
