Leveraging Deep Reinforcement Learning for Effective PI Controller Tuning in Industrial Water Tank Systems
Received: 13 November 2024 | Revised: 9 December 2024 and 27 December 2024 | Accepted: 29 December 2024 | Online: 2 February 2025
Corresponding author: Muthukumarasamy Manimozhi
Abstract
This paper addresses the level control problem in water tank systems by proposing a Deep Deterministic Policy Gradient (DDPG) algorithm to automatically tune the parameters of a Proportional-Integral (PI) controller. Integrating the PI controller with the DDPG algorithm leverages the strengths of both methods, enabling the algorithm to learn optimal controller gains through exploration of the state-action space and reward feedback from the system. The proposed approach eliminates manual tuning, automates gain adaptation to varying system states, and ensures robust performance under uncertainties and disturbances. The validation results demonstrate that the DDPG-tuned PI controller outperforms a controller tuned manually with the PID Tuner app in Simulink, achieving no overshoot, faster settling times, and enhanced robustness. These findings highlight the potential of Reinforcement Learning (RL) for adaptive control in industrial applications, particularly for systems operating in dynamic and uncertain environments.
Keywords:
DDPG algorithm, level control, PI controller, tuning, reinforcement learning, water tank system
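To make the tuning loop described in the abstract concrete, the sketch below shows its structure in Python rather than the paper's MATLAB/Simulink setup: a simulated tank episode scores a candidate pair of (Kp, Ki) gains with a reward equal to the negative integrated absolute error, and an agent improves the gains from that reward signal. The tank constants, setpoint, and the random-search update are all illustrative assumptions, not taken from the paper; in the paper, the improvement step is performed by a DDPG actor-critic agent rather than the greedy perturbation used here.

```python
import numpy as np

# Assumed first-order tank model: area * dh/dt = k_in * u - k_out * sqrt(h).
# All constants are illustrative, not taken from the paper.
def episode_reward(kp, ki, setpoint=10.0, dt=0.1, t_end=60.0,
                   area=2.0, k_in=1.0, k_out=0.2):
    """Simulate one closed-loop episode; return negative integrated |error|."""
    h = 0.0       # tank level
    integ = 0.0   # integral of the error (PI state)
    cost = 0.0
    for _ in range(int(t_end / dt)):
        err = setpoint - h
        integ += err * dt
        u = np.clip(kp * err + ki * integ, 0.0, 1.0)  # PI law with valve saturation
        dh = (k_in * u - k_out * np.sqrt(max(h, 0.0))) / area
        h = max(h + dh * dt, 0.0)
        cost += abs(err) * dt
    return -cost  # higher reward = better tracking

# Placeholder policy improvement: perturb (Kp, Ki) and keep improvements.
# A DDPG agent would instead train an actor network against a critic to
# propose gains from the observed state.
rng = np.random.default_rng(seed=0)
gains = np.array([1.0, 0.1])  # initial (Kp, Ki) guess
best = episode_reward(*gains)
for _ in range(200):
    trial = np.clip(gains + rng.normal(0.0, [0.2, 0.02]), 1e-3, None)
    reward = episode_reward(*trial)
    if reward > best:
        gains, best = trial, reward
print(f"Tuned Kp={gains[0]:.3f}, Ki={gains[1]:.3f}, reward={best:.2f}")
```

With the assumed constants, the accepted gains typically settle within a few dozen episodes. The key difference in the paper's approach is that a DDPG agent conditions its output on the observed state (e.g., the error and its integral), which is what enables the gain adaptation to varying system states that the abstract highlights.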
License
Copyright (c) 2025 Vijaya Lakshmi Korupu, Muthukumarasamy Manimozhi

This work is licensed under a Creative Commons Attribution 4.0 International License.