AN OPTIMAL ADAPTIVE CONTROLLER BASED ON ONLINE ACTOR-CRITIC LEARNING FOR A ROBOTIC MANIPULATOR

Authors

  • Patrícia Helena Moraes Rêgo
  • Joelson Miller Bezerra de Sousa

DOI:

https://doi.org/10.56238/bocav25n74-031

Keywords:

Robotic Manipulator, Adaptive Control, Optimal Control, Reinforcement Learning, Actor-Critic Scheme

Abstract

Uncertainties in the parameters of a robotic manipulator can significantly degrade its performance, causing steady-state and trajectory-tracking errors. Adaptive controllers are a good alternative for such systems, since their defining feature is the ability to learn online through real-time parameter estimation. However, adaptive controllers are not usually designed to be optimal with respect to specified performance criteria, and are therefore unsuitable for applications where the optimal use of resources is highly desirable, such as humanoid and service robots. This article presents the design and performance evaluation of a controller for a robotic manipulator that combines features of adaptive and optimal control. Specifically, the proposed control scheme is implemented as an actor-critic structure within the reinforcement learning framework, which characterizes the design as an approach independent of a plant model. Unlike other actor-critic systems that use two separate neural networks, one to approximate the value function and another to learn control actions, this scheme defines a single neural network, reducing the number of parameters to be estimated. Simulation results demonstrate the desired performance of the proposed controller acting on a two-degree-of-freedom manipulator with revolute joints.
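The abstract describes, but does not spell out, how a single approximator can serve as both critic and actor. One common way to realize this idea in a model-free, linear-quadratic setting (in the spirit of the Q-learning-based works cited below, e.g., Yaghmaie, Gustafsson and Ljung, 2023) is to estimate a single quadratic Q-function parameter vector online with recursive least squares and derive the control law analytically from those same parameters. The sketch below illustrates only that general pattern; the toy plant, gains, feature construction, and exploration noise are hypothetical placeholders, not the authors' implementation.

```python
# Illustrative sketch only: a single-approximator actor-critic scheme in which
# one parameter vector theta plays both roles, critic (Q-function) and actor
# (control law derived from the same parameters). All numbers are placeholders.
import numpy as np

# Hypothetical discrete-time linear plant standing in for a (linearized)
# manipulator; the learning update below never uses A or B (model-free).
A = np.array([[1.0, 0.05], [0.0, 0.95]])
B = np.array([[0.0], [0.05]])
n, m = 2, 1

Q_cost = np.eye(n)          # state penalty
R_cost = 0.1 * np.eye(m)    # control penalty
gamma = 0.99                # discount factor

def quad_features(x, u):
    """Quadratic basis phi(x, u) so that Q(x, u) = theta^T phi(x, u)."""
    z = np.concatenate([x, u])
    idx = np.triu_indices(n + m)          # upper-triangular products of z
    return np.outer(z, z)[idx]

p = quad_features(np.zeros(n), np.zeros(m)).size
theta = np.zeros(p)                        # single parameter vector
P_rls = 100.0 * np.eye(p)                  # RLS covariance

def kernel_from_theta(theta):
    """Rebuild the symmetric kernel H with z^T H z = theta^T phi(x, u)."""
    H = np.zeros((n + m, n + m))
    H[np.triu_indices(n + m)] = theta
    return 0.5 * (H + H.T)

def greedy_action(x, theta):
    """Actor: minimize Q over u, giving u = -H_uu^{-1} H_ux x."""
    H = kernel_from_theta(theta)
    Huu = H[n:, n:] + 1e-6 * np.eye(m)     # regularized for early iterations
    Hux = H[n:, :n]
    return -np.linalg.solve(Huu, Hux @ x)

x = np.array([1.0, -0.5])
for k in range(500):
    u = greedy_action(x, theta) + 0.1 * np.random.randn(m)  # exploration
    cost = x @ Q_cost @ x + u @ R_cost @ u
    x_next = A @ x + B @ u

    # Critic: temporal-difference update of theta via recursive least squares.
    u_next = greedy_action(x_next, theta)
    psi = quad_features(x, u) - gamma * quad_features(x_next, u_next)
    err = cost - theta @ psi               # TD residual
    g = P_rls @ psi / (1.0 + psi @ P_rls @ psi)
    theta = theta + g * err
    P_rls = P_rls - np.outer(g, psi @ P_rls)
    x = x_next
```

Note the design point the abstract emphasizes: only one parameter vector is estimated, since the greedy action is recovered analytically from the critic's own parameters instead of being learned by a second network.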

References

ABBAS, Z. Motion control of robotic arm manipulator using PID and sliding mode technique. 2018. Thesis (PhD in Electrical Engineering) – Capital University of Science and Technology, Islamabad, 2018.

AL-OLIMAT, K. S.; GHANDAKLY, A. A. Multiple model reference adaptive control algorithm using on-line fuzzy logic adjustment and its application to robotic manipulators. In: Conference Record of the 2002 IEEE Industry Applications Conference. 37th IAS Annual Meeting, Pittsburgh, PA, USA, 2002, p. 1463-1466. DOI: https://doi.org/10.1109/IAS.2002.1042748

ALQAUDI, B. et al. Model reference adaptive impedance control for physical human-robot interaction. Control Theory and Technology, v. 14, p. 68-82, Feb. 2016. DOI: https://doi.org/10.1007/s11768-016-5138-2

BHATNAGAR, S. et al. Natural actor-critic algorithms. Automatica, v. 45, n. 11, p. 2471-2482, Nov. 2009. DOI: https://doi.org/10.1016/j.automatica.2009.07.008

BORASE, R. P. et al. A review of PID control, tuning methods and applications. International Journal of Dynamics and Control, v. 9, p. 818-827, 2021. DOI: https://doi.org/10.1007/s40435-020-00665-4

CAO, S. et al. Reinforcement learning-based fixed-time trajectory tracking control for uncertain robotic manipulators with input saturation. IEEE Transactions on Neural Networks and Learning Systems, v. 34, n. 8, p. 4584-4595, Aug. 2023. DOI: https://doi.org/10.1109/TNNLS.2021.3116713

CHEN, L.; DAI, S.-L.; DONG, C. Adaptive optimal tracking control of an underactuated surface vessel using actor–critic reinforcement learning. IEEE Transactions on Neural Networks and Learning Systems, v. 35, n. 6, p. 7520-7533, Jun. 2024a. DOI: https://doi.org/10.1109/TNNLS.2022.3214681

CHEN, L.; DONG, C.; DAI, S.-L. Adaptive optimal consensus control of multiagent systems with unknown dynamics and disturbances via reinforcement learning. IEEE Transactions on Artificial Intelligence, v. 5, n. 5, p. 2193-2203, May 2024b. DOI: https://doi.org/10.1109/TAI.2023.3324895

CHEN, W.-D. Experimental study of robot manipulators based on robust adaptive control. In: International Conference on Machine Learning and Cybernetics, Guangzhou, China, 2005, p. 18-21.

CLEGG, A. C.; DUNNIGAN, M. W.; LANE, D. M. Self-tuning position and force control of an underwater hydraulic manipulator. In: Proceedings 2001 ICRA. IEEE International Conference on Robotics and Automation, Seoul, Korea (South), May 2001, p. 3226-3231. DOI: https://doi.org/10.1109/ROBOT.2001.933115

CRAIG, J. J. Introduction to robotics: mechanics and control. 4th ed., Global Edition. São Paulo: Pearson, 2021.

DUBOWSKY, S.; DESFORGES, D. T. The application of model-referenced adaptive control to robotic manipulators. Journal of Dynamic Systems, Measurement, and Control, v. 101, n. 3, p. 193-200, Sep. 1979. DOI: https://doi.org/10.1115/1.3426424

FATEH, S.; FATEH, M. M. Adaptive fuzzy control of robot manipulators with asymptotic tracking performance. Journal of Control, Automation and Electrical Systems, v. 31, p. 52-61, Oct. 2019. DOI: https://doi.org/10.1007/s40313-019-00496-5

FERREIRA, E. F. M.; RÊGO, P. H. M.; NETO, J. V. F. Numerical stability improvements of state-value function approximations based on RLS learning for online HDP-DLQR control system design. Engineering Applications of Artificial Intelligence, v. 63, p. 1-19, Aug. 2017. DOI: https://doi.org/10.1016/j.engappai.2017.04.017

FREIRE, E. O.; ROSSOMANDO, F. G.; SORIA, C. M. Self-tuning of a neuro-adaptive PID controller for a SCARA robot based on neural network. IEEE Latin America Transactions, v. 16, n. 5, p. 1364-1374, Jul. 2018. DOI: https://doi.org/10.1109/TLA.2018.8408429

GUO, X.; YAN, W.; CUI, R. Reinforcement learning-based nearly optimal control for constrained-input partially unknown systems using differentiator. IEEE Transactions on Neural Networks and Learning Systems, v. 31, n. 11, p. 4713-4725, Nov. 2020. DOI: https://doi.org/10.1109/TNNLS.2019.2957287

HE, W. et al. Reinforcement learning control of a flexible two-link manipulator: an experimental investigation. IEEE Transactions on Systems, Man, and Cybernetics: Systems, v. 51, n. 12, p. 7326-7336, Dec. 2021. DOI: https://doi.org/10.1109/TSMC.2020.2975232

HU, Q.; XU, L.; ZHANG, A. Adaptive backstepping trajectory tracking control of robot manipulator. Journal of the Franklin Institute, v. 349, n. 3, p. 1087-1105, 2012. DOI: https://doi.org/10.1016/j.jfranklin.2012.01.001

HU, Y.; SI, B. A reinforcement learning neural network for robotic manipulator control. Neural Computation, v. 30, n. 7, p. 1983-2004, Jul. 2018. DOI: https://doi.org/10.1162/neco_a_01079

JIANG, Y.; JIANG, Z.-P. Robust adaptive dynamic programming. Hoboken, New Jersey: John Wiley & Sons, Inc., 2017. DOI: https://doi.org/10.1002/9781119132677

KAMBOJ, A. et al. Discrete-time Lyapunov based kinematic control of robot manipulator using actor-critic framework. In: 2020 International Joint Conference on Neural Networks (IJCNN), Glasgow, United Kingdom, 2020, p. 1-7. DOI: https://doi.org/10.1109/IJCNN48605.2020.9207522

KHAN, S. G. et al. Reinforcement learning based compliance control of a robotic walk assist device. Advanced Robotics, v. 33, n. 24, p. 1281-1292, Nov. 2019. DOI: https://doi.org/10.1080/01691864.2019.1690574

KHAN, S. G. et al. A Q-learning based Cartesian model reference compliance controller implementation for a humanoid robot arm. In: 2011 IEEE 5th International Conference on Robotics, Automation and Mechatronics (RAM), Qingdao, China, Sep. 2011, p. 214-219. DOI: https://doi.org/10.1109/RAMECH.2011.6070484

KHAN, S. G. et al. Reinforcement learning and optimal adaptive control: an overview and implementation examples. Annual Reviews in Control, v. 36, n. 1, p. 42-59, 2012. DOI: https://doi.org/10.1016/j.arcontrol.2012.03.004

KIUMARSI, B. et al. Optimal and autonomous control using reinforcement learning: a survey. IEEE Transactions on Neural Networks and Learning Systems, v. 29, n. 6, p. 2042-2062, Jun. 2018. DOI: https://doi.org/10.1109/TNNLS.2017.2773458

KONSTANTOPOULOS, G. C.; BALDIVIESO-MONASTERIOS, P. R. State-limiting PID controller for a class of nonlinear systems with constant uncertainties. International Journal of Robust and Nonlinear Control, v. 30, p. 1770-1787, 2020. DOI: https://doi.org/10.1002/rnc.4853

MALIOTIS, G. A. Hybrid model reference adaptive control/computed torque control scheme for robotic manipulators. Proceedings of the Institution of Mechanical Engineers, Part I: Journal of Systems and Control Engineering, v. 205, n. 3, p. 215-21, 1991. DOI: https://doi.org/10.1243/PIME_PROC_1991_205_334_02

MOOSAVI, S. K. R.; ZAFAR, M. H.; SANFILIPPO, F. Forward kinematic modelling with radial basis function neural network tuned with a novel meta-heuristic algorithm for robotic manipulators. Robotics, v. 11, n. 2, p. 1-17, Apr. 2022. DOI: https://doi.org/10.3390/robotics11020043

PANE, Y. P. et al. Reinforcement learning based compensation methods for robot manipulators. Engineering Applications of Artificial Intelligence, v. 78, p. 236-247, Feb. 2019. DOI: https://doi.org/10.1016/j.engappai.2018.11.006

PANE, Y. P.; NAGESHRAO, S. P.; BABUŠKA, R. Actor-critic reinforcement learning for tracking control in robotics. In: 2016 IEEE 55th Conference on Decision and Control (CDC), Las Vegas, NV, USA, Dec. 2016, p. 5819-5826. DOI: https://doi.org/10.1109/CDC.2016.7799164

PETERS, J.; SCHAAL, S. Learning to control in operational space. International Journal of Robotics Research, v. 27, n. 2, p. 197-212, Feb. 2008a. DOI: https://doi.org/10.1177/0278364907087548

PETERS, J.; SCHAAL, S. Natural actor-critic. Neurocomputing, v. 71, n. 7-9, p. 1180-1190, Mar. 2008b. DOI: https://doi.org/10.1016/j.neucom.2007.11.026

PLUŠKOSKI, A.; CIGANOVIĆ, I.; JOVANOVIĆ, M. D. Benefits of Residual Networks in Reinforcement Learning using V-Rep Simulator. In: 6th International Conference IcETRAN, Silver Lake, Serbia, Jun. 2019, p. 1-6.

QI, R.; TAO, G.; JIANG, B. Adaptive control: a tutorial introduction. In: Fuzzy system identification and adaptive control. Communications and Control Engineering. Cham: Springer, 2019, p. 55-74. DOI: https://doi.org/10.1007/978-3-030-19882-4_3

QUIGLEY, M.; GERKEY, B.; SMART, W. D. Programming robots with ROS: a practical introduction to the robot operating system. 1st ed. O'Reilly Media, Inc., 2015.

ROHMER, E.; SINGH, S. P. N.; FREESE, M. V-REP: A versatile and scalable robot simulation framework. In: IEEE/RSJ International Conference on Intelligent Robots and Systems, Tokyo, Japan, Nov. 2013. DOI: https://doi.org/10.1109/IROS.2013.6696520

SASAKI, M. et al. Self-tuning control of a two-link flexible manipulator using neural networks. In: 2009 ICCAS-SICE, Fukuoka, Japan, Aug. 2009, p. 2468-2473.

SHAH, H.; GOPAL, M. Reinforcement learning control of robot manipulators in uncertain environments. In: IEEE International Conference on Industrial Technology, Churchill, VIC, Australia, Feb. 2009, p. 1-6. DOI: https://doi.org/10.1109/ICIT.2009.4939504

SHAMSHIRI, R. R. et al. Robotic harvesting of fruiting vegetables: A simulation approach in V-REP, ROS and MATLAB. In: Automation in Agriculture - Securing Food Supplies for Future Generations. IntechOpen, Mar. 2018. DOI: https://doi.org/10.5772/intechopen.73861

SU, Y. et al. Fixed-time optimal trajectory tracking control for an unmanned surface vehicle via reinforcement learning. IEEE/ASME Transactions on Mechatronics, p. 1-12, Sep. 2025. DOI: https://doi.org/10.1109/TMECH.2025.3602024

SUN, N. et al. Adaptive control for pneumatic artificial muscle systems with parametric uncertainties and unidirectional input constraints. IEEE Transactions on Industrial Informatics, v. 16, n. 2, p. 969-979, Feb. 2020. DOI: https://doi.org/10.1109/TII.2019.2923715

SUTTON, R. S.; BARTO, A. G. Reinforcement learning: an introduction. 2nd ed. Cambridge, Massachusetts: MIT Press, 2018.

VRABIE, D.; VAMVOUDAKIS, K. G.; LEWIS, F. L. Optimal adaptive control and differential games by reinforcement learning principles. London, United Kingdom: The Institution of Engineering and Technology, 2013. DOI: https://doi.org/10.1049/PBCE081E

WANG, Z. et al. Adaptive altitude control for underwater vehicles based on deep reinforcement learning. In: 2025 8th International Conference on Transportation Information and Safety (ICTIS), Granada, Spain, 2025, p. 79-84. DOI: https://doi.org/10.1109/ICTIS68762.2025.11214834

WU, L.; YAN, Q.; CAI, J. Neural network-based adaptive learning control for robot manipulators with arbitrary initial errors. IEEE Access, v. 7, p. 180194-180204, Dec. 2019. DOI: https://doi.org/10.1109/ACCESS.2019.2958371

YAGHMAIE, F. A.; GUSTAFSSON, F.; LJUNG, L. Linear quadratic control using model-free reinforcement learning. IEEE Transactions on Automatic Control, v. 68, n. 2, p. 737-752, Feb. 2023. DOI: https://doi.org/10.1109/TAC.2022.3145632

YILMAZ, B. M. et al. Self-adjusting fuzzy logic based control of robot manipulators in task space. IEEE Transactions on Industrial Electronics, v. 69, n. 2, p. 1620-1629, Feb. 2022. DOI: https://doi.org/10.1109/TIE.2021.3063970

ZHANG, D.; WEI, B. Design, analysis and modelling of a hybrid controller for serial robotic manipulators. Robotica, v. 35, n. 9, p. 1888-1905, Sep. 2017. DOI: https://doi.org/10.1017/S0263574716000564

ZHAO, D. et al. Linear quadratic control of unknown nonlinear systems using model-free reinforcement learning. IEEE Transactions on Industrial Electronics, v. 72, n. 12, p. 13751-13762, Dec. 2025. DOI: https://doi.org/10.1109/TIE.2025.3581264

Published

2026-01-12

Issue

Vol. 25 No. 74 (2026)

Section

Articles

How to Cite

AN OPTIMAL ADAPTIVE CONTROLLER BASED ON ONLINE ACTOR-CRITIC LEARNING FOR A ROBOTIC MANIPULATOR. Boletín de Coyuntura (BOCA), Boa Vista, v. 25, n. 74, p. e8113, 2026. DOI: 10.56238/bocav25n74-031. Available at: https://revistaboletimconjuntura.com.br/boca/article/view/8113. Accessed: 29 Jan. 2026.