
Proximal policy optimization with adaptive generalized advantage estimate: critic-aware refinements
Journal of Mathematical Modeling
Article in Press, corrected proof, available online from 15 Mehr 1404 (Solar Hijri calendar). Original article file (6.08 MB)
Article type: Research Article
DOI: 10.22124/jmm.2025.29704.2654
Authors
Naemeh Mohammadpour* 1; Meysam Fozi 2; Mohammad Mehdi Ebadzadeh 2; Ali Azimi 1; Ali Kamali Iglie 1
1 Department of Mechanical Engineering, Amirkabir University of Technology, Tehran, Iran
2 Department of Computer Engineering, Amirkabir University of Technology, Tehran, Iran
Abstract
Proximal Policy Optimization (PPO) is one of the most widely used methods in reinforcement learning, designed to optimize policy updates while maintaining training stability. However, in complex and high-dimensional environments, maintaining a suitable balance between bias and variance poses a significant challenge. The λ parameter in Generalized Advantage Estimation (GAE) governs this balance by controlling the trade-off between short-term and long-term return estimates. In this study, we propose a method for adaptive adjustment of the λ parameter, where λ is dynamically updated during training instead of remaining fixed. The updates are guided by internal learning signals such as the value function loss and Explained Variance, a statistical measure that reflects how accurately the critic estimates target returns. To further enhance training robustness, we incorporate a Policy Update Delay (PUD) mechanism to mitigate instability from overly frequent policy updates. The main objective of this approach is to reduce dependence on expensive and time-consuming hyperparameter tuning. By leveraging internal indicators from the learning process, the proposed method contributes to the development of more adaptive, stable, and generalizable reinforcement learning algorithms. To assess the effectiveness of the approach, experiments are conducted in four diverse and standard benchmark environments: Ant-v4, HalfCheetah-v4, and Humanoid-v4 from the OpenAI Gym, as well as Quadruped-Walk from the DeepMind Control Suite. The results demonstrate that the proposed method can substantially improve the performance and stability of PPO across these environments. Our implementation is publicly available at https://github.com/naempr/PPO-with-adaptive-GAE.
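To make the λ-driven bias-variance trade-off concrete, the following minimal Python sketch computes GAE for one rollout, measures Explained Variance, and adjusts λ from it. The function names (compute_gae, explained_variance, adapt_lambda), the threshold, the step size, and the direction of the λ update are illustrative assumptions only; they do not reproduce the paper's exact adaptation rule or its Policy Update Delay mechanism.

```python
import numpy as np

def compute_gae(rewards, values, dones, gamma=0.99, lam=0.95):
    """Generalized Advantage Estimation over one rollout.

    rewards and dones have length T; values has length T + 1 (a bootstrap
    value for the final state is appended). lam controls the bias-variance
    trade-off: small lam leans on the critic (low variance, more bias),
    lam close to 1 approaches Monte Carlo returns (low bias, more variance).
    """
    T = len(rewards)
    advantages = np.zeros(T)
    gae = 0.0
    for t in reversed(range(T)):
        nonterminal = 1.0 - dones[t]
        delta = rewards[t] + gamma * values[t + 1] * nonterminal - values[t]
        gae = delta + gamma * lam * nonterminal * gae
        advantages[t] = gae
    returns = advantages + values[:-1]
    return advantages, returns


def explained_variance(value_preds, returns):
    """EV = 1 - Var(returns - value_preds) / Var(returns).
    1.0 means the critic matches the targets; values <= 0 mean it is uninformative."""
    var_r = np.var(returns)
    return float("nan") if var_r == 0 else 1.0 - np.var(returns - value_preds) / var_r


def adapt_lambda(lam, ev, ev_threshold=0.8, step=0.01, lam_min=0.8, lam_max=0.99):
    """Hypothetical adaptation rule (an assumption, not the paper's update):
    when Explained Variance is high the critic is trusted and lam is nudged
    down toward shorter, lower-variance targets; when it is low, lam is
    nudged up so the advantage estimate relies less on the inaccurate critic."""
    lam = lam - step if ev > ev_threshold else lam + step
    return float(np.clip(lam, lam_min, lam_max))
```

In a PPO training loop, such a rule would re-estimate λ from the most recent rollout's Explained Variance (or value loss) before computing advantages, and, in the spirit of the paper's Policy Update Delay, the actor update could be skipped for a fixed number of critic updates; the threshold and step size above are placeholders rather than tuned values.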
Keywords
Reinforcement learning; proximal policy optimization; generalized advantage estimate; bias-variance trade-off