China A-Share Market Portfolio Management Based on Deep Reinforcement Learning
Journal: Modern Economics & Management Forum DOI: 10.32629/memf.v5i4.2573
Abstract
This paper uses the 50 constituent stocks of the SSE 50 Index from January 1, 2010 to January 1, 2024 as its data set to study portfolio management in China's A-share market. The paper's innovations are twofold. First, it applies five deep reinforcement learning algorithms, A2C, DDPG, SAC, TD3, and PPO, and compares them using cumulative return, maximum drawdown, and the Sharpe ratio as evaluation metrics. The comparison shows that, relative to the other four algorithms, PPO is better suited to the specific conditions of China's A-share market. Second, it collects SSE 50 Index data and benchmarks the reinforcement learning portfolios against the index in terms of cumulative return. The comparison shows that the deep reinforcement learning approach substantially improves the cumulative return of an SSE 50 portfolio over the index itself.
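The evaluation metrics named above (cumulative return, maximum drawdown, Sharpe ratio) and the Omega ratio from the keywords have standard definitions; a minimal sketch of how they can be computed from a price series is shown below. The function names and the 252-day annualization convention are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def cumulative_return(prices):
    """Total return over the period: final price over initial price, minus 1."""
    prices = np.asarray(prices, dtype=float)
    return float(prices[-1] / prices[0] - 1.0)

def max_drawdown(prices):
    """Largest peak-to-trough decline, as a fraction of the running peak."""
    prices = np.asarray(prices, dtype=float)
    peaks = np.maximum.accumulate(prices)
    return float(np.max((peaks - prices) / peaks))

def sharpe_ratio(daily_returns, risk_free=0.0, periods=252):
    """Annualized mean excess return divided by annualized volatility."""
    excess = np.asarray(daily_returns, dtype=float) - risk_free / periods
    return float(np.mean(excess) / np.std(excess, ddof=1) * np.sqrt(periods))

def omega_ratio(daily_returns, threshold=0.0):
    """Sum of gains above a threshold divided by sum of losses below it."""
    excess = np.asarray(daily_returns, dtype=float) - threshold
    return float(excess[excess > 0].sum() / -excess[excess < 0].sum())
```

For example, the price path 100 → 110 → 99 → 120 has a cumulative return of 20% and a maximum drawdown of 10% (the fall from 110 to 99).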
Keywords
SSE 50 Index, investment portfolio, A2C, DDPG, SAC, TD3, PPO, cumulative rate of return, maximum drawdown, Omega ratio, Sharpe ratio
References
[1] Shahi TB, Shrestha A, Neupane A, et al. Stock price forecasting with deep learning: a comparative study [J]. Mathematics, 2020, 8(9): 1441.
[2] Ji Y, Liew WC, Yang L. A novel improved particle swarm optimization with long-short term memory hybrid model for stock indices forecast [J]. IEEE Access, 2021(9): 23660-23671.
[3] Chen HQ, Liu YD, Zhou ZT, et al. A2C: Attention-augmented contrastive learning for state representation extraction [J]. Applied Sciences, 2020, 10(17): 5902.
[4] Zhang FJ, Li J, Li Z. A TD3-based multi-agent deep reinforcement learning method in mixed cooperation-competition environment [J]. Neurocomputing, 2020(411): 206-215.
[5] Cuschieri N, Vella V, Bajada J. TD3-based ensemble reinforcement learning for financial portfolio optimisation [C]// The 31st International Conference on Automated Planning and Scheduling. Guangzhou, China, 2021: 6-14.
[6] Haarnoja T, Zhou A, Hartikainen K, et al. Soft actor-critic algorithms and applications [EB/OL]. Available from: https://arxiv.org/abs/1812.05905.
[7] Weng Xiaojian, Lin Xudong, Zhao Shuaibin. Stock price rise and fall prediction model based on a long short-term memory network with empirical mode decomposition and investor sentiment [J]. Computer Applications, 2022, 42(z2): 296-301.
[8] Liang Tianxin, Yang Xiaoping, Wang Liang, et al. Research and development of financial trading systems based on reinforcement learning [J]. Journal of Software, 2019, 30(3): 20.
[9] Qi Yue, Huang Shuohua. Portfolio management based on the deep reinforcement learning DDPG algorithm [J]. Computers and Modernization, 2018(5): 93-99.
[10] Fu Feng, Wang Kang. Portfolio management based on the deep reinforcement learning SAC algorithm [J]. Modern Computer, 2020(9): 45-48.
[11] Wang Wuyu, Zhang Ning, Fan Dan, et al. Intelligent portfolio optimization based on dynamic trading and risk constraints [J]. Journal of Central University of Finance and Economics, 2021(9): 32-47.
Copyright © 2024 Junwu Zhou, Kai Jing
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License