Memory Management
Published in Yi Qiu, Puxiang Xiong, Tianlong Zhu, The Design and Implementation of the RT-Thread Operating System, 2020
A memory pool is a memory allocation method for allocating a large number of small memory blocks of the same size. It greatly speeds up memory allocation and release, and avoids memory fragmentation as far as possible. In addition, RT-Thread's memory pool supports thread suspension: when there is no free memory block in the pool, the allocating thread is suspended until a new memory block becomes available, at which point the suspended thread is awakened.
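The fixed-block scheme described above can be sketched as follows. This is an illustrative model, not the RT-Thread implementation: because every block has the same size, allocation and release are constant-time free-list operations and cannot fragment the pool.

```python
# Minimal sketch of a fixed-size block memory pool (hypothetical, not the
# RT-Thread code): all blocks share one size, so alloc/free are O(1) and
# the pool cannot fragment.
class FixedBlockPool:
    def __init__(self, block_size, block_count):
        # Pre-allocate every block up front; the free list holds block indices.
        self.blocks = [bytearray(block_size) for _ in range(block_count)]
        self.free_list = list(range(block_count))

    def alloc(self):
        """Return a free block index, or None if the pool is exhausted."""
        if not self.free_list:
            return None  # a real RTOS pool could suspend the thread here
        return self.free_list.pop()

    def free(self, index):
        """Return a block to the pool, making it immediately reusable."""
        self.free_list.append(index)

pool = FixedBlockPool(block_size=64, block_count=4)
ids = [pool.alloc() for _ in range(4)]
assert pool.alloc() is None      # pool exhausted
pool.free(ids[0])
assert pool.alloc() == ids[0]    # released block is handed out again
```

In an RTOS, the `None` branch is where the allocating thread would block on the pool until another thread calls `free`.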
Configuration and Usage of Open-Source Protocol
Published in Ivan Cibrario Bertolotti, Tingting Hu, Embedded Software Development, 2017
Moreover, dividing the available memory into distinct pools guarantees that, if a certain memory pool is exhausted for any reason, memory is still available for other kinds of data structure, drawn from other pools. In turn, this makes the protocol stack more robust, because a memory shortage in one area does not prevent other parts of it from obtaining dynamic memory and continuing to work.
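The isolation argument can be made concrete with a toy sketch (illustrative only; the pool names and capacities are assumptions, not taken from the book): exhausting one per-purpose pool leaves the other untouched.

```python
# Sketch: partitioning memory into per-purpose pools so that exhausting
# one pool cannot starve the others.
class Pool:
    def __init__(self, name, capacity):
        self.name, self.free = name, capacity

    def alloc(self):
        if self.free == 0:
            return None          # only this pool is exhausted
        self.free -= 1
        return self.name

# Hypothetical protocol-stack pools: connection blocks vs. packet buffers.
conn_pool = Pool("conn", capacity=2)
buf_pool = Pool("buf", capacity=8)

while conn_pool.alloc():         # exhaust the connection pool...
    pass
assert conn_pool.alloc() is None
assert buf_pool.alloc() == "buf" # ...buffers remain available
```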
Deep reinforcement learning-based path planning of underactuated surface vessels
Published in Cyber-Physical Systems, 2019
Hongwei Xu, Ning Wang, Hong Zhao, Zhongjiu Zheng
Experience replay refers to establishing an experience pool: while the agent interacts with the environment, the system continuously stores information such as states, actions, and rewards in the memory pool. In our algorithm, there are two different experience pools, used to store historical observations and transition quadruples, respectively.
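A generic experience pool can be sketched as below. This is a standard DQN-style replay buffer, not the paper's exact two-pool design: transitions are appended as the agent acts and sampled uniformly later for training.

```python
import random
from collections import deque

# Sketch of an experience-replay memory pool: (state, action, reward,
# next_state) tuples are stored during interaction and sampled for training.
class ReplayPool:
    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)  # oldest entries are evicted

    def store(self, state, action, reward, next_state):
        self.buffer.append((state, action, reward, next_state))

    def sample(self, batch_size):
        # Uniform random sampling breaks temporal correlation between samples.
        return random.sample(self.buffer, batch_size)

pool = ReplayPool(capacity=1000)
for t in range(50):
    pool.store(state=t, action=t % 4, reward=1.0, next_state=t + 1)
batch = pool.sample(8)
assert len(batch) == 8
```

The paper's second pool for historical observations would follow the same pattern with a different stored tuple.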
A multi process value-based reinforcement learning environment framework for adaptive traffic signal control
Published in Journal of Control and Decision, 2023
Jie Cao, Dailin Huang, Liang Hou, Jialin Ma
In the DQN framework, the loss is calculated from samples drawn from the memory pool; it represents the distance between the Q value estimated by the estimation network and that of the target network. In fact, DQN achieves an independent and identically distributed sample stream through experience replay, thereby meeting the requirements of machine learning. Since two adjacent samples in the memory pool need not be temporally correlated, we propose a multi-environment shared memory mechanism, as shown in Figure 3.
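The loss computation can be illustrated with a toy sketch (generic DQN temporal-difference loss; the tabular "networks", batch, and discount factor are assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the estimation and target networks: tabular Q values
# over 5 states and 2 actions.
n_states, n_actions, gamma = 5, 2, 0.9
q_est = rng.random((n_states, n_actions))  # estimation network
q_target = q_est.copy()                    # periodically synced target copy

# A minibatch of (s, a, r, s') transitions sampled from the memory pool.
batch = [(0, 1, 1.0, 2), (3, 0, 0.0, 4), (1, 1, 0.5, 0)]

def dqn_loss(batch):
    """Mean squared TD error between estimation and target networks."""
    errors = []
    for s, a, r, s_next in batch:
        td_target = r + gamma * q_target[s_next].max()
        errors.append((td_target - q_est[s, a]) ** 2)
    return float(np.mean(errors))

loss = dqn_loss(batch)
assert loss >= 0.0
```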
A Novel Compact Cat Swarm Optimization Based on Differential Method
Published in Enterprise Information Systems, 2020
Firstly, each cat duplicates its own position five times; these copies are memorized in a seeking memory pool (SMP). Then each value in the SMP is changed slightly by a mutation operator. A dimension of the design variable may be selected to mutate, and the variation must stay within a fixed range. Another parameter, CDC, determines how many dimensions of the solution will be mutated. The mutation operation is presented as formula (9):
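The seeking-mode procedure above can be sketched as follows. This is an illustrative version only; formula (9) is not reproduced here, and the SRD-style relative perturbation used for the mutation is an assumption in place of the paper's exact operator.

```python
import random

# Sketch of seeking mode: each cat copies its position SMP times, then each
# copy mutates CDC randomly chosen dimensions within a fixed relative range.
SMP, CDC, SRD = 5, 2, 0.2  # pool size, dims to change, mutation range (assumed)

def seeking_candidates(position):
    candidates = []
    for _ in range(SMP):
        copy = list(position)                       # duplicate the position
        for d in random.sample(range(len(copy)), CDC):
            # perturb the chosen dimension within +/- SRD of its value
            copy[d] *= 1 + random.uniform(-SRD, SRD)
        candidates.append(copy)
    return candidates

smp_pool = seeking_candidates([1.0, 2.0, 3.0])
assert len(smp_pool) == SMP
assert all(len(c) == 3 for c in smp_pool)
```

In the full algorithm, the fittest candidate in the SMP then replaces the cat's current position.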