Internet of Things: A Progressive Case Study
Published in Vijender Kumar Solanki, Vicente García Díaz, J. Paulo Davim, Handbook of IoT and Big Data, 2019
A lightweight programming language is designed to have a very small memory footprint, to be easy to implement (important when porting the language), and/or to have a minimalist syntax and feature set. In computing, the memory footprint of an executable program is the amount of memory it requires while it runs. This includes all of its active memory regions: code segments containing (mostly) program instructions and occasionally constants, data segments (both initialized and uninitialized), the heap, and the call stack, plus the memory needed for any additional data structures, such as symbol tables, debugging structures, open files, and shared libraries mapped into the process, that the program needs during execution and that are loaded at least once during the run. NodeMCU is designed to run on all ESP modules and includes general-purpose interface modules that require at most two GPIO pins. The Lua script is flashed to the ESP8266 processor and compiled via the appropriate port; once the connection is made, the code runs and displays its output.
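The chapter itself flashes Lua via NodeMCU; as a rough, hedged illustration of the same small-footprint, single-GPIO pattern (not the chapter's code), the MicroPython sketch below toggles one pin on an ESP8266-class board. The `machine` and `Pin` names are MicroPython's, and GPIO2 as the on-board LED is an assumption about the module.

```python
# Illustrative MicroPython sketch, not the chapter's Lua code: blink one GPIO
# pin on an ESP8266-class board. Assumes MicroPython firmware and that GPIO2
# drives the on-board LED (true for many, not all, ESP8266 modules).
from machine import Pin
import time

led = Pin(2, Pin.OUT)           # single general-purpose interface pin

while True:
    led.value(not led.value())  # toggle the pin
    time.sleep(1)               # one-second blink interval
```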
Force-System Resultants and Equilibrium
Published in Richard C. Dorf, The Engineering Handbook, 2018
The method G uses to store data depends on its type, and it is important to understand how data will reside in memory. Savvy use of data types decreases the overall memory footprint of an application, minimizes data coercion errors, and improves system performance.
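As a hedged, language-agnostic illustration of that point (NumPy is used here only as a stand-in for the typed storage the passage attributes to G), the sketch below shows how the chosen numeric type alone changes an array's footprint.

```python
import numpy as np

# Same one million elements, three different element types: the footprint is
# set entirely by the chosen data type (8, 4, and 2 bytes per element here).
n = 1_000_000
for dtype in (np.float64, np.float32, np.int16):
    arr = np.zeros(n, dtype=dtype)
    print(f"{np.dtype(dtype).name:8s} {arr.nbytes / 1e6:5.1f} MB")
```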
Characterizing Tradeoffs in Memory, Accuracy, and Speed for Chemistry Tabulation Techniques
Published in Combustion Science and Technology, 2023
Elizabeth Armstrong, John C. Hewson, James C. Sutherland
We see in Figure 3 that adding resolution to the laminar training data for the Lagrange interpolants decreases the error in predicting points within the table bounds, as expected. Adding more information for the interpolants will continue to decrease the error down to machine precision, but this additional accuracy comes at the cost of an increasing memory footprint. We also see the errors for LP3 decreasing faster than those for LP1, both at the expected order once sufficient resolution is reached. In contrast, increasing the resolution of the data used to train the ANNs only helps until the dependent-variable behavior is resolved well enough for the architecture to capture it; adding further resolution does not significantly change the ANN results. Both A2-5 and A3-20 perform as well as the LP3 interpolant for smaller grid sizes, and the increased resolution of the training data has no effect on the memory footprints of the ANNs, unlike the interpolants. However, the ANN errors begin to stagnate after the 81-point grid, while both interpolant errors continue to decrease. Once this happens, a change of architecture is needed to better represent the nonlinearities in the data, as seen in the improved errors of A3-20 compared to A2-5. As mentioned before, however, enlarging the ANN architecture does not necessarily improve the predictions; training parameters can also affect results significantly. It is much harder to control errors in ANN predictions than in interpolant predictions.
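A toy one-dimensional stand-in for this tradeoff (assumed test function and grid sizes, not the paper's chemistry tables or its LP1/LP3/ANN implementations) is sketched below: refining the training grid shrinks the interpolation error but grows the table's memory footprint, whereas a fixed small network would keep a constant parameter count regardless of grid resolution.

```python
import numpy as np

# Toy 1D illustration of the resolution/accuracy/memory tradeoff for a
# piecewise-linear (LP1-like) table; f is a placeholder nonlinear function.
f = lambda x: np.sin(2.0 * np.pi * x)
x_test = np.linspace(0.0, 1.0, 10_001)

for n in (9, 27, 81, 243):                       # increasingly refined training grids
    x_train = np.linspace(0.0, 1.0, n)
    y_train = f(x_train)
    y_hat = np.interp(x_test, x_train, y_train)  # linear interpolation on the table
    err = np.max(np.abs(y_hat - f(x_test)))
    table_bytes = x_train.nbytes + y_train.nbytes
    print(f"n={n:3d}  max_error={err:.2e}  table_bytes={table_bytes}")
```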
A Limited-Memory Framework for Conditional Point Sampling for Radiation Transport in 1D Stochastic Media
Published in Nuclear Science and Engineering, 2023
In Fig. 9, we observe that when using CoPS3PO, the number of points remembered per realization continues to increase without bound as a function of cohort size, reaching a maximum of about 50 000 points for a cohort size of 1000. When using CoPS3PO-1, on the other hand, although the material realizations become more saturated as more cohorts are used, the number of points remembered per cohort never exceeds the theoretical limit of 1001. Even with a cohort size of 1000, the maximum number of points remembered was about 800, a memory savings factor of about 63. We present the average number of points remembered per cohort as a function of cohort size for CoPS3PO and CoPS3PO-1 in Fig. 10. This figure shows the same behavior: the memory footprint of CoPS3PO continues to grow as cohort size increases, whereas the memory footprint of CoPS3PO-1 approaches the theoretical limit of 1001.
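The bounded-memory idea can be sketched with a toy bookkeeping loop (this is only a caricature of the cap, not the CoPS3PO-1 conditional-sampling algorithm; the query and material-sampling steps are placeholders): each cohort stores at most cohort_size + 1 points, so memory stays bounded no matter how many queries are made.

```python
import random

def run_cohort(cohort_size, n_queries, seed=0):
    """Toy cap on remembered points: at most cohort_size + 1 are ever stored."""
    rng = random.Random(seed)
    limit = cohort_size + 1           # the theoretical limit discussed above
    remembered = []                   # (position, material) pairs kept for conditioning
    for _ in range(n_queries):
        x = rng.random()              # placeholder query location
        material = rng.randint(0, 1)  # placeholder material sampling
        if len(remembered) < limit:
            remembered.append((x, material))
        # past the cap, samples would be conditioned on stored points but not stored
    return len(remembered)

for cohort_size in (10, 100, 1000):
    print(cohort_size, run_cohort(cohort_size, n_queries=50_000))
```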
Modeling of the Molten Salt Reactor Experiment with SCALE
Published in Nuclear Technology, 2022
F. Bostelmann, S. E. Skutnik, E. D. Walker, G. Ilas, W. A. Wieselquist
The IFP method has recently been implemented for use with the new Monte Carlo code Shift. In contrast to the KENO-IFP implementation, a parallel version is available that allows the user to run a time-efficient TSUNAMI calculation without the need to determine any settings beyond the number of latent generations. The excellent performance of this Shift-IFP implementation, including its parallelization, has been demonstrated, showing almost linear scaling up to several hundred processors.20 The IFP method has a large memory footprint by nature. However, because of the efficiency of Shift-IFP’s parallel implementation, the memory allocation can be reduced to more reasonable levels: the memory requirement per processor decreases with an increasing number of processors, allowing an efficient calculation with many neutrons per history across multiple computing nodes. Because of these benefits, Shift-IFP was applied in this study. Direct perturbation (DP) calculations were also performed to confirm the largest obtained sensitivities and thereby the chosen number of latent generations (the only required TSUNAMI parameter for this approach).
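The per-processor scaling argument is simple division, as the back-of-the-envelope sketch below shows; all numbers in it are assumed for illustration and are not taken from the Shift-IFP study.

```python
# Back-of-the-envelope sketch with assumed numbers (not from the study):
# if the IFP bookkeeping is split across ranks, the per-rank memory
# requirement falls roughly as 1 / n_ranks.
total_ifp_bytes = 2_000 * 10_000_000   # assumed bytes per history x assumed histories

for n_ranks in (1, 10, 100, 500):
    per_rank_gb = total_ifp_bytes / n_ranks / 1e9
    print(f"{n_ranks:4d} ranks -> ~{per_rank_gb:.2f} GB per rank")
```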