Evaluating the Performance of NT-Based Systems
Published in Steven F. Blanding, Enterprise Operations Management, 2020
When Windows NT uses its virtual memory capability, it transfers data to and from a special file on the hard disk, referred to as a virtual-memory paging file. This file is also commonly referred to as a swap file. The transfer of information to and from disk occurs at electromechanical speed, with the movement of disk read/write heads over an appropriate disk sector contributing to a major portion of the delay in reading from or writing to a disk. Although modern disk drives are relatively fast devices, they still operate at 1/50th to 1/100th the speed of computer memory in terms of data transfer capability. While paging will always adversely affect the performance of a computer, the size of the paging file on the computer can have a more profound impact. If the size of the paging file is too small for the amount of activity on the server, one can experience a “thrashing” condition, with the operating system repetitively reading and writing small portions of RAM to and from disk.
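The cost described here can be put in perspective with a simple effective-access-time calculation. The sketch below is illustrative only; the latency figures are assumptions chosen to show the order of magnitude involved, not measurements of any particular system.

```python
# Minimal sketch: effective memory access time in the presence of page faults.
# The latency figures below are illustrative assumptions, not measurements.

RAM_ACCESS_NS = 100            # assumed main-memory access time
DISK_SERVICE_NS = 5_000_000    # assumed disk service time (~5 ms, electromechanical)

def effective_access_ns(page_fault_rate: float) -> float:
    """Average cost of one memory reference given a page-fault probability."""
    return (1 - page_fault_rate) * RAM_ACCESS_NS + page_fault_rate * DISK_SERVICE_NS

for rate in (0.0, 0.0001, 0.001, 0.01):
    slowdown = effective_access_ns(rate) / RAM_ACCESS_NS
    print(f"fault rate {rate:>8.4%}: {effective_access_ns(rate):>12,.0f} ns "
          f"(~{slowdown:,.0f}x slower than RAM)")
```

Even a fault rate of one reference in ten thousand makes the average reference several times slower than RAM, which is why a thrashing system spends most of its time waiting on the disk.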
Leveraging Value Stream Resources
Published in Steven Bell, Daniel T. Jones, Charles Betz, Troy DuMoulin, Paul Harmon, Sandra Foster, Mary Poppendieck, John Schmidt, Run Grow Transform, 2017
To continuously improve a value stream it’s necessary to create a small amount of slack capacity so that teams have the time and space to respond to normal variation and to address problems as they occur. However, it’s human nature to try to keep scarce and expensive resources busy and running at 100% capacity, in the belief that this will lead to improved productivity and efficiency. There’s just one problem: it’s wrong. Mathematical queuing theory proves, and it’s well documented in both Lean manufacturing and Lean-Agile software development practices, that when a resource is overburdened (beyond a planned utilization threshold of approximately 80%), productivity plummets. Interruptions escalate, task switching (thrashing) accelerates, flow ceases, delays and errors increase, and physical and mental stress builds up, all of which cause a loss of concentration, more errors, burnout, and eventually turnover.
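The queuing effect behind that threshold can be illustrated with the standard single-server (M/M/1) waiting-time relationship, where average time in the system is service time divided by (1 − utilization). The sketch below is a simple illustration of that formula, not the authors' model, and the one-day service time is an assumed figure.

```python
# Illustrative M/M/1 queueing sketch: average time in system vs. utilization.
# Time in system = service_time / (1 - utilization).
# The one-day service time is an assumed figure for illustration only.

SERVICE_TIME_DAYS = 1.0  # assumed average time to complete one task with no queue

def time_in_system(utilization: float) -> float:
    """Average wait-plus-service time for a single-server queue (M/M/1)."""
    return SERVICE_TIME_DAYS / (1.0 - utilization)

for u in (0.50, 0.80, 0.90, 0.95, 0.99):
    print(f"utilization {u:.0%}: work spends ~{time_in_system(u):.0f} day(s) in the system")
```

At 50% utilization a task spends about two days in the system; at 80% it spends five; at 95% it spends twenty. The delay grows without bound as utilization approaches 100%, which is the mathematical face of the burnout and lost flow described above.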
Machine Learning, Containers, Cloud Natives, and Microservices
Published in Mazin Gilbert, Artificial Intelligence for Autonomous Networks, 2018
The basic goal of the operating system (OS) is hardware transparency (what is now termed “virtualization”), additionally supporting multiprocessing and multiprogramming. In cloud technology, a process in multiprocessing (or a task in multitasking) is termed a server. It refers to the ability to break the software into processes that may be processed in parallel. Multiprogramming refers to concurrently running several different programs on the same computer. The OS provides hardware reuse and software reuse at the cost of context-switching and the danger of thrashing, where the computer is processing OS code more than it processes application code. These principles are carried to the cloud environment.
Cost aware cache replacement policy in shared last-level cache for hybrid memory based fog computing
Published in Enterprise Information Systems, 2018
Gangyong Jia, Guangjie Han, Hao Wang, Feng Wang
The LRU policy performs poorly for thrashing applications because frequently reused cache lines are evicted by lines that are never reused. The promotion-insertion pseudo-partitioning (PIPP) method (Xie and Loh 2009) proposes an insertion policy for LRU that places a new line at the LRU position when it enters the cache, together with a slow promotion policy that moves a line up by only a single position on each hit. This approach helps restrict cache pollution by never-reused lines and preserves cache capacity for frequently reused lines. LRU is similarly poor for applications whose working set is larger than the cache: most cache lines cannot be reused once they are evicted by new lines, and the new lines are in turn evicted before they can be reused, leading to a vicious cycle. To break this cycle, the dynamic insertion policy (DIP) was proposed for cache replacement (Qureshi et al. 2007). DIP inserts some new cache lines at the LRU position, which reserves cache capacity for frequently reused data; when a line is hit, however, it is promoted to the MRU position. DIP mainly addresses workloads whose demand exceeds the cache capacity.
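The difference between conventional MRU insertion and LRU-position insertion can be seen in a small simulation. The sketch below is an illustrative simplification of the policies described above, not the papers' implementations: it models a single fully associative set, with no set dueling and no slow promotion.

```python
# Illustrative sketch of two insertion policies for an LRU-ordered cache set.
# Simplified relative to the policies discussed above: one fully associative
# set, no set dueling, no slow (single-position) promotion.
from collections import OrderedDict

class CacheSet:
    def __init__(self, ways: int, insert_at_lru: bool):
        self.ways = ways
        self.insert_at_lru = insert_at_lru      # True ~ LRU-position (DIP-like) insertion
        self.lines = OrderedDict()              # ordered from LRU (front) to MRU (back)
        self.hits = self.misses = 0

    def access(self, tag):
        if tag in self.lines:                   # hit: promote the line to the MRU position
            self.hits += 1
            self.lines.move_to_end(tag)
            return
        self.misses += 1
        if len(self.lines) >= self.ways:        # miss with a full set: evict the LRU line
            self.lines.popitem(last=False)
        self.lines[tag] = None
        if self.insert_at_lru:                  # new line starts at the LRU position
            self.lines.move_to_end(tag, last=False)

# A cyclic working set slightly larger than the cache thrashes classic LRU,
# while LRU-position insertion keeps part of the working set resident.
pattern = list(range(10)) * 100                 # working set of 10 lines, 8-way cache
for insert_at_lru, name in ((False, "MRU insertion (classic LRU)"),
                            (True,  "LRU insertion (DIP-like)")):
    cache = CacheSet(ways=8, insert_at_lru=insert_at_lru)
    for tag in pattern:
        cache.access(tag)
    print(f"{name}: hits={cache.hits}, misses={cache.misses}")
```

With the cyclic reference pattern, classic LRU misses on every access after the cold start, whereas LRU-position insertion keeps most of the working set in the cache and hits on roughly seven of every ten accesses, which is the vicious cycle and its remedy described above.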