Evaluating the Performance of NT-Based Systems
Published in Steven F. Blanding, Enterprise Operations Management, 2020
When Windows NT uses its virtual memory capability, it transfers data to and from a special file on the hard disk, referred to as a virtual-memory paging file. This file is also commonly referred to as a swap file. The transfer of information to and from disk occurs at electromechanical speed, with the movement of disk read/write heads over an appropriate disk sector contributing to a major portion of the delay in reading from or writing to a disk. Although modern disk drives are relatively fast devices, they still operate at 1/50th to 1/100th the speed of computer memory in terms of data transfer capability. While paging will always adversely affect the performance of a computer, the size of the paging file on the computer can have a more profound impact. If the size of the paging file is too small for the amount of activity on the server, one can experience a “thrashing” condition, with the operating system repetitively reading and writing small portions of RAM to and from disk.
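The effect of an undersized paging file can be made concrete with a toy page-replacement simulation. The sketch below (illustrative only, not NT's actual replacement policy) uses FIFO replacement and counts page faults, i.e. transfers to or from the paging file; when the working set no longer fits in the available frames, every access faults and the system thrashes.

```python
from collections import deque

def count_page_faults(references, num_frames):
    """Simulate FIFO page replacement; each fault is a disk transfer."""
    frames = deque()    # resident pages, oldest first
    resident = set()
    faults = 0
    for page in references:
        if page not in resident:
            faults += 1                      # page read in from the paging file
            if len(frames) == num_frames:
                evicted = frames.popleft()   # oldest resident page written out
                resident.remove(evicted)
            frames.append(page)
            resident.add(page)
    return faults

# A working set of 4 pages, cycled 25 times:
refs = [0, 1, 2, 3] * 25
print(count_page_faults(refs, 4))  # 4   -- working set fits: only cold faults
print(count_page_faults(refs, 3))  # 100 -- one frame too few: every access faults
```

Shrinking the frame count by one turns 4 faults into 100, which is exactly the "thrashing" behaviour described above: the operating system repetitively moves small portions of RAM to and from disk.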
Memory Organisation
Published in Pranabananda Chakraborty, Computer Organisation and Architecture, 2020
Since cache memory is faster to access than main memory by a factor of 5–10, it is organised as a supporting memory module to the main memory; hence, (C1, M1) lies higher in the memory hierarchy and operates at a higher speed than (M1, M2). A common arrangement of the CPU, cache, main memory, and secondary memory is illustrated in Figure 4.20. With the relentless progress of VLSI semiconductor memory technology, the use of caches has become economically viable, and caches began to be used in various forms, with fast access time as one of their basic characteristics. In segmentation and paging schemes, translation lookaside buffers (TLBs) are special-purpose address caches designed to store frequently accessed segment- or page-table entries.
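A TLB's role can be sketched in a few lines: cache recent virtual-page-to-frame translations so that most address translations avoid a slow page-table walk. The following toy model (a fully associative TLB with LRU replacement; the 4 KB page size and capacity are illustrative assumptions, not taken from the text) shows the hit/miss behaviour:

```python
from collections import OrderedDict

PAGE_SIZE = 4096  # assumed 4 KB pages

class TLB:
    """Toy fully associative TLB with LRU replacement (illustrative only)."""
    def __init__(self, capacity, page_table):
        self.capacity = capacity
        self.page_table = page_table   # full virtual-page -> frame map
        self.entries = OrderedDict()   # cached translations, LRU order
        self.hits = self.misses = 0

    def translate(self, vaddr):
        vpn, offset = divmod(vaddr, PAGE_SIZE)
        if vpn in self.entries:
            self.hits += 1
            self.entries.move_to_end(vpn)          # mark most recently used
        else:
            self.misses += 1                       # slow page-table walk
            if len(self.entries) == self.capacity:
                self.entries.popitem(last=False)   # evict least recently used
            self.entries[vpn] = self.page_table[vpn]
        return self.entries[vpn] * PAGE_SIZE + offset

page_table = {vpn: vpn + 100 for vpn in range(16)}   # made-up mapping
tlb = TLB(capacity=4, page_table=page_table)
for vaddr in [0, 8, 4096, 16, 4100]:   # repeated accesses to two pages
    tlb.translate(vaddr)
print(tlb.hits, tlb.misses)  # 3 2
```

Because programs access the same few pages repeatedly, even a small TLB turns most translations into fast hits, which is why it is described above as an address cache for frequently accessed entries.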
Interprocess Communication Primitives in POSIX/Linux
Published in Ivan Cibrario Bertolotti, Gabriele Manduchi, Real-Time Embedded Systems, 2017
Ivan Cibrario Bertolotti, Gabriele Manduchi
The improvement in execution speed due to multithreading becomes more evident when the program being executed by the threads performs I/O operations. In this case, the operating system is free to assign the processor to another thread when the current thread starts an I/O operation and must await its termination. For this reason, if the routines executed by the threads are I/O intensive, adding new threads still improves performance because it reduces the chance that the processor idles awaiting the termination of some I/O operation. Observe that, even if no I/O operation is executed by the thread code, on systems supporting memory paging there is a chance that the program blocks itself awaiting the completion of an I/O operation. With paging, pages of a process's active memory can be held in secondary memory (i.e., on disk) and are transferred (swapped in) to RAM whenever they are accessed by the program, possibly copying back (swapping out) other pages to make room for them. Paging allows handling a memory that is larger than the RAM installed in the computer, at the expense of additional I/O operations for transferring memory pages to and from the disk.
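The overlap of I/O waits described above can be demonstrated with a small sketch (here using POSIX-style threads via Python's `threading` module, with `time.sleep` standing in for a blocking I/O operation): eight threads that each "wait for I/O" for 0.1 s finish together in roughly 0.1 s of wall time, not 0.8 s, because the processor is handed to another thread whenever one blocks.

```python
import threading
import time

def fetch(results, i):
    time.sleep(0.1)     # stand-in for a blocking I/O operation
    results[i] = i * i  # "processed" result

def run_threaded(n):
    """Run n I/O-bound tasks concurrently, one thread each."""
    results = [None] * n
    threads = [threading.Thread(target=fetch, args=(results, i))
               for i in range(n)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()        # wait for all simulated I/O to complete
    return results

start = time.perf_counter()
run_threaded(8)
elapsed = time.perf_counter() - start
print(f"{elapsed:.2f}s")  # roughly 0.1s: the eight waits overlap
```

The same reasoning applies to the paging case: a thread that touches a swapped-out page blocks on the disk transfer exactly as if it had issued an explicit read, and other runnable threads can use the processor in the meantime.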
Challenges in Design, Data Placement, Migration and Power-Performance Trade-offs in DRAM-NVM-based Hybrid Memory Systems
Published in IETE Technical Review, 2023
Sadhana Rai, Basavaraj Talawar
Access Pattern Prediction aware LRU (APP-LRU) uses the history of accesses to make migration and placement decisions. In addition to the LRU list, it maintains two further lists to keep track of read-intensive and write-intensive pages; pages are grouped based on their read-write counts. When a page fault occurs, the page's history is checked to determine whether it is read-intensive or write-intensive; the page is then directed to DRAM if it is write-intensive, and to NVM otherwise. If the victim page resides in DRAM while the chosen page is read-intensive, the page at the head of the NVM group list is migrated to DRAM and the new page is allocated in NVM. If there is no history for the faulted page, it is considered to have no specific placement requirement. This technique reduced writes to NVM [6]. Double-LRU maintains two LRU lists, one in DRAM and the other in NVM. Counters keep track of the read-write accesses of pages residing in the NVM queue; when a page reaches the top of the NVM LRU list and its counter exceeds a threshold, the page is migrated to DRAM. On a page fault, pages are always loaded into DRAM irrespective of the type of request. This approach reduced power consumption by up to 79% compared to a DRAM-only system [3]. LRU-based algorithms require list manipulation: on every page access, the accessed page must be moved to the most recently used (MRU) position in the list, which the paging hardware cannot handle. While LRU-based techniques order pages by the recency of their references, CLOCK-based algorithms determine whether pages have been accessed recently using the information that the paging hardware maintains in the page table [62].
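The contrast drawn above is that CLOCK needs only the hardware-set referenced bit, not per-access list manipulation. A minimal sketch of the classic CLOCK (second-chance) algorithm, with the referenced bit modelled in software, looks like this:

```python
class ClockReplacer:
    """CLOCK (second-chance) replacement: approximates LRU using only the
    referenced bit that the paging hardware sets on each access."""
    def __init__(self, num_frames):
        self.frames = [None] * num_frames   # resident pages
        self.ref = [0] * num_frames         # hardware-set reference bits
        self.hand = 0                       # the clock hand
        self.where = {}                     # page -> frame index
        self.faults = 0

    def access(self, page):
        if page in self.where:
            self.ref[self.where[page]] = 1  # hardware sets the bit; no list moves
            return
        self.faults += 1
        while True:                         # sweep the hand until a victim is found
            if self.frames[self.hand] is None or self.ref[self.hand] == 0:
                victim = self.frames[self.hand]
                if victim is not None:
                    del self.where[victim]
                self.frames[self.hand] = page
                self.where[page] = self.hand
                self.ref[self.hand] = 1
                self.hand = (self.hand + 1) % len(self.frames)
                return
            self.ref[self.hand] = 0         # clear the bit: a second chance
            self.hand = (self.hand + 1) % len(self.frames)

clock = ClockReplacer(num_frames=3)
for page in [1, 2, 3, 2, 4, 1]:
    clock.access(page)
print(clock.faults)  # 5
```

On a hit, CLOCK merely sets a bit; only on a fault does the hand sweep the frames, clearing bits and evicting the first page whose bit is already clear. This is why CLOCK-based policies suit hybrid-memory page management where true LRU bookkeeping is too expensive.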
Cells to Switches Assignment in Cellular Mobile Networks Using Metaheuristics
Published in Applied Artificial Intelligence, 2019
The CSA problem was introduced by Merchant and Sengupta (1995), who presented a heuristic approach based on a greedy strategy (denoted H) while observing that optimal approaches fail even on relatively small problem instances. Later, Bhattacharjee, Saha, and Mukherjee (1999) proposed further CSA heuristics (H-II through H-VI); it was found that no single heuristic performed equally well in terms of both cost and execution time. Bhattacharjee, Saha, and Mukherjee (2000) solved the problem of balancing traffic (load) amongst switches when the cluster of cells to be connected to a switch is decided during the design of a personal communication service network. Saha, Mukherjee, and Bhattacharjee (2000) proposed a heuristic that was simpler and faster than earlier published approaches. Mandal, Saha, and Mahanti (2002) employed a block depth first search (BDFS) algorithm with an admissible heuristic to minimize the paging, updating, and physical infrastructure costs. The same authors in 2004 proposed another heuristic combining BDFS with iterative deepening A* (IDA*), which gave superior results compared to earlier published work in terms of both the quality of the solution obtained and the execution time needed to obtain the optimal solution. Approaches based upon metaheuristic optimization algorithms (MOA) (Chawla and Duhan 2014, 2015; Yang 2014) for resolving the CSA problem can also be found in the literature. MOA-based approaches to this problem include simulated annealing (Menon and Gupta 2004), tabu search (Pierre and Houéto 2002), memetic algorithms (Quintero and Pierre 2002), ant colony optimization (Shyu, Lin, and Hsiao 2004), and modified binary particle swarm optimization (Udgata et al. 2008). All these algorithms except the last considered the cabling cost and the handoff cost as the cost of assigning cells to switches.
In our approach, the switching cost is added to the total cost in addition to the cabling cost and the handoff cost. This paper experimentally demonstrates the application of three recently introduced MOAs, namely the flower pollination algorithm (FPA), hunting search (HuS), and the wolf search algorithm (WSA), to efficiently solve the CSA problem in cellular mobile networks.
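A toy version of such a cost model may help fix ideas. The sketch below is an illustrative assumption, not the paper's exact formulation: each cell pays a cabling cost to its assigned switch, a handoff cost is charged between every pair of cells assigned to different switches, and a fixed per-switch cost stands in for the switching cost; the tiny 3-cell, 2-switch instance is made up, and brute-force enumeration replaces the metaheuristics for this size.

```python
import itertools

def assignment_cost(assign, cabling, handoff, switch_cost):
    """Total cost of a cell-to-switch assignment (illustrative model):
    cabling cost of each cell to its switch, handoff cost between cells
    on different switches, plus a fixed cost per switch actually used."""
    n = len(assign)
    cost = sum(cabling[c][assign[c]] for c in range(n))
    for a, b in itertools.combinations(range(n), 2):
        if assign[a] != assign[b]:        # inter-switch handoffs are charged
            cost += handoff[a][b]
    cost += switch_cost * len(set(assign))
    return cost

cabling = [[1, 4], [2, 3], [5, 1]]              # cell x switch (made up)
handoff = [[0, 6, 1], [6, 0, 2], [1, 2, 0]]     # symmetric cell x cell (made up)
best = min((tuple(a) for a in itertools.product(range(2), repeat=3)),
           key=lambda a: assignment_cost(a, cabling, handoff, switch_cost=2))
print(best, assignment_cost(best, cabling, handoff, 2))  # (0, 0, 0) 10
```

With 2^3 = 8 assignments the optimum is found by enumeration; the combinatorial explosion of this search space for realistic instance sizes is precisely what motivates the metaheuristic approaches surveyed above.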
The job sequencing and tool switching problem: state-of-the-art literature review, classification, and trends
Published in International Journal of Production Research, 2019
The SSP appears in numerous industries (see, e.g. Shirazi and Frizelle 2001; Crama et al. 2007), and related problems exist in different fields. An analogy to the SSP in manufacturing industries can be found in the electronics industry in the sequencing of printed circuit boards (PCBs) (see, e.g. Ghrayeb, Phojanamongkolkij, and Finch 2003; Tzur and Altmann 2004; Raduly-Baka, Knuutila, and Nevalainen 2005; Hirvikorpi, Nevalainen, and Knuutila 2006), where different electronic components are to be mounted on the PCBs by component assembly machines. The machines hold a limited capacity of component feeders, so that component switches become necessary between assembling different types of PCBs. In this context, the predominant objective is to minimise the number of component switches. Some authors present problems in computer systems, such as caching and paging problems or k-server problems, that are similar to the tooling problem (see, e.g. Djellab, Djellab, and Gourgand 2000; Privault and Finke 2000; Ghiani, Grieco, and Guerriero 2007). In the latter problem, a set of k servers handles requests, both represented as vertices in a complete graph; for each request in a sequence, it must be decided which server to move to the requested vertex. As moving a server from one vertex to another incurs costs, the objective is generally to minimise the total cost of serving a sequence of requests, as described by Privault and Finke (2000). The minimisation of the tool switching instants, also known as the ‘machine stop minimisation problem’, is a distantly related problem to the SSP, and is discussed by, among others, Tang and Denardo (1988b); Konak and Kulturel-Konak (2007); Konak, Kulturel-Konak, and Azizoğlu (2008); Adjiashvili, Bosio, and Zemmer (2015) and Furrer and Mütze (2017). It is mentioned here because some authors consider the SSP and the machine stop problem simultaneously.
For the machine stop problem, a tool switch is counted any time a machine has to be stopped to remove or insert a tool, regardless of how many tools need to be changed. Note that this paper’s attention is restricted to the SSP; other combinatorial problems connected with tool management are therefore not considered. Similar problems as mentioned above have been included only if they are directly linked to the SSP. For further details, the interested reader is referred to Gray, Seidmann, and Stecke (1993), as well as Crama and van de Klundert (1999). The next section presents the research approach and conditions of the literature search.
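For a fixed job sequence, the number of tool switches can be evaluated with the classical "keep tools needed soonest" (KTNS) rule associated with Tang and Denardo: when a magazine slot must be freed, evict the resident tool whose next use lies furthest in the future. The sketch below is only this evaluation subroutine, not the sequencing optimisation that the SSP itself asks for; the job/tool instance is made up, and every tool load is counted, including the initial magazine fill.

```python
def tool_switches(jobs, capacity):
    """Count tool loads for a fixed job sequence under the KTNS policy:
    when a slot is needed, evict the resident tool whose next use is
    furthest in the future (initial loads are counted too)."""
    magazine = set()
    loads = 0

    def next_use(tool, start):
        for j in range(start, len(jobs)):
            if tool in jobs[j]:
                return j
        return len(jobs)                 # tool is never needed again

    for i, job in enumerate(jobs):
        for tool in job:
            if tool in magazine:
                continue
            if len(magazine) == capacity:
                # KTNS eviction: never evict a tool the current job needs
                victim = max((t for t in magazine if t not in job),
                             key=lambda t: next_use(t, i + 1))
                magazine.remove(victim)
            magazine.add(tool)
            loads += 1                   # one switch per tool loaded
    return loads

jobs = [{1, 2}, {2, 3}, {1, 3}, {1, 2}]  # tools required by each job (made up)
print(tool_switches(jobs, capacity=2))   # 5 loads, including the 2 initial ones
```

Solving the SSP then means searching over job sequences for one that minimises this count, which is where the exact and heuristic methods reviewed in this survey come in; the machine stop variant would instead count stops, charging one per job at which any loading occurs.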