War of Control Hijacking
Published in Uzzal Sharma, Parmanand Astya, Anupam Baliyan, Salah-ddine Krit, Vishal Jain, Mohammad Zubair Khan, Advancing Computational Intelligence Techniques for Security Systems Design, 2023
Ragini Karwayun, Monika Sainger
To perform buffer-overflow attacks, an attacker needs to be aware of the memory layout of the program under attack. Finding the memory layout is a complex trial-and-error procedure. After this, the attacker needs to find a suitable place to inject the carefully designed malicious payload. ASLR works in conjunction with virtual memory management to randomize the memory address space. The addresses of critical memory components like the stack, heap, and dynamic libraries change every time the program is executed. Because the addresses change on each run, attackers cannot rely on a fixed target address. Initially, applications had to be compiled with ASLR support; this has since become the default. Windows 7 permitted 8 bits of randomness for DLLs, mapping them on 64 KB boundaries within a 16 MB region. This 8-bit randomness yielded 256 possible address space locations, so attackers had only a 1 in 256 chance of hitting the correct location to execute code. Windows 8 allowed 24 bits of randomness on 64-bit processors, reducing the odds to 1 in 2^24 (about 1 in 16.8 million).
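The entropy figures above translate directly into brute-force odds. A minimal sketch (the function name is illustrative, not from the chapter) of how n bits of ASLR entropy determine an attacker's per-guess success probability:

```python
# Sketch: per-attempt probability that an attacker guesses the randomized
# base address when ASLR provides `entropy_bits` bits of randomness.
# The Windows 7 (8-bit) and Windows 8 (24-bit) figures follow the text.

def aslr_guess_probability(entropy_bits: int) -> float:
    """Probability that a single guess hits the randomized base address."""
    return 1 / (2 ** entropy_bits)

# Windows 7: 8 bits of DLL randomness -> 256 possible placements
win7 = aslr_guess_probability(8)

# Windows 8 (64-bit): 24 bits -> 16,777,216 possible placements
win8 = aslr_guess_probability(24)

print(f"Windows 7: 1 in {int(1 / win7)}")   # 1 in 256
print(f"Windows 8: 1 in {int(1 / win8)}")   # 1 in 16777216
```

The 64 KB granularity within a 16 MB region mentioned in the text is where the 8-bit figure comes from: 16 MB / 64 KB = 256 candidate slots.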
Memory Organisation
Published in Pranabananda Chakraborty, Computer Organisation and Architecture, 2020
Each word in the physical memory is identified by a unique physical address, and all such memory words in the main memory form a physical address space or memory space. In systems with virtual memory, the address generated during the compilation of a program and subsequently used by the system is called the virtual address, and the set of such addresses forms a virtual address space, or simply address space. Users are given the impression that this entire address space is available for their use. During execution, the processor issues virtual addresses, but only physical memory addresses can be used to access storage. That is why only currently executing programs, or parts of them, are brought from virtual memory into the smaller physical memory, which is then shared among those programs. The virtual addresses issued by an executing program must therefore be translated into the corresponding physical addresses of the locations where the program (or part of it) resides in main memory. The address translation mechanisms and management policies used are often affected by the virtual memory model, by the organisation of the disk arrays, and certainly also by that of the main memory. The virtual memory approach also simplifies loading of programs for execution and permits the necessary relocation of code and data, allowing the same program to run at any location in physical memory with appropriate address mapping, as discussed later.
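The translation step described above can be sketched in a few lines. This is a minimal, hypothetical model (the page size, table contents, and names are illustrative, not from the book): a virtual address is split into a page number and an offset, the page number is looked up in a page table, and the offset is appended to the resulting frame's base address.

```python
# Minimal sketch of virtual-to-physical address translation via a page table.
# All values here are illustrative assumptions.

PAGE_SIZE = 4096  # 4 KB pages, a common choice

# Hypothetical page table: virtual page number -> physical frame number.
page_table = {0: 5, 1: 2, 2: 7}

def translate(virtual_address: int) -> int:
    """Translate a virtual address into the corresponding physical address."""
    page_number = virtual_address // PAGE_SIZE
    offset = virtual_address % PAGE_SIZE
    if page_number not in page_table:
        # In a real system this would trigger a page fault and a disk fetch.
        raise LookupError("page fault: page not resident in main memory")
    frame_number = page_table[page_number]
    return frame_number * PAGE_SIZE + offset

# Virtual address 4100 lies in page 1 at offset 4; page 1 maps to frame 2,
# so the physical address is 2 * 4096 + 4 = 8196.
print(translate(4100))  # 8196
```

Because only the page table changes between runs, the same program can be placed at any physical location, which is exactly the relocation property the passage describes.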
Computer memory systems
Published in Joseph D. Dumas, Computer Architecture, 2016
In a system using virtual memory, each program has its own virtual address space (sometimes referred to as a logical address space) within which all memory references are contained. This space is not unlimited (no memory system using addresses with a finite number of bits can provide an infinite amount of storage), but the size of the virtual addresses is chosen such that the address space provided exceeds the demands of any application likely to be run on the system. In the past, 32-bit virtual addressing (which provided a virtual address space of 4 GB) was common. More recently, as applications have gotten larger and a number of systems have approached or exceeded 4 GB of RAM, larger virtual address spaces have become common. A 48-bit address allows a program 256 terabytes (TB) of virtual space, and a 64-bit address provides for a currently unimaginable 16 exabytes (EB). For the foreseeable future, 64-bit virtual addressing should be adequate (remember, however, that this was once said of 16- and 32-bit addressing as well). The purpose of this large address space is to give the programmer (and the compiler) the illusion of a huge main memory exclusively “owned” by his or her program and thus free the programmer from the burden of memory management.
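The sizes quoted above follow directly from the address width: an n-bit address reaches 2^n bytes of virtual space. A quick arithmetic check (function name is illustrative):

```python
# Verify the virtual address space sizes quoted in the text:
# an n-bit address can name 2**n distinct byte locations.

def address_space_bytes(bits: int) -> int:
    return 2 ** bits

GB = 2 ** 30  # gigabyte (binary)
TB = 2 ** 40  # terabyte
EB = 2 ** 60  # exabyte

print(address_space_bytes(32) // GB, "GB")  # 4 GB
print(address_space_bytes(48) // TB, "TB")  # 256 TB
print(address_space_bytes(64) // EB, "EB")  # 16 EB
```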
DAPR-tree: a distributed spatial data indexing scheme with data access patterns to support Digital Earth initiatives
Published in International Journal of Digital Earth, 2020
Jizhe Xia, Sicheng Huang, Shaobiao Zhang, Xiaoming Li, Jianrong Lyu, Wenqun Xiu, Wei Tu
The advancement of distributed computing offers an emerging computing paradigm for managing spatial Big Data. A substantial amount of research has been conducted to exploit parallel data indexing. The Parallel R-tree extended the classic R-tree index capabilities from single-disk to multi-disk environments (Kamel and Faloutsos 1992). With the support of multiple disks and parallel algorithms, concurrent I/O performance is significantly improved during the data retrieval process. Hoel and Samet (1994) explored strategies for using multi-processor environments to support a variety of operations such as data structure building, polygonization, and joins. To extend data indexing capabilities from single-workstation to multi-workstation settings, Wang et al. (1999) designed a distributed R-tree in a distributed shared virtual memory (DSVM) environment. This DSVM-based R-tree can utilize both distributed processors and disks and facilitates data management with a global memory address space for data retrieval. The Master-Client R-tree (Schnitzer and Leutenegger 1999) was proposed to index spatial data in a shared-nothing parallel system. Compared to single-workstation and DSVM indexes, the shared-nothing index is more scalable and cost-effective. The shared-nothing computing environment can be built on a large number of low-cost commodity computers, each of which has its own processors, memory, and disk. The Master-Client R-tree distributes data across computer clients and utilizes a master server to manage the entire index. Nam and Sussman (2005) improved this Master-Client index scheme by employing a replication protocol to improve indexing scalability. Wan et al. (2019) designed a distributed data index for internet of things (IoT) environments, using a Voronoi-based approach to provide more efficient routing in distributed IoT applications.
History of personal computers in Japan
Published in International Journal of Parallel, Emergent and Distributed Systems, 2020
Since the slots are arranged in address spaces separated by the paging technique, the addresses of memories, peripheral devices, and other components placed in multiple slots do not conflict. On the ISA bus, which was mainstream in IBM-compatible 16-bit personal computers at the time, it was necessary to manipulate small DIP switches installed on each card so that the addresses of the connected cards did not collide. No such procedure was necessary on MSX.