Mobile Internet Devices and the Cloud
Published in John W. Rittinghouse, James F. Ransome, Cloud Computing, 2017
Kernel-based Virtual Machine (KVM) is open source software that provides a full virtualization solution for Linux on x86 hardware containing virtualization extensions (Intel VT or AMD-V). KVM consists of a kernel module, kvm.ko, which provides the core virtualization infrastructure, and a processor-specific module, kvm-intel.ko or kvm-amd.ko, depending on the CPU manufacturer (Intel or AMD). KVM also requires a modified QEMU,7 although work is underway to get the required changes upstream. Multiple virtual machines running unmodified Linux or Windows images can be run using KVM. A wide variety of guest operating systems work with KVM, including many versions of Linux, BSD, Solaris, Windows, Haiku, ReactOS, and the AROS Research Operating System. Each virtual machine has private virtualized hardware: a network card, disk, graphics adapter, etc. The kernel component of KVM has been included in mainline Linux since kernel version 2.6.20.
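The choice between kvm-intel.ko and kvm-amd.ko follows from the CPU's advertised virtualization extension. As a rough sketch (the helper name is ours, not part of KVM), the relevant flag can be read from /proc/cpuinfo on a Linux host:

```python
def detect_virt_extension(cpuinfo_text):
    """Return the hardware virtualization extension advertised in the CPU
    flags: 'Intel VT' (vmx flag), 'AMD-V' (svm flag), or None."""
    flags = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
    if "vmx" in flags:
        return "Intel VT"   # host would load kvm-intel.ko
    if "svm" in flags:
        return "AMD-V"      # host would load kvm-amd.ko
    return None

if __name__ == "__main__":
    # On a Linux host, /proc/cpuinfo holds the live flags.
    try:
        with open("/proc/cpuinfo") as f:
            print(detect_virt_extension(f.read()))
    except FileNotFoundError:
        print("not a Linux host")
```

If neither flag is present, KVM's full virtualization cannot be used on that machine.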
High Performance Remote Sensing Data Processing in a Cloud Computing Environment
Published in Lizhe Wang, Jining Yan, Yan Ma, Cloud Computing in Remote Sensing, 2019
The Cloud Framework employs the popular and successful open source project OpenStack to form the basic cloud architecture. However, OpenStack mostly offers only virtual machines (VMs) created through virtualization technologies; these VMs are run and managed by hypervisors such as KVM or Xen. Despite their excellent scalability, the performance penalty of virtualization is inevitable. To support an HPC cluster environment in the Cloud, pipsCloud adopts a bare-metal machine provisioning approach that extends OpenStack with the bare-metal provisioning tool xCAT. In this way, both VMs and bare-metal machines can be scheduled by nova-scheduler and allocated to users according to application needs.
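The scheduling decision described above can be sketched as follows. This is a hypothetical illustration of the routing logic, not pipsCloud's or OpenStack's actual code; the names (`Request`, `pick_pool`, the pool table) are ours:

```python
from dataclasses import dataclass

@dataclass
class Request:
    needs_hpc: bool  # e.g., a tightly coupled, latency-sensitive workload
    cores: int

# Assumed capacity figures, purely for illustration.
POOLS = {
    "baremetal": {"free_cores": 64},   # xCAT-provisioned physical nodes
    "vm":        {"free_cores": 256},  # KVM/Xen-backed virtual machines
}

def pick_pool(req):
    """Prefer bare metal for HPC jobs (avoiding the virtualization
    performance penalty); otherwise, or when the bare-metal pool is
    exhausted, fall back to VMs."""
    if req.needs_hpc and POOLS["baremetal"]["free_cores"] >= req.cores:
        return "baremetal"
    return "vm"
```

The key point is that a single scheduler sees both resource pools, so application needs, not the provisioning mechanism, drive placement.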
Execution Environment
Published in Hamidreza Ahmadian, Roman Obermaisser, Jon Perez, Distributed Real-Time Architecture for Mixed-Criticality Systems, 2018
A. Crespo, P. Balbastre, K. Chappuis, J. Coronel, J. Fanguède, P. Lucas, J. Perez
The Linux KVM is an established system virtualization solution, implemented as a driver running within Linux, which effectively turns the Linux kernel into a hypervisor. This approach takes advantage of the existing infrastructure within the Linux kernel, including the scheduler and memory management, which keeps the KVM code base very small compared to other hypervisors; this has allowed KVM to evolve at an impressive pace and become one of the most well-regarded and feature-complete virtualization solutions. KVM works by exposing a simple interface to user-space, through which a regular process can request to be turned into a virtual machine. Usually the Quick Emulator (QEMU) is used on the user-space side to emulate I/O devices, with KVM handling vCPUs and memory management. Through this interface, regular Linux processes are turned into virtual machines, with threads acting as vCPUs. KVM handles the context switching of the processor when the process of a virtual machine is scheduled by Linux, using hardware virtualization support to virtualize the processor and the memory. To virtualize I/O devices, such as network interfaces and storage, an interface to user-space exists so that they can be emulated by the application setting up the virtual machine (usually the QEMU emulator). KVM thus exploits the Virtualization Extensions (e.g., ARM-VE) to execute a guest's instructions directly on the host processor and to provide VMs with an execution environment almost identical to the real hardware. Each guest runs in a different instance of this execution environment, thus isolating the guest operating system; this isolation has been used for security purposes in many scientific works [228]. In the ARM architecture, the KVM isolation covers the CPU, memory, interrupts, and timers [221].
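The user-space interface mentioned above is the /dev/kvm character device, driven by ioctls (this is how QEMU talks to KVM). A minimal sketch of the first step, querying the API version, can be written with Python's standard library; the ioctl numbers below come from `<linux/kvm.h>` (`_IO(0xAE, nr)` encodes to `(0xAE << 8) | nr`), and the helper name is ours:

```python
import fcntl
import os

# ioctl request numbers from <linux/kvm.h>
KVM_GET_API_VERSION = 0xAE00  # _IO(KVMIO, 0x00)
KVM_CREATE_VM       = 0xAE01  # _IO(KVMIO, 0x01), next step in a real launcher

def kvm_api_version():
    """Open /dev/kvm and query the KVM API version (stable at 12).
    Returns None when KVM is unavailable (no module loaded, no hardware
    support, or running inside a container without /dev/kvm)."""
    try:
        fd = os.open("/dev/kvm", os.O_RDWR)
    except OSError:
        return None
    try:
        return fcntl.ioctl(fd, KVM_GET_API_VERSION, 0)
    finally:
        os.close(fd)
```

A full launcher such as QEMU would continue from here: KVM_CREATE_VM yields a VM file descriptor, guest memory is mmap'd and registered, and one vCPU file descriptor is created per thread that will act as a vCPU.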
Proposal and evaluation of adjusting resource amount for automatically offloaded applications
Published in Cogent Engineering, 2022
Regarding GPUs, I use two boards: an NVIDIA Tesla T4 (CUDA cores: 2560, memory: 16 GB GDDR6) and an NVIDIA Quadro P4000 (CUDA cores: 1792, memory: 8 GB GDDR5). I use CUDA Toolkit 10.1 and the PGI compiler 19.10 for GPU control. NVIDIA vGPU Virtual Compute Server virtualizes the GPU resources; using vGPU, the resources of one Tesla T4 board can be divided into 1, 2, or 4 parts. The Kernel-based Virtual Machine (KVM) of RHEL 7.9 is used for CPU virtualization. A VM of standard size has 2 cores and 16 GB RAM; half size (1 core), standard size (2 cores), and double size (4 cores) can be selected. For example, when the CPU and GPU resources are both set to standard size, our implementation virtualizes the CPU and GPU resources and links a 2-core CPU with one Tesla T4 board. The minimum unit sizes are a 1-core CPU and 1/4 of a GPU board. Figure 3 shows the experimental environment and specifications. Here, the application code used by the user is specified from the client notebook PC, tuned using the bare-metal verification machine, and then deployed to the virtual running environment for actual use.
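The size menu above can be sketched as a small allocation table. This is an illustrative helper, not the paper's implementation; the paper only states RAM (16 GB) for the standard size, so the RAM values for half and double sizes are our assumption (scaled with core count):

```python
# CPU sizes relative to the standard 2-core, 16 GB VM described in the text.
CPU_SIZES = {
    "half":     {"cores": 1, "ram_gb": 8},   # RAM scaling is an assumption
    "standard": {"cores": 2, "ram_gb": 16},  # the paper's standard-size VM
    "double":   {"cores": 4, "ram_gb": 32},  # RAM scaling is an assumption
}

def allocate(cpu_size, gpu_quarters):
    """Return the resource bundle for one request.
    gpu_quarters counts the minimum GPU unit, 1/4 of a Tesla T4 board,
    so 4 quarters corresponds to one full board."""
    if cpu_size not in CPU_SIZES or not 1 <= gpu_quarters <= 4:
        raise ValueError("unsupported size")
    bundle = dict(CPU_SIZES[cpu_size])
    bundle["gpu_boards"] = gpu_quarters / 4
    return bundle
```

For the standard/standard example in the text, `allocate("standard", 4)` yields 2 cores, 16 GB RAM, and one full Tesla T4 board.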