Role of Open Source, Standards, and Public Clouds in Autonomous Networks
Published in Mazin Gilbert, Artificial Intelligence for Autonomous Networks, 2018
As more applications subscribing to the principles of CN are developed and deployed, there will be an even greater need to meet application performance requirements. Improvements to the CN container network infrastructure that address those requirements include the following. First, move container networking functions from the kernel to user space. Doing so eliminates system-call overhead, removes the dependency on the kernel networking community to implement new features, makes it straightforward to innovate and add features without touching the kernel, and improves availability because user-space failures will not bring the node down. Figure 6.6 illustrates the differences between the two approaches: on the left, the network stack is implemented in the kernel; on the right, the network functions provided by FD.io/VPP reside in user space and bypass the kernel altogether. Second, build network functions as cloud-native network functions (CNFs). CNFs are VNFs implemented as containerized microservices. The same tooling, orchestration, and management systems used for CN application life cycles can be used for CNFs. In essence, CNFs become first-class citizens in a CN application service topology.
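The kernel-bypass argument can be sketched with a toy cost model. This is only an illustration: the cost constants, function names, and the deque standing in for a shared-memory ring are all invented for this sketch, not part of FD.io/VPP.

```python
from collections import deque

SYSCALL_COST_US = 2.0   # assumed fixed cost of crossing into the kernel
COPY_COST_US = 0.5      # assumed cost of copying a packet across the boundary

def kernel_path(packets):
    """Kernel stack: each packet pays a system call plus a kernel->user copy."""
    overhead = 0.0
    for _ in packets:
        overhead += SYSCALL_COST_US + COPY_COST_US
    return overhead

def user_space_path(packets):
    """Poll-mode user-space driver: the application polls a shared ring
    directly, so no per-packet system call or copy is paid."""
    ring = deque(packets)        # stands in for a shared-memory RX ring
    processed = 0
    while ring:                  # busy-poll the ring until it is drained
        ring.popleft()
        processed += 1
    return processed

pkts = [b"pkt"] * 1000
print(kernel_path(pkts))        # 2500.0 us of boundary-crossing overhead
print(user_space_path(pkts))    # 1000 packets, zero kernel crossings
```

The trade-off the model hides is that poll-mode drivers burn a CPU core even when the ring is empty, which is why they suit dedicated packet-processing nodes.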
Big Graph Analytics: Techniques, Tools, Challenges, and Applications
Published in Mohiuddin Ahmed, Al-Sakib Khan Pathan, Data Analytics, 2018
Dhananjay Kumar Singh, Pijush Kanti Dutta Pramanik, Prasenjit Choudhury
Analyzing massive-scale graphs has traditionally required a cluster of machines whose aggregate memory exceeds the graph size. This requirement can be relaxed by utilizing commodity solid-state drives (SSDs) with minimal performance loss. One of the well-known frameworks that follows this approach is FlashGraph. FlashGraph: FlashGraph [25] is a scalable, semi-external-memory graph-processing engine built on top of a user-space SSD file system; it stores vertex state in memory and edge lists on the SSDs. To realize both high IOPS (input/output operations per second) and lightweight caching for SSD arrays on nonuniform memory and I/O systems, FlashGraph uses SAFS (set-associative file system), a user-space file system. FlashGraph reduces data access by selectively fetching from the SSDs only the edge lists required by the graph algorithm. It conservatively merges I/O requests to reduce CPU consumption and increase I/O throughput. To express a wide range of graph algorithms and their optimizations, FlashGraph provides a concise and flexible programming interface.
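The semi-external-memory split can be sketched in a few lines: vertex state stays in memory while edge lists live in a byte store standing in for SSD-resident files, and each iteration reads only the edge lists it needs. The serialization format, function names, and the in-memory `BytesIO` store are invented for this sketch and do not reflect FlashGraph's actual layout.

```python
import io
import struct

def build_edge_store(adjacency):
    """Serialize per-vertex edge lists; return (store, index of offsets)."""
    store, index = io.BytesIO(), {}
    for v, neighbours in adjacency.items():
        index[v] = (store.tell(), len(neighbours))   # offset and degree
        for n in neighbours:
            store.write(struct.pack("<I", n))
    return store, index

def bfs_level(store, index, frontier, visited):
    """Expand one BFS level, reading only the frontier's edge lists."""
    nxt = set()
    # Visit vertices in offset order so reads are roughly sequential,
    # mimicking FlashGraph's merging of adjacent I/O requests.
    for v in sorted(frontier, key=lambda v: index[v][0]):
        off, deg = index[v]
        store.seek(off)
        data = store.read(4 * deg)                   # fetch one edge list
        for (n,) in struct.iter_unpack("<I", data):
            if n not in visited:
                visited.add(n)
                nxt.add(n)
    return nxt

adj = {0: [1, 2], 1: [3], 2: [3], 3: []}
store, index = build_edge_store(adj)
visited = {0}
frontier = bfs_level(store, index, {0}, visited)
print(sorted(frontier))  # [1, 2]
```

Only the in-memory index (a few bytes per vertex) and the frontier's edge lists are touched per iteration, which is the property that lets the graph exceed RAM.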
Message Forwarding Strategies
Published in Yufeng Wang, Athanasios V. Vasilakos, Qun Jin, Hongbo Zhu, Device-to-Device based Proximity Service, 2017
Yufeng Wang, Athanasios V. Vasilakos, Qun Jin, Hongbo Zhu
Haggle itself runs as a user-space process with a main thread in which a kernel and a set of managers run. The kernel implements a central event queue, while managers divide responsibility across areas such as security, node management, content dispatching, and integrity. Managers create and consume events and may run tasks in separate threads when they need to do work that requires extended processing. This may include sending and receiving data objects, computing checksums, doing neighbor discovery, and so forth. Owing to the modularity of the design, managers and task modules can be added with little effort, which makes it easy to extend Haggle with extra functionality.
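The kernel-and-managers pattern can be sketched as a central event queue with subscribing callbacks. The class, event names, and manager behaviour here are invented for illustration and are not Haggle's actual API.

```python
import queue

class Kernel:
    """Central event queue; managers register handlers per event type."""
    def __init__(self):
        self.events = queue.Queue()
        self.handlers = {}          # event type -> list of manager callbacks

    def register(self, event_type, handler):
        self.handlers.setdefault(event_type, []).append(handler)

    def post(self, event_type, payload=None):
        self.events.put((event_type, payload))

    def run(self):
        """Dispatch until the queue drains (a real kernel loops forever)."""
        while not self.events.empty():
            etype, payload = self.events.get()
            for handler in self.handlers.get(etype, []):
                handler(payload)

log = []
kernel = Kernel()
# A "node manager" reacts to neighbour discovery; a "content manager"
# reacts to a follow-up event posted by another handler.
kernel.register("neighbour_found", lambda n: log.append(f"node: saw {n}"))
kernel.register("neighbour_found", lambda n: kernel.post("send_object", n))
kernel.register("send_object", lambda n: log.append(f"content: sent to {n}"))

kernel.post("neighbour_found", "peer-A")
kernel.run()
print(log)  # ['node: saw peer-A', 'content: sent to peer-A']
```

In the real system, long-running work (checksums, transfers) would be pushed to separate task threads rather than run inline in the dispatch loop.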
Analyzing execution path non-determinism of the Linux kernel in different scenarios
Published in Connection Science, 2023
Yucong Chen, Xianzhi Tang, Shuaixin Xu, Fangfang Zhu, Qingguo Zhou, Tien-Hsiung Weng
This section introduces the basic concepts that help understand the performed study. We refer to a path as the series of kernel functions executed, starting with a specific system call invoked from a user-space application. Accordingly, we define path non-determinism as the exhibition of different behaviour in the function executions of the same application with the same inputs, i.e. the possibility of following different execution paths with the same inputs. A system call can follow different execution paths; each distinct execution path is called a unique path, and the number of unique paths represents how many execution paths a system call can follow. We refer to shared paths as those that happen to be executed identically in different scenarios. There is always a most frequent path with the highest chance of appearing in execution, which we refer to as the common path. There are also some paths with a low probability of appearing, which we refer to as rare paths.
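The definitions above map naturally onto frequency counting over recorded traces. The traces, the 10% rarity threshold, and the function name below are invented for this sketch; the paper's actual methodology may differ.

```python
from collections import Counter

def classify_paths(traces, rare_fraction=0.1):
    """traces: list of paths, each a tuple of kernel function names."""
    counts = Counter(traces)
    unique_paths = set(counts)                    # distinct paths observed
    common_path = counts.most_common(1)[0][0]     # highest-frequency path
    rare_paths = {p for p, c in counts.items()
                  if c / len(traces) <= rare_fraction}
    return unique_paths, common_path, rare_paths

# One scenario: a read() syscall recorded 10 times, with two deviations.
traces = [("read", "vfs_read", "ext4_read")] * 8 \
       + [("read", "vfs_read", "generic_read")] \
       + [("read", "vfs_read", "ext4_read", "page_fault")]

unique, common, rare = classify_paths(traces)
print(len(unique))   # 3 unique paths
print(common)        # ('read', 'vfs_read', 'ext4_read')
print(len(rare))     # 2 rare paths

# Shared paths: those appearing in both this scenario and another one.
scenario_b = {("read", "vfs_read", "ext4_read")}
print(unique & scenario_b)
```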
Exploration for Software Mitigation to Spectre Attacks of Poisoning Indirect Branches
Published in IETE Technical Review, 2018
Baozi Chen, Qingbo Wu, Yusong Tan, Liu Yang, Peng Zou
A previous report [31] shows that networking-related CPU overheads of a kernel-based TCP stack can reach 40% due to application context switching. According to the benchmark results, there is a significant impact on network I/O but negligible regression on filesystem and storage, because in-kernel protocol processing uses indirect branches more heavily than the storage stack does. Userspace network stacks have been proposed to improve cache performance on multi-core systems [32] while avoiding extra data copies and boundary crossings. Since Retpoline can be applied to the kernel while the userspace network driver continues to execute indirect branches speculatively as usual, performance can benefit from a userspace network stack. Frameworks such as Netmap provide efficient packet reception and transmission mechanisms to and from user space, bypassing kernel-stack packet processing. These frameworks reduce or remove various packet-processing costs such as per-packet dynamic memory allocations, system-call overheads, and memory copies to userspace.
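One of the listed costs, per-packet dynamic memory allocation, is avoided by preallocating buffers and handing them over by index, in the spirit of Netmap's rings. The class and names below are invented for illustration and are not Netmap's API.

```python
class BufferRing:
    """Fixed pool of preallocated packet buffers, recycled instead of freed."""
    def __init__(self, slots=4, buf_size=2048):
        self.bufs = [bytearray(buf_size) for _ in range(slots)]
        self.free = list(range(slots))    # indices of reusable buffers
        self.allocations = slots          # all allocation happens up front

    def rx(self, payload):
        """'Receive' a packet into a preallocated slot; no new allocation."""
        slot = self.free.pop()
        self.bufs[slot][:len(payload)] = payload
        return slot                       # hand over the index, not a copy

    def release(self, slot):
        self.free.append(slot)            # recycle rather than free

ring = BufferRing()
for _ in range(100):                      # hot loop: 100 packets, 4 buffers
    slot = ring.rx(b"payload")
    ring.release(slot)
print(ring.allocations)  # 4 -- allocations did not grow with packet count
```

In a real zero-copy framework these buffers live in memory shared with the NIC, so releasing a slot also returns it to the hardware receive ring.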
An architecture for synchronising cloud file storage and organisation repositories
Published in International Journal of Parallel, Emergent and Distributed Systems, 2019
Gil Andriani, Eduardo Godoy, Guilherme Koslovski, Rafael Obelheiro, Mauricio Pillon
The Cloud4NetOrg storage module is responsible for managing the file system as well as for performing input and output operations. The module combines the virtual file system (VFS) with FUSE [48] to create a custom file system in user space with minimal kernel involvement. In short, the Cloud4NetOrg storage module consists of: (i) a local cache space; (ii) an access library, which provides functions to retrieve, read, write, and remove files; and (iii) a file system, which exports to the operating system a structure of directories and files. The export operation is analogous to the directory structure control performed by traditional sync clients, from the perspective of both functionality and compatibility with running applications.
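The three pieces of the storage module can be sketched as a small class over a cache directory. A real implementation would export piece (iii) through FUSE callbacks; the class and method names here are invented for this sketch.

```python
import os
import tempfile

class StorageModule:
    """Sketch of the storage module: cache + access library + exported tree."""
    def __init__(self, cache_dir):
        self.cache = cache_dir            # (i) local cache space

    # (ii) access library
    def write(self, name, data):
        with open(os.path.join(self.cache, name), "wb") as f:
            f.write(data)

    def read(self, name):
        with open(os.path.join(self.cache, name), "rb") as f:
            return f.read()

    def remove(self, name):
        os.remove(os.path.join(self.cache, name))

    # (iii) exported directory structure (would back FUSE readdir)
    def listdir(self):
        return sorted(os.listdir(self.cache))

with tempfile.TemporaryDirectory() as d:
    fs = StorageModule(d)
    fs.write("notes.txt", b"hello")
    print(fs.read("notes.txt"))   # b'hello'
    print(fs.listdir())           # ['notes.txt']
    fs.remove("notes.txt")
    print(fs.listdir())           # []
```

Because the cache is an ordinary directory, running applications see normal files, which is the compatibility property the excerpt attributes to the export operation.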