The Future of Network Energy
Published in Marcus K. Weldon, The Future X Network, 2018
A classical, hierarchical intra-data-center system architecture can be split into three parts: the racks of physical servers (disaggregated or not) and their top-of-rack (ToR) switches; the interconnection layer, including some aggregation stages; and the load balancers interconnected to a border router. Since most of the energy is consumed in the racks of servers (up to 80 percent of the total energy consumption of the IT equipment), maximizing the utilization of the racks of servers is a critical objective. Figure 8 illustrates a future architecture using low-cost wavelength division multiplexing (WDM) approaches and modular photonic cross-connects (PXC) (discussed in chapter 4, The future of wide area networks). This architecture optimizes energy utilization and lowers the energy cost per bit through inter-server defragmentation, smart placement of workloads, regrouping of data processing to enable server sleep-mode strategies, and establishment of optical bypass to manage elephant flows and preserve QoS on the interconnection layer.
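The regrouping strategy described above can be sketched as a simple consolidation heuristic: pack workloads onto as few servers as possible so that the remaining servers can enter sleep mode. This is an illustrative, single-resource sketch under assumed normalized capacities; the function name and workload values are hypothetical, not taken from the chapter.

```python
def consolidate(workloads, capacity, num_servers):
    """Greedily pack workloads onto as few servers as possible so the
    rest of the fleet can sleep (best-fit decreasing, single resource)."""
    loads = [0.0] * num_servers
    for w in sorted(workloads, reverse=True):  # place largest demands first
        # best fit: the most-loaded server that still has room
        candidates = [i for i in range(num_servers) if loads[i] + w <= capacity]
        if not candidates:
            raise ValueError("insufficient total capacity")
        target = max(candidates, key=lambda i: loads[i])
        loads[target] += w
    active = sum(1 for load in loads if load > 0)
    return loads, active

# Hypothetical normalized workloads on a 4-server rack
loads, active = consolidate([0.3, 0.5, 0.2, 0.4, 0.1], capacity=1.0, num_servers=4)
# `active` servers stay powered; the other num_servers - active can sleep
```

The same idea generalizes to multiple resources (CPU, memory, network), at the cost of a more involved fitness test per server.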
Categorization of Virtual Machine in Cloud SDN Environment Using ELM – A Discriminative Classifier
Published in Durgesh Kumar Mishra, Nilanjan Dey, Bharat Singh Deora, Amit Joshi, ICT for Competitive Strategies, 2020
The problem of Virtual Machine (VM) placement has been viewed from different perspectives, such as bin packing and linear programming, and can be solved using many algorithms, some of which are discussed in Section 2. Jungmin Son et al. (Son. J, 2018) proposed the Priority-Aware Virtual Machine Allocation (PAVA) and Bandwidth Allocation (BWA) algorithms for placing high-priority VMs on closely connected hosts to avoid network congestion. Applications are categorized as critical or normal, along with a VM specification and a flow specification. The VM specification includes the number of processing cores and the processing capacity of each core; the flow specification includes the bandwidth requirements between the source and destination VMs. PAVA aims to place the critical VMs on the single host group, or multiple host groups, with the closest proximity to avoid network congestion. BWA makes use of the SDN controller, which guarantees bandwidth by configuring priority queues in the switches based on the First Fit Decreasing (FFD) algorithm. Thuan Duong-Ba et al. (Duong-Ba, 2018) proposed a Multi-level Joint VM Placement and Migration (MJPM) algorithm that aims to minimize resource usage and power consumption in a data centre. The authors focus mainly on the energy consumed by the hosts and by networking elements such as data links and switches. The virtual machines are categorized into running VMs (for which migration decisions are to be taken) and new VMs (for which placement decisions are to be taken). The energy consumption of these VMs is formulated as a multi-objective function in terms of the energy consumption of the physical hosts, the inter-server communication load, and the cost of migration.
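As a rough illustration of the FFD heuristic referenced above, commonly applied to bin-packing-style VM placement, the sketch below sorts VM demands in decreasing order and places each VM on the first host with enough remaining capacity, opening a new host when none fits. It is a simplified, single-resource version; the VM names and demand values are illustrative, not taken from the cited papers.

```python
def ffd_place(vm_demands, host_capacity):
    """First Fit Decreasing: sort VMs by demand (descending), place each
    on the first host that still fits, opening a new host if necessary."""
    hosts = []       # remaining capacity of each opened host
    placement = {}   # vm name -> host index
    for vm, demand in sorted(vm_demands.items(), key=lambda kv: -kv[1]):
        for i, remaining in enumerate(hosts):
            if demand <= remaining:
                hosts[i] -= demand
                placement[vm] = i
                break
        else:  # no existing host fits: open a new one
            hosts.append(host_capacity - demand)
            placement[vm] = len(hosts) - 1
    return placement, len(hosts)

# Hypothetical CPU-core demands, hosts with 10 cores each
placement, num_hosts = ffd_place({"a": 6, "b": 5, "c": 4, "d": 3, "e": 2},
                                 host_capacity=10)
```

FFD is a classical approximation for bin packing; PAVA and MJPM layer priority, proximity, and energy objectives on top of placement decisions of this kind.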
Introduction
Published in Tetsuzo Yoshimura, Self-Organized Lightwave Networks, 2018
The principal application of SOLNETs is optical solder for self-aligned optical couplings in advanced optoelectronic (OE) systems, such as optical interconnects within computers and optical switching systems. Optical interconnects have already been implemented in data centers and supercomputers as inter-server connections, and they are now beginning to be implemented within the server boxes themselves.
QoS-aware energy-efficient workload routing and server speed control policy in data centers: A robust queueing theoretic approach
Published in IISE Transactions, 2023
Seung Min Baik, Young Myoung Ko
However, Ko and Cho (2014) employed the solution of an optimization problem for workload routing and server speed scaling while considering sojourn-time-related probabilistic QoS constraints. Their algorithm successfully reflected the real-time status of resources using an iterative method that converges to the optimal solution of the problem. One of the most crucial advantages of this approach was that it required communication only between the load balancer and each server, not inter-server communication, which enabled distributed control. Their dynamic algorithm satisfied the response time constraints of the service level agreements; however, the solution appeared significantly conservative, as it used a loose upper bound for the constraints. For instance, as we will see later in Section 4, when the response time constraint specifies a delay threshold of 5 with violation probability 0.05, the average delay probability observed in the simulation is 0.000034, far below the allowed 0.05.
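To make the flavor of such a probabilistic QoS constraint concrete, the sketch below uses an M/M/1 approximation (an assumption chosen for illustration; the cited paper's model is more general). In an M/M/1 queue the sojourn time T satisfies P(T > d) = exp(-(mu - lambda) * d), so the smallest service rate meeting P(T > d) <= eps can be solved in closed form.

```python
import math

def min_speed(arrival_rate, threshold, violation_prob):
    """Smallest service rate mu such that, under an M/M/1 approximation,
    P(sojourn time > threshold) = exp(-(mu - lambda) * threshold) <= eps."""
    return arrival_rate - math.log(violation_prob) / threshold

# Illustrative numbers: delay threshold 5, violation probability 0.05
mu = min_speed(arrival_rate=1.0, threshold=5.0, violation_prob=0.05)
# verify: the violation probability at this speed exactly meets the bound
p = math.exp(-(mu - 1.0) * 5.0)
```

A conservative algorithm, by contrast, would run the server faster than this minimum, driving the observed violation probability orders of magnitude below the bound, as in the 0.000034 versus 0.05 example above.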
A geo-aware server assignment problem for mobile edge computing
Published in International Journal of Parallel, Emergent and Distributed Systems, 2020
The assignment problem in [16] applies to a MEC network supporting multiple applications of known request load and latency expectation; the challenge is to determine on which edge servers to run the required virtual machines (VMs), subject to server capacity and inter-server communication delay constraints. In [17,18], only one application is considered, consisting of multiple inter-related components organisable into a graph; the challenge is to place this component graph on top of the physical graph of edge servers so as to minimize the cost of running the application. In the case that edge servers must be bound to certain geographic locations, the challenge is to decide at which of these locations to place the servers and how to interconnect them for optimal routing and installation costs [15].
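For very small instances, the component-graph placement problem sketched above can be illustrated by exhaustive search over all mappings of application components to servers, scoring each mapping by run cost plus inter-server communication delay. All component names, costs, and delays below are hypothetical, and real formulations in [17,18] use far more scalable methods.

```python
from itertools import product

def best_placement(components, app_edges, server_delay, run_cost):
    """Exhaustively map application components to edge servers, minimizing
    per-server run cost plus inter-server delay over application edges.
    Feasible only for tiny instances (len(servers) ** len(components) maps)."""
    servers = range(len(server_delay))
    best_cost, best_map = float("inf"), None
    for assign in product(servers, repeat=len(components)):
        cost = sum(run_cost[s] for s in assign)
        cost += sum(server_delay[assign[u]][assign[v]] for u, v in app_edges)
        if cost < best_cost:
            best_cost, best_map = cost, dict(zip(components, assign))
    return best_map, best_cost

# Two components linked by one edge, two servers 4 delay units apart
best_map, best_cost = best_placement(
    components=["web", "db"],
    app_edges=[(0, 1)],               # component 0 talks to component 1
    server_delay=[[0, 4], [4, 0]],    # inter-server delay matrix
    run_cost=[1, 2],                  # cost of running a component per server
)
```

Here co-locating both components on the cheap server wins, since the saved inter-server delay outweighs any load-spreading benefit; capacity constraints would change that trade-off.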
Proxy re-encryption architect for storing and sharing of cloud contents
Published in International Journal of Parallel, Emergent and Distributed Systems, 2020
In general, the design of the server supports N different tasks (functions). In DIP, we disintegrate and distribute these N tasks among M homogeneous servers [34]. The DIP architecture used for securing data at storage is described in previous works and is shown in Figure 5. Communication among the M servers is carried out through the inter-server Split-protocol outlined in previous work [36–38]. This results in seamless inter-connectivity between such clouds. With the re-encryption functionality of the resource allocator (RA) server, the architecture supports continuous streaming of data uploads and downloads across different geographic locations worldwide. The RA also acts as a PRE server and provides a seamless file-sharing technique among different clouds without sharing an encryption key.