Mapping Network Device Functions to the OSI Reference Model
Published in James Aweya, Designing Switch/Routers, 2023
Layer 4+ switches provide very high scalability to IP-based applications and server farms in a cost-effective manner. They allow the use of multiple servers with load balancing and failover, eliminating complete overhauls of the server farms and disruption to applications. The switches provide high return on investment (ROI) for server and application infrastructure in a short timeframe, and support significantly higher application traffic and user loads on existing infrastructure by maximizing server resource utilization.
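The load-balancing and failover behavior described above can be sketched in a few lines. This is a minimal illustrative round-robin balancer, not how a Layer 4+ switch is actually implemented (real switches do this in hardware, often with health checks and session persistence); the server names are invented for the example.

```python
import itertools

class RoundRobinBalancer:
    """Minimal round-robin load balancer with failover (illustrative only)."""

    def __init__(self, servers):
        self.servers = list(servers)
        self.healthy = set(self.servers)
        self._cycle = itertools.cycle(self.servers)

    def mark_down(self, server):
        """Remove a failed server from rotation (failover)."""
        self.healthy.discard(server)

    def mark_up(self, server):
        """Return a recovered server to rotation."""
        self.healthy.add(server)

    def pick(self):
        # Advance the rotation, skipping unhealthy servers;
        # raise only if every server is down.
        for _ in range(len(self.servers)):
            candidate = next(self._cycle)
            if candidate in self.healthy:
                return candidate
        raise RuntimeError("no healthy servers available")

lb = RoundRobinBalancer(["srv-a", "srv-b", "srv-c"])
assignments = [lb.pick() for _ in range(3)]  # spreads requests across all three
lb.mark_down("srv-b")                        # simulate a server failure
failover = [lb.pick() for _ in range(4)]     # srv-b is skipped from now on
```

Because requests simply route around a failed server, capacity can also be grown by adding servers to the pool, which is the "no complete overhaul" property the passage highlights.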
Middleware—A New Frontier for Building Systems and Analytics
Published in John J. “Jack” Mc Gowan, Energy and Analytics, 2020
Cloud computing is the last topic that will be discussed under this heading of networking. It is also a good segue from the infrastructure used to move data around to the applications, or engines of analysis, being deployed under the heading of energy and analytics. Cloud computing is computing in which large groups of remote servers, often physically located in data centers or server farms, are networked to allow centralized data storage and online access to computing services or resources; in short, it describes the use of interconnected business applications over the internet. As depicted in Figure 12-3, the applications are interconnected via web services, and the end user accesses the required service using a web browser or dashboard. The application and the infrastructure therefore do not reside on the end user's premises. The end user accesses the application on demand and can concentrate on using it for its purpose, without capital expenditure, thereby avoiding the overhead of installation, networking, and maintenance.
Developing Solutions that Improve Architectures and Designs
Published in Chiang H. Ren, The Fundamentals of Developing Operational Solutions for the Government, 2018
Finally, scaling an application is based on how the application can operate across multiple servers (server farms) in a load-balanced way. Load balancing permits the support of a vast community of users with no appearance of separation. Alternatively, if a single user needs to conduct complex, computationally intensive operations, these are at times better done by a single high-capacity server than by partitioning the operations across a server farm. Thus, server technology is still very important in architectural design. The introduction of blade servers dramatically increased server capacity [5]. The introduction of future technologies, such as superconducting quantum computing with near-zero-resistance energy flow [6], will again change the concept of scale in IT architectures.
Elasticity management for capacity planning in software as a service cloud computing
Published in IISE Transactions, 2021
Jon M. Stauffer, Aly Megahed, Chelliah Sriskandarajah
We use a mixed integer program for our optimization model, Problem DQ1, as the number of instances can only be integer values, but the penalty terms are continuous. Problem DQ1 minimizes the resource cost of deploying an instance and any penalty costs for delayed query execution as specified in each SLA between the cloud provider and client. This model focuses on the number of client-specific instances to deploy and not the overall optimal size of the server farm or the number of server farms required across various regions. We present the deterministic model first. This allows us to understand the structure of the problem and develop structural properties to efficiently solve small to medium-sized problems. This model also provides the basis for comparison to evaluate the performance of our Offline Dynamic Algorithms. In Section 6, we build on the foundation of our deterministic model to develop a stochastic model to deal with uncertainty in query arrivals. Iyoob et al. (2013) suggested that these types of programming models are good approaches for determining capacity requirements in cloud computing applications.
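The trade-off that Problem DQ1 captures, paying for more instances versus paying SLA penalties for delayed queries, can be illustrated with a toy model. The sketch below is not the authors' formulation: the linear capacity model, cost figures, and function names are invented for illustration, and a real deployment would hand the integer program to a MIP solver rather than brute-force it.

```python
def best_instance_count(arrival_rate, capacity_per_instance,
                        instance_cost, penalty_per_delayed_query,
                        max_instances=50):
    """Brute-force the integer instance count that minimizes total cost.

    Toy stand-in for a mixed integer program: the decision variable
    (number of instances) is integer, while the penalty term for
    queries exceeding deployed capacity is continuous.
    """
    best_n, best_cost = None, float("inf")
    for n in range(1, max_instances + 1):
        delayed = max(0.0, arrival_rate - n * capacity_per_instance)
        cost = n * instance_cost + delayed * penalty_per_delayed_query
        if cost < best_cost:
            best_n, best_cost = n, cost
    return best_n, best_cost

# With cheap SLA penalties it pays to under-provision; with costly
# SLAs the model deploys enough instances to cover the full load.
n_lo, _ = best_instance_count(100, 10, instance_cost=5,
                              penalty_per_delayed_query=0.1)
n_hi, _ = best_instance_count(100, 10, instance_cost=5,
                              penalty_per_delayed_query=10)
```

The two calls show how the per-client SLA terms steer the instance count, which is exactly the lever the deterministic model exposes before the stochastic extension of Section 6.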
Optimization analysis of management operation for a server farm
Published in Quality Technology & Quantitative Management, 2020
A server farm containing a large number of server machines is crucial for data storage and computation in many network applications. The growing demand for cloud computing has increased the number of server farms significantly, making their total power consumption enormous. Schwartz, Pries, and Tran-Gia (2012) indicated that a server farm consumes about 65% of its maximum power even under low load. An efficient way to keep power consumption low is to turn off unused servers. This study considers a simple management operation for a server farm in which a block of available servers is designated as 'reserves'. Depending on the number of jobs in the system, the state of the reserves is controlled by power-up and power-down thresholds. Note that power-up is not immediate: during the power-up period, the servers cannot serve jobs but still consume power. In addition, we consider that servers may be subject to breakdown. We model a server farm operated under this simple management policy as an unreliable multi-server queueing system with queue-dependent servers. Managers or decision makers may be interested in how many permanent servers to deploy and which power-down threshold to use so as to minimize the average cost. To this end, a cost function is formulated and searched for the optimum number of permanent servers and the optimum power-down threshold that minimize the average cost.
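The flavor of this cost-minimization can be shown with a deliberately simplified sketch: an M/M/c queue with c always-on servers, where average cost trades server power against job delay. This omits the paper's key features (reserve blocks, power-up delays, power-down thresholds, breakdowns), and the cost parameters are invented for illustration; it shows only the outer grid search over the number of permanent servers.

```python
import math

def erlang_c(c, a):
    """Probability an arriving job must wait in an M/M/c queue
    with offered load a = lambda/mu (Erlang C formula)."""
    if a >= c:
        return 1.0
    s = sum(a**k / math.factorial(k) for k in range(c))
    last = a**c / (math.factorial(c) * (1 - a / c))
    return last / (s + last)

def average_cost(c, lam, mu, power_cost, holding_cost):
    """Power cost of c always-on servers plus a holding cost
    on the mean number of jobs waiting in queue."""
    a = lam / mu
    if a >= c:
        return float("inf")            # unstable: queue grows without bound
    lq = erlang_c(c, a) * a / (c - a)  # mean queue length for M/M/c
    return c * power_cost + holding_cost * lq

def optimal_servers(lam, mu, power_cost, holding_cost, max_c=100):
    """Grid search for the server count minimizing average cost."""
    costs = {c: average_cost(c, lam, mu, power_cost, holding_cost)
             for c in range(1, max_c + 1)}
    return min(costs, key=costs.get)

# Offered load of 8: at least 9 servers are needed for stability,
# and a higher delay cost pushes the optimum toward more servers.
c_star = optimal_servers(lam=8.0, mu=1.0, power_cost=1.0, holding_cost=5.0)
```

In the paper's full model the search is two-dimensional (permanent servers and the power-down threshold) and the queueing analysis must additionally account for unreliable, queue-dependent servers, but the structure, evaluate a cost function over a discrete decision grid, is the same.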
Opportunistic forwarding for user-provided networks
Published in International Journal of Parallel, Emergent and Distributed Systems, 2018
Efthymios Koutsogiannis, Lefteris Mamatas, Vassilis Tsaoussidis
Furthermore, we extended the ONE simulator with better support for wired networks and two new types of nodes: (i) the home- or office-user that owns a SAP, and (ii) a server node, which hosts data for the mobile users (e.g. social profile photos). We assume that all home-users are permanently connected to a server node (e.g. a server farm), which also has a wireless interface and is situated in Gower Street (see Figure 6). We designed and implemented the discussed DTN routing protocol, along with additional functionality for statistics. Several modifications were made in ONE so that it operates realistically, such as enabling deletion of duplicate, already-delivered messages in SAPs for all algorithms. Moreover, it should be noted that in the current work we focus on the uplink and leave the downlink as future work. Depending on the characteristics of the data (e.g. volume, urgency), the proposed scheme can also be applied to the downlink, using the mechanism of contact prediction and message replication to reach the mobile destination through intermediate mobile nodes. However, the SAPs can be exploited further in order to send the data directly to the mobile recipient. In the latter case, contact prediction is needed to send the data through the wired infrastructure to the SAP nearest the mobile destination and avoid intermediate mobile nodes.