Green Cloud Computing
Published in Matthew N. O. Sadiku, Emerging Green Technologies, 2020
Cloud computing faces several challenging issues related to security, load balancing, quality of service, standardization, and energy consumption. Perhaps the biggest challenge to GCC is security, which must be assured before GCC infrastructure can be deployed. Security issues include sensitive data access, privacy, data recovery, and multi-tenancy. Customers should be able to trust that cloud service providers will not misuse their sensitive data [18]. Energy consumption is another major obstacle to GCC. Load balancing is a further challenge in achieving GCC: it is required to distribute the dynamic workload across multiple nodes and avoid overwhelming a single node while other nodes sit idle [21]. Another limitation is the high cost of the components required to make cloud computing more efficient (such as cooling equipment). The maintenance of the devices housed in data centers is also a major concern.
An Approach for Energy-Efficient Task Scheduling in Cloud Environment
Published in Asis Kumar Tripathy, Chiranji Lal Chowdhary, Mahasweta Sarkar, Sanjaya Kumar Panda, Cognitive Computing Using Green Technologies, 2021
Mohapatra Subasish, Hota Arunima, Mohanty Subhadarshini, Dash Jijnasee
In the twenty-first century, the advancement of innovation has brought about significant usage of distributed computing as a pay-per-use model. This is because of its highlight features such as multi-tenancy, scalability, agility, mobility, and resource utilization via virtualization. In this pay-per-use model, end-users do not need to purchase any software to perform a task; the only requirement is a paid internet connection for the duration of use. This type of facility reduces cost and encourages the dynamic use of cloud resources. The cloud provides diverse types of services depending on the demands of end-users [1]. Among all the advantages the cloud offers, complexities and obstacles still arise in the supply of virtual machines, which must be removed by the cloud service provider. In cloud facilities, we must consider response time, execution time, makespan, power consumption, effective resource utilization, cost, and, most importantly, load balance. Load balancing is a technique to distribute workloads virtually among servers, networks, and other resources [2]. It aims to minimize response time and cost and to maximize throughput and resource utilization by eliminating the overuse of resources, which enhances the reliability of the system. It follows two types of approaches: static and dynamic load balancing. Static load balancing does not check the current status of the system, and tasks are not pre-empted; hence this approach is not widely used. In dynamic load balancing, tasks can be pre-empted, so it is easy to determine which systems are underutilized, over-utilized, or idle. The main problems in dynamic load balancing are the effective utilization of resources, energy consumption, and overall system performance. Most approaches focus on the equal distribution of load across multiple servers, which ultimately targets response time rather than energy consumption in the system.
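The dynamic approach described above can be sketched as a dispatcher that sends each arriving task to the currently least-loaded virtual machine. This is a minimal illustration of the general technique, not the chapter's proposed algorithm; the function name and the use of task length as the load metric are simplifications.

```python
import heapq

def dynamic_dispatch(task_lengths, n_vms):
    """Dispatch each arriving task to the currently least-loaded VM.

    task_lengths: workload of each task, in arrival order.
    n_vms: number of virtual machines.
    Returns (assignment, loads): the VM index chosen for each task,
    and the accumulated load per VM.
    """
    # Min-heap of (accumulated_load, vm_index): the root is always
    # the least-loaded VM, so dispatch is O(log n) per task.
    heap = [(0.0, vm) for vm in range(n_vms)]
    heapq.heapify(heap)
    assignment = []
    for length in task_lengths:
        load, vm = heapq.heappop(heap)        # least-loaded VM right now
        assignment.append(vm)
        heapq.heappush(heap, (load + length, vm))
    loads = [0.0] * n_vms
    for vm, length in zip(assignment, task_lengths):
        loads[vm] += length
    return assignment, loads
```

Because the dispatcher consults the VMs' current state at every step, it adapts to uneven task sizes, which a static policy cannot do.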
Data centers, consisting of many servers, are the main part of the cloud. The virtualization technique is implemented on these physical servers so that they function in a virtualized manner: Type 2 hypervisors run on them, from which many virtual machines are created according to system demand. These virtual machines and their corresponding servers and data centers consume substantial energy, which increases cost and carbon dioxide emissions. Governments have emphasized the use of green technology to reduce this high energy consumption. Considering these requirements, the authors have proposed an enhanced load balancing algorithm that minimizes response time, maximizes resource utilization, and minimizes energy consumption.
Dynamic load balancing algorithm to minimize the makespan time and utilize the resources effectively in cloud environment
Published in International Journal of Computers and Applications, 2020
The main objective of a load balancing algorithm is to reduce the makespan of incoming requests and increase the average utilization of cloud resources. Load balancing is achieved in two steps: first, distribute the tasks among the nodes (task scheduling); second, monitor the virtual machines and perform load balancing via task migration or virtual machine migration. The aim of task scheduling is to create a schedule assigning each task to a node (virtual machine) for a specific time period so that all tasks complete in the minimum timespan. Task scheduling is a well-known optimization problem, and the number and length of tasks change very rapidly in a cloud environment. It is difficult to enumerate all possible task-resource mappings, and finding an optimal mapping is not easy. Therefore, we need an efficient task scheduling algorithm that distributes tasks effectively, so that fewer virtual machines end up overloaded or underloaded. After allocating tasks to virtual machines, the cloud task scheduler performs load balancing, transferring tasks from overloaded virtual machines to underloaded ones until all virtual machines are in a balanced condition. Cloud infrastructure offers effectively unlimited resources, and a large number of user requests arrive at the cloud. As the number of requests grows linearly, the complexity of the task scheduling algorithm grows non-linearly (exponentially).
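Since enumerating all task-resource mappings is intractable, heuristics are used in practice. One classic makespan heuristic, shown here only as an illustrative stand-in for the paper's own algorithm, is Longest-Processing-Time-first (LPT): sort tasks by length, then greedily give each to the VM that would finish earliest.

```python
import heapq

def lpt_schedule(task_lengths, n_vms):
    """Greedy LPT schedule: longest tasks first, each to the VM with the
    earliest current finish time. A classic heuristic sketch, not the
    article's exact method. Returns (makespan, sorted finish times).
    """
    # Min-heap of (finish_time, vm_index).
    heap = [(0.0, vm) for vm in range(n_vms)]
    heapq.heapify(heap)
    for length in sorted(task_lengths, reverse=True):   # longest first
        finish, vm = heapq.heappop(heap)                # earliest-free VM
        heapq.heappush(heap, (finish + length, vm))
    finish_times = sorted(f for f, _ in heap)
    makespan = finish_times[-1]                         # latest finish
    return makespan, finish_times
```

For tasks [7, 5, 4, 3, 2] on two VMs, the total work is 21, so no schedule can finish before 10.5; LPT achieves a makespan of 11, which is optimal here.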
Fault tolerance based load balancing approach for web resources
Published in Journal of the Chinese Institute of Engineers, 2019
Anju Shukla, Shishir Kumar, Harikesh Singh
Load balancing is a technique to distribute load among servers optimally. The key concerns of optimization are to minimize response time, execution time, and overhead, and to increase throughput. In a load balancing environment, the grid broker acts as middleware between users and resources, serving as a single point of entry that receives user requests. Broadly, load balancing techniques can be arranged into two classifications (Hajlaoui, Omri, and Ben 2017): Static Load Balancing (SLB) and Dynamic Load Balancing (DLB). An SLB algorithm uses prior information about the network state to assign tasks to resources located in the distributed network. In DLB, workload distribution is done at runtime among the available resources. A load balancer is a module that receives tasks from users and routes them to suitable resources for execution. In distributed algorithms, communication overhead is higher than in non-distributed algorithms, because each node in the cluster needs to interact with the other nodes.
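The SLB/DLB distinction can be made concrete with two toy broker policies: a round-robin broker that ignores runtime state (static), and a least-loaded broker that consults current load before routing (dynamic). This is an illustrative sketch under assumed names; it is not the paper's fault-tolerance approach.

```python
import itertools

class StaticBroker:
    """SLB sketch: routes tasks in a fixed round-robin cycle,
    using no information about current resource state."""
    def __init__(self, resources):
        self._cycle = itertools.cycle(resources)

    def route(self, task):
        return next(self._cycle)

class DynamicBroker:
    """DLB sketch: tracks each resource's outstanding load at runtime
    and routes every task to the currently least-loaded resource."""
    def __init__(self, resources):
        self.load = {r: 0 for r in resources}

    def route(self, task, cost=1):
        target = min(self.load, key=self.load.get)  # least-loaded now
        self.load[target] += cost
        return target

    def complete(self, resource, cost=1):
        self.load[resource] -= cost                 # task finished
```

With uneven task costs the dynamic broker steers new work away from a busy resource, while the static broker keeps alternating regardless, which is why DLB usually balances better at the price of the state-tracking overhead noted above.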
A QoS-based technique for load balancing in green cloud computing using an artificial bee colony algorithm
Published in Journal of Experimental & Theoretical Artificial Intelligence, 2023
Sara Tabagchi Milan, Nima Jafari Navimipour, Hamed Lohi Bavil, Senay Yalcin
With the fast development of fifth-generation (5G) and next-generation communications, the Internet of Things (IoT), cloud/edge, and wireless computing provide an online connection for all the people of the world, which comes at the price of high installation and operating costs for data centres and of environmental pollution (Cao, Sun, et al., 2021; Cao, Wang, et al., 2021; Du et al., 2021; Guo et al., 2020; H. Kong et al., 2020; Zong & Wang, 2022). Therefore, alternatives under green cloud computing should be developed to reduce power consumption and operational/executive costs (Issa et al., 2020; M. Li et al., 2023). Green cloud computing is a powerful paradigm for changing customer behaviour by presenting a wide range of services such as IT and healthcare services (Vahdat, 2021). This expansion causes high energy consumption and significantly affects the environment in terms of carbon emissions. According to United States research, the energy consumption of IT resources is nearly 8% of total energy and will grow 50% within a decade (Ceuppens et al., 2008; Etoh et al., 2008). According to Gartner's estimation, ICT (Information and Communications Technology) equipment will account for the largest share of CO2 emissions in the future (C. Pettey). There is considerable concern about the increasing electricity demand and the related carbon emissions driven by large data centres. In 2010, global data-centre electricity use was estimated at 1.5% of worldwide electricity consumption (Peng et al., 2017). Based on McKinsey's report, data-centre electricity bills totalled $11.5 billion in 2010, and in a typical data centre energy costs double every five years. Carbon emissions from data centres around the world corresponded to 80 billion kWh in 2007, were projected to reach approximately 340 billion kWh by 2020, and by 2030 data centres are projected to consume about 1 to 13% of global electricity relative to 2010.
The causal relation between carbon emissions and energy consumption makes energy management central to achieving green computing. Minimising data centres' energy usage has become a complex and challenging issue as computing applications and their associated data grow. The objective of green computing is to offer low-power, high-performance computing infrastructure. Load balancers and schedulers distribute workload evenly across nodes, improving overhead, throughput, scalability, performance, migration time, response time, fault tolerance, energy consumption, and carbon emission factors (Mou et al., 2022). The Dynamic Voltage and Frequency Scaling (DVFS) method (Huang et al., 2012) decreases the IT equipment's power consumption by allowing processors to run at diverse frequencies. Running at a lower frequency and voltage reduces the processor's performance, but the voltage and frequency must be reduced to achieve low processor power consumption (Hong et al., 2021; Niu et al., 2022).
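The DVFS trade-off follows from the textbook CMOS dynamic-power approximation P = C · V² · f: lowering voltage and frequency cuts power sharply, but the task takes longer, so the net energy saving comes from the V² term. A minimal sketch, using illustrative numbers rather than measured processor data:

```python
def dvfs_tradeoff(cycles, capacitance, v_high, f_high, v_low, f_low):
    """Compare a high and a low DVFS operating point with the standard
    dynamic-power model P = C * V^2 * f (a first-order approximation;
    leakage and memory-bound effects are ignored).

    Returns ((power, time, energy) at high point,
             (power, time, energy) at low point).
    """
    def run(voltage, frequency):
        power = capacitance * voltage**2 * frequency  # watts
        time = cycles / frequency                     # seconds (CPU-bound)
        return power, time, power * time              # energy in joules
    return run(v_high, f_high), run(v_low, f_low)
```

For example, scaling a hypothetical core from 1.2 V / 2 GHz down to 0.9 V / 1 GHz doubles the execution time of a fixed workload yet still lowers total energy, because power falls faster than runtime grows; this is the mechanism behind the "low performance but low power" behaviour described above.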