Artificial Intelligence for Network Operations
Published in Mazin Gilbert, Artificial Intelligence for Autonomous Networks, 2018
We focus here on network provisioning and assume that the provisioning of customer connectivity and services is outside the scope of this chapter. Network provisioning is the process by which network operators decide where and when to deploy new software and hardware into the network (network capacity planning), and then execute on this deployment—turning the capacity up and configuring it so that it becomes part of the network. The network provisioning process assumes as input network technology choices and a basic network design; these are typically decided by network engineers outside of the network operations team.
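The capacity-planning input to this process can be illustrated with a minimal sketch. The link names, thresholds, and forecast figures below are hypothetical, and only the planning decision is modeled, not the deployment or turn-up steps:

```python
# Hypothetical sketch of the capacity-planning step of network provisioning:
# flag links whose forecast utilization exceeds a threshold as candidates
# for a capacity augment. Link names and numbers are made up.

LINKS = {
    "nyc-chi": {"capacity_gbps": 100, "forecast_gbps": 92},
    "chi-dal": {"capacity_gbps": 100, "forecast_gbps": 55},
}

def plan_augments(threshold=0.8):
    """Return the links whose forecast utilization exceeds the threshold."""
    augments = []
    for link, d in LINKS.items():
        utilization = d["forecast_gbps"] / d["capacity_gbps"]
        if utilization > threshold:
            augments.append(link)
    return augments

print(plan_augments())  # ['nyc-chi']
```

Real capacity planning folds in traffic forecasts, failure scenarios, and cost models; the threshold rule above is only the simplest possible trigger.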
GPU PaaS Computation Model in Aneka Cloud Computing Environments
Published in Kuan-Ching Li, Beniamino DiMartino, Laurence T. Yang, Qingchen Zhang, Smart Data, 2019
Shashikant Ilager, Rajeev Wankar, Raghavendra Kune, Rajkumar Buyya
The Aneka framework has been designed based on Service-Oriented Architecture (SOA). Services are the basic elements of the Aneka platform; they allow new functionality to be incorporated, or existing functionality to be replaced, by overriding the current implementation. The abstract description of these services is as follows: Scheduling – maps tasks to the available resources. Provisioning – acquires resources (computing elements in the form of virtual or physical machines).
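The interplay of these two services can be sketched as follows. This is a hypothetical illustration, not Aneka's actual API (Aneka is a .NET platform); the class and method names are invented for the example:

```python
from dataclasses import dataclass

@dataclass
class Resource:
    """Stand-in for a virtual or physical machine."""
    name: str
    busy: bool = False

class ProvisioningService:
    """Acquires computing resources on demand."""
    def __init__(self):
        self.pool = []

    def acquire(self, count):
        new = [Resource(f"vm-{len(self.pool) + i}") for i in range(count)]
        self.pool.extend(new)
        return new

class SchedulingService:
    """Maps submitted tasks onto resources, asking the provisioner
    to acquire more when the free pool is too small."""
    def __init__(self, provisioner):
        self.provisioner = provisioner

    def schedule(self, tasks):
        free = [r for r in self.provisioner.pool if not r.busy]
        if len(free) < len(tasks):
            free += self.provisioner.acquire(len(tasks) - len(free))
        assignment = {}
        for task, res in zip(tasks, free):
            res.busy = True
            assignment[task] = res.name
        return assignment

prov = ProvisioningService()
sched = SchedulingService(prov)
print(sched.schedule(["t1", "t2", "t3"]))
# {'t1': 'vm-0', 't2': 'vm-1', 't3': 'vm-2'}
```

The point of the SOA design is visible here: either service can be replaced (e.g., a provisioner backed by a public cloud) without changing the other.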
Resource Management in Cloud
Published in Sunilkumar Manvi, Gopal K. Shyam, Cloud Computing, 2021
Sunilkumar Manvi, Gopal K. Shyam
Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, applications and services) that can be rapidly provisioned and released. Resource provisioning means the selection, deployment and run-time management of software (e.g., database server management systems, load balancers) and hardware resources (e.g., CPU, storage, and network) for ensuring guaranteed performance for applications.
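The "selection" step of resource provisioning can be illustrated with a small sketch: pick the cheapest configuration that still meets an application's performance requirements. The server catalog, fields, and prices below are invented for the example:

```python
# Hypothetical illustration of resource selection during provisioning:
# choose the cheapest server type that satisfies the application's
# CPU and storage requirements. Catalog entries are made up.

servers = [
    {"name": "small",  "cpu": 2,  "storage_gb": 100,  "cost": 1.0},
    {"name": "medium", "cpu": 8,  "storage_gb": 500,  "cost": 3.5},
    {"name": "large",  "cpu": 32, "storage_gb": 2000, "cost": 10.0},
]

def select_resource(required_cpu, required_storage_gb):
    """Return the cheapest server meeting both requirements, or None."""
    candidates = [s for s in servers
                  if s["cpu"] >= required_cpu
                  and s["storage_gb"] >= required_storage_gb]
    return min(candidates, key=lambda s: s["cost"]) if candidates else None

print(select_resource(4, 200)["name"])  # medium
```

Deployment and run-time management (the other two parts of the definition above) would then launch the selected resource and monitor it against the performance guarantee.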
Explaining Digital Technology: Digital Artifact Delineation and Coalescence
Published in Journal of Computer Information Systems, 2023
Artifact coalescence, another characteristic of digital technology theorized in this paper, is the capacity to virtually integrate multiple IT architectural layers of complex information systems, typically to conceal their complexities. It is inspired by the encapsulation concept found in the Computer Science literature. For instance, cloud computing combines hardware, an operating system, and software to present a complete product to consumers. Cloud computing enables the rapid provisioning of an entire virtual machine (VM). A user is not required to purchase hardware or install an operating system or middleware when creating a VM instance in the cloud. Cloud computing thus combines multiple architectural levels of an IT artifact to present the artifact in its entirety, providing the user with access and utility. Cloud computing’s serverless architecture is another example of artifact coalescence. Google BigQuery, Google BigTable, Amazon SageMaker, Amazon RedShift, and others are serverless cloud services.
A secure and efficient data deduplication framework for the internet of things via edge computing and blockchain
Published in Connection Science, 2022
Zeng Wu, Hui Huang, Yuping Zhou, Chenhuang Wu
In a cloud storage system, we should pay attention not only to data security but also to the efficiency and resource allocation of the system. Centralised cloud computing technology has attracted large numbers of users. Accordingly, if the cloud platform is not managed well, it may face the problems of over-provisioning or under-provisioning of resources (Shahidinejad, Ghobaei-Arani, & Esmaeili, 2020). Shahidinejad, Ghobaei-Arani, and Masdari (2020) used workload analysis to solve the resource provisioning issue in cloud computing. In order to improve the service quality of cloud computing, we should not only optimise its resource provisioning but also use edge computing to improve the efficiency of the system. Abdellatif et al. (2021) used edge computing to improve the efficiency of medical data collection. Lang et al. (2020) designed an edge-IoT encrypted data deduplication scheme that supports dynamic ownership management and privacy protection. The scheme realises fine-grained access control and dramatically reduces the communication overhead. Ming et al. (2022) stored file labels on a blockchain to realise cross-domain data deduplication across edge nodes. However, the scheme does not consider the security threats faced by edge nodes, and the data labels are stored on the blockchain, which may affect the system's efficiency. Moreover, when the IoT, edge computing, and blockchain are combined, rules for storing data on the blockchain (Shaikh et al., 2021) and the authentication of IoT devices (Shahidinejad et al., 2021) are critical.
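The core mechanism behind the label-based deduplication schemes cited above can be sketched in a few lines. This is only the basic idea, with a file's label taken to be its SHA-256 digest; the cited schemes add encryption, ownership management, and a blockchain-backed label index, none of which is modeled here:

```python
import hashlib

class DedupStore:
    """Toy illustration of label-based deduplication: a file's label is
    its SHA-256 digest, and an upload whose label is already recorded
    is skipped. (Encryption, ownership management, and the blockchain
    label index of the cited schemes are deliberately omitted.)"""
    def __init__(self):
        self.labels = {}  # label -> stored data

    def upload(self, data: bytes) -> bool:
        label = hashlib.sha256(data).hexdigest()
        if label in self.labels:
            return False  # duplicate: nothing stored again
        self.labels[label] = data
        return True       # first copy: stored

store = DedupStore()
print(store.upload(b"sensor reading 42"))  # True  (first copy stored)
print(store.upload(b"sensor reading 42"))  # False (duplicate detected)
```

Placing the label index on a blockchain, as in Ming et al. (2022), makes the duplicate check verifiable across administrative domains, at the cost of the on-chain storage and lookup overhead the excerpt notes.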
An Evolutionary Multi-objective Optimization Technique to Deploy the IoT Services in Fog-enabled Networks: An Autonomous Approach
Published in Applied Artificial Intelligence, 2022
Mahboubeh Salimian, Mostafa Ghobaei-Arani, Ali Shahidinejad
Fog or edge computing has been introduced as a new computing model for hosting IoT applications in order to overcome the limitations of using cloud data centers (Souza et al. 2018). Society has not yet converged on clear definitions of these terms (Chen et al. 2020; Forouzandeh, Rostami, and Berahmand 2021; Skarlat et al. 2017; Souza et al. 2018). Fog computing involves a large number of heterogeneous nodes that allow IoT services to execute in the vicinity of resources without involving the cloud. This technology extends services to the network edge in a distributed manner and brings storage, analysis, and processing closer to where data are created and to end users. Fog computing is considered an intermediate layer between cloud servers and IoT devices, and turns the network into an edge network (Khosroabadi, Fotouhi-Ghazvini, and Fotouhi 2021). In this layer, fog and cloud resources work together to provide services. Significant advantages of fog computing include reduced latency and computational costs, as well as its ability to meet the response times required by latency-sensitive and real-time applications (Taneja and Davy 2016). Currently, resource management is one of the main challenges in fog computing research (Puliafito et al. 2019; Xavier et al. 2020). There are many issues related to resource management in fog and cloud computing, such as resource forecasting, resource provisioning, service provisioning, scheduling, dispatching, and service migration (Khosroabadi, Fotouhi-Ghazvini, and Fotouhi 2021).
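The latency argument above can be made concrete with a minimal placement sketch: run a latency-sensitive service on a fog node when its latency meets the deadline, otherwise fall back to the cloud. The node names, latencies, and costs are invented for the example and do not come from the cited works:

```python
# Hypothetical sketch of a service-placement decision in a fog-enabled
# network: among nodes that meet the response-time requirement, pick the
# cheapest. Latency and cost figures are made up.

NODES = {
    "fog":   {"latency_ms": 10,  "cost": 5.0},
    "cloud": {"latency_ms": 120, "cost": 1.0},
}

def place_service(deadline_ms):
    """Return the cheapest node meeting the deadline, or None."""
    feasible = {n: v for n, v in NODES.items()
                if v["latency_ms"] <= deadline_ms}
    if not feasible:
        return None
    return min(feasible, key=lambda n: feasible[n]["cost"])

print(place_service(50))   # fog   (only the fog node meets a 50 ms deadline)
print(place_service(500))  # cloud (both feasible; cloud is cheaper)
```

Even this toy decision touches several of the resource-management issues the excerpt lists: provisioning (which nodes exist), dispatching (where a request runs), and migration (what happens when the deadline or load changes).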