Cloud-Based RLaaS-Frame Framework for Rapid Deployment of Remote Laboratory Systems
Published in Development of a Remote Laboratory for Engineering Education, 2020
Ning Wang, Qianlong Lan, Xuemin Chen, Gangbing Song, Hamid Parsaei
Built on the EAaaS layer of the RLaaS-Frame, a range of remote experimental applications provide RL services for academic, industrial, and research activities. Most of the applications are deployed in Docker containers managed by Kubernetes, which automates container deployment, scaling, and management. Each application consists of three components: a web server (Apache, Nginx, etc.), a database (MySQL, MongoDB, etc.), and an experiment service package (Node.js, HTML5, RESTful APIs, etc.). Because they are built on the experiment service package, the applications run on most popular browsers and mobile platforms. A cluster of applications can be created through the simple drag-and-drop interface of OpenStack Horizon, while the OpenStack orchestration service, Heat, provisions all the networking and computing resources the applications need. All applications are containerized as Kubernetes Pods, which are generated in the Kubernetes cluster environment.
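A Kubernetes Deployment of this kind of two-component application could be sketched as follows. All names, images, and the `server.js` entry point are illustrative assumptions, not taken from the RLaaS-Frame source:

```yaml
# Hypothetical sketch: one remote-lab application packaged as a pod with a
# web-server container and an experiment-service container, replicated by a
# Deployment. Names and images are illustrative only.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: remote-lab-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: remote-lab-app
  template:
    metadata:
      labels:
        app: remote-lab-app
    spec:
      containers:
      - name: web
        image: nginx:1.25            # web server component
        ports:
        - containerPort: 80
      - name: experiment-service
        image: node:20               # experiment service package (Node.js / RESTful API)
        command: ["node", "server.js"]  # assumed entry point of the service package
```

Heat or Horizon would provision the underlying compute and networking; Kubernetes then schedules the resulting Pods onto that infrastructure.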
Role of Open Source, Standards, and Public Clouds in Autonomous Networks
Published in Mazin Gilbert, Artificial Intelligence for Autonomous Networks, 2018
A Kubernetes cluster is defined as a master node and one or more worker nodes under its control. A Kubernetes namespace can be assigned to each virtual cluster, providing a form of multitenancy. For example, a test group could exist in one namespace and the production deployment group in another. Inside its own namespace, each group could run exactly the same applications, services, pod configurations, IP addressing scheme, and so on without knowing about or interfering with the other.
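As a minimal illustration of this multitenancy, two namespaces can be declared in one manifest; the very same application manifest can then be applied to each one without conflict (namespace names here are illustrative):

```yaml
# Two namespaces acting as isolated virtual clusters within one physical cluster.
apiVersion: v1
kind: Namespace
metadata:
  name: test
---
apiVersion: v1
kind: Namespace
metadata:
  name: production
```

Applying an identical deployment to both, e.g. `kubectl apply -f app.yaml -n test` and `kubectl apply -f app.yaml -n production`, yields two independent copies whose service names and pod configurations do not collide.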
Containers and Microservices
Published in Haishi Bai, Zen of Cloud, 2019
Kubernetes deploys containers in pods. A pod is a group of containers that share the same storage and network context. Containers in the same pod share the same IP address; they can reach each other via localhost, and they have access to shared volumes. The preceding deployment creates two containers, each running in its own pod placed on a separate node. The following command lists all the pods you have on your cluster:
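The excerpt ends before the listing itself; the command referenced is presumably the standard one:

```shell
kubectl get pods
```

The shared storage and network context can also be made concrete with a small pod manifest. The names are illustrative: both containers mount the same `emptyDir` volume and share the pod's IP, so the `reader` container can see the file written by the `writer`:

```yaml
# Illustrative two-container pod: shared volume, shared network namespace.
apiVersion: v1
kind: Pod
metadata:
  name: shared-context-demo
spec:
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: writer
    image: busybox:1.36
    command: ["sh", "-c", "echo hello > /data/msg && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data
  - name: reader
    image: busybox:1.36
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data
```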
Towards Digital Forensics Investigation of WordPress Applications Running Over Kubernetes
Published in IETE Journal of Research, 2023
Muhammad Faraz Hyder, Syeda Hafsa Ahmed, Mustafa Latif, Kehkashan Aslam, Ata. U. Rab, Mussab T. Siddiqui
Kubernetes is a container orchestration system that automates software deployment, scaling, and management; Figure 2 shows its architecture diagram. Kubernetes greatly influences how applications are built and deployed in the cloud. Since its release in 2014, it has gained immense popularity and has become the most widely known and largest container orchestrator. It defines a set of building blocks called primitives, which together provide the mechanisms to deploy, maintain, and scale applications based on CPU, memory usage, and other metrics [15, 16]. Kubernetes is loosely coupled and extensible, catering to varied workloads, and is built on a primary/replica architecture. Its components fall into two categories: those that manage an individual node and those that form the control plane [17, 18]. A worker (or minion) node is where the workload, i.e. the containers, is deployed. The main controlling unit of the cluster, the Kubernetes master, manages the workload and directs communication throughout the system.
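The metric-driven scaling mentioned above is exposed through one of these primitives, the HorizontalPodAutoscaler. A hedged sketch, targeting a hypothetical Deployment named `web`:

```yaml
# Illustrative HorizontalPodAutoscaler: scale the "web" Deployment between
# 2 and 10 replicas to hold average CPU utilization near 70%.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```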
Cloud-based storage and computing for remote sensing big data: a technical review
Published in International Journal of Digital Earth, 2022
Chen Xu, Xiaoping Du, Xiangtao Fan, Gregory Giuliani, Zhongyang Hu, Wei Wang, Jie Liu, Teng Wang, Zhenzhen Yan, Junjie Zhu, Tianyang Jiang, Huadong Guo
Containerization is one of the core concepts of cloud-native computing (Li 2019; Pelle et al. 2019). It is widely used in cloud-based processing such as FaaS and serverless applications. Containerization is a virtualization technology that packages algorithms into lightweight containers together with the runtime environments they need, e.g. Docker (Merkel 2014). This technology allows the stable execution of various remote sensing algorithms in different host environments, improving the portability of remote sensing algorithms by decoupling them from the host machine (Xu et al. 2022). The technology is essential for cloud-based RSBD because it can port remote sensing algorithms from the local environment to the cloud (Wang et al. 2015). Containers can be leveraged jointly with the big data processing technologies mentioned above (e.g. batch processing). In addition, they can be managed by container orchestration platforms in the cloud. Kubernetes is one of the best-known open-source container orchestration platforms; it was developed by Google and contributed to the Cloud Native Computing Foundation in 2015 (Bernstein 2014). Borg (Verma et al. 2015), Google's internal cluster manager from which Kubernetes descends, is used for resource scheduling and load balancing within Earth Engine.
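The packaging step can be sketched with a Dockerfile. Everything here is a hypothetical example, not a pipeline from the review: a remote sensing script and its pinned dependencies are baked into one image, so the algorithm runs identically on any host or cloud node:

```dockerfile
# Illustrative Dockerfile: bundle a (hypothetical) remote sensing algorithm
# with its runtime, decoupling it from the host environment.
FROM python:3.11-slim
WORKDIR /app
# Dependencies are pinned inside the image, not installed on the host.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY process_scene.py .
ENTRYPOINT ["python", "process_scene.py"]
```

Built once with `docker build`, the same image can then run locally, in a batch-processing job, or as a pod in a Kubernetes cluster.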
Identification of multi-zone grey-box building models for use in model predictive control
Published in Journal of Building Performance Simulation, 2020
Javier Arroyo, Fred Spiessens, Lieve Helsen
One of the main advantages of the approach followed by Algorithm 1 is that the process of identifying each building zone can be completely decoupled. This allows these processes to run in parallel, removing the dependence of computational time on the number of zones. For this purpose, the functionality of identifying a single zone is encapsulated within a Docker container image, a standard way to specify an environment that integrates all software dependencies in a lightweight fashion, without the overhead of a virtual machine. Kubernetes is a system that orchestrates deployed Docker containers. In the envisaged application, one Kubernetes pod, running a deployed instance of the Docker image, is assigned to each building zone. An HTTP request sent to each pod with a zone identifier triggers the identification of that building zone. The IPOPT software library (Wächter and Biegler 2006) for large-scale non-linear optimization is used to solve the parameter estimation problem at each pod. IPOPT implements an interior point line search filter method leading to a sparse linear system. The linear solver used to obtain the solution to the latter system is key for the overall computational performance. For this, the MA57 solver (Duff 2004) from the HSL library (HSL 2011) is used, since it is the most advanced in-core serial solver according to Tasseff et al. (2019). For an in-depth comparison between linear solvers for IPOPT, see Kelman (2015).
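The per-zone dispatch described above can be sketched in a few lines of Python. The URL scheme and endpoint names are assumptions for illustration; the HTTP call is passed in as a function so the sketch stays independent of any particular client library:

```python
# Sketch (hypothetical endpoints): trigger the identification of every
# building zone concurrently, one request per Kubernetes pod.
from concurrent.futures import ThreadPoolExecutor


def identify_zones(zone_ids, send_request, base_url="http://zone-identifier"):
    """Dispatch one identification request per zone in parallel.

    Each pod is assumed to expose an endpoint keyed by the zone identifier,
    e.g. http://zone-identifier/identify/zone-3 (illustrative URL scheme).
    `send_request` takes a URL and returns the pod's response.
    """
    def trigger(zone_id):
        return zone_id, send_request(f"{base_url}/identify/{zone_id}")

    # One worker per zone: wall-clock time no longer grows with zone count.
    with ThreadPoolExecutor(max_workers=max(1, len(zone_ids))) as pool:
        return dict(pool.map(trigger, zone_ids))
```

With a real HTTP client plugged in as `send_request`, each call lands on the pod that encapsulates the IPOPT-based estimation for that zone.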