Edge-Based Blockchain Design for IoT Security
Published in Sudhir Kumar Sharma, Bharat Bhushan, Aditya Khamparia, Parma Nand Astya, Narayan C. Debnath, Blockchain Technology for Data Privacy Management, 2021
Pao Ann Hsiung, Wei-Shan Lee, Thi Thanh Dao, I. Chien, Yong-Hong Liu
Docker is an open source project that mainly provides the deployment and automated management of containerized applications. Deploying the Docker engine on an operating system provides a software abstraction layer, such that applications can be automatically deployed in containers through Docker images. It is very lightweight. Compared to traditional virtual machine technology, a Docker container has the following advantages: a high-performance virtualized environment, easy migration and extension of services, simplified management, and more efficient use of physical host resources.
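As an illustration of this automated deployment flow, the sketch below uses the Docker SDK for Python to build an image and start a container from it. The image tag, port mapping, and the presence of a Dockerfile in the working directory are assumptions, not details from the chapter.

```python
import docker

# Connect to the local Docker engine.
client = docker.from_env()

# Build an image from a Dockerfile in the current directory
# (the tag is a placeholder).
image, _ = client.images.build(path=".", tag="iot-edge-app:latest")

# Deploy the application as a container created from that image.
container = client.containers.run(
    "iot-edge-app:latest",
    detach=True,
    name="iot-edge-app",
    ports={"8080/tcp": 8080},  # assumed service port
)
print(container.short_id, container.status)
```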
A Novel Online Programmable Platform for Remote Programmable Control Experiment Development
Published in Ning Wang, Qianlong Lan, Xuemin Chen, Gangbing Song, Hamid Parsaei, Development of a Remote Laboratory for Engineering Education, 2020
Ning Wang, Qianlong Lan, Xuemin Chen, Gangbing Song, Hamid Parsaei
In the platform kernel layer, there are four core modules: the Cloud9 IDE kernel, the MediaWiki engine, a real-time data transmission module, and a real-time video transmission module. To support the MediaWiki engine and the two real-time communication modules, a combined solution of the Apache web engine and the Node.js web engine has been used [26]. Cloud9 IDE is the key module for building a server-side programmable environment, and it is supported by system-support APIs. Moreover, with the code parsing, building, and inline queries of Cloud9 IDE, the user-developed program running on the server side can communicate with the experimental device in real time. To combine the MediaWiki engine and the Cloud9 IDE kernel, all of the modules in the platform kernel layer run under the Docker daemon. Docker has been deployed as a lightweight container to package the web-based IDE and other software modules [168]. As a Docker container does not replicate the full operating system (OS), it is not the same as a traditional virtual machine; it virtualizes only the libraries and binaries of the application [169]. Thus, Docker offers the ability to deploy scalable services, such as the web-based IDE and the MediaWiki engine, on a wide variety of platforms. Moreover, the container package with Cloud9 can be distributed into the Wiki-based platform without worrying about inconsistencies between the development and deployment environments.
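A rough sketch of how such modules might be combined under the Docker daemon is given below, again using the Docker SDK for Python; the network name is arbitrary, and the official mediawiki and node images merely stand in for the platform's actual packaged modules (Cloud9 IDE, MediaWiki engine, and the real-time transmission modules).

```python
import docker

client = docker.from_env()

# A user-defined bridge network lets the service containers reach
# each other by name.
client.networks.create("platform-net", driver="bridge")

# Placeholder images standing in for the platform's packaged modules.
wiki = client.containers.run(
    "mediawiki", detach=True, name="wiki-engine", network="platform-net"
)
realtime = client.containers.run(
    "node:lts",
    ["node", "-e", "setInterval(() => {}, 1000)"],  # keep the module alive
    detach=True, name="realtime-module", network="platform-net",
)

for c in (wiki, realtime):
    print(c.name, c.image.tags)
```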
The Role of Artificial Intelligence for Network Automation and Security
Published in Mazin Gilbert, Artificial Intelligence for Autonomous Networks, 2018
Containers are used to build microservices. They help to isolate, transport, and run software on physical machines and VMs. Containers differ from VMs in that applications running in containers can share the same operating system. Microservices typically deploy faster on containers than on VMs, which can be an advantage when horizontal scaling is needed. A popular way to use containers is through Docker [33], an open source project that aims to automate the deployment of applications inside portable containers, independent of the hardware or the operating system used. Managing a group of Docker containers requires new technologies and tools, the most popular being Kubernetes [34], an open source project for automating the deployment, scaling, and management of containerized applications. Kubernetes, along with other technologies described in later chapters, is critical for making services cloud native: resilient, scalable, and self-healing.
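As a hedged sketch of the kind of automation Kubernetes provides, the snippet below uses the official Kubernetes Python client to scale a Deployment horizontally; the deployment name, namespace, and replica count are placeholders, and access to a cluster via a local kubeconfig is assumed.

```python
from kubernetes import client, config

# Load cluster credentials from the local kubeconfig (assumed to exist).
config.load_kube_config()
apps = client.AppsV1Api()

# Horizontally scale a containerized microservice by patching its
# Deployment's replica count (names below are placeholders).
apps.patch_namespaced_deployment_scale(
    name="my-microservice",
    namespace="default",
    body={"spec": {"replicas": 5}},
)

# Confirm the desired replica counts across the namespace.
for d in apps.list_namespaced_deployment("default").items:
    print(d.metadata.name, d.spec.replicas)
```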
Client profile prediction using convolutional neural networks for efficient recommendation systems in the context of smart factories
Published in Enterprise Information Systems, 2022
Nadia Nedjah, Victor Ribeiro Azevedo, Luiza De Macedo Mourelle
For the development of the training model, a virtual environment is created with a Docker container. A Docker container is a standard encapsulated environment that runs applications through virtualisation at the operating-system level. A Docker image is a file used to execute code in a Docker container. An image is an immutable file that is essentially the representation of a container. Images are created with the build command, and they produce a container when started with the run command. The Docker container image of (Maksimov 2019) is used, with the following tools: Jupyter, Matplotlib, Pandas, TensorFlow, Keras and OpenCV. In this work, TensorFlow and Keras are used with the Python language to perform the training. Note that TensorFlow is an open source software library for machine intelligence created by Google in 2015. With TensorFlow, it is possible to define machine learning models, train them with data, and export them. TensorFlow operates with tensors, which are multidimensional arrays that flow through the nodes of a computation graph.
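The define-train-export cycle mentioned above can be sketched with the Keras API as follows; the convolutional architecture, input shape, and randomly generated data are purely illustrative stand-ins, not the model or dataset used in the paper.

```python
import numpy as np
import tensorflow as tf

# Dummy data standing in for the client-profile dataset
# (shapes and class count are assumptions).
x_train = np.random.rand(100, 32, 32, 3).astype("float32")
y_train = np.random.randint(0, 5, size=(100,))

# Define a small convolutional model with the Keras API.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(32, 32, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(5, activation="softmax"),
])

# Train the model with data and export it.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3, batch_size=16)
model.save("client_profile_model.h5")
```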
World robot challenge 2020 – partner robot: a data-driven approach for room tidying with mobile manipulator
Published in Advanced Robotics, 2022
Tatsuya Matsushima, Yuki Noguchi, Jumpei Arima, Toshiki Aoki, Yuki Okita, Yuya Ikeda, Koki Ishimoto, Shohei Taniguchi, Yuki Yamashita, Shoichi Seto, Shixiang Shane Gu, Yusuke Iwasawa, Yutaka Matsuo
We adopted the method proposed by L. El Hafi et al. [20, 21], which configures the software development environment in a Docker container. Using Docker has several advantages, such as easy management of module versions and the ability to rapidly reproduce the same environment on different machines. In addition, unlike VirtualBox and other virtual machines, Docker operates on the kernel of the host machine, so it runs nearly as fast as running directly on the host. This is crucial when executing systems that require heavy computation, such as deep learning inference. The proposed software development environment is based on the Robot Operating System (ROS) [22] on Ubuntu. Owing to the asynchronous communication feature provided by ROS, multiple modules can be processed efficiently in parallel. ROS also makes it easier to develop distributed systems, thus enhancing the stability of the system.
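To illustrate the asynchronous communication that ROS provides between modules, here is a minimal rospy publisher/subscriber sketch; the node and topic names are invented for illustration and do not come from the paper's system.

```python
#!/usr/bin/env python
import rospy
from std_msgs.msg import String


def on_detection(msg):
    # Incoming messages are handled in a callback, so this module does not
    # block the publishers running in other processes.
    rospy.loginfo("received detection: %s", msg.data)


def main():
    rospy.init_node("tidying_module")  # illustrative node name
    rospy.Subscriber("detected_objects", String, on_detection)
    pub = rospy.Publisher("grasp_commands", String, queue_size=10)

    rate = rospy.Rate(1)  # publish at 1 Hz
    while not rospy.is_shutdown():
        pub.publish("pick next object")
        rate.sleep()


if __name__ == "__main__":
    main()
```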
A container-based approach for sharing environmental models as web services
Published in International Journal of Digital Earth, 2021
Xiaohui Qiao, Zhiyu Li, Fengyuan Zhang, Daniel P. Ames, Min Chen, E. James Nelson, Rohit Khattar
Docker is an open source project designed to develop, deploy, and run applications using isolated containers (Merkel 2014). Containers allow users to bundle an application with all of the parts it needs into one package. Containers are more lightweight than virtual machines because they share the operating system kernel with the host machine. A typical desktop computer could run no more than a few virtual machines at once but would have no trouble running 100 Docker containers (Boettiger 2014). Docker was initially designed to package applications for Linux systems. Docker images (a container is an instance of an image) share the Linux kernel of the host machine, which means that Docker images must be based on a Linux system with Linux-compatible software. However, Docker Linux containers can be installed and run both on Linux and on platforms that are not based on the Linux kernel (such as macOS and Windows), which is accomplished through a small VirtualBox-based VM running on the host OS. In recent years, the Docker developer team released Windows containers for packaging Windows applications. Docker Windows containers share the Windows kernel with the host machine and currently only support deployment on Windows 10 systems.
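The lightweight nature of containers is easy to demonstrate; as a rough sketch with the Docker SDK for Python, the loop below starts a batch of small Linux containers on one host and then cleans them up (the image choice and container count are arbitrary).

```python
import docker

client = docker.from_env()

# Start a batch of lightweight containers from a small Linux image.
containers = [
    client.containers.run("alpine", "sleep 300", detach=True)
    for _ in range(20)
]

print(len(client.containers.list()), "containers running on this host")

# Clean up.
for c in containers:
    c.stop()
    c.remove()
```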