Adapting Our Reference Framework to Your Environment
Published in James F. Ransome, Anmol Misra, Mark S. Merkow, Practical Core Software Security, 2023
Each of these business functions is described below: Governance is centered on the processes and activities related to how an organization manages overall software development activities. More specifically, this includes concerns that cut across the groups involved in development as well as business processes that are established at the organization level. Construction concerns the processes and activities related to how an organization defines goals and creates software within development projects. In general, this includes product management, requirements gathering, high-level architecture specifications, detailed design, and implementation. Verification is focused on the processes and activities related to how an organization checks and tests artifacts produced throughout software development. This typically includes quality assurance (QA) work such as testing, but it can also include other review and evaluation activities. Deployment entails the processes and activities related to how an organization manages the release of the software that has been created. This can involve shipping products to end users, deploying products to internal or external hosts, and the normal operation of software in the runtime environment.
Role of Open Source, Standards, and Public Clouds in Autonomous Networks
Published in Mazin Gilbert, Artificial Intelligence for Autonomous Networks, 2018
Continuous delivery extends the CI process by preparing and testing the new build for production deployment. This involves pushing the CI-tested build to a staging environment where additional tests (e.g., API, load, and reliability tests) can be performed prior to deployment. If everything checks out, the developer manually signs off and the code is ready for deployment to a live production environment. Continuous deployment removes the manual developer sign-off, and the entire release process, from code all the way through to production deployment, is automated.
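To make this distinction concrete, the following is a minimal Python sketch (all stage and function names are hypothetical, not taken from the text above): continuous delivery and continuous deployment share the same pipeline, and the only difference is whether a manual developer sign-off gates the final push to production.

# Minimal sketch: the same pipeline serves continuous delivery and continuous
# deployment; only the sign-off gate differs. All names are illustrative.

def run_ci_tests(build: str) -> None:
    print(f"CI tests passed for {build}")

def push_to_staging(build: str) -> None:
    print(f"{build} pushed to staging environment")

def run_staging_tests(build: str) -> None:
    print(f"API, load, and reliability tests passed for {build}")

def developer_signs_off(build: str) -> bool:
    # Continuous delivery: a human approves the release at this point.
    return input(f"Release {build} to production? [y/N] ").strip().lower() == "y"

def deploy_to_production(build: str) -> None:
    print(f"{build} deployed to production")

def run_pipeline(build: str, continuous_deployment: bool = False) -> None:
    run_ci_tests(build)
    push_to_staging(build)
    run_staging_tests(build)
    # Continuous deployment automates the whole path from code to production;
    # continuous delivery keeps an explicit manual sign-off before release.
    if continuous_deployment or developer_signs_off(build):
        deploy_to_production(build)

if __name__ == "__main__":
    run_pipeline("build-42", continuous_deployment=True)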
DevOps and Software Factories
Published in Yves Caseau, The Lean Approach to Digital Transformation, 2022
Continuous deployment is as automated as possible; it is reversible and most often broken down into successive stages (staged deployment) that allow risk to be reduced and better controlled. Any change represents a risk, which testing seeks to reduce (the subject of the next section), and the residual risk produced by the software development pipeline materializes at the time of release. Continuous deployment uses the same tools as build and release for the different test phases. The same care must be taken to automate installation, de-installation, and reverting to a previous version (see Chapter 4). Phased deployment consists of breaking the deployment into steps so that it can be rolled back more quickly in the event of difficulties. The most common approach, called blue-green deployment, uses two production environments, blue and green: the new version is installed on the blue environment while the green environment is active, usage is then switched to the blue environment, and the green environment is kept for a rollback in case of problems. The canary deployment approach is a form of A/B testing in which a few users exercise the new version before it is deployed more widely to all users. This approach is appropriate when there are concerns that difficulties with real production data would not show up with data from test environments. If the possible difficulties are related to volume, this approach can be refined into progressive deployment, made famous by Facebook, in which the new version is deployed to increasingly large groups of users. In all cases, progressive deployments are accompanied by post-deployment tests, production tests whose purpose is to detect problems before users do.
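As a small illustration of the blue-green and canary ideas described above, here is a Python sketch; the Router class, the 5% canary share, and the post-deployment check are illustrative assumptions, not a prescribed implementation.

import random
from typing import Callable, Optional

class Router:
    """Routes each request to the active environment or, with a small
    probability, to the canary environment running the new version."""
    def __init__(self, active: str) -> None:
        self.active = active
        self.canary: Optional[str] = None
        self.canary_share = 0.0

    def route(self) -> str:
        if self.canary and random.random() < self.canary_share:
            return self.canary
        return self.active

def blue_green_release(router: Router, new_env: str,
                       post_deploy_ok: Callable[[str], bool]) -> None:
    # Canary phase: a few users exercise the new environment with real traffic.
    router.canary, router.canary_share = new_env, 0.05
    if not post_deploy_ok(new_env):
        # Rollback is immediate: most users never left the old environment.
        router.canary, router.canary_share = None, 0.0
        return
    # Switch phase: all traffic moves to the new environment; the previous
    # environment is kept untouched for a fast rollback if problems appear.
    previous = router.active
    router.active, router.canary, router.canary_share = new_env, None, 0.0
    print(f"Traffic switched to {new_env}; {previous} retained for rollback")

# Usage: green is live, the new version is installed on blue, then promoted.
router = Router(active="green")
blue_green_release(router, "blue", post_deploy_ok=lambda env: True)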
Financial Data Security Management Method and Edge Computing Platform Based on Intelligent Edge Computing and Big Data
Published in IETE Journal of Research, 2021
The operation management system is responsible for handling the work necessary for interaction between the platform and external systems, such as edge computing service state management, DNS system configuration, and container image management with binding of the corresponding domain name relationships [23]. The operation management system provides a Web console for system administration. In the deployment architecture, the EC Master, the private Docker image registry, and the domain name resolution server are mainly deployed in the data center. The EC Master exposes Web services externally, and administrators manage remote computing services through the Web console. In addition, the image registry is used to store images built by developers, and the DNS server is used for global load balancing based on DNS resolution [24]. The EC Master and the edge nodes are connected via the Internet, and container cluster management is carried out through the Kubernetes RESTful API. All edge-computing-specific service information, such as domain name bindings and service deployment status, is stored in a MySQL database. The workflow of the Docker-based CDN edge computing platform can be described in terms of the application deployment process and the user request process.
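To illustrate the kind of container cluster management the EC Master performs through the Kubernetes RESTful API, the following is a minimal sketch using the official kubernetes Python client; the image name, namespace, port, and labels are placeholders, not values from the platform described above.

from kubernetes import client, config

def deploy_edge_service(name: str, image: str, replicas: int = 1,
                        namespace: str = "edge") -> None:
    # Load cluster credentials (use config.load_incluster_config() when the
    # controller itself runs inside the cluster).
    config.load_kube_config()
    apps = client.AppsV1Api()

    container = client.V1Container(
        name=name,
        image=image,
        ports=[client.V1ContainerPort(container_port=80)],
    )
    template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": name}),
        spec=client.V1PodSpec(containers=[container]),
    )
    spec = client.V1DeploymentSpec(
        replicas=replicas,
        selector=client.V1LabelSelector(match_labels={"app": name}),
        template=template,
    )
    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name=name),
        spec=spec,
    )
    # One call to the Kubernetes API server creates the edge deployment.
    apps.create_namespaced_deployment(namespace=namespace, body=deployment)

# Example (placeholder registry and image): push a developer-built image
# from the private registry out to the edge cluster.
# deploy_edge_service("cdn-cache", "registry.example.internal/cdn-cache:1.0", replicas=3)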
A lean-TOC approach for improving Emergency Medical Services (EMS) transport and logistics operations
Published in International Journal of Logistics Research and Applications, 2019
Jose Arturo Garza-Reyes, Bernardo Villarreal, Vikas Kumar, Jenny Diaz-Ramirez
The previously described information sets the general context required to guide the determination of ambulance capacity and location. The ambulance location problem has been exhaustively treated in the Operations Research area. For example, Brotcorne, Laporte, and Semet (2003) conducted a review of ambulance location and relocation models. Leigh, Dunnett, and Jackson (2016) illustrated a scheme in which a variation of the double standard model of Gendreau, Laporte, and Semet (1997) is used for ambulance dispatching. Maghfiroh, Hossain, and Hanaoka (2018) applied a two-stage modelling approach for locating and allocating ambulances in a case study developed in Dhaka, Bangladesh. However, in this work, a scheme similar to the ones suggested by Ong et al. (2010) and Peleg and Pliskin (2004) was employed to derive such strategies. In this line, an ambulance deployment scheme supported by geospatial analyses and the ESRI Software System was developed in this study.
A Lean transportation approach for improving emergency medical operations
Published in Production Planning & Control, 2018
Bernardo Villarreal, Jose Arturo Garza-Reyes, Edgar Granda-Gutiérrez, Vikas Kumar, Samantha Lankenau-Delgado
The above-mentioned information sets the general context required to guide the determination of ambulance capacity and location. The ambulance location problem has been exhaustively treated in the operations research area. An excellent review of ambulance location and relocation models is presented by Brotcorne, Laporte, and Semet (2003). Leigh, Dunnett, and Jackson (2016) illustrate a scheme in which a variation of the double standard model of Gendreau, Laporte, and Semet (1997) is used for ambulance dispatching. However, in this work, a scheme similar to the ones suggested by Peleg and Pliskin (2004) and Leigh, Dunnett, and Jackson (2016) is used to derive such strategies. An ambulance deployment scheme supported by geospatial analyses and the ESRI Software System was developed during this study. The ESRI system contains an option for determining the optimal number and location of ambulance bases so that a certain percentage of emergency calls is covered with a transport time from the bases to the patients below a specified limit.
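The coverage criterion described above corresponds to a classical maximal covering location model: open a fixed number of bases so that as many calls as possible can be reached within the time limit. The following Python sketch, using the PuLP library with invented toy data, illustrates that underlying model; it is an assumption-based illustration, not the procedure implemented in the ESRI system.

import pulp

# Toy data (invented for illustration): candidate base sites, demand zones,
# historical call volumes, and which sites can reach each zone within the
# target transport time.
sites = ["A", "B", "C"]
zones = ["z1", "z2", "z3", "z4"]
calls = {"z1": 120, "z2": 80, "z3": 50, "z4": 30}
reachable = {"z1": ["A"], "z2": ["A", "B"], "z3": ["B", "C"], "z4": ["C"]}
p = 2  # number of ambulance bases to open

prob = pulp.LpProblem("ambulance_coverage", pulp.LpMaximize)
open_site = pulp.LpVariable.dicts("open", sites, cat="Binary")
covered = pulp.LpVariable.dicts("covered", zones, cat="Binary")

# Objective: maximize the call volume covered within the time limit.
prob += pulp.lpSum(calls[z] * covered[z] for z in zones)

# A zone counts as covered only if at least one opened site can reach it in time.
for z in zones:
    prob += covered[z] <= pulp.lpSum(open_site[s] for s in reachable[z])

# Open exactly p bases.
prob += pulp.lpSum(open_site[s] for s in sites) == p

prob.solve()
chosen = [s for s in sites if pulp.value(open_site[s]) > 0.5]
pct = 100 * pulp.value(prob.objective) / sum(calls.values())
print(f"Open bases: {chosen}; calls covered within time limit: {pct:.0f}%")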