DevOps and Software Factories
Published in Yves Caseau, The Lean Approach to Digital Transformation, 2022
The rigorous and regular practice of continuous integration requires automation. The practice and the various tools have been developed over the last twenty years, and today there are remarkable solutions for automating the build process. Automation and the associated tools must offer three things: simplicity (the goal being to perform the entire build with a single command), speed (since the build is run repetitively; one can argue that what made continuous integration practical is the acceleration of compilation and linking times thanks to the power of machines), and ease of reverting. It is not just the commit that needs to be automated but also the rollback. Modern tools make it possible to produce builds with readable and relatively elegant scripts (readable here meaning concise and suitably abstract; cf. the remarks in Chapter 4 on the importance of integration as code). The scripts written with these tools are declarative: they describe the target state and let the tool decide on the best construction path (this is the concept of the idempotent script presented in Chapter 4).
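The declarative, idempotent style described above can be sketched in a few lines. This is a hypothetical illustration, not any particular build tool: the step declares its target state (an output derived from sources), and running it a second time does nothing because that state has already been reached. The function name and the `.stamp` convention are assumptions for the sketch.

```python
import hashlib
import pathlib

def build_target(sources: list[str], output: str) -> bool:
    """Rebuild `output` from `sources` only if it is out of date.

    Returns True if work was done, False if the target state was already
    reached -- running the step twice in a row is a no-op (idempotence).
    """
    # Fingerprint the declared inputs.
    digest = hashlib.sha256()
    for src in sorted(sources):
        digest.update(pathlib.Path(src).read_bytes())
    stamp = pathlib.Path(output + ".stamp")
    if stamp.exists() and stamp.read_text() == digest.hexdigest():
        return False  # target state already holds: nothing to do
    # The "build" here is trivially concatenating sources into the artifact.
    pathlib.Path(output).write_bytes(
        b"".join(pathlib.Path(s).read_bytes() for s in sorted(sources)))
    stamp.write_text(digest.hexdigest())
    return True
```

Rollback then amounts to rebuilding against the previous revision of the sources: because the script only describes the target state, "going back" is just another run of the same step.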
Overall Architecture of an Intent-Driven Campus Network
Published in Ningguo Shen, Bin Yu, Mingxiang Huang, Hailin Xu, Campus Network Architectures and Technologies, 2021
NETCONF uses a client/server network architecture. The client and server communicate with each other using the remote procedure call (RPC) mechanism, with XML-encoded messages. NETCONF supports the industry's mature secure transport protocols and allows equipment vendors to extend it with proprietary functions, thereby achieving flexibility, reliability, scalability, and security. NETCONF can work with YANG to implement model-driven network management and automated, programmable network configuration, simplifying O&M and accelerating service provisioning. In addition, NETCONF allows users to commit configuration transactions, import and export configurations, and flexibly switch between predeployment testing, configuration, and configuration rollback. These functions make NETCONF an ideal protocol for SDN, Network Functions Virtualization (NFV), and other cloud-based scenarios.
Test and Verification
Published in Miroslav Popovic, Communication Protocol Engineering, 2018
The two-phase commit protocol (2PC) is one of the most widely used atomic commitment protocols (ACPs). It coordinates all the processes participating in a distributed atomic transaction in deciding whether to commit or abort (roll back) the transaction. In the theory of distributed computing, 2PC is viewed as a specialized consensus protocol. The advantages of 2PC are its simplicity and its resilience to many temporary system failures, such as process, network-node, or communication failures. However, in some rare cases, system administrators must perform manual failure-recovery procedures. To enable failure recovery, which is automatic in most cases, participating processes must maintain logs of the protocol's states. Existing 2PC variants use different logging strategies and recovery procedures.
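The two phases, and the role of the state log, can be sketched as follows. This is a minimal in-memory model, not a production implementation: the `log` lists stand in for the durable logs that real participants would write to stable storage so recovery can resume after a crash.

```python
from enum import Enum

class Vote(Enum):
    YES = "yes"
    NO = "no"

class Participant:
    """Votes in phase 1 and applies the coordinator's decision in phase 2."""

    def __init__(self, name: str, can_commit: bool = True):
        self.name = name
        self.can_commit = can_commit
        self.log: list[str] = []  # durable log in a real system

    def prepare(self) -> Vote:
        vote = Vote.YES if self.can_commit else Vote.NO
        self.log.append(f"prepared:{vote.value}")
        return vote

    def finish(self, decision: str) -> None:
        self.log.append(decision)  # "commit" or "abort"

def two_phase_commit(participants: list[Participant]) -> str:
    # Phase 1 (voting): the coordinator collects a vote from everyone.
    votes = [p.prepare() for p in participants]
    # Phase 2 (decision): commit only if every single vote was YES.
    decision = "commit" if all(v is Vote.YES for v in votes) else "abort"
    for p in participants:
        p.finish(decision)
    return decision
```

A single `NO` vote (or a timeout, not modeled here) forces a global abort, which is exactly what makes the transaction atomic across all participants.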
Adaptive load forecasting using reinforcement learning with database technology
Published in Journal of Information and Telecommunication, 2019
This experiment tests the HA setup by simulating a catastrophic failure on one node and verifying whether the adaptive forecasting system can fail over to the second node and maintain its services. For this setup, two servers were configured, each with its own storage, with one node functioning as the primary server and the other as a standby. Before the switchover test, the adaptive forecasting system was verified to be running normally on node 1. We forced the switchover through the Data Guard manager console, then brought up the dbfs_client service to mount the DBFS manually on the standby node, and finally verified the state of the mounted file system. A series of trial runs was conducted against the system to check whether the switchover was successful. The results showed that the transactions in progress had been forced to roll back by the database, which is required to maintain transaction consistency. The forecasting system could then be restarted to continue the service.
Backward chaining inference as a database stored procedure – the experiments on real-world knowledge bases
Published in Journal of Information and Telecommunication, 2018
Tomasz Xie¸ski, Roman Simiński
If data safety is the biggest concern, the InnoDB storage engine should be considered, because it has commit, rollback, and crash-recovery capabilities. Unfortunately, as the results presented in Table 3 indicate, it also performs worst in the backward-chaining inference task. The differences between the previously analysed storage engines for the smaller knowledge bases (eval416, eval1119 and bud4438) may still be considered acceptable (they are in the range of 4–9 s). However, the results for the bud22190 knowledge base show a noticeable difference: the InnoDB engine needs 36 more seconds on average to return the outcome of inference. Therefore, for larger and more complex knowledge bases, the InnoDB storage engine may not even be practical.
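The commit/rollback capability that justifies InnoDB's overhead can be illustrated with any transactional store. The sketch below uses Python's built-in SQLite driver rather than MySQL/InnoDB (purely so the example is self-contained), but the semantics it demonstrates are the same: work since the last commit can be undone atomically after a failure.

```python
import sqlite3

# In-memory database; with InnoDB the same pattern would run against MySQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE facts (id INTEGER PRIMARY KEY, body TEXT)")
conn.execute("INSERT INTO facts (body) VALUES ('kept')")
conn.commit()  # durable point: this row survives the rollback below

try:
    conn.execute("INSERT INTO facts (body) VALUES ('discarded')")
    raise RuntimeError("simulated failure mid-transaction")
except RuntimeError:
    conn.rollback()  # undo everything since the last commit

rows = [r[0] for r in conn.execute("SELECT body FROM facts ORDER BY id")]
# Only the committed row remains; the uncommitted insert was rolled back.
```

For a read-heavy workload like backward-chaining inference, which never modifies the knowledge base mid-query, this safety machinery is pure overhead, which is consistent with the timing gap reported in Table 3.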