Semantic-based decomposition of long-lived optimistic transactions in advanced collaborative environments
Published in Manuel Martínez, Raimar Scherer, eWork and eBusiness in Architecture, Engineering and Construction, 2020
These benefits, however, come at a cost, since there is a trade-off between data availability and consistency that is common to all distributed systems (Anderson et al. 1998). Optimistic replication faces the challenges of diverging replicas, conflicts between concurrent operations, and disturbed consistency. These issues are especially critical for the advanced collaborative environments expected to be built and deployed in both academia and industry in the near future (Semenov et al. 2004a, Weise & Katranuschkov 2005). Enabling long-lived transactions, supporting semantically rich operations, and managing complex multidisciplinary scientific and engineering data, such as those defined by the STEP application protocols (ISO 1994), the emerging IFC standard for the architecture, engineering and construction domain (IAI 1999), and MDA models (OMG 2006), make implementing an optimistic replication approach for such purposes a nontrivial task.
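As a point of orientation only, the sketch below shows one generic way that diverging optimistic replicas and conflicting concurrent operations are commonly detected, using version vectors; it is not the semantic decomposition technique of the chapter, and the site names and `compare` function are illustrative assumptions.

```python
# Hedged illustration: version-vector comparison for optimistic replicas.
# Two replicas that have each advanced past the other are "concurrent",
# i.e. their updates conflict and must be reconciled.
def compare(vv_a, vv_b):
    """Return 'equal', 'a_newer', 'b_newer', or 'concurrent'."""
    sites = set(vv_a) | set(vv_b)
    a_ahead = any(vv_a.get(s, 0) > vv_b.get(s, 0) for s in sites)
    b_ahead = any(vv_b.get(s, 0) > vv_a.get(s, 0) for s in sites)
    if a_ahead and b_ahead:
        return "concurrent"   # conflicting concurrent operations
    if a_ahead:
        return "a_newer"
    if b_ahead:
        return "b_newer"
    return "equal"

print(compare({"site1": 2, "site2": 1}, {"site1": 1, "site2": 3}))  # -> concurrent
```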
Big Data Techniques and Security
Published in Rakesh M. Verma, David J. Marchette, Cybersecurity Analytics, 2019
Rakesh M. Verma, David J. Marchette
Broadly speaking, there are two approaches to reliability: replication and encoding. Replication involves making copies of sensors, data, or, in general, any component or unit of a system that can fail and whose correct functioning is critical to performance and/or safety. NASA is famous for its triple modular redundancy, which means using three units (e.g., sensors) instead of one and taking a “majority” vote of their outputs. Encoding refers to redundancy in the form of error-correcting codes. For example, computing parity bits on a binary file can help to detect and correct a certain number of errors. Encoding can be more efficient than replication; the trade-off is that replication is applicable more generally, whereas encoding applies only to more limited situations such as data.
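A minimal sketch of the two ideas, not taken from the chapter: majority voting over three replicated sensor readings (triple modular redundancy), and a single parity bit, which can detect a one-bit error but, on its own, cannot locate or correct it. Function names and values are illustrative.

```python
from collections import Counter

def majority_vote(readings):
    """Return the most common of three (or more) replicated outputs."""
    value, _count = Counter(readings).most_common(1)[0]
    return value

def parity_bit(bits):
    """Even parity: 1 if the number of set bits is odd, else 0."""
    return sum(bits) % 2

# Replication: one faulty sensor is outvoted by the other two.
print(majority_vote([42, 42, 17]))        # -> 42

# Encoding: a flipped bit changes the parity, flagging an error.
data = [1, 0, 1, 1]
p = parity_bit(data)
corrupted = [1, 0, 0, 1]
print(parity_bit(corrupted) != p)         # -> True (error detected)
```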
Lotus Notes
Published in Paul W. Ross, The Handbook of Software for Engineers and Scientists, 2018
Replication is the process of keeping multiple copies of a database in synchronization with each other. Users can make a replica of a database before disconnecting their portable device from the network. This allows users to continue to work on relevant documents while away from easy access to the server. Once the user is able to connect again by dialing in to the server, all changes made to the user’s documents are replicated back onto the server database, and all changes that have been made on the server database are replicated to the user’s portable device. In those cases in which changes have occurred to the same documents on both ends of the “transaction,” the user is sent a replication conflict notification.
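The following is a hedged sketch of this two-way pattern, not Lotus Notes' actual replication algorithm: each side's documents are merged using a record of what was seen at the last synchronization, and documents edited on both sides are flagged as replication conflicts. The function name, the `last_synced` bookkeeping, and the document values are assumptions for illustration.

```python
def replicate(server, portable, last_synced):
    """Merge two replicas keyed by document id; flag concurrent edits.

    `last_synced` holds the version each side saw at the previous sync.
    """
    conflicts = []
    for doc_id in set(server) | set(portable):
        s, p = server.get(doc_id), portable.get(doc_id)
        base = last_synced.get(doc_id)
        if s == p:
            continue                       # already identical
        elif p == base:
            portable[doc_id] = s           # only the server copy changed
        elif s == base:
            server[doc_id] = p             # only the portable copy changed
        else:
            conflicts.append(doc_id)       # both changed: replication conflict
    return conflicts

server   = {"memo1": "v2"}
portable = {"memo1": "v3"}
print(replicate(server, portable, {"memo1": "v1"}))  # -> ['memo1']
```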
Development of the information system for the Kazakh language preprocessing
Published in Cogent Engineering, 2021
Darkhan Akhmed-Zaki, Madina Mansurova, Gulmira Madiyeva, Nurgali Kadyrbek, Marzhan Kyrgyzbayeva
Therefore, the architecture being developed must take into account the important properties described earlier. At present, the low-level storage architecture has been implemented using the NoSQL database management system MongoDB together with an application programming interface (API). The API allows interaction with the databases without connecting to them directly, eliminating problems with low-level interaction with the MongoDB database nodes. Fault tolerance and scalability can be achieved using replication. Replication is a mechanism for synchronizing databases and providing read scalability. Read scalability means that data can be obtained not only from the primary database, which accepts writes, but also from its replicas. As a result, the total read load is distributed across these databases.
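As a minimal sketch of how read load can be spread across a MongoDB replica set with the standard pymongo driver: the `secondaryPreferred` read preference routes reads to replicas when one is available, while writes still go to the primary. The host names, the replica set name `rs0`, and the database and collection names below are hypothetical, not taken from the article.

```python
from pymongo import MongoClient

# Connect to a hypothetical three-node replica set.
client = MongoClient(
    "mongodb://db1:27017,db2:27017,db3:27017",
    replicaSet="rs0",
    # Prefer secondaries for reads, distributing the read load across
    # replicas; writes are always directed to the primary.
    readPreference="secondaryPreferred",
)

tokens = client["kazakh_corpus"]["tokens"]
print(tokens.count_documents({}))  # read served by a secondary if available
```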
Approximation algorithms in partitioning real-time tasks with replications
Published in International Journal of Parallel, Emergent and Distributed Systems, 2018
Jian (Denny) Lin, Albert M. K. Cheng, Gokhan Gercek
Real-time systems, in which critical tasks are required to complete in a timely manner, are one of the most important applications of computers. In these systems, a failure or a late completion of a critical task can cause catastrophic consequences. Major benefits are expected from executing the tasks on multiprocessor technology. When a set of real-time tasks runs on a multiprocessor system, not only does the system offer extra computing power to expedite execution, it also provides ways to enhance reliability. One scheme, called replication, executes multiple copies of critical tasks on a multiprocessor system. Replication is a widely used solution to guard against late completions, core failures, or computing faults. A replica is an identical copy of the execution of a critical task; if one replica does not complete successfully, the results of the other replicas can be used. That is, replication uses space redundancy to provide fault tolerance.
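To make the space-redundancy idea concrete, here is a hedged greedy sketch, not the paper's approximation algorithm, that places each task's replicas on distinct processors so a single core failure cannot lose every copy. The task names, the two-replica default, and the utilisation bound of 1.0 per processor are assumptions for illustration.

```python
def place_replicas(tasks, num_procs, replicas=2):
    """tasks: {name: utilisation}; returns {processor index: [task names]}."""
    load = [0.0] * num_procs
    assignment = {p: [] for p in range(num_procs)}
    for name, util in tasks.items():
        used = set()
        for _ in range(replicas):
            # Pick the least-loaded processor not already hosting this task,
            # subject to the per-processor utilisation bound.
            candidates = [p for p in range(num_procs)
                          if p not in used and load[p] + util <= 1.0]
            if not candidates:
                raise ValueError(f"cannot place all replicas of {name}")
            p = min(candidates, key=lambda q: load[q])
            load[p] += util
            assignment[p].append(name)
            used.add(p)
    return assignment

print(place_replicas({"ctrl": 0.4, "nav": 0.3}, num_procs=3))
# e.g. {0: ['ctrl', 'nav'], 1: ['ctrl'], 2: ['nav']}
```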
The role of an ant colony optimisation algorithm in solving the major issues of the cloud computing
Published in Journal of Experimental & Theoretical Artificial Intelligence, 2023
Saied Asghari, Nima Jafari Navimipour
Replication is a strategy that creates multiple copies of some data and stores them at multiple sites (Goel & Buyya, 2006). It is considered one of the important mechanisms in distributed environments. Several copies of the data are stored at multiple sites, where creating, maintaining, and updating the replicas are important and challenging issues (Dayyani & Khayyambashi, 2013). Data replication in the cloud decreases user waiting time, improves data availability, and minimises bandwidth consumption (Ahmad et al., 2010). In the following, some of the important techniques in this category are reviewed.
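As a generic illustration of why these benefits arise, and not one of the surveyed techniques, the sketch below serves a read from whichever replica site has the lowest estimated latency; the site names and latency figures are made up for the example.

```python
def pick_replica(replica_sites, latency_ms):
    """Choose the replica site with the smallest estimated latency."""
    return min(replica_sites, key=lambda site: latency_ms[site])

# Hypothetical sites holding a copy of the data and their latencies.
latency_ms = {"eu-west": 12, "us-east": 95, "ap-south": 180}
print(pick_replica(["us-east", "eu-west"], latency_ms))  # -> 'eu-west'
```

Reading from a nearby replica shortens user waiting time and keeps traffic off the wide-area links, which is the effect the survey attributes to cloud data replication.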