Software design for building model servers: Concurrency aspects
Published in Manuel Martínez, Raimar Scherer, eWork and eBusiness in Architecture, Engineering and Construction, 2020
The functional requirements prescribe that old versions of all data must be stored for later retrieval. This matches one of the requirements of multi-version concurrency control (MVCC), a family of techniques for handling concurrent access to data that has been shown to provide more concurrency than single-version techniques (Bernstein and Goodman 1983, Carey and Muhanna 1986). In MVCC, atomicity and isolation between competing readers/writers is based on the principle that each write access to a resource results in a new version of that resource, while older versions remain available for read access.
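The versioning principle described above can be sketched in a few lines. This is an illustrative toy, not the system described in the chapter; the `VersionedStore` class and its keys are invented for the example. Each write appends a new version of a resource, so readers can still retrieve any older version:

```python
from dataclasses import dataclass, field

@dataclass
class VersionedStore:
    # key -> list of (version number, value); old entries are never overwritten
    versions: dict = field(default_factory=dict)

    def write(self, key, value):
        history = self.versions.setdefault(key, [])
        history.append((len(history) + 1, value))  # new version; old ones remain
        return len(history)

    def read(self, key, version=None):
        history = self.versions[key]
        if version is None:
            return history[-1][1]       # latest version
        return history[version - 1][1]  # any retained older version

store = VersionedStore()
store.write("wall", "height=2.0m")
store.write("wall", "height=2.5m")       # does not destroy the first version
print(store.read("wall"))                # latest value
print(store.read("wall", version=1))     # older version still readable
```

Because readers address an immutable version, they never block writers, which is the source of the extra concurrency MVCC offers over single-version locking.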
Systems Management
Published in Paul J. Fortier, Handbook of Local Area Network Software, 1991
Concurrency control methods based on locking require a transaction to acquire all the locks it needs before releasing any of them; as the transaction commits, it releases its locks, making them available to other transactions. This is referred to as two-phase locking, where the first phase is the growing phase (lock acquisition) and the second phase is the shrinking phase (lock release). If every transaction follows the rule of acquiring the necessary locks before operating on the data and releasing them only upon completion, conflicting interleavings cannot occur and the resulting schedules are serializable.
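A minimal sketch of the two-phase discipline, using invented account names and Python's `threading.Lock`. All locks are acquired (growing phase) before the updates run, and released only afterwards (shrinking phase); acquiring them in a fixed global order additionally avoids deadlock:

```python
import threading

locks = {"A": threading.Lock(), "B": threading.Lock()}

def transfer(accounts, src, dst, amount):
    # Growing phase: acquire every needed lock before touching the data,
    # in a fixed global order to avoid deadlock.
    for name in sorted((src, dst)):
        locks[name].acquire()
    try:
        accounts[src] -= amount
        accounts[dst] += amount
    finally:
        # Shrinking phase: release the locks; no new lock may be
        # acquired after this point.
        for name in sorted((src, dst)):
            locks[name].release()

accounts = {"A": 100, "B": 50}
transfer(accounts, "A", "B", 30)
print(accounts)  # {'A': 70, 'B': 80}
```

Holding all locks until completion, as here, is the strict variant of two-phase locking commonly used in practice.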
Databases
Published in Ian Foster, Rayid Ghani, Ron S. Jarmin, Frauke Kreuter, Julia Lane, Big Data and Social Science, 2020
Transactions are also key to supporting multi-user access. The concurrency control mechanisms in a DBMS allow multiple users to operate on a database concurrently, as if each were the only user of the system: transactions from multiple users can be interleaved to ensure fast response times, while the DBMS ensures that the database remains consistent. While entire books could be (and have been) written on concurrency in databases, the key point is that read operations can proceed concurrently, while update operations are typically serialized.
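The atomicity that transactions provide can be seen with Python's built-in `sqlite3` module (a small example under assumed table and account names, not tied to any particular DBMS from the text): the `with conn:` block commits both updates together, or rolls both back if either fails, so concurrent readers never observe a half-applied transfer.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 50)")
conn.commit()

try:
    with conn:  # one atomic transaction: both updates commit, or neither does
        conn.execute(
            "UPDATE accounts SET balance = balance - 30 WHERE name = 'alice'")
        conn.execute(
            "UPDATE accounts SET balance = balance + 30 WHERE name = 'bob'")
except sqlite3.Error:
    pass  # on error the whole transaction is rolled back

print(dict(conn.execute("SELECT name, balance FROM accounts")))
# {'alice': 70, 'bob': 80}
```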
Optimal pricing and service capacity management for a matching queue problem with loss-averse customers
Published in Optimization, 2021
Tao Jiang, Xudong Chai, Lu Liu, Jun Lv, Sherif I. Ammar
In service industries, queuing problems known as double-sided matching queues are considered a highly effective and prevalent method for managing congestion. Many studies have extensively investigated double-sided matching queues due to their wide range of applications, including network routing [1], organ transplant systems [2,3], transportation systems [4–6], passenger-taxi systems [7,8], modern financial markets [9], database concurrency control, load balancing for communication protocols, perishable inventory management, and online services (e.g. taxi-hailing applications, online dating, job search applications, etc.). Taking transportation systems as an example, at a transportation station, newly arriving passengers form a queue at the passenger waiting area, observe the delay information and the fee imposed by the decision-maker, and then determine whether to use the facility according to their individual utility. Meanwhile, the arrival of transportation facilities (e.g. buses or trains) to pick up the waiting passengers forms the other queue. Thus, this situation represents a matching queue problem between passengers and transportation facilities. Whenever both queues are non-empty, the matching process is initiated immediately, with the matched passengers and transportation facilities leaving the station together. By combining a real-world scenario with the queuing approach, the decision-maker can make optimal decisions that decrease the passengers' waiting time and improve his or her revenue as well as social utility.
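The passenger-facility matching dynamics can be illustrated with a toy simulation (my own sketch, assuming FIFO matching and instant departure of matched pairs; the event names are invented): arrivals join their side's queue, and whenever both queues are non-empty a match forms immediately and leaves.

```python
from collections import deque

def matching_queue(events):
    """Match passengers to facilities FIFO; events is a list of
    (side, label) arrivals in time order."""
    passengers, facilities = deque(), deque()
    matches = []
    for side, label in events:
        (passengers if side == "passenger" else facilities).append(label)
        # Whenever both queues are non-empty, matching starts
        # immediately and the matched pair leaves together.
        while passengers and facilities:
            matches.append((passengers.popleft(), facilities.popleft()))
    return matches

events = [("passenger", "p1"), ("passenger", "p2"),
          ("facility", "taxi1"), ("facility", "taxi2"),
          ("passenger", "p3")]
print(matching_queue(events))
# [('p1', 'taxi1'), ('p2', 'taxi2')]  -- p3 waits for the next facility
```

In the paper's setting, passengers would additionally decide whether to join based on posted delay and fee; this sketch covers only the matching mechanics.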