Blockchain User, Network and System-Level Attacks and Mitigation
Published in Shaun Aghili, The Auditor's Guide to Blockchain Technology, 2023
Nishtha Baria, Dharmil Parmar, Vidhi Panchal
Hackers have leveraged system race conditions to steal money from online banks, brokerage firms, and cryptocurrency exchanges, and even to obtain free Starbucks coffee! In computing and blockchain terminology, a race condition is any situation in which two parts of code that were meant to execute one after another instead execute out of order. In this type of attack, the scheduling algorithm can switch between threads at any point, so the execution sequence becomes difficult to predict. A race condition becomes a security vulnerability when an attacker can exploit it to make a sensitive operation execute before the associated checks have properly completed; for this reason, race condition vulnerabilities are also known as time-of-check/time-of-use vulnerabilities. Put differently, race condition attacks can be employed to bypass access controls. Hackers often target the websites of financial institutions with such attacks: if a race condition can be discovered in a critical function such as a fund transfer, cash withdrawal, or credit card payment, the attacker may gain access to large amounts of money [19].
Digital Simulation
Published in Louis Scheffer, Luciano Lavagno, Grant Martin, EDA for IC System Design, Verification, and Testing, 2018
A race condition can occur in a concurrent system when the behavior of the system depends on the order of execution of two events that are logically unordered. The most common cause of this is when one process modifies a variable and another reads the same variable at the same simulated time. This will not happen with state variables when delayed assignment is used, but it can happen with combinational variables, or with state variables if delayed assignment is not used. VHDL took the approach of making all assignments to state variables delayed, while Verilog did not. Thus, it is easier to write a model with race conditions in Verilog than in VHDL. There is an efficiency cost to delayed assignment of course, which is one of the reasons that VHDL simulators are typically slower than Verilog simulators.
High-Performance Computing and Its Requirements in Deep Learning
Published in Sanjay Saxena, Sudip Paul, High-Performance Medical Image Processing, 2022
Biswajit Jena, Gopal Krishna Nayak, Sanjay Saxena
Deadlock and race conditions are the main concerns for programs written under the shared-memory model [15, 16]. A deadlock is a situation in which two or more processes cannot make progress because each is waiting for another to finish, which is in turn waiting for yet another process, and so on. A race condition, by contrast, is a situation in which two or more processes try to access and modify the same data almost simultaneously; the final result may then differ from the desired one. Multi-core processors support the shared-memory model through languages and libraries such as OpenMP.
Algorithmic Improvements to MCNP5 for High-Resolution Fusion Neutronics Analyses
Published in Fusion Science and Technology, 2018
Scott W. Mosher, Stephen C. Wilson
In a multithreaded application, each executing thread has the same view of memory except for the thread’s local call stack and any data that are explicitly declared to be private to each thread. This is a key advantage of multithreading over MPI-based parallelism. Multithreading enables memory-efficient algorithms where large data structures are shared by all threads. In MCNP, for example, all threads access a shared copy of the continuous-energy cross sections and space- and energy-dependent weight-window parameters. When using shared memory for mutable data structures, such as mesh tally data, care must be taken to avoid race conditions. A race condition occurs when two or more threads access the same memory location concurrently and at least one of the accesses changes the stored value. The result of reading the data is then dependent on the order in which the instructions of the various operating threads happen to be executed. This type of condition can produce unexpected and incorrect results and can even lead to memory corruption. Care must also be taken to avoid thread synchronization errors, which can cause the program to deadlock when two or more threads are blocked waiting for each other to perform some action.
Parallel computing in railway research
Published in International Journal of Rail Transportation, 2020
Qing Wu, Maksym Spiryagin, Colin Cole, Tim McSweeney
The challenges in conducting parallel computing in railway research are to formulate highly independent parallelisable computing tasks and to assign balanced computing loads to the computing units. Race conditions exist in almost all parallel computing programming, but they can be well controlled by using good synchronisation methodologies and special programming techniques. Iterative optimisation, data and signal processing, and power supply simulation are three good examples of applications with highly independent computing tasks that can achieve good scalability and flexibility when using parallel computing.