Fault tolerance and ultimate physical limits of nanocomputation
Published in David Crawley, Konstantin Nikolić, Michael Forshaw, 3D Nanoelectronic Computer Architecture and Implementation, 2020
A S Sadek, K Nikolić, M Forshaw
If a decision problem can be solved in polynomial time, it is classified as belonging to the complexity class P. This class is contained in the larger class of decision problems NP, and is widely believed to be a proper subset of it; NP problems are those whose solutions can be verified in polynomial time given additional information in the form of a certificate (factoring is an example of such a problem). P and NP are defined in terms of deterministic algorithms, which perform the same sequence of operations every time they are executed on the same input. Once noise becomes a factor (i.e. ϵ > 0), such algorithms are no longer deterministic but randomized. Such randomized algorithms, whether randomized by design or as a result of noise, belong to the larger complexity class BPP (bounded-error probabilistic polynomial time). The noise-tolerant architectures that we will explore for nanocomputation will necessarily belong to this complexity class. A corresponding class, BQP, encompasses fault-tolerant quantum computation.
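The property that makes BPP (and the noise-tolerant architectures built on it) workable is that a two-sided error bounded away from 1/2 can be driven down by repetition and majority voting. The Python sketch below is not from the chapter: the noisy_answer stand-in, its 0.3 error rate, and the repetition count are illustrative assumptions, but it shows the amplification numerically.

```python
import random
from collections import Counter

def noisy_answer(true_answer: bool, error_rate: float = 0.3) -> bool:
    """Stand-in for a bounded-error (BPP-style) computation: it returns
    the correct answer with probability 1 - error_rate (illustrative value)."""
    return true_answer if random.random() > error_rate else not true_answer

def amplified_answer(true_answer: bool, repetitions: int = 31) -> bool:
    """Repeat the noisy computation and take a majority vote.
    By the Chernoff bound, the error falls exponentially in `repetitions`
    as long as the single-run error rate stays bounded below 1/2."""
    votes = Counter(noisy_answer(true_answer) for _ in range(repetitions))
    return votes[True] > votes[False]

if __name__ == "__main__":
    trials = 10_000
    single = sum(noisy_answer(True) for _ in range(trials)) / trials
    amplified = sum(amplified_answer(True) for _ in range(trials)) / trials
    print(f"single-run accuracy:    {single:.3f}")     # about 0.70
    print(f"majority-vote accuracy: {amplified:.3f}")  # close to 1.0
```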
Mathematical Background
Published in Alfred J. Menezes, Paul C. van Oorschot, Scott A. Vanstone, Handbook of Applied Cryptography, 2018
Alfred J. Menezes, Paul C. van Oorschot, Scott A. Vanstone
2.77 Definition (randomized complexity classes) The complexity class ZPP (“zero-sided probabilistic polynomial time”) is the set of all decision problems for which there is a randomized algorithm with 0-sided error which runs in expected polynomial time. The complexity class RP (“randomized polynomial time”) is the set of all decision problems for which there is a randomized algorithm with 1-sided error which runs in (worst-case) polynomial time. The complexity class BPP (“bounded error probabilistic polynomial time”) is the set of all decision problems for which there is a randomized algorithm with 2-sided error which runs in (worst-case) polynomial time.
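As a concrete illustration of 1-sided error in the sense of this definition, here is a minimal Python sketch of the Miller-Rabin compositeness test, a standard example of an RP-style algorithm for the decision problem "is n composite?". The round count and the small-prime shortcut are illustrative choices, not taken from the Handbook.

```python
import random

def is_probably_prime(n: int, rounds: int = 20) -> bool:
    """Miller-Rabin test with 1-sided error: if n is prime the test always
    answers True; if n is composite, each round wrongly answers True with
    probability at most 1/4, so `rounds` independent rounds leave an error
    probability of at most 4**-rounds."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):          # quick trial division shortcut
        if n % p == 0:
            return n == p
    d, r = n - 1, 0                          # write n - 1 = d * 2**r with d odd
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                     # witness found: n is certainly composite
    return True                              # no witness: n is prime with high probability

print(is_probably_prime(2**61 - 1))  # True: a Mersenne prime
print(is_probably_prime(2**61 + 1))  # False: divisible by 3
```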
An improved risk and reliability framework-based maintenance planning for food processing systems
Published in Quality Technology & Quantitative Management, 2023
Hamzeh Soltanali, Mehdi Khojastehpour, José Torres Farinha
This study presents a framework of reliability-modeling-based maintenance planning for the food processing industries. In the proposed framework, potential failures were identified through risk-based approaches such as Failure Mode and Effect Analysis (FMEA) and Fault Tree Analysis (FTA), and the validity of the independent and identically distributed assumption for the failure process was investigated with trend and correlation tests. In addition, the RP, NHPP, HPP, and BPP models were fitted by the MLE method to find the best-fitting model of the failure data. Finally, the failure rate and reliability functions were calculated, and their outcomes were used to recommend maintenance intervals. The results of the proposed framework indicated that, to achieve a high level of reliability, the main bottlenecks of the process should be treated as a priority. To suggest the maintenance intervals, various reliability levels reflecting the plant policy were considered. Furthermore, owing to the series configuration of the equipment in the edible oil purification line, opportunistic maintenance intervals were adopted to reduce the interval times as much as possible. The proposed maintenance intervals could be useful for improving the availability and safety of critical equipment in the edible oil industry. Consequently, the results of the present study could also benefit other food and agro-industries in achieving sustainable production lines.
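As a minimal illustration of the final step of such a framework, the sketch below fits the simplest constant-failure-rate (HPP/exponential) model by MLE and derives maintenance intervals for several reliability targets. The failure data and the restriction to the HPP case are assumptions made for the sketch, not values from the study.

```python
import math

# Illustrative failure-interval data in operating hours (made up for the sketch;
# a real study would use the recorded failure times of the equipment).
time_between_failures = [120.0, 95.0, 210.0, 160.0, 80.0, 140.0, 185.0, 110.0]

# Under an HPP (constant failure rate) model, the MLE of the rate is the
# number of failures divided by the total observed operating time.
n_failures = len(time_between_failures)
total_time = sum(time_between_failures)
failure_rate = n_failures / total_time            # failures per hour

def reliability(t: float, rate: float) -> float:
    """R(t) = exp(-lambda * t) for the exponential/HPP model."""
    return math.exp(-rate * t)

def maintenance_interval(target_reliability: float, rate: float) -> float:
    """Largest interval t such that R(t) >= target, i.e. t = -ln(R*) / lambda."""
    return -math.log(target_reliability) / rate

for target in (0.95, 0.90, 0.80):
    t = maintenance_interval(target, failure_rate)
    print(f"R >= {target:.2f}  ->  preventive maintenance every {t:.0f} h")
```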
Reflections on probabilistic compared to quantum computational devices
Published in International Journal of Parallel, Emergent and Distributed Systems, 2021
Indeed, if we reason in a probabilistic way, only two of the 2^(2^n) possible functions on n variables are constant, so, as n increases, the probability of error when guessing that f is not constant tends to 0. If we evaluate f at t random distinct inputs and at least one value differs from the others, we can affirm with zero error that f is not constant. In a similar way, in [5] we can affirm that after t distinct evaluations of a function f of n variables, the probability that we have not correctly distinguished whether f is balanced or constant is at most 1/2^(t-1). So in this case we can have a PTM that efficiently simulates the Deutsch–Jozsa quantum algorithm if we admit a probabilistic behaviour and a relative error of at most 1/2^(t-1). Of course, this demonstrates that the algorithm belongs to the BPP complexity class (polynomial time when run on a PTM), but does not demonstrate that it is in P (polynomial time when run on a DTM).
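A minimal Python sketch of this classical randomized test follows; the oracle functions and the choice of t are illustrative and not taken from [5].

```python
import random

def classify(f, n: int, t: int = 20) -> str:
    """Classical randomized test for the Deutsch-Jozsa promise problem:
    f is promised to be either constant or balanced on {0,1}^n.
    Evaluate f on t randomly chosen distinct inputs; any two different
    outputs prove (with zero error) that f is not constant, while t equal
    outputs let us answer "constant" with error at most 1/2**(t-1)."""
    inputs = random.sample(range(2 ** n), k=min(t, 2 ** n))
    values = {f(x) for x in inputs}
    return "balanced" if len(values) > 1 else "constant"

# Usage with two illustrative oracles on n = 10 variables.
n = 10
constant_f = lambda x: 1
balanced_f = lambda x: x & 1      # 0 on half the inputs, 1 on the other half
print(classify(constant_f, n))    # always "constant"
print(classify(balanced_f, n))    # "balanced", except with probability <= 1/2**19
```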
The robust machine availability problem – bin packing under uncertainty
Published in IISE Transactions, 2018
Guopeng Song, Daniel Kowalczyk, Roel Leus
A large body of literature exists on scheduling identical machines in parallel. Dell’Amico and Martello (1995) propose a Branch-and-Bound (B&B) algorithm for P||Cmax (makespan minimization on identical parallel machines), using tight bounds based on the relationship with the Bin Packing Problem (BPP). Mokotoff (2004) uses a MIP formulation with a cutting-plane method to solve P||Cmax to optimality. Comparative experiments carried out by Dell’Amico and Martello (2005) show that their 1995 algorithm consistently outperforms the method of Mokotoff (2004). Dell’Amico et al. (2008) propose a heuristic method for P||Cmax based on scatter search, followed by an exact binary search procedure with an integrated Branch-and-Price (B&P) scheme that iteratively solves BPP instances as a subproblem. Van den Akker et al. (1999) investigate parallel machine scheduling to minimize the total weighted completion time. They propose a B&P algorithm based on a set partitioning formulation, with the pricing problem solved by a Dynamic Programming (DP) procedure, and branch on the job completion times so that the structure of the pricing problem is preserved. Chen and Powell (1999) independently propose a Column Generation (CG) procedure for the same problem, but branch on the jobs’ pairwise precedence relations in order to maintain the structure of the pricing problem. Pessoa et al. (2010) develop an arc-time-indexed formulation for the total weighted tardiness problem on parallel machines, which is solved using a branch-cut-and-price procedure.
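To make the relationship between makespan minimization and the BPP concrete, the sketch below binary-searches the makespan and uses a first-fit-decreasing heuristic as a stand-in for the exact bin packing feasibility check solved in the cited methods. The job data and the heuristic (which yields only an upper bound) are illustrative assumptions, not part of any of the cited algorithms.

```python
def first_fit_decreasing(jobs, capacity):
    """Heuristic bin packing: place each job (largest first) on the first
    machine whose load stays within `capacity`; returns the machine loads."""
    bins = []
    for p in sorted(jobs, reverse=True):
        for i, load in enumerate(bins):
            if load + p <= capacity:
                bins[i] = load + p
                break
        else:
            bins.append(p)
    return bins

def makespan_by_binary_search(jobs, m):
    """Binary search on the makespan C: a value C is accepted if the packing
    fits the jobs into at most m machines of capacity C.  Because FFD is only
    a heuristic feasibility check, the result is an upper bound on the optimal
    makespan; exact methods such as Dell'Amico et al. (2008) solve a BPP
    exactly at each step instead."""
    lo = max(max(jobs), -(-sum(jobs) // m))   # simple lower bound (ceiling of mean load)
    hi = sum(jobs)
    while lo < hi:
        mid = (lo + hi) // 2
        if len(first_fit_decreasing(jobs, mid)) <= m:
            hi = mid
        else:
            lo = mid + 1
    return lo

jobs = [7, 9, 4, 14, 6, 10, 5, 8]   # processing times (illustrative)
print(makespan_by_binary_search(jobs, m=3))
```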