Convex Analysis and Duality Theory
Published in Fabio Silva Botelho, Functional Analysis, Calculus of Variations and Numerical Methods for Models in Physics and Engineering, 2020
where M ∈ ℕ, h_m ∈ L^q(Ω, ℝ^N), η ∈ ℝ⁺. By hypothesis, there exists a partition of Ω into a negligible set Ω₀ and open subsets Δᵢ, 1 ≤ i ≤ r, over which ∇u(x) is constant. From standard results of convex analysis in ℝ^N, for each i ∈ {1, …, r} we can obtain {αₖ ≥ 0}, 1 ≤ k ≤ N+1, and ξₖ, such that ∑_{k=1}^{N+1} αₖ = 1 and ∑_{k=1}^{N+1} αₖ ξₖ = ∇u(x), ∀ x ∈ Δᵢ,
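The "standard result" invoked here is Carathéodory's theorem, which guarantees the existence of such a convex combination with at most N+1 points. A sketch of its statement, in the notation of the passage:

```latex
% Caratheodory's theorem in R^N: any point of the convex hull of a set
% S is a convex combination of at most N+1 points of S. Applied above
% with y = grad u(x), constant on each Delta_i.
\begin{theorem}[Carath\'eodory]
Let $S \subset \mathbb{R}^N$ and $y \in \operatorname{conv}(S)$.
Then there exist $\xi_1, \dots, \xi_{N+1} \in S$ and scalars
$\alpha_1, \dots, \alpha_{N+1} \geq 0$ such that
\[
  \sum_{k=1}^{N+1} \alpha_k = 1
  \qquad \text{and} \qquad
  \sum_{k=1}^{N+1} \alpha_k \, \xi_k = y.
\]
\end{theorem}
```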
Two Level Fractional Designs
Published in Thomas J. Lorenzen, Virgil L. Anderson, Design of Experiments, 2018
Thomas J. Lorenzen, Virgil L. Anderson
The final method for creating fractional designs is through the use of computers. As is to be expected, computer packages range in sophistication. Many are basic tabulations similar to Appendices 14 through 16. These packages require, as input, the number of factors and the desired resolution. The most sophisticated program we have seen to date is PROC FACTEX in the SAS system. This program requires, as input, the factors, a list of estimable terms, and a list of non-negligible terms. The program then performs a comprehensive search using a method like that of Franklin (1985), much like the basic factor technique given earlier, to find the smallest design that does not confound any of the estimable terms with one another, does not confound the estimable terms with the non-negligible terms, but may confound non-negligible terms with one another. This is the natural extension of the resolution principle given earlier. For example, a resolution III design would have all main effects in the estimable list and an empty non-negligible list. A resolution IV design would have all main effects in the estimable list and all two-factor interactions in the non-negligible list. The Franklin method is more general, since the lists can contain specific terms, not necessarily all terms of a given order.
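The confounding rules described above can be illustrated with a small sketch (not PROC FACTEX itself): build a 2^(4−1) fraction from the generator D = ABC and check which effects share a column, which is exactly why the design is resolution IV.

```python
# Sketch (not PROC FACTEX): construct a 2^(4-1) fractional factorial with
# generator D = ABC and inspect its aliasing. Resolution IV behaviour:
# main effects are clear of each other and of two-factor interactions,
# but two-factor interactions alias in pairs.
from itertools import product

# Full 2^3 factorial in the basic factors A, B, C (levels coded -1/+1),
# with D generated as the elementwise product ABC.
runs = [{"A": a, "B": b, "C": c, "D": a * b * c}
        for a, b, c in product((-1, 1), repeat=3)]

def column(term):
    """Signed column of a factorial effect, e.g. 'AB' -> elementwise A*B."""
    out = []
    for run in runs:
        s = 1
        for factor in term:
            s *= run[factor]
        out.append(s)
    return out

def aliased(t1, t2):
    """Two effects are aliased when their columns coincide (up to sign)."""
    c1, c2 = column(t1), column(t2)
    return c1 == c2 or c1 == [-x for x in c2]

# Main effects are not confounded with one another or with 2fi's ...
assert not aliased("A", "B") and not aliased("A", "BC")
# ... but two-factor interactions are confounded with each other:
assert aliased("AB", "CD") and aliased("AC", "BD") and aliased("AD", "BC")
print("2^(4-1), D=ABC: main effects clear; 2fi alias pairs AB=CD, AC=BD, AD=BC")
```

Here the estimable list (all main effects) stays unconfounded while the non-negligible list (the two-factor interactions) is allowed to confound internally, matching the resolution IV description above.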
BK-CONWIP Adaptive Control Strategies in a Multi-Product Manufacturing System
Published in Khojasteh Yacob, Production Management, 2017
The effect of the policies used to determine the number of cards in the multi-product case has been studied in recent works. Gurgur and Altiok (2008) examined the implementation of a two-card Kanban control policy in a multi-stage, multi-product system. They proposed an approximation algorithm based on (1) characterization of the delay experienced by a product type before receiving the processor’s attention at each stage and (2) creation of subsystems for all the storage activity and phase-type modeling of the remaining system’s behavior. Olaitan and Geraghty (2013) investigated Kanban-like production control strategies operating dedicated and, where applicable, shared Kanban card allocation policies in a multi-product system with negligible set-up times and with consideration for robustness to uncertainty. The control parameters were optimized using discrete event simulation and a genetic algorithm. Park and Lee (2013) studied a multi-product CONWIP assembly system in which individual components are made to stock, but a final product is assembled from different components to meet a customer order. They developed an approximation algorithm based on a decomposition method, and an iterative procedure was then used to determine the unknown parameters of each subsystem. Ajorlou and Shams (2013) considered a multi-product, multi-machine serial production line operated under a CONWIP protocol. A mathematical model of the system was first presented, and an artificial bee colony optimization algorithm was then applied to simultaneously find the optimal work-in-process inventory level and job sequence, in order to minimize the overall makespan.
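The CONWIP release rule these studies build on can be shown in a minimal sketch: a fixed number of cards caps total work-in-process, and a new job is released only when a finished job frees a card. The line below is a deterministic two-stage tandem line with hypothetical processing times; real studies such as Olaitan and Geraghty (2013) would use stochastic discrete event simulation instead.

```python
# Minimal sketch of a CONWIP-controlled two-stage line. The card count
# caps the number of jobs in the system; parameters are illustrative.
def conwip_makespan(cards, p1, p2, n_jobs):
    """Makespan of n_jobs through two stages (times p1, p2) under a card cap."""
    f1 = f2 = 0.0
    finish2 = []  # completion times at the last stage
    for i in range(n_jobs):
        # a card (hence a release) becomes free when job i - cards departs
        release = finish2[i - cards] if i >= cards else 0.0
        f1 = max(release, f1) + p1        # stage 1 serves jobs in order
        f2 = max(f1, f2) + p2             # stage 2 waits for stage 1
        finish2.append(f2)
    return finish2[-1]

# One card serializes the line; two cards already reach the saturated rate.
assert conwip_makespan(1, 1.0, 1.0, 3) == 6.0
assert conwip_makespan(2, 1.0, 1.0, 3) == 4.0
assert conwip_makespan(10, 1.0, 1.0, 3) == 4.0
```

The asserts show the trade-off the cited optimization methods (genetic algorithms, bee colony) are tuning: adding cards shortens the makespan only up to a saturation point, beyond which extra cards just add work-in-process.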
Optimal decision-making for a single-stage manufacturing system with rework options
Published in International Journal of Systems Science: Operations & Logistics, 2020
Amir Hossein Nobil, Erfan Nobil, Bhaba R. Sarker
As discussed above, there are two types of set-up times for the rework process in EPQ models for defective items, namely zero (or negligible) set-up time and non-zero set-up time. EPQ models with zero set-up time can be treated as special cases of more general models obtained by setting the set-up time to zero. Also, there are three policies for the rework process. In the first, rework starts immediately after the regular production process (immediate rework). In the second, rework is postponed until the inventory level of perfect items reaches zero (delayed rework). Finally, some studies assume that the rework process is not performed for several consecutive production periods (accumulated rework). To the best of our knowledge, no previous study has considered delayed rework. Thus, in this paper, we consider two defective production systems with non-zero set-up times for two rework processes, immediate and delayed rework. We also compare these two rework methods to provide a comprehensive model. The contributions of the paper compared to previous studies are shown in Table 1.
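The timing difference between the two policies can be sketched numerically. All symbols below are illustrative assumptions, not the paper's notation: a lot Q produced at rate p with defective fraction beta, demand d, and rework set-up time s, with the set-up assumed to fit inside the depletion interval for the delayed case.

```python
# Sketch of immediate vs delayed rework timing within one EPQ cycle.
# Hypothetical parameters: lot size Q, production rate p, demand d,
# defective fraction beta, rework set-up time s. Perfect items accumulate
# at rate p*(1-beta) - d during production and deplete at rate d after.
def rework_start_times(Q, p, d, beta, s):
    """Return (immediate_start, delayed_start) of the rework run."""
    t1 = Q / p                           # end of regular production
    inv = (p * (1 - beta) - d) * t1      # perfect stock at time t1
    immediate = t1 + s                   # rework follows production directly
    # delayed: wait until perfect stock hits zero (set-up s assumed to be
    # performed during the depletion interval, i.e. s <= inv / d)
    delayed = t1 + inv / d
    return immediate, delayed

# Example: Q=100, p=20, d=5, beta=0.25, s=0.5
# -> production ends at t=5, stock = (15-5)*5 = 50, depletes in 10 time units
assert rework_start_times(100, 20, 5, 0.25, 0.5) == (5.5, 15.0)
```

The gap between the two start times (here 5.5 versus 15.0) is what separates the immediate and delayed policies compared in the paper; the rework duration itself is the same in both cases.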
Homogenization of elastic materials reinforced by rigid notched fibres
Published in Applicable Analysis, 2018
M. El Jarroudi, M. Er-Riani, A. Lahrouz, A. Settati
As , the sequence is bounded in . Thus, up to a subsequence, it converges to a measure in . Let and be such that . According to Besicovitch’s derivation theorem, there exists an m-negligible set , such that, ,
Loading and scheduling for flexible manufacturing systems with controllable processing times
Published in Engineering Optimization, 2019
Yi-Dong Zhou, Jeong-Hoon Shin, Dong-Ho Lee
Each machine has an automatic tool changer and a tool magazine of limited tool slot capacity, and hence it can process different operations with negligible set-up times if tooled differently. A part is processed by one or more operations with precedence relations, where an operation can be processed on any machine with the required tools. To perform an operation, one or more tools are required, and each tool requires one or more slots in the tool magazine. A tool type may be used by two or more operations, i.e. tool sharing, and has multiple copies with a limited life. The operation processing times are controllable with different processing costs. A central buffer with sufficient capacity is used to store in-process parts.
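The slot-capacity constraint with tool sharing described above can be sketched as a simple feasibility check: a set of operations can be assigned to a machine only if the distinct tools they need, each loaded once, fit the magazine. Tool names, slot counts, and requirements below are illustrative assumptions.

```python
# Sketch of the tool-magazine constraint: operations assigned to one
# machine must fit the magazine's slot capacity, with shared tool types
# loaded only once. All data here are hypothetical illustrations.
SLOTS = {"drill": 1, "mill": 2, "bore": 3}   # slots each tool type occupies

def fits_magazine(operations, tool_req, capacity):
    """True if the tools required by `operations` fit in `capacity` slots."""
    needed = set()                 # tool sharing: each tool type counted once
    for op in operations:
        needed |= set(tool_req[op])
    return sum(SLOTS[t] for t in needed) <= capacity

tool_req = {"op1": ["drill", "mill"], "op2": ["mill", "bore"], "op3": ["drill"]}

# op1 and op2 share the mill, so together they need 1 + 2 + 3 = 6 slots
assert fits_magazine(["op1", "op2"], tool_req, 6)
assert not fits_magazine(["op1", "op2"], tool_req, 5)
assert fits_magazine(["op1", "op3"], tool_req, 3)   # drill + mill = 3 slots
```

In a full loading model this check would be one constraint among many (tool life, copy counts, precedence, controllable processing times), but it captures why tool sharing matters: without it, op1 and op2 would need 8 slots instead of 6.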