Big Data Computing Using Cloud-Based Technologies
Published in Mahmoud Elkhodr, Qusay F. Hassan, Seyed Shahrestani, Networks of the Future, 2017
Samiya Khan, Kashish A. Shakil, Mansaf Alam
Theoretically, batch processing is a processing mode in which a series of jobs is performed on a batch of inputs. The MapReduce programming paradigm is widely regarded as the most effective and efficient solution for batch processing of big data, and Hadoop, a MapReduce implementation, is identified as the most popular big data processing platform. Therefore, most of the tools described in Table 19.2 are either Hadoop-based or tools that run on top of Hadoop.
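The MapReduce model mentioned above can be illustrated with a minimal single-process sketch of the classic word-count job. This shows only the programming paradigm (map, shuffle, reduce); Hadoop itself executes these stages distributed across a cluster, and the function names here (`map_phase`, `shuffle`, `reduce_phase`) are illustrative, not part of the Hadoop API.

```python
from collections import defaultdict
from functools import reduce

def map_phase(line):
    # Map: emit a (word, 1) pair for every word in an input record.
    return [(word, 1) for word in line.split()]

def shuffle(pairs):
    # Shuffle: group all emitted values by their key.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: combine the values of each key (here, sum the counts).
    return {key: reduce(lambda a, b: a + b, values)
            for key, values in groups.items()}

batch = ["big data batch", "batch processing of big data"]
pairs = [pair for line in batch for pair in map_phase(line)]
counts = reduce_phase(shuffle(pairs))
# counts["batch"] == 2, counts["processing"] == 1
```

The appeal for batch workloads is that both the map and reduce stages are embarrassingly parallel over their inputs, which is what lets Hadoop scale the same job structure to very large datasets.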
The impact of Additive Manufacturing on the product-process matrix
Published in Production Planning & Control, 2022
Daniel R. Eyers, Andrew T. Potter, Jonathan Gosling, Mohamed M. Naim
Traditionally, batch processing leads to the production of multiple identical products and is normally employed where repetition in production can yield scale economies compared to job processes, but where demand is not adequate to set up a line process. In this study, three cases demonstrated characteristics typical of batch manufacture, whereby general-purpose equipment was used in the production of multiple parts, though notably these parts are not identical: each is individually customised to the requirements of the customer. In Case 9, lamps are produced with customer-chosen text embedded into an otherwise standard lampshade design. This is an example of an Adapted Design, where the core product design and rules for manufacturing exist, but where some customisation can be made by the customer. In Case 11, customised assembly fixtures are produced, with the geometry of the fixture surface being customised to match the product it is intended to hold. Case 15 concerns the production of plastic figurines for model collectors and hobbyists, with some geometric attributes of the models customisable by the consumer. In all three cases, multiple products are produced during the same production build.
A method combining rules with genetic algorithm for minimizing makespan on a batch processing machine with preventive maintenance
Published in International Journal of Production Research, 2020
Jingying Huang, Liya Wang, Zhibin Jiang
Batch processing is implemented in many manufacturing industries, such as the semiconductor wafer fabrication industry (Chakhlevitch, Glass, and Kellerer 2011), the casting industry (Mathirajan, Sivakumar, and Chandru 2004), the aircraft industry (van der Zee et al. 2010) and so on. The advantages of batch processing are the avoidance of setups, facilitation of material handling and reduction of processing time (Xu, Chen, and Li 2013). The batch processing machine (BPM) problem combines two sub-problems: grouping jobs into batches and scheduling the batches on the batch processing machine (Jia and Leung 2015). BPM problems can be divided into two classes: compatible job families and incompatible job families (Yao, Jiang, and Li 2012). For compatible job families, jobs from different families can be processed together. Jobs in a batch start and complete processing at the same time, and the processing time of a batch is equal to the longest processing time among the jobs in the batch. Uzsoy (1994) first proved that both minimising makespan and minimising total completion time on a single batch processing machine with compatible job families are NP-hard problems, and also proposed several heuristics to solve them. Dupont and Jolai Ghazvini (1998) studied the same problem and proposed the BFLPT (best fit-longest processing time) heuristic, which is based on the best-fit algorithm. Zhou et al. (2014) addressed the problem of minimising makespan on a single batch-processing machine with dynamic job arrivals as well as arbitrary job sizes and developed a number of efficient construction heuristics.
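The batching rule described above (a batch's processing time equals the longest job in it) can be made concrete with a small sketch of a BFLPT-style heuristic: order jobs by non-increasing processing time, then place each job, best-fit by size, into the open batch with the least remaining capacity that can hold it. This is an illustrative reconstruction under the stated assumptions (single machine, compatible job families), not the authors' published code; the job tuples and capacity value are made up for the example.

```python
def bflpt(jobs, capacity):
    """BFLPT-style batching sketch.

    jobs: list of (processing_time, size) tuples.
    capacity: total size a batch may hold.
    Returns a list of batches (each a list of jobs).
    """
    # LPT rule: consider jobs in non-increasing processing-time order.
    ordered = sorted(jobs, key=lambda j: j[0], reverse=True)
    batches = []  # each entry is [residual_capacity, [jobs]]
    for time, size in ordered:
        # Best fit: open batch with the smallest residual capacity
        # that still accommodates the job.
        candidates = [b for b in batches if b[0] >= size]
        if candidates:
            best = min(candidates, key=lambda b: b[0])
            best[0] -= size
            best[1].append((time, size))
        else:
            batches.append([capacity - size, [(time, size)]])
    return [b[1] for b in batches]

def makespan(batches):
    # Each batch runs for as long as its longest job; on a single
    # machine the makespan is the sum of batch processing times.
    return sum(max(t for t, _ in batch) for batch in batches)

jobs = [(8, 4), (7, 3), (5, 5), (4, 2), (3, 6)]
batches = bflpt(jobs, capacity=10)
# Three batches form; makespan = 8 + 5 + 3 = 16
```

Packing shorter jobs alongside longer ones costs nothing here, since the batch already runs for the longest job's duration, which is why LPT ordering combined with best-fit packing tends to produce low makespans.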
Real time drone detection by moving camera using COROLA and CNN algorithm
Published in Journal of the Chinese Institute of Engineers, 2021
Aamish Sharjeel, Syed Abbas Zilqurnain Naqvi, Muhammad Ahsan
Another approach introduced for background modeling is low-rank matrix approximation (Cui et al. 2012), in which the background is modeled using Eigen-background subtraction, employing principal component analysis, to detect moving objects. In this approach, the background model is built from those pixels that are linearly correlated across an image sequence (Candès et al. 2011). The background model is then mathematically represented by the low-rank approximation of a matrix whose columns are formed by vectorized images. Hence, the background modeling problem is mapped to a low-rank approximation problem. Many algorithms have been proposed to detect foreground objects by low-rank approximation, such as Robust Principal Component Analysis (RPCA) (Bouwmans and Zahzah 2014), the Augmented Lagrangian multiplier (ALM) (Shen, Wen, and Zhang 2012), singular value thresholding (SVT) (Cai, Candès, and Shen 2010), DECOLOR (Zhou, Yang, and Yu 2013), spatiotemporal structured sparse RPCA, saliency maps for moving object detection (Rozantsev, Lepetit, and Fua 2015) and incremental gradient on the Grassmannian (Cui et al. 2012). These techniques require all the data at once for batch optimization to compute both the low-rank matrices and the sparse outliers. Two problems are associated with such batch processing: memory storage and time complexity. To improve computational efficiency, robust principal component analysis via stochastic optimization (OR-PCA) (Feng, Xu, and Yan 2013) has been proposed, and memory storage requirements are reduced in the DECOLOR (Zhou, Yang, and Yu 2013) method. Another method, contiguous outlier representation via low-rank approximation (COROLA) (Shakeri and Zhang 2016), addresses both issues: the sparsity and connectedness of DECOLOR is used to reduce memory requirements, and time efficiency is increased by estimating the background model with the help of OR-PCA using sequential low-rank approximation.
This method is useful for online applications as it takes only a single image as input at a time rather than employing batch processing. It is also more robust than existing methods, as it can successfully handle dynamic backgrounds and noisy environments in the scenes.
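The core idea behind all of these methods (stack vectorized frames into a matrix, take a low-rank approximation as the background, and treat large residuals as candidate foreground) can be sketched minimally with a rank-1 estimate computed by power iteration. This is only the underlying decomposition idea under simplifying assumptions, not RPCA, DECOLOR, OR-PCA, or COROLA themselves; the function name and toy frame data are illustrative.

```python
def rank1_background(frames, iters=50):
    """Rank-1 background estimate for a matrix whose columns are
    vectorized frames (a minimal sketch of the low-rank idea).

    frames: list of equal-length pixel vectors, one per image.
    Returns (background, residuals): the rank-1 reconstruction of
    each frame and per-pixel residual magnitudes (candidate foreground).
    """
    m, n = len(frames[0]), len(frames)
    # Power iteration on D D^T to find the leading spatial component u.
    u = [1.0] * m
    for _ in range(iters):
        # w = D^T u: projection of each frame onto u.
        w = [sum(f[i] * u[i] for i in range(m)) for f in frames]
        # v = D w: back-projection into pixel space.
        v = [sum(frames[j][i] * w[j] for j in range(n)) for i in range(m)]
        norm = sum(x * x for x in v) ** 0.5
        u = [x / norm for x in v]
    # Reconstruct each frame from its coefficient along u.
    coeffs = [sum(f[i] * u[i] for i in range(m)) for f in frames]
    background = [[c * u[i] for i in range(m)] for c in coeffs]
    residuals = [[abs(frames[j][i] - background[j][i]) for i in range(m)]
                 for j in range(n)]
    return background, residuals

# Toy sequence: every frame is a scalar multiple of one pattern,
# i.e. an exactly rank-1 "static background", so residuals vanish.
bg, res = rank1_background([[1.0, 2.0], [2.0, 4.0], [3.0, 6.0]])
```

Note that this sketch needs every frame in memory before it can run, which is exactly the batch-processing cost (memory and time) that the excerpt says motivates online variants such as OR-PCA, which update the subspace estimate one frame at a time.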