Big Data in Medical Image Processing
Published in R. Suganya, S. Rajaram, A. Sheik Abdullah, Big Data in Medical Image Processing, 2018
R. Suganya, S. Rajaram, A. Sheik Abdullah
Some key benefits of columnar databases include:
- Compression. Column stores are very efficient at data compression and/or partitioning.
- Aggregation queries. Due to their structure, columnar databases perform particularly well with aggregation queries (such as SUM, COUNT, and AVG).
- Scalability. Columnar databases are very scalable. They are well suited to massively parallel processing (MPP), which involves having data spread across a large cluster of machines, often thousands of them.
- Fast loading and querying. Columnar stores can be loaded extremely fast. A billion-row table can be loaded within a few seconds, so querying and analysis can begin almost immediately.
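The aggregation advantage described above can be illustrated with a toy sketch: a SUM over one column of a column store reads a single homogeneous array, whereas a row store must scan every field of every record. The data and layout here are illustrative, not from any particular database.

```python
# Row-oriented layout: each record carries all of its fields.
rows = [
    {"id": 1, "region": "EU", "sales": 100},
    {"id": 2, "region": "US", "sales": 250},
    {"id": 3, "region": "EU", "sales": 175},
]

# Column-oriented layout: one array per field.
columns = {
    "id": [1, 2, 3],
    "region": ["EU", "US", "EU"],
    "sales": [100, 250, 175],
}

# Row store: the aggregation must visit every record in full.
row_total = sum(r["sales"] for r in rows)

# Column store: the aggregation touches only the "sales" array,
# which also compresses well because all values share one type.
col_total = sum(columns["sales"])

assert row_total == col_total == 525
```

The same idea is why columnar engines compress well: each column holds values of one type, often with long runs of repeats (e.g. the "region" column), which run-length and dictionary encoding exploit directly.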
The Evolution of Cloud Computing
Published in John W. Rittinghouse, James F. Ransome, Cloud Computing, 2017
John W. Rittinghouse, James F. Ransome
Data propagation time increases in proportion to the number of processors added to SMP systems. Beyond a certain point (usually somewhere around 40 to 50 processors), the performance gained by adding more processors no longer justifies their additional expense. To solve the problem of long data propagation times, message passing systems were created. In these systems, programs that share data send messages to each other to announce that particular operands have been assigned a new value. Instead of a global message announcing an operand’s new value, the message is communicated only to those areas that need to know of the change. A network is designed to support the transfer of messages between applications. This allows a great number of processors (as many as several thousand) to work in tandem in a system. These systems are highly scalable and are called massively parallel processing (MPP) systems.
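The targeted-notification idea above can be sketched in a few lines: when a shared operand changes, the new value is delivered only to the consumers that registered interest in it, not broadcast globally. The `MessageBus` class and its method names are illustrative, not taken from any particular MPP system.

```python
from collections import defaultdict
from queue import Queue

class MessageBus:
    """Routes operand updates only to interested subscribers."""

    def __init__(self):
        self._subscribers = defaultdict(list)  # operand name -> inboxes

    def subscribe(self, operand):
        """Register interest in an operand; returns a private inbox."""
        inbox = Queue()
        self._subscribers[operand].append(inbox)
        return inbox

    def publish(self, operand, value):
        # Deliver only to the processors that need this operand.
        for inbox in self._subscribers[operand]:
            inbox.put((operand, value))

bus = MessageBus()
inbox_a = bus.subscribe("x")   # processor A consumes operand x
inbox_b = bus.subscribe("y")   # processor B consumes operand y

bus.publish("x", 42)           # only A is notified
assert inbox_a.get() == ("x", 42)
assert inbox_b.empty()         # B never saw the update to x
```

Because updates travel only along declared data dependencies, the traffic per processor stays roughly constant as processors are added, which is what lets such systems scale to thousands of nodes.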
Big Data Computing and Graph Databases
Published in Vivek Kale, Agile Network Businesses, 2017
In contrast, in the shared-nothing approach, each processor has its own dedicated disk storage. This approach, which maps nicely to a massively parallel processing (MPP) architecture, is not only more suitable to the discrete allocation and distribution of data but also enables more effective parallelization, and consequently does not introduce the same kind of bus bottlenecks from which the SMP/shared-memory and shared-disk approaches suffer. Most big data appliances use a collection of computing resources, typically a combination of processing nodes and storage nodes.
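A minimal sketch of the shared-nothing pattern: rows are hash-partitioned across nodes on a distribution key, each node aggregates only its own local data, and just the small partial results are combined. The node count, data, and function names are illustrative; in a real MPP system the per-node scans run in parallel on separate machines.

```python
NUM_NODES = 4

def partition(rows, key, num_nodes=NUM_NODES):
    """Assign each row to a node by hashing its distribution key."""
    nodes = [[] for _ in range(num_nodes)]
    for row in rows:
        nodes[hash(row[key]) % num_nodes].append(row)
    return nodes

def local_sum(node_rows, field):
    """Each node scans only its own storage -- no shared disk or bus."""
    return sum(r[field] for r in node_rows)

rows = [{"user": f"u{i}", "amount": i} for i in range(100)]
nodes = partition(rows, "user")

# The coordinator combines only the tiny per-node partial aggregates.
total = sum(local_sum(n, "amount") for n in nodes)
assert total == sum(range(100))  # 4950
```

The key property is that only the partial aggregates cross the network; the bulk of the data never leaves the node that owns it, which is what removes the shared-bus bottleneck mentioned above.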
Effectiveness evaluation of DS-InSAR method fused PS points in surface deformation monitoring: a case study of Hongta District, Yuxi City, China
Published in Geomatics, Natural Hazards and Risk, 2023
Yongfa Li, Xiaoqing Zuo, Fang Yang, Jinwei Bu, Wenhao Wu, Xinyu Liu
Compared with the PS-InSAR and SBAS-InSAR methods, the DS-InSAR method combined with PS points can obtain more measurement points and better monitoring results. However, the pre-processing of this method is quite time-consuming, and its computation time is roughly four to five times that of other time-series InSAR methods, which is the main bottleneck of its application. For surface deformation monitoring over large areas, the time cost of DS-InSAR analysis may become very high. Therefore, how to utilize modern massively parallel computing techniques, such as graphics processing units (GPUs), to improve computational efficiency becomes a critical issue to be addressed. Another key issue worth investigating is how to optimize the DS-InSAR algorithm, especially the DS preprocessing stage, in order to significantly reduce storage space consumption.
Attention Classification and Lecture Video Recommendation Based on Captured EEG Signal in Flipped Learning Pedagogy
Published in International Journal of Human–Computer Interaction, 2023
Rabi Shaw, Bidyut Kr. Patra, Animesh Pradhan, Swayam Purna Mishra
D. Szafir et al. proposed an experimental setup for monitoring the attention of students based on adaptive content review in a flipped learning scenario (Szafir & Mutlu, 2013). Adaptive reviews were found to improve student recall more effectively than baseline systems. In Liang et al. (2006), the authors utilized an Extreme Learning Machine with EEG signals to classify five mental tasks. The experiment involved fifteen students who were asked to repeat each word fifty times randomly. Euclidean distance was used to discriminate between classes, and a contrastive loss function was used. Fenu et al. (2018) developed a multi-biometric device to authenticate students in the e-learning environment. This device performs score-level fusion of five biometric types, namely face, voice, touch, mouse, and keystroke. However, they did not consider the student monitoring issue. Finally, Kim et al. (2018) proposed an approach using multiple sources of data, including audio, visual, and cognitive load devices. Using these, the authors attempted to identify the practical state of students in a smart classroom. As the sources of data were numerous, massively parallel computing was used for data analysis. Though multiple data sources lead to better judgment and data quality, they involve costly deployment, more computational resources, and multiple devices attached to the students. This might impair reading concentration and attention during the experiment, so using a single sensor is preferred in many cases (Lee et al., 2020).
Geometry Extension and Assemblywise Domain Decomposition of nTRACER for Direct Whole-Core Calculation of VVERs
Published in Nuclear Science and Engineering, 2023
The ADD lets a thread perform ray tracing only within an assembly, so that only the geometry information of that assembly needs to be known to the thread. In this way, the memory requirement for parallel execution can be significantly reduced. The ADD scheme was introduced originally in the DWCC codes for Cartesian geometry as a means of effectively increasing the number of parallel processors for massively parallel computing involving thousands of processors. The drawback of ADD is that the convergence characteristics of the ray-tracing calculation can deteriorate because the continuous tracing of a ray across the core cannot be done with the ADD. Note that the ray tracing for each assembly has to be started simultaneously, so the incoming angular flux at the assembly periphery must be determined from the outgoing angular flux of the neighboring assembly computed in the previous ray tracing, not in the current one. This problem of convergence deterioration, however, can be solved by using a partial current update scheme based on the CMFD solution. The application of ADD to the hexagonal geometry, however, is quite cumbersome because of the complexity of the nonrectangular assembly boundary cells, as will be shown later.