Big data analytics
Published in Catherine Dawson, A–Z of Digital Research Methods, 2019
Big data analytics refers to the process of examining extremely large and complex data sets. These are referred to as ‘big data’ (‘small data’ refers to datasets of a manageable volume that are accessible, informative and actionable, and ‘open data’ refers to datasets that are free to use, re-use, build on and redistribute, subject to stated conditions and licence). Some big data are structured: well-organised, with a defined length and format that fits into the rows and columns of a database. Other big data are unstructured, with no particular organisation or internal structure (plain text, or streams from social media, mobiles or digital sensors, for example). Some big data are semi-structured, in that they combine features of both of the above (email text combined with metadata, for example). All three types of data can be human- or machine-generated.
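The three categories above can be illustrated concretely. A minimal Python sketch, using entirely hypothetical records, shows how structured data supports field access by column name, unstructured text supports only generic operations, and semi-structured data mixes addressable metadata with free-form content:

```python
import csv
import io
import json

# Structured data: fixed fields in rows and columns, as in a database table.
structured = io.StringIO(
    "sensor_id,timestamp,reading\n"
    "S1,2019-01-01T00:00,21.5\n"
    "S2,2019-01-01T00:00,19.8\n"
)
rows = list(csv.DictReader(structured))

# Unstructured data: free text with no predefined internal organisation.
unstructured = "Pump 3 sounded rough this morning; maintenance crew dispatched at 9am."

# Semi-structured data: machine-readable metadata wrapping free-form content,
# e.g. an email represented as JSON (hypothetical example record).
semi_structured = json.loads(
    '{"from": "ops@example.com", "sent": "2019-01-01T09:05", '
    '"body": "Pump 3 sounded rough this morning."}'
)

print(rows[0]["reading"])         # structured: field access by column name
print(len(unstructured.split()))  # unstructured: only generic text operations apply
print(semi_structured["from"])    # semi-structured: metadata fields are addressable
```

The field names and values here are illustrative only; real big-data pipelines apply the same distinction at vastly larger scale.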
Leak Detection – Static Sensors and Acoustic Inspections
Published in Justin Starr, Water and Wastewater Pipeline Assessment Technologies, 2021
Further, there is value in using these systems as inputs into “Big Data” schemes. In contrast to “Small Data” – datasets that can be digested by an individual using manual tools – Big Data is a field of study that focuses on extremely large data sets beyond the scope of an individual’s comprehension. These often comprise billions of records and require intensive computing systems for analysis – something enabled by the spread of elastic cloud technologies, where servers can be provisioned and scaled up on demand. Municipalities can pay for a few minutes on a supercomputer, rather than bearing the cost of physically constructing and supporting such a machine.
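One reason such record volumes remain tractable is that many analyses can be computed in a single pass with constant memory, so the same logic scales from a laptop sample to an elastically provisioned cluster. A sketch, using a simulated stream of hypothetical sensor readings rather than any real municipal data:

```python
# Streaming (single-pass, constant-memory) aggregation over a large record stream.
# The data are synthetic stand-ins for acoustic/pressure sensor readings.

def sensor_readings(n):
    """Simulate n sensor records; a spike is injected every 100,000th record."""
    for i in range(n):
        yield 50.0 + (5.0 if i % 100_000 == 0 else 0.0)

count = 0
total = 0.0
peak = float("-inf")
for reading in sensor_readings(1_000_000):
    count += 1
    total += reading
    peak = max(peak, reading)   # anomalies surface without storing the stream

mean = total / count
```

Because nothing is held in memory beyond the running aggregates, the record count can grow to billions; the cloud's role is then to parallelize many such passes on demand.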
Current and Future Biometrics: Technology and Applications
Published in Ricardo A. Ramirez-Mendoza, Jorge de J. Lozoya-Santos, Ricardo Zavala-Yoé, Luz María Alonso-Valerdi, Ruben Morales-Menendez, Belinda Carrión, Pedro Ponce Cruz, Hugo G. Gonzalez-Hernandez, Biometry, 2022
Jorge de J Lozoya-Santos, Mauricio A Ramírez-Moreno, Gladys G Diaz-Armas, Luis F Acosta-Soto, Milton O Candela Leal, Rafael Abrego-Ramos, Ricardo A Ramirez-Mendoza
One main consideration is the need for data to construct the virtual models. Two approaches can be considered: Big Data and Small Data. While Big Data focuses on gathering information from many databases and records to build more reliable and general models, Small Data focuses on personalized models fed with information from the same patient [458]. In summary, the use of biometrics in the development of virtual representations of the human has great potential across different fields, and will continue to grow in the coming years.
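The trade-off between the two approaches can be sketched with a toy regression. This is not the method of [458]; it is a minimal illustration with invented numbers, in which a "Big Data" model pools all patients' records while a "Small Data" model is fitted to a single patient:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: heart-rate response to exercise intensity.
# Each of 5 simulated patients has an individual slope.
true_slopes = np.array([1.2, 0.8, 1.0, 1.5, 0.9])
intensity = np.linspace(0, 10, 20)
per_patient_hr = [
    60 + s * intensity + rng.normal(0, 1, intensity.size) for s in true_slopes
]

# "Big Data" style: pool every patient's records into one general model.
X = np.tile(intensity, len(true_slopes))
y = np.concatenate(per_patient_hr)
general_slope, general_intercept = np.polyfit(X, y, 1)

# "Small Data" style: a personalized model from patient 3's records only.
personal_slope, personal_intercept = np.polyfit(intensity, per_patient_hr[3], 1)

# The general model regresses toward the population average slope (about 1.1),
# while the personalized fit tracks patient 3's true slope of 1.5.
```

The general model is more stable across the population, while the personalized one is more faithful to the individual, which mirrors the reliability-versus-personalization contrast drawn above.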
Guest editorial
Published in Quality Engineering, 2020
Rong Pan, Xiao Liu, Zhaojun Li
To an engineer, product designer, or process optimizer, a reliability concern is ultimately a quality concern, but one that spans a longer period and is genuinely taken from a system perspective. Reliability Engineering (RE) tackles many traditional Quality Engineering (QE) challenges. For example, the statistical models and data-analysis techniques developed for product and process quality improvement, such as acceptance sampling, statistical process control, experimental design and response surface regression, have been widely adopted and adapted for improving product and process reliability, which, as many practitioners would say, is quality with a time dimension. Correspondingly, RE researchers look for innovative reliability sampling, non-normal process monitoring, life and accelerated life testing, and lifetime regression. In addition, RE researchers pay more attention to systems, as they are interested in system reliability assessment, reliability-based design optimization, failure mode and effects analysis, repairable systems and maintenance optimization, etc. The challenges faced by a reliability analyst are typically small-data and/or incomplete-data problems, because the available lifetime data are often of small sample size and often censored. Of course, the distributions of reliability data generally depart from the normal distribution, as they need to capture the asymmetry and skewness observed in specific applications. These challenges have made reliability research interesting and intriguing, both as an intellectual pursuit and for solving real-world problems.
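The small-sample, censored, non-normal character of lifetime data described above can be made concrete with one standard technique: maximum-likelihood fitting of a Weibull distribution to right-censored failure times. The dataset below is invented for illustration; the likelihood handles failures through the density and still-running units through the survival function:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical small reliability dataset: lifetimes in hours for 8 units.
# Three units were still running when the test ended at 300 h (right-censored).
times    = np.array([105., 140., 180., 210., 260., 300., 300., 300.])
observed = np.array([True, True, True, True, True, False, False, False])

def neg_log_lik(params):
    """Negative Weibull log-likelihood with right censoring."""
    log_k, log_lam = params              # optimize on log scale so k, lam stay > 0
    k, lam = np.exp(log_k), np.exp(log_lam)
    z = times / lam
    log_f = np.log(k / lam) + (k - 1) * np.log(z) - z**k   # density, failed units
    log_S = -z**k                                          # survival, censored units
    return -(log_f[observed].sum() + log_S[~observed].sum())

x0 = [0.0, np.log(times.mean())]         # start from exponential-like guess
res = minimize(neg_log_lik, x0=x0, method="Nelder-Mead")
shape, scale = np.exp(res.x)
```

Ignoring the censored units, or forcing a normal model onto this skewed, bounded-below data, would bias the lifetime estimates low, which is exactly why reliability analysis leans on censoring-aware, non-normal likelihoods.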