Cloud-Based SMART Manufacturing
Published in Kris MY Law, Andrew WH Ip, Brij B Gupta, Shuang Geng, Managing IoT and Mobile Technologies with Innovation, Trust, and Sustainable Computing, 2021
As data grow and accumulate rapidly over time, the massive volume collected may overwhelm the cloud computing system, which therefore faces the challenge of maintaining both stability and accuracy when processing it. Besides, data fragmentation occurs when the massive data collected are broken into small pieces. Since data are collected from numerous businesses with complex structures and characteristics, from different underlying systems, and from multiple terminals such as PCs, wireless devices, OTT, and IoT, it is formidably difficult for the cloud computing system to locate and gather correlated data. Moreover, the massive data collected exist in different structures and under different business standards, making it increasingly difficult to standardize the data and produce a fair analysis. Ultimately, the sources and quality of the data collected are uncertain: the presence of dirty data, including duplicated, insecure, and inaccurate data, may degrade the accuracy of analysis results.
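The filtering of duplicated and inaccurate records described above can be illustrated with a minimal sketch. The record fields (`device_id`, `timestamp`, `value`) and the plausible-value range are assumptions chosen for illustration, not part of the chapter.

```python
# Hypothetical sketch: flagging "dirty data" (duplicates and out-of-range
# readings) before analysis. Field names and the 0-100 range are illustrative.

def clean_records(records):
    """Drop exact duplicates and records with out-of-range readings."""
    seen = set()
    cleaned = []
    for rec in records:
        key = (rec["device_id"], rec["timestamp"], rec["value"])
        if key in seen:
            continue                          # duplicated data
        if not (0.0 <= rec["value"] <= 100.0):
            continue                          # inaccurate / out-of-range data
        seen.add(key)
        cleaned.append(rec)
    return cleaned

raw = [
    {"device_id": "m1", "timestamp": 1, "value": 42.0},
    {"device_id": "m1", "timestamp": 1, "value": 42.0},   # duplicate
    {"device_id": "m2", "timestamp": 1, "value": -5.0},   # out of range
]
print(clean_records(raw))  # only the first record survives
```

In practice the deduplication key and validity rules would come from the business standards the chapter mentions, which is exactly where the standardization difficulty arises.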
Low Power Wide Area (LPWA) Networks for IoT Applications
Published in Hongjian Sun, Chao Wang, Bashar I. Ahmad, From Internet of Things to Smart Cities, 2017
Kan Zheng, Zhe Yang, Xiong Xiong, Wei Xiang
Raw data cannot be utilized directly because of possible transmission errors or machine failures. Pre-processing procedures are needed to detect and clean the “dirty data” so as to ensure the integrity and reliability of the dataset; these procedures are carried out in the data processing server.
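A pre-processing step of this kind might verify each incoming packet before it reaches storage, discarding data corrupted in transit. The packet layout and the simple modular checksum below are assumptions for the sketch, not the chapter's actual protocol.

```python
# Illustrative pre-processing for a data processing server: drop packets
# that fail an integrity check. The checksum scheme is an assumption.

def checksum_ok(payload, checksum):
    """Verify a simple modular checksum over the payload bytes."""
    return sum(payload) % 256 == checksum

def preprocess(packets):
    """Keep only packets whose checksum verifies; discard 'dirty data'."""
    return [p for p in packets if checksum_ok(p["payload"], p["checksum"])]

packets = [
    {"payload": b"\x01\x02", "checksum": 3},   # valid: 1 + 2 = 3
    {"payload": b"\x01\x02", "checksum": 9},   # corrupted in transit
]
print(preprocess(packets))  # only the valid packet remains
```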
The future of footwear biomechanics research
Published in Footwear Science, 2023
Steffen Willwacher, Gillian Weir
Ultimately, part of the issue with unequal datasets can be resolved by researchers providing open-access raw data, allowing others to apply their preferred methodology and come to conclusions on a heterogeneously treated biomechanical dataset. As we move into an era of ‘big data’, it is critical that we design and employ standardised, ongoing data-hygiene methods individually and as a field. Best practices for data hygiene include identifying misplaced, missing, duplicate, and inconsistent/erroneous data, and using this information to inform the standardisation of data entry, the handling of errors, and the prevention of ‘dirty data’.
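The hygiene checks listed above (missing, duplicate, and inconsistent entries) can be sketched as a simple audit pass over tabular records. The column names and the validity rule (positive ground-contact time) are illustrative assumptions, not the authors' protocol.

```python
# Minimal data-hygiene audit: count missing, duplicate, and inconsistent
# rows so data entry can be standardised. Field names are assumptions.

def audit(rows):
    """Return counts of missing, duplicate, and inconsistent rows."""
    report = {"missing": 0, "duplicate": 0, "inconsistent": 0}
    seen = set()
    for row in rows:
        if any(v is None for v in row.values()):
            report["missing"] += 1
        key = tuple(sorted(row.items()))
        if key in seen:
            report["duplicate"] += 1
        seen.add(key)
        # example consistency rule: contact time must be positive
        ct = row.get("contact_time_ms")
        if ct is not None and ct <= 0:
            report["inconsistent"] += 1
    return report

rows = [
    {"subject": "s1", "contact_time_ms": 250},
    {"subject": "s1", "contact_time_ms": 250},   # duplicate entry
    {"subject": "s2", "contact_time_ms": None},  # missing value
    {"subject": "s3", "contact_time_ms": -10},   # inconsistent value
]
print(audit(rows))  # {'missing': 1, 'duplicate': 1, 'inconsistent': 1}
```

Running such an audit routinely, rather than once per study, is what makes the hygiene "ongoing" in the sense the authors describe.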
Recent advances in smart water technology of drainage systems in China
Published in Water International, 2023
Data acquisition is the foundation of the development of smart water, and data analysis and processing are its core. Owing to sensor faults, data transmission problems, human error and other causes, ‘dirty data’ and missing data are inevitable. Uncleaned data cannot be used directly for data analysis and data mining, as they distort simulation results and affect decision-making. It is therefore important to choose an appropriate data cleaning method.
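One common cleaning choice for sensor series with gaps is linear interpolation between neighbouring valid readings. This is a sketch of that single method, not a recommendation from the article, and it ignores the gap-length limits a real drainage system would impose.

```python
# Fill interior None gaps in a numeric sensor series by linear
# interpolation between the nearest valid neighbours (sketch only).

def interpolate_gaps(series):
    """Linearly interpolate interior None values in a numeric series."""
    out = list(series)
    for i, v in enumerate(out):
        if v is None:
            left, right = i - 1, i + 1
            while right < len(out) and out[right] is None:
                right += 1
            if left >= 0 and right < len(out):
                span = right - left
                out[i] = out[left] + (out[right] - out[left]) * (i - left) / span
    return out

print(interpolate_gaps([1.0, None, 3.0, None, None, 6.0]))
# -> [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
```

Leading or trailing gaps are left as `None` here, since they have only one valid neighbour and would need a different policy (e.g. forward/backward fill).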