Improving the Security of Data in the Internet of Things by Performing Data Aggregation Using Neural Network-Based Autoencoders
Published in Syed Rameem Zahra, Mohammad Ahsan Chishti, Security and Privacy in the Internet of Things, 2020
Ab Rouf Khan, Mohammad Khalid Pandit, Shoaib Amin Banday
The IoT can be viewed as a paradigm in which Internet-enabled devices (actuators, sensors, etc.) interconnect with each other to turn things in the physical world into things in the digital world. Data aggregation aims at collecting data from various source nodes and forwarding only the summarized data. Aggregating the data not only increases the efficiency of the network as a whole but also improves security. We have used neural network-based autoencoders to perform the data aggregation. FFNN-based autoencoder, CNN-based autoencoder, and denoising autoencoder techniques have been implemented on the MNIST and LFW datasets with code dimensions 32, 64, 128, and 256. Code dimension 32 has the highest MSE, and the MSE decreases as the code size increases; a code size of 256 has the minimum MSE and produces the best-reconstructed input.
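As a rough illustration of the setup described above, the sketch below builds an FFNN-based autoencoder with a configurable code dimension (32, 64, 128, or 256) and measures reconstruction MSE on MNIST. The layer sizes, training settings, and use of Keras are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch of an FFNN-based autoencoder with a configurable code
# dimension, trained on MNIST with an MSE loss as described in the excerpt.
# Layer sizes, epochs, and the use of Keras are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_autoencoder(input_dim=784, code_dim=32):
    model = models.Sequential([
        layers.Input(shape=(input_dim,)),
        layers.Dense(256, activation="relu"),
        layers.Dense(code_dim, activation="relu"),   # bottleneck: the "code"
        layers.Dense(256, activation="relu"),
        layers.Dense(input_dim, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

# Reconstruction error is expected to shrink as the code dimension grows.
(x_train, _), (x_test, _) = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0
for code_dim in (32, 64, 128, 256):
    ae = build_autoencoder(code_dim=code_dim)
    ae.fit(x_train, x_train, epochs=5, batch_size=256, verbose=0)
    mse = ae.evaluate(x_test, x_test, verbose=0)
    print(f"code dimension {code_dim}: test MSE = {mse:.4f}")
```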
Trauma Outcome Prediction in the Era of Big Data: From Data Collection to Analytics
Published in Ervin Sejdić, Tiago H. Falk, Signal Processing and Machine Learning for Biomedical Big Data, 2018
Shiming Yang, Peter F. Hu, Colin F. Mackenzie
An unprecedented volume of data is generated daily in trauma patient care. However, a harsh reality is that healthcare resources remain very limited both in the field and in the hospital. Matched blood products, operating rooms, and experienced healthcare providers such as surgeons, anesthesiologists, and nurses are always scarce. The ultimate goal of Big Data in trauma patient care is to intelligently optimize the allocation of limited healthcare resources through reliable prediction of the need for lifesaving interventions and early decisions on therapeutic plans. With automated data processing and informative data aggregation, useful knowledge extracted from massive data can be used by clinicians in a simple way for decision making or prioritizing care in a busy hospital environment.
Internet Working of Vehicles and Relevant Issues in IoT Environment
Published in Sudan Jha, Usman Tariq, Gyanendra Prasad Joshi, Vijender Kumar Solanki, Industrial Internet of Things, 2022
Rajeev Kumar Patial, Deepak Prashar
The main goal of data aggregation algorithms is to gather and aggregate data in an efficient manner so that the validity of the information is ensured for a longer duration. In [14], the authors define the in-network aggregation process as follows: in-network aggregation is the global process of gathering and routing information through a multihop network, processing data at intermediate nodes with the objective of reducing resource consumption (in particular energy), thereby increasing network lifetime.

The tree-based approach is the simplest way to aggregate data: the nodes are organized in a hierarchical manner and some nodes are selected as aggregation points, or aggregators. The tree-based approach performs aggregation by constructing an aggregation tree, which could be a minimum spanning tree, rooted at the sink, with the source nodes as its leaves.

Another important technique for organizing the network hierarchically is the cluster-based approach. In the cluster-based approach, the whole network is divided into several clusters. Each cluster has a cluster head (CH), which is selected from among the cluster members. CHs act as aggregators: each CH locally aggregates the data received from its cluster members and then transmits the result to the sink, as in the sketch below. The advantages and disadvantages of the cluster-based approach are very similar to those of the tree-based approach.
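The following minimal sketch shows the cluster-based idea: members report raw readings to their CH, and the CH forwards only a summary to the sink. The cluster layout, readings, and the mean/min/max summary are illustrative assumptions, not a specific protocol from the chapter.

```python
# Cluster-based aggregation sketch: each cluster head (CH) condenses its
# members' raw readings into one summary packet for the sink.
from statistics import mean

# Hypothetical clusters: CH id -> member node id -> reading (e.g., temperature).
clusters = {
    "CH-1": {"n1": 21.4, "n2": 22.0, "n3": 21.7},
    "CH-2": {"n4": 23.1, "n5": 22.8},
}

def aggregate_at_cluster_head(readings):
    """Summarize member readings so the CH sends one packet instead of many."""
    values = list(readings.values())
    return {"count": len(values), "mean": mean(values),
            "min": min(values), "max": max(values)}

# The sink receives one summary per cluster rather than one packet per node.
sink_inbox = {ch: aggregate_at_cluster_head(members)
              for ch, members in clusters.items()}
for ch, summary in sink_inbox.items():
    print(ch, summary)
```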
YOLOv5-based weapon detection systems with data augmentation
Published in International Journal of Computers and Applications, 2023
Data augmentation is the process of modifying training images to generate a synthetic dataset that is larger than the original dataset. This helps boost the performance of models and reduces the risk of overfitting [68]. In contrast, data aggregation is a method of gathering large amounts of data from multiple sources and compiling it in a more organized and consumable fashion. It processes information and presents it in a summarized form, which is generally useful for statistical analysis of the data. The main motive of data augmentation, by contrast, is to artificially increase the size and quality of machine learning training datasets so that models can be trained better, using many augmentation techniques such as flipping, rotation, cropping, adding noise, and occlusions [69].
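The sketch below strings the augmentation techniques named above (flipping, rotation, cropping, additive noise) into a torchvision pipeline. The probabilities, angles, crop size, and noise level are illustrative assumptions, not the settings used in the cited YOLOv5 work.

```python
# Illustrative augmentation pipeline: each call yields a new synthetic
# variant of the same training image.
import torch
from torchvision import transforms

def add_gaussian_noise(img_tensor, std=0.05):
    # Additive Gaussian noise, clamped back to the valid [0, 1] range.
    return (img_tensor + torch.randn_like(img_tensor) * std).clamp(0.0, 1.0)

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),    # flipping
    transforms.RandomRotation(degrees=15),     # rotation
    transforms.RandomResizedCrop(size=416),    # cropping (illustrative size)
    transforms.ToTensor(),
    transforms.Lambda(add_gaussian_noise),     # adding noise
])

# Usage (hypothetical file name):
# from PIL import Image
# img = Image.open("weapon_example.jpg")
# synthetic_variants = [augment(img) for _ in range(8)]
```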
Using aggregated data under time pressure: a mechanism for coping with information overload
Published in Journal of Decision Systems, 2019
Data aggregation is an integral aspect of information presentation used to facilitate managerial decision-making. The over-abundance of data generated by information systems (Ackoff, 1967; Laker et al., 2017) often requires that decision makers receive aggregated data, so that they are presented with meaningful information rather than hundreds of individual data points. Thus, data aggregation is a strategy to counteract information overload (Eppler & Mengis, 2004). However, empirical examinations of decision performance when using detailed as compared to aggregate information indicate that detailed data results in more accurate decisions (Abdel-Khalik, 1973; Barefield, 1972).
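A minimal sketch of this kind of aggregation is shown below: hundreds of detailed records collapse into a few summary figures per group. The sales data and column names are hypothetical, chosen only to illustrate the contrast between detailed and aggregated presentations.

```python
# Aggregating detailed records into a handful of summary figures.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
detailed = pd.DataFrame({
    "region": rng.choice(["North", "South", "East", "West"], size=500),
    "revenue": rng.normal(loc=1000, scale=250, size=500).round(2),
})

# 500 detailed rows become one summary line per region for the decision maker.
summary = detailed.groupby("region")["revenue"].agg(["count", "mean", "sum"])
print(summary)
```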
Novel FNN-based machine deep learning approach for image aggregation in application of the IoT
Published in Journal of Experimental & Theoretical Artificial Intelligence, 2022
De-Gan Zhang, Peng Yang, Jie Chen, Xiao-dan Zhang, Ting Zhang
In applications of the IoT (Internet of Things), image aggregation is an important branch of multi-sensor data aggregation (Gong & Zhang, 2021; J. X. Wang & Fan, 2020; Zhang & Liu, 2019a). At present, the main image aggregation algorithms are the Intensity Hue Saturation (IHS) transform, Principal Component Analysis (PCA), high-pass filtering, and wavelet transform, as well as intelligent image processing methods based on NN, fuzzy theory, and rough set theory (S. Liu, 2020; Zhang & Zhang, 2019b).
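Of the classical methods named above, PCA-based fusion is the simplest to sketch: fusion weights are taken from the principal eigenvector of the covariance of the two flattened source images. The code below is an illustrative NumPy version of that idea, not the paper's FNN-based approach; the toy images are hypothetical.

```python
# Minimal PCA-based image aggregation (fusion) of two registered grayscale images.
import numpy as np

def pca_fuse(img_a: np.ndarray, img_b: np.ndarray) -> np.ndarray:
    """Fuse two same-sized grayscale images with PCA-derived weights."""
    stacked = np.stack([img_a.ravel(), img_b.ravel()]).astype(float)  # 2 x N
    cov = np.cov(stacked)                         # 2 x 2 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)        # symmetric matrix -> eigh
    principal = np.abs(eigvecs[:, np.argmax(eigvals)])
    w = principal / principal.sum()               # normalized fusion weights
    return w[0] * img_a + w[1] * img_b

# Usage with toy data standing in for two sensors' images of the same scene:
rng = np.random.default_rng(1)
fused = pca_fuse(rng.random((64, 64)), rng.random((64, 64)))
print(fused.shape, float(fused.min()), float(fused.max()))
```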