Recent Advances and Future Trends of IoT-Based Devices
Published in Deepti Agarwal, Kimmi Verma, Shabana Urooj, Energy Harvesting, 2023
Punit Kumar Singh, Sudhakar Singh, Ashish, Hassan Usman, Shabana Urooj
Despite their enormous potential to transform the healthcare sector, emerging technologies face unique challenges in their actual application. The most common difficulties center on the generation of massive data by the large number of devices connected to the system, as well as the possibility of cyberattacks and data breaches. The integration of IoT-based frameworks into nearly every aspect of human life, together with the many technologies involved in moving data between embedded devices, has complicated matters and given rise to numerous difficulties and barriers. In a sophisticated smart-technology society, these concerns are also a challenge for IoT developers. As technology advances, so do the difficulties and the need for sophisticated IoT systems. As a result, IoT developers must anticipate new challenges and propose solutions to them.
Overview of Internet of Things
Published in Sandeep Misra, Chandana Roy, Anandarup Mukherjee, Introduction to Industrial Internet of Things and Industry 4.0, 2021
Sandeep Misra, Chandana Roy, Anandarup Mukherjee
To handle these massive amounts of data, a software framework, Hadoop, is used for distributed processing. This framework provides a platform for storing and managing data in both structured and unstructured formats. Moreover, Hadoop technology allows end-users to collect, process, and analyze data locally.
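As a minimal sketch of how Hadoop's distributed processing model is typically used, the following Python scripts implement a word-count job with Hadoop Streaming; the script names, HDFS paths, and invocation are illustrative assumptions, not part of the original text.

    #!/usr/bin/env python3
    # mapper.py -- minimal Hadoop Streaming mapper (word-count sketch).
    # Streaming passes input records on stdin and expects tab-separated
    # key/value pairs on stdout.
    import sys

    for line in sys.stdin:
        for word in line.strip().split():
            print(f"{word}\t1")

    #!/usr/bin/env python3
    # reducer.py -- minimal Hadoop Streaming reducer.
    # Records arrive sorted by key, so the counts for one word are contiguous.
    import sys

    current_word, current_count = None, 0
    for line in sys.stdin:
        word, count = line.rstrip("\n").split("\t")
        if word == current_word:
            current_count += int(count)
        else:
            if current_word is not None:
                print(f"{current_word}\t{current_count}")
            current_word, current_count = word, int(count)
    if current_word is not None:
        print(f"{current_word}\t{current_count}")

    # Typical invocation (all paths illustrative):
    # hadoop jar hadoop-streaming.jar -files mapper.py,reducer.py \
    #   -mapper mapper.py -reducer reducer.py -input /data/in -output /data/out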
Modern Predictive Analytics and Big Data Systems Engineering
Published in Anna M. Doro-on, Handbook of Systems Engineering and Risk Management in Control Systems, Communication, Space Technology, Missile, Security and Defense Operations, 2023
Predictive analytics is used to support the creation of preferences among probable alternatives and to forecast the potential outcomes of an event or situation that is currently unfolding, as well as trends. It is employed to quickly analyze massive data with various methods such as regression, neural networks, decision trees, text mining, etc. Moreover, it integrates the mission, business processes, and statistical analytical methods to produce insights. These insights help missions and organizations perceive how individuals or society behave as integrated human elements in operations, as negotiators (in terms of crisis management), customers, informants, receivers, buyers, etc., whereas the key purpose of data mining is to create a model that can be applied to predict an event or circumstance. Research scientists obtain knowledge from historical data to develop analytical models that can be applied to proposed, existing, or new situations. The method of assessing the data sets determines the essential information applicable to one or several data mining techniques used to explore patterns, identify trends, or produce preferences in the data to discover behaviors and events. Hence, data mining has two categories: descriptive (unsupervised) and predictive (supervised). In predictive (supervised) learning, data is used to discover patterns from training data within the data set in order to determine a value based on given parameters; predictive models are generated from chosen historical data whose outcomes are already known and are then used to predict the outcomes of new cases. The type of data determines whether this is done using a regression (refer to Chapter 9, Section 9.12.2 regarding regression for intuitive prediction) or a classification algorithm.

Regression analysis can be applied effectively to model the relationship between one or more predictor variables and a dependent variable. Linear regression uses a straight-line equation, y = mx + c, and finds suitable values for m and c so that the value of y can be determined from a given parameter, x. Advanced methodologies, such as multiple regression, permit several input variables and allow more complicated models.

Classification is the set of data mining processes employed to assign discrete data to a known structure in order to predict the class labels of unlabeled data. Classification algorithms are carried out in three phases: training (phase 1), testing (phase 2), and deployment (phase 3). Training and testing utilize known class labels. In phase 1, training applies a fraction of the data to build a classification model. In phase 2, testing uses the model to predict class labels and validates the predictions against the actual values to assess accuracy. Prior to phase 3, this feedback is used to judge the effectiveness of the models, to enhance them further, or to create additional models. After an acceptable model is created and established, phase 3 is performed by deploying the classifier on unlabeled data. Examples of classification algorithms include decision trees (random forests), quadratic classifiers, neural networks, kernel estimation, and support vector machines.
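As a hedged illustration of the regression and three-phase classification workflow described above, the sketch below uses scikit-learn on synthetic data; the data, feature layout, and model choices (a linear regression and a random forest classifier) are assumptions for demonstration only, not the chapter's own example.

    # Minimal sketch of regression and of the train/test/deploy classification
    # phases, using scikit-learn on synthetic data (all values illustrative).
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)

    # Linear regression: recover m and c in y = m*x + c from noisy samples.
    x = rng.uniform(0, 10, size=(200, 1))
    y = 3.0 * x.ravel() + 2.0 + rng.normal(scale=0.5, size=200)
    reg = LinearRegression().fit(x, y)
    print("estimated m, c:", reg.coef_[0], reg.intercept_)

    # Classification in three phases: training, testing, deployment.
    X = rng.normal(size=(500, 4))                    # labeled data set
    labels = (X[:, 0] + X[:, 1] > 0).astype(int)     # known class labels
    X_train, X_test, y_train, y_test = train_test_split(
        X, labels, test_size=0.3, random_state=0)

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_train, y_train)                        # phase 1: training
    y_pred = clf.predict(X_test)                     # phase 2: testing
    print("test accuracy:", accuracy_score(y_test, y_pred))

    X_new = rng.normal(size=(5, 4))                  # unlabeled data
    print("deployed predictions:", clf.predict(X_new))  # phase 3: deployment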
Statistical engineering: Synergies with established engineering disciplines
Published in Quality Engineering, 2022
We live in a stochastic world. Good data are needed to understand the world so we can design robust solutions that deal with variation. Otherwise, solutions are suboptimal or hit-and-miss. Thanks to technology, sensors and systems exist to collect and use more data than at any other time in human history. The size of data sets will only grow as we continue to transition into Industry 4.0. The statistical methods and tools of sampling, exploratory data analysis (EDA), linear models, time series models, hypothesis testing, and designed experiments are ever more important. More advanced tools that take advantage of massive data sets, such as machine learning, are growing in importance. Unfortunately, many engineers (except for industrial and systems engineers) graduate with at most basic statistical knowledge, as probability and statistics are no longer curricular requirements of ABET (ABET 2022). Statistical engineering, with its focus on using good data for design and problem-solving, would benefit all engineers as we strive to solve larger and more complex challenges.
Monitoring Transient Stability in Electric Power Systems Using a Hybrid Model of Convolutional Neural Network and Random Forest
Published in Electric Power Components and Systems, 2023
Hafsa Ahmad, Muhammad Yousif, Maliha Shah, Najeeb Ullah
Random Forest (RF) is a relatively fast and efficient supervised learning technique that combines decision trees through ensemble learning to solve regression and classification problems [27]. It can process massive data quickly, which is important for monitoring power system stability, where large volumes of data are generated in real time. RF can identify potential instability issues in a timely manner, allowing operators to take corrective actions before the system becomes unstable.
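A minimal sketch of how such an RF classifier could be trained to flag unstable operating conditions is shown below, assuming a labeled set of post-disturbance measurements; the feature names and synthetic data are hypothetical and are not the authors' data set or model.

    # Hypothetical sketch: a random forest flagging unstable operating states.
    # Features and data are illustrative stand-ins (e.g., rotor angle,
    # frequency deviation, bus voltage magnitude after a disturbance).
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import classification_report

    rng = np.random.default_rng(42)

    X = rng.normal(size=(2000, 3))                               # measurements
    y = (0.8 * X[:, 0] + 0.5 * X[:, 1] - 0.3 * X[:, 2] > 0.5).astype(int)  # 1 = unstable

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=42)

    rf = RandomForestClassifier(n_estimators=200, n_jobs=-1, random_state=42)
    rf.fit(X_train, y_train)                                     # offline training
    print(classification_report(y_test, rf.predict(X_test)))    # held-out evaluation

In operation, the trained model would score incoming real-time measurements the same way (rf.predict on new samples), which is where the speed of the ensemble matters for timely corrective action.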