Virtual Power Plant with Demand Response Control in Aggregated Distributed Energy Resources of Microgrid
Published in Ehsan Heydarian-Forushani, Hassan Haes Alhelou, Seifeddine Ben Elghali, Virtual Power Plant Solution for Future Smart Energy Communities, 2023
Elutunji Buraimoh, Innocent E. Davidson
In the proposed architecture, RESs, various microgrid local loads, and resident energy storage are treated as DERs in the microgrid. Microgrid consumers utilize these DERs, which are supervised by an array of sensors and managed by automatic microgrid controllers. The microgrid monitoring equipment also measures nonelectrical data such as climatic conditions and carbon emissions. In addition, the VPP gathers information from other sources, such as power pricing and grid condition. The VPP's intelligence digests all of this information to govern energy resources using predictive modelling and control algorithms, both of which are integral parts of the VPP system and reside within it. Predictive modelling is a statistical approach that uses machine learning and data mining to forecast possible future occurrences from past and present data; predictive models make predictions based on what has occurred and what is now occurring. A controller's control algorithm is a form of logic that analyzes the difference between a measured value and a setpoint; this logic helps the controller process field input and make decisions. In conclusion, the VPP interacts with different microgrid customers by offering services or imposing consumption performance limits, and with central microgrid operators by trading DR bids. However, in present-day grid systems, the role of the aggregator remains unclear. The following sections delve deeper into each of the project's examined areas.
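The control logic described above, which compares a measured value against a setpoint, can be sketched as a minimal proportional controller. This is an illustrative sketch only; the function name, gain, and the battery-dispatch example are assumptions, not part of the chapter.

```python
def proportional_control(measured: float, setpoint: float, gain: float = 0.5) -> float:
    """Return a control action proportional to the error between
    the setpoint and the measured value (illustrative sketch only)."""
    error = setpoint - measured
    return gain * error

# Hypothetical example: a VPP controller nudging a storage unit's
# output toward a 50 kW target from a measured 46 kW.
action = proportional_control(measured=46.0, setpoint=50.0)  # positive -> increase output
```

In a real VPP the decision logic would combine such feedback control with the predictive models' forecasts; this sketch shows only the error-versus-setpoint step the excerpt describes.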
Supervised Machine Learning: Application Example Using Random Forest in R
Published in Mangey Ram, S. B. Singh, Mathematics Applied to Engineering and Management, 2020
Predictive modeling in the presence of a large number of explanatory variables requires methods that support feature selection. The random forest algorithm is a popular machine-learning method that automatically calculates variable importance measures as a by-product and has been used successfully by various researchers [1–2]. The variable importance measures provided by random forest, mean decrease accuracy (MDA) and mean decrease Gini (MDG), have also been studied for stability. Simulation studies indicate that rankings based on MDA are unstable under small perturbations of the data set, whereas rankings based on MDG are more stable [3]. At the same time, when there are strong within-predictor correlations, MDA rankings are found to be more stable than MDG rankings [4]. It is also known that having too many features not only slows down algorithms but also decreases the accuracy of many machine-learning algorithms [5].
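As a hedged illustration of the mean decrease Gini (MDG) idea, and not the chapter's R code, the snippet below computes the weighted Gini impurity decrease produced by one split; random forest obtains MDG by averaging such decreases over every split made on a variable across all trees.

```python
from collections import Counter

def gini(labels):
    """Gini impurity of a node's class labels."""
    n = len(labels)
    return 1.0 - sum((count / n) ** 2 for count in Counter(labels).values())

def gini_decrease(parent, left, right):
    """Weighted decrease in Gini impurity from splitting `parent`
    into `left` and `right` child nodes."""
    n = len(parent)
    weighted_children = (len(left) / n) * gini(left) + (len(right) / n) * gini(right)
    return gini(parent) - weighted_children

# A perfectly separating split on a balanced two-class node:
parent = ["a", "a", "b", "b"]
decrease = gini_decrease(parent, ["a", "a"], ["b", "b"])  # 0.5 - 0.0 = 0.5
```

Summing these per-split decreases for each variable, and averaging over the forest, yields the MDG ranking whose stability properties the excerpt discusses.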
Predictive modelling
Published in Catherine Dawson, A–Z of Digital Research Methods, 2019
Predictive modelling refers to the process of creating a model that will forecast or predict outcomes, using techniques and methods such as data mining (Chapter 12), probability, mathematics and statistics. Models tend to predict future outcomes, although they can be used to make predictions about events that take place at any time. Applications of predictive modelling include weather forecasting; financial forecasting; predicting or detecting crimes; identifying suspects; identifying, targeting and interacting with customers or potential customers; identifying and hiring the ‘best’ employees; assessing risk in the insurance industry; and improving health care, for example. Predictive modelling is closely aligned to the fields of machine learning (Chapter 29), predictive analytics (Chapter 10) and computer modelling and simulation (Chapter 7).
Artificial intelligence and statistics for quality technology: an introduction to the special issue
Published in Journal of Quality Technology, 2021
Bianca Maria Colosimo, Enrique del Castillo, L. Allison Jones-Farmer, Kamran Paynabar
Predictive modeling in ML focuses on forecasting a quantity or an event by analyzing past behavior and discovering nonrandom patterns in the data. In principle, this is nothing different than the goals in classical predictive statistical inference, which has been widely used in quality and reliability modeling. Examples of predictive modeling applications in quality and reliability include monitoring autocorrelated processes using ARMA models (Alwan and Roberts 1988), modeling and control of variation propagation in multistage processes using state space models (Shi 2014), offline prediction of the mean time to failure of a component using various reliability models (Meeker and Escobar 1998), online prediction of the residual useful lifetime of a component by modeling its degradation signals (Gebraeel et al. 2005), and process optimization by learning the relationship between the output and process variables and finding their optimal setpoints (Del Castillo 2007). Despite the common goals, some of the methods and the language used in ML differ, creating barriers to cross-communication. An even bigger difference is that some of the data types and data volumes used in ML are not common in traditional industrial statistics.
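For instance, monitoring an autocorrelated process with an ARMA-type model (as in Alwan and Roberts 1988) amounts to forecasting each observation from its predecessors and charting the residuals rather than the raw data. The AR(1) sketch below, with an assumed coefficient and zero process mean, is an illustrative simplification rather than the article's own method.

```python
def ar1_residuals(series, phi, mean=0.0):
    """One-step-ahead AR(1) residuals: e_t = x_t - mean - phi * (x_{t-1} - mean).
    For an autocorrelated process, these residuals (not the raw data)
    would be placed on a control chart."""
    residuals = []
    for prev, curr in zip(series, series[1:]):
        forecast = mean + phi * (prev - mean)
        residuals.append(curr - forecast)
    return residuals

# With an assumed phi = 0.8, an exact AR(1) path leaves residuals near zero,
# so only genuine disturbances would trigger a chart signal.
res = ar1_residuals([1.0, 0.8, 0.64], phi=0.8)
```

If the fitted model is adequate, the residuals are approximately uncorrelated, which is what makes standard control-chart limits applicable to them.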
Forecasting road accidental deaths in India: an explicit comparison between ARIMA and exponential smoothing method
Published in International Journal of Injury Control and Safety Promotion, 2023
Prafulla Kumar Swain, Manas Ranjan Tripathy, Khushi Agrawal
Predictive modelling is the process of using known results to generate, process, and validate a model that is then used to forecast future events and outcomes. Regression (Bamel et al., 2021) and neural networks (Zhang & Yan, 2023) are two of the most widely used predictive modelling techniques. Other techniques include hidden Markov modeling (Laverty et al., 2002), time series data mining, decision trees, and Bayesian analysis.
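Simple linear regression, the first technique named above, can be fitted in closed form. The sketch below uses the standard least-squares formulas on made-up data; it is an illustration of the generate-then-forecast workflow, not code from the article.

```python
def fit_ols(xs, ys):
    """Fit y = a + b*x by ordinary least squares; return (a, b)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    a = mean_y - b * mean_x
    return a, b

def predict(a, b, x):
    """Forecast a future outcome from the fitted model."""
    return a + b * x

# Fit on points lying exactly on y = 2x + 1, then forecast a future value.
a, b = fit_ols([0, 1, 2, 3], [1, 3, 5, 7])
future = predict(a, b, 10)  # 21.0
```

Validating such a model on held-out data, as the excerpt's definition requires, would simply mean comparing `predict` outputs against observations not used in the fit.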