Power Curve Modeling and Analysis
Published in Yu Ding, Data Science for Wind Energy, 2019
Kernel regression is considered a nonparametric approach not because the kernel function has no parameters or lacks a functional form; in fact, it does have a parameter, the bandwidth. Rather, as a nonparametric approach, the kernel function is distinct from the target function, y(x), that it aims to estimate. While target functions may vary drastically from one application to another, the kernel function used in the estimation can remain more or less the same. The unchanging kernel regression is able to adapt to ever-changing target functions, as long as there are enough data. The parameter in the kernel function also plays a role substantially different from that of the coefficients in a linear regression. The coefficients in a linear regression directly connect an input x to the response y; the bandwidth parameter in a kernel function, by contrast, defines a neighborhood and connects x to y only indirectly.
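As a concrete illustration, here is a minimal sketch of Nadaraya-Watson kernel regression with a Gaussian kernel (the kernel choice and names here are assumptions for illustration, not the book's notation). Note how the bandwidth only weights nearby training points, rather than acting as a coefficient that maps x to y:

```python
import numpy as np

def nadaraya_watson(x_query, x_train, y_train, bandwidth=0.5):
    """Nadaraya-Watson kernel regression with a Gaussian kernel.

    The bandwidth defines the neighborhood: it weights training points
    by their proximity to the query, instead of linking x to y through
    fitted coefficients as linear regression does.
    """
    u = (x_query - x_train) / bandwidth
    weights = np.exp(-0.5 * u**2)          # Gaussian kernel weights
    return np.sum(weights * y_train) / np.sum(weights)

# The same Gaussian kernel adapts to an arbitrary (here, sinusoidal) target
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 10, 200))
y = np.sin(x) + 0.1 * rng.normal(size=x.size)
y_hat = [nadaraya_watson(q, x, y, bandwidth=0.5) for q in np.linspace(0, 10, 5)]
```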
Radial basis function (RBF) neural networks
Published in A. W. Jayawardena, Environmental and Hydrological Systems Modelling, 2013
Unlike linear regression, where the attempt is to find a single function that fits all the data, the objective in kernel regression is to find a function that fits the data locally. No prior assumption about the distribution of the data is necessary. Kernel regression is closely related to nearest-neighbour search, as the sketch below makes explicit.
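One way to see the nearest-neighbour connection is to use a uniform (boxcar) kernel, under which the kernel estimate reduces to a fixed-radius nearest-neighbour average; the function name and bandwidth value below are illustrative:

```python
import numpy as np

def boxcar_kernel_regression(x_query, x_train, y_train, bandwidth=1.0):
    """Kernel regression with a uniform (boxcar) kernel.

    With this kernel the estimate is simply the mean response of the
    training points within `bandwidth` of the query, i.e. a fixed-radius
    nearest-neighbour average: the fit is entirely local.
    """
    neighbours = np.abs(x_train - x_query) <= bandwidth
    if not neighbours.any():
        return np.nan  # no local data, so the local fit is undefined
    return y_train[neighbours].mean()
```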
The Temporal Overfitting Problem with Applications in Wind Power Curve Modeling
Published in Technometrics, 2023
Abhinav Prakash, Rui Tuo, Yu Ding
Research reported in Bessa et al. (2012) and Lee et al. (2015) identifies that wind power production depends not only on wind speed but also on other factors such as wind direction and air density. Bessa et al. (2012) and Lee et al. (2015) both used kernel regression and kernel density estimation approaches. Bessa et al. (2012), however, handle up to three input variables, while Lee et al. (2015) use a new additive-multiplicative model structure, referred to as AMK, that can in principle accept an arbitrary number of inputs. The actual number of inputs used in Lee et al. (2015) is up to seven. In Chapter 5 of Ding (2019) and its companion DSWE R package (Kumar, Prakash, and Ding 2021), more nonparametric regression methods are provided, including those based on smoothing splines (SSANOVA) (Gu 2013, 2014), Bayesian additive regression trees (BART) (Chipman, George, and McCulloch 2010), k-nearest neighbors (kNN) (Hastie, Tibshirani, and Friedman 2009), and support vector machines (SVM) (Vapnik 2000). Thus, the nature of power curve modeling falls squarely under the umbrella of nonparametric regression.
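The AMK structure itself is specified in Lee et al. (2015). As a generic stand-in, the sketch below shows a multivariate Nadaraya-Watson estimator with a product Gaussian kernel, which is one simple way to admit several inputs such as wind speed, wind direction, and air density; all names and values are illustrative and this is not the DSWE implementation:

```python
import numpy as np

def multivariate_kernel_regression(x_query, X_train, y_train, bandwidths):
    """Multivariate Nadaraya-Watson regression with a product Gaussian kernel.

    Each input dimension (e.g. wind speed, wind direction, air density)
    gets its own bandwidth. This is a generic multi-input illustration,
    not the additive-multiplicative AMK structure of Lee et al. (2015).
    """
    U = (X_train - x_query) / bandwidths            # (n, d) scaled distances
    weights = np.exp(-0.5 * np.sum(U**2, axis=1))   # product of per-dimension kernels
    return weights @ y_train / weights.sum()

# Hypothetical example with three inputs
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
y = X[:, 0] ** 3 + 0.1 * rng.normal(size=500)
print(multivariate_kernel_regression(np.zeros(3), X, y, np.array([0.3, 0.3, 0.3])))
```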
Impact of data partitioning in groundwater level prediction using artificial neural network for multiple wells
Published in International Journal of River Basin Management, 2022
Jamel Seidu, Anthony Ewusi, Jerry Samuel Yaw Kuma, Yao Yevenyo Ziggah, Hans-Jurgen Voigt
The GRNN, proposed by Specht (1991), is based on kernel regression networks. The main difference between the GRNN and backpropagation learning algorithms is that the GRNN does not require an iterative training procedure, because it approximates any arbitrary function between the input and output datasets directly from the training data. A GRNN consists of an input layer, a pattern layer, a summation layer, and an output layer (Fig. 2c). The input layer receives the information and transfers it to the pattern layer, where each unit represents a training pattern and its output is a measure of the distance of the input from the stored patterns (Cakir and Konakoglu, 2019). Each pattern-layer unit is connected to the S-summation neuron, which computes the sum of the weighted outputs of the pattern layer, and to the D-summation neuron, which calculates the unweighted outputs of the pattern neurons. The output layer divides the output of each S-summation neuron by that of each D-summation neuron, which yields the predicted output value as expressed in Equation (3) (Cakir and Konakoglu, 2019).
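Assuming the standard Gaussian form of Specht's Equation (3), a minimal sketch of the GRNN forward pass might look as follows; the S- and D-summations correspond to the numerator and denominator, and the result coincides with Nadaraya-Watson kernel regression:

```python
import numpy as np

def grnn_predict(x_query, X_train, y_train, sigma=0.5):
    """General Regression Neural Network (Specht, 1991) prediction.

    Pattern layer: Gaussian activation from the squared distance to each
    stored training pattern. S-summation: sum of activations weighted by
    the training targets. D-summation: plain sum of activations.
    Output layer: S / D, the standard form of Equation (3).
    """
    d2 = np.sum((X_train - x_query) ** 2, axis=1)   # squared distances D_i^2
    activations = np.exp(-d2 / (2.0 * sigma**2))    # pattern-layer outputs
    s = np.dot(y_train, activations)                # S-summation neuron
    d = np.sum(activations)                         # D-summation neuron
    return s / d                                    # output layer
```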
Understanding and optimization of hard magnetic compounds from first principles
Published in Science and Technology of Advanced Materials, 2021
Takashi Miyake, Yosuke Harashima, Taro Fukazawa, Hisazumi Akai
Several papers have been published on machine learning of magnetic materials. One is a kernel-ridge regression of the experimental Curie temperatures of about 100 rare-earth transition-metal compounds, as mentioned in the previous section [67]. The kernel provides a measure of the distance between materials. Using this property, Nguyen et al. proposed a dissimilarity voting machine based on ensemble learning, and applied the method to the rare-earth transition-metal compounds (Figure 5) [93]. Nelson and Sanvito compared the performance of several different regression models using experimental data for about 2500 known ferromagnetic compounds, and showed that the best model predicts the Curie temperature with an accuracy of 50 K [94]. Long et al. applied random forests to classifying ferromagnetic and antiferromagnetic compounds and to predicting the Curie temperature [95].
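As a hedged illustration of the kernel-ridge approach (not the exact descriptors or kernel of [67]), a minimal Gaussian-kernel implementation could look like this:

```python
import numpy as np

def kernel_ridge_fit_predict(X_train, y_train, X_test, sigma=1.0, alpha=1e-2):
    """Kernel-ridge regression with a Gaussian (RBF) kernel.

    The kernel K(x, x') = exp(-||x - x'||^2 / (2 sigma^2)) doubles as a
    similarity measure between materials described by feature vectors;
    the features, sigma, and alpha here are illustrative assumptions.
    """
    def rbf(A, B):
        d2 = (np.sum(A**2, axis=1)[:, None]
              + np.sum(B**2, axis=1)[None, :] - 2.0 * A @ B.T)
        return np.exp(-d2 / (2.0 * sigma**2))

    K = rbf(X_train, X_train)
    # Solve (K + alpha * I) c = y for the dual coefficients c
    coef = np.linalg.solve(K + alpha * np.eye(len(y_train)), y_train)
    return rbf(X_test, X_train) @ coef
```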