Machine Learning for Disease Classification: A Perspective
Published in Kayvan Najarian, Delaram Kahrobaei, Enrique Domínguez, Reza Soroushmehr, Artificial Intelligence in Healthcare and Medicine, 2022
Jonathan Parkinson, Jonalyn H. DeCastro, Brett Goldsmith, Kiana Aran
The insight in the previous example is that relationships that are nonlinear in low-dimensional spaces may be approximated by a linear model in a higher-dimensional space. Finding the appropriate basis expansion, however, may be nontrivial and may incur an increased risk of overfitting. Computing similarity between datapoints post-basis expansion may also be computationally expensive. Kernel methods use a kernel function to implicitly map the data into a higher dimensional (or even infinite-dimensional) space, achieving the same end effect as an explicit mapping without the computational cost – an approach sometimes called the “kernel trick”. The kernel function assesses the similarity between any two datapoints. A variety of kernel functions including the squared exponential kernel, the Matérn kernel, and the spectral mixture kernel have been described in the literature (Genton, 2001; Wilson & Adams, 2013).
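As a concrete illustration of evaluating similarity directly between raw datapoints, the NumPy sketch below computes a squared exponential (RBF) kernel matrix, which corresponds to an inner product in an infinite-dimensional feature space without ever constructing that space. The lengthscale and toy inputs are illustrative choices, not taken from the text.

```python
import numpy as np

def squared_exponential_kernel(X, Y, lengthscale=1.0):
    """Squared exponential (RBF) kernel matrix between the rows of X and Y:
    k(x, y) = exp(-||x - y||^2 / (2 * lengthscale^2)).
    Similarity is computed directly on the raw datapoints, even though the
    kernel corresponds to an inner product in an infinite-dimensional space."""
    # ||x - y||^2 = ||x||^2 + ||y||^2 - 2 x.y, clipped at 0 for float safety
    sq_dists = np.maximum(
        np.sum(X**2, axis=1)[:, None]
        + np.sum(Y**2, axis=1)[None, :]
        - 2.0 * X @ Y.T,
        0.0,
    )
    return np.exp(-sq_dists / (2.0 * lengthscale**2))

# Toy 2-D inputs: nearby points score near 1, distant points near 0
X = np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 5.0]])
print(squared_exponential_kernel(X, X, lengthscale=1.0))
```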
Predictive modeling, machine learning, and statistical issues
Published in Ruijiang Li, Lei Xing, Sandy Napel, Daniel L. Rubin, Radiomics and Radiogenomics, 2019
Panagiotis Korfiatis, Timothy L. Kline, Zeynettin Akkus, Kenneth Philbrick, Bradley J. Erickson
Kernel methods owe their name to the use of kernel functions, which enable them to operate in a high-dimensional space. A kernel computes the dot product of two vectors X and Y as if they had been mapped into a different, usually higher-dimensional, feature space; computing these dot products without ever constructing the mapped vectors explicitly is called the “kernel trick.” Kernel methods include kernel perceptrons, support vector machines (SVMs) (Cortes and Vapnik 1995), Gaussian processes, and principal components analysis (PCA) (Hotelling 1933).
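The equivalence between a kernel evaluation and a dot product in a feature space can be checked numerically. The sketch below uses the homogeneous quadratic kernel k(x, y) = (x·y)² together with its standard explicit feature map for two-dimensional inputs; the input vectors are arbitrary examples.

```python
import numpy as np

def phi(v):
    """Explicit feature map for the quadratic kernel on 2-D inputs:
    phi(v) = (v1^2, v2^2, sqrt(2) * v1 * v2)."""
    return np.array([v[0]**2, v[1]**2, np.sqrt(2) * v[0] * v[1]])

x = np.array([1.0, 2.0])
y = np.array([3.0, 0.5])

explicit = phi(x) @ phi(y)  # dot product after mapping to the feature space
implicit = (x @ y) ** 2     # quadratic kernel evaluated on the raw vectors

np.testing.assert_allclose(explicit, implicit)  # both equal 16.0
print(explicit, implicit)
```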
Computer-Aided Diagnosis Systems for Prostate Cancer Detection
Published in Ayman El-Baz, Gyan Pareek, Jasjit S. Suri, Prostate Cancer Imaging, 2018
Guillaume Lemaître, Robert Martí, Fabrice Meriaudeau
SVM is a sparse kernel method aimed at finding the best linear hyper-plane (nonlinear separation is discussed further below) which separates two classes such that the margin between them is maximized [288]. The margin is the region bounded by two parallel hyper-planes separating the two classes, such that no points lie between them. The distance between these two hyper-planes is equal to $\frac{2}{\|\mathbf{w}\|}$, where $\mathbf{w}$ is the normal vector of the hyper-plane splitting the classes. Thus, maximizing the margin is equivalent to minimizing the norm $\|\mathbf{w}\|$. Hence, this problem is solved by an optimization approach and formalized as

$$\min_{\mathbf{w},\, b}\ \frac{1}{2}\|\mathbf{w}\|^{2} \quad \text{subject to} \quad y_i\left(\mathbf{w}^{\top}\mathbf{x}_i + b\right) \ge 1, \quad i = 1, \dots, n.$$
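As a sketch of how this formulation is used in practice, scikit-learn's SVC solves the soft-margin version of the problem above; a very large C approximates the hard-margin case described here. The data and parameter values are illustrative only.

```python
import numpy as np
from sklearn.svm import SVC

# Two linearly separable classes in 2-D (illustrative toy data)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=-2.0, size=(20, 2)),
               rng.normal(loc=2.0, size=(20, 2))])
y = np.array([0] * 20 + [1] * 20)

# A very large C approximates the hard-margin problem min (1/2)||w||^2
clf = SVC(kernel="linear", C=1e6).fit(X, y)

w = clf.coef_[0]                       # normal vector w of the hyper-plane
print("margin width:", 2.0 / np.linalg.norm(w))
print("number of support vectors:", len(clf.support_vectors_))
```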
Prediction models with graph kernel regularization for network data
Published in Journal of Applied Statistics, 2023
Jie Liu, Haojie Chen, Yang Yang
This paper introduces a novel method based on the graph regularization formulation of the kernel method for learning from both covariates and graph link structure. The method is applicable to regression problems for graph data. The objective function of our model defines a convex optimization problem, which means that a globally optimal solution can be computed efficiently. In addition, imposing a penalty on both the covariates and the graph Laplacian not only leads to a reduction of covariates but also provides a clear trade-off between bias and variance. Moreover, the proposed RGK model is somewhat similar to a standard kernel method, with an appropriately defined kernel based on the underlying graph. Experimental results on both simulated graph data and real networks in various situations indicate that the RGK model can lead to better performance in graph regression tasks.
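The paper's exact RGK objective is not reproduced in this excerpt, so the following is only a generic stand-in: a graph-Laplacian-regularized least-squares sketch showing how a convex objective that penalizes both the covariate coefficients and the roughness of the fitted values over the graph admits a closed-form global optimum. The names lam_beta and lam_graph and the toy network are hypothetical.

```python
import numpy as np

def laplacian_regularized_ls(X, y, L, lam_beta=1.0, lam_graph=1.0):
    """Solve min_beta ||y - X beta||^2 + lam_beta * ||beta||^2
                     + lam_graph * (X beta)^T L (X beta),
    where L is the graph Laplacian linking the observations. The objective
    is convex, so the global optimum has a closed form:
    beta = (X^T X + lam_beta I + lam_graph X^T L X)^{-1} X^T y."""
    p = X.shape[1]
    A = X.T @ X + lam_beta * np.eye(p) + lam_graph * (X.T @ L @ X)
    return np.linalg.solve(A, X.T @ y)

# Toy network: 4 nodes on a path graph; Laplacian L = D - A
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
L = np.diag(adj.sum(axis=1)) - adj

X = np.array([[1.0, 0.2], [1.1, 0.1], [2.9, 1.0], [3.2, 0.9]])
y = np.array([1.0, 1.1, 3.0, 3.1])
print(laplacian_regularized_ls(X, y, L))
```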
Exploring Klebsiella pneumoniae capsule polysaccharide proteins to design multiepitope subunit vaccine to fight against pneumonia
Published in Expert Review of Vaccines, 2022
Jyotirmayee Dey, Soumya Ranjan Mahapatra, S Lata, Shubhransu Patro, Namrata Misra, Mrutyunjay Suar
Receptors present on the B-lymphocyte surface recognize and bind B-cell epitopes. B-cell epitopes are essential in adaptive immunity and thus may be a key component in vaccine development. There are two types of B-cell epitopes, namely linear and conformational epitopes [22]. The ABCpred (http://www.imtech.res.in/raghava/abcpred/) and BCPREDS (http://ailab-projects1.ist.psu.edu:8080/bcpred/predict.html) servers were employed for linear B-cell epitope prediction. ABCpred is a consistent algorithm-based webserver specifically used for the appropriate prediction of linear B-cell epitopes. As B-cell epitopes are present on the cell surface, exomembrane topology is considered one of the essential parameters. The server's specificity is 0.75; a threshold of 0.51 (the default) and a window length of 10 were set. BCPREDS uses a technique that employs the kernel method for the prediction of linear B-cell epitopes. Kernel methods comprise a family of algorithms used for pattern analysis; a well-known member of this group is the SVM (support vector machine). The prediction performance of BCPred (AUC = 0.758) is based on the SVM together with amino acid pair antigenicity (AAP; AUC = 0.7) [23]. AAP is used for the prediction of linear B-cell epitopes. The B-cell epitopes were also aligned with their whole-protein structure using the PyMOL visualization tool.
Bayesian inference in kernel-based regression: comparison of count data of condition factor of fish in pond systems
Published in Journal of Applied Statistics, 2022
T. Senga Kiessé, Etienne Rivot, Christophe Jaeger, Joël Aubin
Consider the sequence $\{(x_i, y_i)\}_{i=1,\dots,n}$, where $m$ is the unknown count regression function (c.r.f.) of $Y$ on $X$ with discrete support. Given a discrete associated kernel $K_{x,h}$ and a bandwidth $h > 0$, the discrete nonparametric regression estimator of $m$ is defined as [3]

$$\widehat{m}(x) = \frac{\sum_{i=1}^{n} y_i\, K_{x,h}(x_i)}{\sum_{i=1}^{n} K_{x,h}(x_i)}.$$

The following section provides a brief recall of the kernel method for estimating discrete functions on a discrete support [11].
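A minimal sketch of such an estimator follows, assuming a simple geometric-decay discrete kernel as a stand-in for the discrete associated kernels discussed in the paper; the kernel choice, the role given to the bandwidth, and the toy counts are assumptions for illustration.

```python
import numpy as np

def discrete_kernel_regression(x, x_obs, y_obs, h=0.3):
    """Discrete Nadaraya-Watson-type estimate
    m_hat(x) = sum_i y_i K_{x,h}(x_i) / sum_i K_{x,h}(x_i),
    using an illustrative geometric-decay discrete kernel
    K_{x,h}(u) proportional to h^|u - x|, with h in (0, 1)
    playing the role of the bandwidth."""
    weights = h ** np.abs(x_obs - x)   # kernel mass at each observed count
    return np.sum(y_obs * weights) / np.sum(weights)

# Toy count data: integer-valued covariate and response
x_obs = np.array([0, 1, 2, 3, 4, 5])
y_obs = np.array([2, 3, 5, 4, 6, 8])
print([round(discrete_kernel_regression(x, x_obs, y_obs), 2) for x in range(6)])
```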