Model Estimation and Evaluation
Published in Douglas D. Gunzler, Adam T. Perzynski, Adam C. Carle, Structural Equation Modeling for Health and Medicine, 2021
Douglas D. Gunzler, Adam T. Perzynski, Adam C. Carle
Formally, suppose one aims to estimate t unknown model parameters from a total of k observed and w* unobserved (latent) variables. In classic covariance-based approaches to SEM, the null hypothesis is H₀: Σ = Σ(θ) [2], where θ is a vector of unknown parameters of dimension t × 1. The population covariance matrix Σ and the model-implied covariance matrix Σ(θ) are both of dimension k × k. In practice, the sample covariance matrix S is used as an approximation of Σ to estimate the unknown parameters.
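The hypothesis Σ = Σ(θ) is typically assessed by minimizing a discrepancy between S and Σ(θ). A minimal numpy sketch of the standard maximum-likelihood fit function, using a hypothetical one-factor model with k = 3 indicators (the loadings and residual variances below are illustrative assumptions, not values from the chapter):

```python
import numpy as np

def ml_fit(S, Sigma_theta, k):
    """ML discrepancy: F = log|Sigma(theta)| + tr(S Sigma(theta)^-1) - log|S| - k."""
    inv = np.linalg.inv(Sigma_theta)
    return (np.log(np.linalg.det(Sigma_theta))
            + np.trace(S @ inv)
            - np.log(np.linalg.det(S))
            - k)

# Hypothetical one-factor model: Sigma(theta) = lam lam' + diag(psi)
lam = np.array([0.8, 0.7, 0.6])     # factor loadings (assumed)
psi = np.array([0.36, 0.51, 0.64])  # residual variances (assumed)
Sigma = np.outer(lam, lam) + np.diag(psi)

# When the sample covariance equals the model-implied covariance, F = 0
print(ml_fit(Sigma, Sigma, k=3))  # ≈ 0.0
```

In practice an optimizer varies θ to minimize F, and (n − 1)·F at the minimum gives the familiar chi-square test of the model.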
Longitudinal Data Analysis
Published in Atanu Bhattacharjee, Bayesian Approaches in Oncology Using R and OpenBUGS, 2020
The term is a linear predictor. The random effect component corresponds to the ith individual's jth time-point measurement. The covariance matrix is denoted Vi.
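In a linear mixed model, the marginal covariance Vi combines between-subject (random-effect) and within-subject (residual) variability. A sketch, assuming the common form Vi = Zi G Ziᵀ + σ²I for a random-intercept-and-slope model (all numbers below are illustrative assumptions):

```python
import numpy as np

# Hypothetical individual observed at 4 time points
t = np.array([0.0, 1.0, 2.0, 3.0])
Z = np.column_stack([np.ones_like(t), t])  # random-effects design: intercept + slope
G = np.array([[1.0, 0.2],                  # covariance of the random effects (assumed)
              [0.2, 0.5]])
sigma2 = 0.8                               # residual variance (assumed)

# Marginal covariance of the individual's response vector: V_i = Z G Z' + sigma^2 I
V = Z @ G @ Z.T + sigma2 * np.eye(len(t))
print(V.shape)  # (4, 4)
```

Observations within the same individual are correlated through the off-diagonal entries of Vi, which is why each subject contributes a full covariance block rather than independent errors.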
Exploratory Data Analysis with Unsupervised Machine Learning
Published in Altuna Akalin, Computational Genomics with R, 2020
One thing that is new in Figure 4.11 is the concept of eigenarrays. The eigenarrays, sometimes called eigenassays, represent the sample space and can be used to plot the relationship between samples rather than genes. In this way, SVD offers more information than PCA on the covariance matrix: it gives us a way to summarize both genes and samples. Just as we can project the gene expression profiles onto the top two eigengenes to get a 2D representation of genes, with the SVD we can also project the samples onto the top two eigenarrays to get a representation of the samples in a 2D scatter plot. Each eigengene could represent an independent expression program across samples, such as the cell cycle, if we had time-based expression profiles. However, there is no guarantee that each eigenvector will be biologically meaningful. Similarly, each eigenarray represents samples with specific expression characteristics. For example, the samples that have a particular pathway activated might be correlated with an eigenarray returned by SVD.
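The two projections described above can be sketched with numpy's SVD (the expression matrix here is random, purely for illustration; the labeling of left/right singular vectors as gene space versus sample space follows the excerpt's convention for a genes × samples matrix):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))  # hypothetical expression matrix: 100 genes x 6 samples

# X = U S Vt: columns of U span gene space, rows of Vt span sample space
U, s, Vt = np.linalg.svd(X, full_matrices=False)

genes_2d = U[:, :2] * s[:2]          # genes projected onto the top two components
samples_2d = Vt[:2, :].T * s[:2]     # samples projected onto the top two components

print(genes_2d.shape, samples_2d.shape)  # (100, 2) (6, 2)
```

The same decomposition thus yields a 2D scatter plot of genes and a 2D scatter plot of samples, which is the dual view that PCA on a single covariance matrix does not give directly.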
Boundaries tuned support vector machine (BT-SVM) classifier for cancer prediction from gene selection
Published in Computer Methods in Biomechanics and Biomedical Engineering, 2022
This section includes pre-processing as the initial step, where the gene expression dataset is pre-processed. The next step is feature extraction, in which the assumed samples are created as combinations of others. This is done by implementing the improved supervised principal component analysis (ISPCA) algorithm for the purpose of extracting features from the dataset, which overcomes the limitations of existing techniques and achieves dimensionality reduction. From the covariance matrix, the data's principal components can be found. Unsupervised PCA offers no guarantee that the PCs relate to the class variable when the dataset's principal components are computed; thus, a supervised principal component analysis (SPCA) has been proposed, in which PCs are selected based on the number of components. A novel wrapper-model-based greedy search sequential feature selection algorithm is proposed for the feature selection process; an optimal feature subset is found by iteratively selecting features. The boundaries tuned support vector machine (BT-SVM) classifier is a machine learning technique used to verify the selected samples, both for effective model performance and to avoid the overfitting problem. The affected and normal feature samples are separated through this method, and the right hyperplane is identified through the kernel trick and a linear computational method for non-linear relationships. The overall proposed flow is shown in Figure 2.
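The covariance-based PCA step in this pipeline can be sketched as follows. This is plain unsupervised PCA via eigendecomposition of the covariance matrix, not the authors' ISPCA; the data and component count are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 5))      # hypothetical samples x features matrix

Xc = X - X.mean(axis=0)           # center each feature
C = np.cov(Xc, rowvar=False)      # 5 x 5 covariance matrix
evals, evecs = np.linalg.eigh(C)  # eigendecomposition (ascending eigenvalues)
order = np.argsort(evals)[::-1]   # reorder by explained variance, descending
components = evecs[:, order[:2]]  # keep the top-2 principal components

scores = Xc @ components          # project samples onto the PCs
print(scores.shape)  # (50, 2)
```

Supervised variants such as SPCA/ISPCA differ in that they use the class labels when screening features or choosing components, addressing exactly the "no guarantee the PCs relate to the class variable" limitation noted above.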
A novel perspective for parameter estimation of seemingly unrelated nonlinear regression
Published in Journal of Applied Statistics, 2021
… the Aitken-type estimates of [14] and Kravaris et al. [15]. However, in this study, the covariance matrix of residuals is calculated numerically, following the original application of seemingly unrelated regression. Therefore, the elements of …
Predictors of pain-related functional impairment among people living with HIV on long-term opioid therapy
Published in AIDS Care, 2021
David P. Serota, Christine Capozzi, Sara Lodi, Jonathan A. Colasanti, Leah S. Forman, Judith I. Tsui, Alexander Y. Walley, Marlene C. Lira, Jeffrey Samet, Carlos del Rio, Jessica S. Merlin
Descriptive statistics were used to characterize the study sample at baseline, overall and stratified by baseline BPI-I score (≥7 versus <7). A covariance matrix was created for the independent variables; there was high covariance between measures of depression, PTSD, and anxiety. Data from both the baseline and follow-up visits were used to increase the sample size, with adjustment for within-subject correlation over the two time points. To identify predictors of BPI-I, a series of mixed linear regression models with random intercepts and slopes were fit using data from both visits. First, unadjusted models for each potential predictor of interest were estimated. Second, a stepwise selection procedure was performed to optimize the Bayesian Information Criterion (BIC), a model fit statistic. Age and gender were forced into the final multivariable model. We were interested in the association of both depression and PTSD with pain interference (Barry et al., 2012; Morasco et al., 2013); however, due to the high covariance between the mental health variables, we created two separate multivariable models, using each of these variables as the representative candidate. BIC was chosen to maximize use of the available data while minimizing overfitting of the model to our data, which could decrease external validity. Analyses were performed using SAS 9.4 (Cary, North Carolina).
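BIC-based model comparison of the kind used here can be illustrated with a minimal sketch. This compares two Gaussian linear models rather than the authors' SAS mixed models, and the simulated "depression" predictor is purely hypothetical; the point is only that BIC trades fit (residual sum of squares) against a log(n) penalty per parameter:

```python
import numpy as np

def ols_bic(X, y):
    """BIC for a Gaussian linear model: n*log(RSS/n) + k*log(n), constants dropped."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    return n * np.log(rss / n) + k * np.log(n)

rng = np.random.default_rng(2)
n = 200
x1 = rng.normal(size=n)            # hypothetical predictor (e.g., a depression score)
y = 2.0 * x1 + rng.normal(size=n)  # outcome strongly driven by the predictor

X0 = np.ones((n, 1))                   # intercept-only model
X1 = np.column_stack([np.ones(n), x1]) # model including the predictor

# The model with the informative predictor attains the smaller (better) BIC
print(ols_bic(X1, y) < ols_bic(X0, y))  # → True
```

Smaller BIC is better, so a stepwise procedure keeps a candidate variable only when its improvement in fit outweighs the k·log(n) complexity penalty, which is the overfitting guard the passage describes.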