Seismic Data Regularization and Imaging Based on Compressive Sensing and Sparse Optimization
Published in C.H. Chen, Compressive Sensing of Earth Observations, 2017
To be fair, there exists no overall winner that achieves the best performance in terms of both speed and accuracy for all applications. Yang et al. (2010) compared several classical algorithms on simple examples and face recognition problems; the interior point methods for the L1-norm minimization problem suffer from poor scalability on large-scale real-world problems. Setting aside speed and data noise, the success rate of the interior point method is the highest among gradient projection (Figueiredo et al. 2007), homotopy (Osborne et al. 2000), the iterative shrinkage thresholding method (Daubechies et al. 2004), the proximal gradient method (Beck and Teboulle 2009), and augmented Lagrange multiplier methods (Rockafellar 1976). A reweighted L1-norm regularization method was proposed by Candès et al. (2008); it outperforms plain L1 minimization in the sense that substantially fewer measurements are needed for exact recovery.
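To make the comparison concrete, the following is a minimal sketch of the iterative shrinkage-thresholding iteration (one of the L1 solvers listed above) applied to an L1-regularized least-squares problem. The function names, step-size choice, and synthetic data are illustrative assumptions, not taken from the chapter.

```python
# Minimal ISTA sketch for  min_x 0.5 * ||A x - b||^2 + lam * ||x||_1
import numpy as np

def soft_threshold(z, tau):
    """Proximal operator of tau * ||.||_1 (component-wise soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def ista(A, b, lam, n_iter=500):
    """Run ISTA with fixed step size 1/L, L = largest eigenvalue of A^T A."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)              # gradient of the smooth term
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Small synthetic example: recover a sparse vector from noisy measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((80, 200))
x_true = np.zeros(200)
x_true[:5] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(80)
x_hat = ista(A, b, lam=0.1)
```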
A proximal gradient splitting method for solving convex vector optimization problems
Published in Optimization, 2022
Yunier Bello-Cruz, Jefferson G. Melo, Ray V.G. Serra
The proximal gradient method (PGM) is one of the most popular and efficient schemes for solving convex composite vector optimization problems. The method is well known in the scalar case and was recently considered in the multiobjective setting in [22]. The main purpose here was to show that, for problems in which the vector function is convex (without any Lipschitz continuity assumption on the Jacobian of the differentiable component), the sequence generated by the PGM converges to a weakly efficient solution. An iteration-complexity result was also established in order to obtain an approximate weakly efficient solution of the vector problem under consideration.
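For the scalar case mentioned above, a minimal sketch of the proximal gradient iteration x_{k+1} = prox_{αg}(x_k − α∇f(x_k)) for a composite problem min_x f(x) + g(x) might look as follows. The helper names, the nonnegativity example, and the fixed step size are assumptions made for illustration and are not taken from the paper.

```python
# Generic scalar proximal gradient method: f smooth convex, g convex (possibly
# nonsmooth), update x <- prox_{alpha * g}(x - alpha * grad_f(x)).
import numpy as np

def proximal_gradient(grad_f, prox_g, x0, alpha, n_iter=200):
    """Run the PGM with a fixed step size alpha (typically alpha <= 1/L)."""
    x = x0
    for _ in range(n_iter):
        x = prox_g(x - alpha * grad_f(x), alpha)
    return x

# Example: minimize 0.5 * ||x - c||^2 subject to x >= 0, i.e. g is the
# indicator of the nonnegative orthant, whose prox is the projection max(., 0).
c = np.array([1.0, -2.0, 0.5])
x_star = proximal_gradient(
    grad_f=lambda x: x - c,                 # gradient of 0.5 * ||x - c||^2
    prox_g=lambda z, a: np.maximum(z, 0.0), # projection onto x >= 0
    x0=np.zeros(3),
    alpha=1.0,
)
# x_star is approximately [1.0, 0.0, 0.5]
```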
Online domain adaptation for continuous cross-subject liver viability evaluation based on irregular thermal data
Published in IISE Transactions, 2021
We propose to develop a block-coordinate proximal gradient method-based algorithm to solve the non-convex problem, which has guaranteed global convergence and high computational speed (Xu and Yin, 2013). However, the block-coordinate proximal gradient method requires the objective function to be differentiable. We therefore use the Huberized SVM, a differentiable approximation of the SVM formulation (Wang et al., 2008), as shown in the second line of the above equation. Here, the Huberized loss is differentiable, and δ is a pre-specified constant.
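As an illustration of the kind of smoothing referred to above, the sketch below implements one common form of the Huberized hinge loss with smoothing constant δ; the exact formulation used in the article may differ, so the function name and piecewise definition here are assumptions.

```python
# One common Huberized (smoothed) hinge loss: a continuously differentiable
# surrogate for max(0, 1 - t), where t = y * f(x) is the classification margin.
import numpy as np

def huberized_hinge(t, delta=0.5):
    """Piecewise loss: zero for t > 1, quadratic near the margin, linear below."""
    return np.where(
        t > 1.0, 0.0,                                  # correct side with margin
        np.where(
            t >= 1.0 - delta,
            (1.0 - t) ** 2 / (2.0 * delta),            # quadratic transition zone
            1.0 - t - delta / 2.0,                      # linear part with slope -1
        ),
    )

# The loss is C^1 in t, so block-coordinate proximal gradient steps can use
# its gradient directly instead of the nondifferentiable hinge loss.
margins = np.array([-1.0, 0.8, 1.5])
print(huberized_hinge(margins))
```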
A dual active-set proximal Newton algorithm for sparse approximation of correlation matrices
Published in Optimization Methods and Software, 2022
Xiao Liu, Chungen Shen, Li Wang
We remark that these quantities are actually the projected gradient steps, which play an important role in the global convergence of our proposed algorithm. However, as is well known, the proximal gradient method converges at most linearly. Hence, in the next subsection, we will consider using the semi-smooth Newton method to accelerate the convergence of the PG iterations.