Statistical Learning
Published in Dirk P. Kroese, Zdravko I. Botev, Thomas Taimre, Radislav Vaisman, Data Science and Machine Learning, 2019
and $H_p^{-1}$ is the $p \times p$ inverse Hilbert matrix with $(i, j)$-th entry
$$(-1)^{i+j}\,(i+j-1)\binom{p+i-1}{p-j}\binom{p+j-1}{p-i}\binom{i+j-2}{i-1}^{2}.$$
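As a quick sanity check, here is a minimal R sketch (not from the book; hilbert and inv_hilbert are just illustrative helper names) that builds the inverse from this entry formula and compares it with direct numerical inversion for small p:

hilbert <- function(p) outer(1:p, 1:p, function(i, j) 1 / (i + j - 1))  # H[i, j] = 1/(i + j - 1)

inv_hilbert <- function(p) {          # closed-form entries of the inverse Hilbert matrix
  outer(1:p, 1:p, function(i, j)
    (-1)^(i + j) * (i + j - 1) *
      choose(p + i - 1, p - j) * choose(p + j - 1, p - i) *
      choose(i + j - 2, i - 1)^2)
}

p <- 5
inv_hilbert(p)[1:2, 1:2]                       # integer entries: 25, -300, -300, 4800
max(abs(inv_hilbert(p) - solve(hilbert(p))))   # small relative to entries of size ~1e5

The value of the closed form is that it sidesteps numerically inverting a badly conditioned matrix.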
Calculating
Published in Victor A. Bloomfield, Using R for Numerical Analysis in Science and Engineering, 2018
If X is a symmetric square matrix, svd(X) and eigen(X) give essentially identical results. We illustrate with the Hilbert matrix used in the svd help example, a square matrix in which the (i, j) element is 1/(i + j − 1). The Hilbert matrix is notoriously ill-conditioned for moderate and larger n. It may be defined in R for arbitrary n by a one-line function.
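A one-line definition in this spirit (a sketch; the name hilbert is just a convenient label, not necessarily the book's exact code), together with the svd/eigen comparison the passage alludes to:

hilbert <- function(n) { i <- 1:n; 1 / outer(i - 1, i, "+") }   # entries 1/(i + j - 1)

X <- hilbert(6)
svd(X)$d           # singular values, in decreasing order
eigen(X)$values    # equal to the singular values here, since X is symmetric positive definite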
Linear Systems
Published in Jeffery J. Leader, Numerical Analysis and Scientific Computation, 2022
Let b = (1, −1, 1, −1, 1, −1, 1, −1)T. (a) Solve Ax = b, where A is the Hilbert matrix of order 8, using the LU decomposition (in MATLAB this may be done with A\b). (b) What is κ(A)? Use the invhilb command to solve Ax = b. Is the relative error in your solution from part (a) what you would expect? (c) Let b = (1, −1, 1, −1, 1, −1, 1, −1, 1, −1, 1, −1)T and repeat parts (a) and (b) for the Hilbert matrix of order 12.
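The exercise is posed for MATLAB; a rough base-R analogue of parts (a) and (b), substituting the closed-form inverse for MATLAB's invhilb, might look like this (values quoted in the comments are approximate):

n <- 8
A <- outer(1:n, 1:n, function(i, j) 1 / (i + j - 1))   # Hilbert matrix of order 8
b <- rep(c(1, -1), length.out = n)

x_lu <- solve(A, b)            # LU-based solve, analogous to MATLAB's A\b
kappa(A, exact = TRUE)         # 2-norm condition number, roughly 1.5e10 for n = 8

Ainv <- outer(1:n, 1:n, function(i, j)                 # exact entries of the inverse
  (-1)^(i + j) * (i + j - 1) * choose(n + i - 1, n - j) *
    choose(n + j - 1, n - i) * choose(i + j - 2, i - 1)^2)
x_ref <- as.vector(Ainv %*% b)

sqrt(sum((x_lu - x_ref)^2)) / sqrt(sum(x_ref^2))       # relative error, on the order of kappa(A) times machine epsilon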
An elementary algorithm to evaluate trigonometric functions to high precision
Published in International Journal of Mathematical Education in Science and Technology, 2018
The scale and complexity of computer simulations of real-world phenomena are rapidly increasing, driving the need for more precision. An example where high precision was crucial is the ATLAS experiment at the Large Hadron Collider, which verified the existence of the Higgs boson. Moreover, in public-key cryptography, numbers with thousands of digits are needed. Another example is solving linear systems governed by ill-conditioned matrices (which can occur when discretising ill-posed problems), such as the Hilbert matrix, whose condition number grows exponentially with its size, making the solution of such a system highly sensitive to errors in the data. Further information and motivation regarding the need for high precision are presented in [5]. Numerical solutions of some problems generating rapid convergence to at least 10 decimals are given in [6,7], with one of the examples (problem 2) requiring extended precision. Further techniques for implementing functions with high accuracy are given in [8].
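To make the Hilbert-matrix sensitivity concrete, a small base-R experiment (not from the article; the order 10 and the perturbation size are arbitrary choices) shows that a relative perturbation of about 1e-10 in the data moves the computed solution by many orders of magnitude more:

n <- 10
H <- outer(1:n, 1:n, function(i, j) 1 / (i + j - 1))   # Hilbert matrix of order 10
b <- as.vector(H %*% rep(1, n))                        # exact solution is the all-ones vector

x <- solve(H, b)
max(abs(x - 1))                           # typically around 1e-4, far above machine precision

set.seed(1)
b_pert <- b + 1e-10 * abs(b) * rnorm(n)   # perturb the right-hand side in its 10th digit
x_pert <- solve(H, b_pert)
max(abs(x_pert - x))                      # the solution can move by order 1e2 or more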
Quadratic convergence to the optimal solution of second-order conic optimization without strict complementarity
Published in Optimization Methods and Software, 2019
Ali Mohammad-Nezhad, Tamás Terlaky
For LO the condition number is bounded in terms of L, the binary length of the parameters, while this condition number could be doubly exponentially small for some instances of SOCO; see Example 22 in [31]. It can be inferred from this comparison that an exact solution of SOCO is generally harder to compute than one for an LO problem. Nevertheless, even in the case of LO, very high accuracy, far beyond double-precision arithmetic, might be needed to compute an exact solution. For instance, LO solvers, regardless of the algorithm used, fail to produce an accurate solution for an LO problem involving a Hilbert matrix of size larger than 20.
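A quick base-R check (not from the paper) of why order 20 is already out of reach in double precision: the 2-norm condition number of the Hilbert matrix grows exponentially and passes 1/machine-epsilon (about 4.5e15) well before n = 20.

for (n in c(5, 10, 15, 20)) {
  H <- outer(1:n, 1:n, function(i, j) 1 / (i + j - 1))
  cat(n, kappa(H, exact = TRUE), "\n")   # roughly 5e5, 1.6e13, then values stuck near 1e17-1e18
}
# Beyond n of about 13 the computed condition numbers are capped by roundoff; the true
# values keep growing exponentially (on the order of 1e28 for n = 20), so a fixed
# double-precision solver cannot recover an accurate solution.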
Exponential approximation of multivariate bandlimited functions from average oversampling
Published in Applicable Analysis, 2022
Finally, we compare the construction in Section 4, when reduced to the one-dimensional case, with that in [25]. Each of the two methods has its pros and cons. The construction in [25] requires solving a linear system with a Hilbert matrix as the coefficient matrix. Although the size of the Hilbert matrix is typically very small, the construction is somewhat inconvenient compared to (44), which has a closed form. In terms of the sampling width, the requirement in [25] is better than the one here.