Multithreading in LabVIEW
Published in Rick Bitter, Taqi Mohiuddin, Matt Nawrocki, LabVIEW™ Advanced Programming Techniques, 2017
Floating-point numbers also support three precision formats. Single- and double-precision numbers are represented internally in LabVIEW as 32- and 64-bit values, respectively. The size of extended-precision floating-point numbers depends on the platform you are using. Execution speed varies with the type of operation performed: extended-precision arithmetic is slower than double-precision, which in turn is slower than single-precision. In very high-performance computing, such as Software Defined Radio (SDR), where large volumes of floating-point data must be processed in fairly short order, numerical precision is often traded against processing time. Lower-precision numbers contribute quantization noise, but in some cases they make or break the performance of an application. Floating-point calculations are generally slower than integer arithmetic. Each floating-point number stores the pieces of a value in separate fields of its allocated memory: one bit holds the sign of the number, a group of bits holds the (biased) exponent, and the remaining bits hold the mantissa. Single- and double-precision numbers follow the standard IEEE 754 formats LabVIEW uses internally; the extended-precision format depends on the hardware supporting your system.
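The size and layout differences are easy to inspect from a compiled language. The short C sketch below (ours, for illustration; LabVIEW itself is graphical) prints the width and mantissa length of each format via <float.h>, making the platform dependence of extended precision visible:

```c
/* Minimal sketch: report the storage size and mantissa width of each
 * floating-point format on the host platform. Compile with any C99
 * compiler, e.g. cc -std=c99 sizes.c && ./a.out */
#include <stdio.h>
#include <float.h>

int main(void) {
    printf("float  (single):   %zu bytes, %d-bit mantissa\n",
           sizeof(float), FLT_MANT_DIG);
    printf("double:            %zu bytes, %d-bit mantissa\n",
           sizeof(double), DBL_MANT_DIG);
    /* Extended precision: size and mantissa width are platform-dependent. */
    printf("long double (ext): %zu bytes, %d-bit mantissa\n",
           sizeof(long double), LDBL_MANT_DIG);
    return 0;
}
```

On x86 targets, long double typically reports a 64-bit mantissa (the 80-bit x87 format padded to 12 or 16 bytes); on other platforms it may be identical to double or map to a 128-bit format.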
Math Unit Architecture and Instruction Set
Published in Julio Sanchez, Maria P. Canton, Software Solutions for Engineers and Scientists, 2018
The FST (store real) instruction transfers the stack top to the destination operand, which can be a memory variable or another stack register. However, FST can store the stack top only into a single- or double-precision real variable; FSTP (store real and pop) must be used to store into a memory destination in the extended-precision real format. Constants, special encodings, temporary results, and other operational data that could affect the precision of the final result should always be stored in extended-precision format. On the other hand, final results should not be represented in the extended format, since doing so defeats its purpose of absorbing rounding and computational errors.
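The guideline translates directly into C, where long double maps to the 80-bit extended format on x87 targets. In this hedged sketch (ours, not from the text), the running sum is kept in extended precision so intermediate rounding errors are absorbed, and only the final result is converted to double:

```c
/* Sketch: intermediates in extended precision, final result in double.
 * On x87 targets a compiler typically keeps the long double
 * accumulator in the 80-bit stack registers and spills it in
 * extended format; only the final converted value is stored in
 * double-precision format. */
#include <stdio.h>

int main(void) {
    long double acc = 0.0L;          /* extended-precision accumulator */
    for (int i = 1; i <= 1000000; i++)
        acc += 1.0L / i;             /* intermediate sums absorb rounding */
    double result = (double)acc;     /* only the final result is double */
    printf("harmonic sum H_1000000 = %.15f\n", result);
    return 0;
}
```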
Commercial VLSI RISC
Published in S.B. Furber, VLSI Risc Architecture and Organization, 2017
There is an on-chip floating-point unit, which supports IEEE 754 single precision, double precision, and an 80-bit extended precision; it can operate concurrently with the integer execution unit. A later version of the chip (the 80960KA) will be produced without the FPU to save cost in applications that do not require high-performance floating point. An unusual feature of the chip is its full on-chip self-test: all the major functional blocks are tested at reset by a program in the microinstruction ROM (taking 47,000 clock cycles). If a failure is located, the failure pin is asserted and execution ceases.
Numerical Caseology by Lagrange Interpolation for the 1D Neutron Transport Equation in a Slab
Published in Nuclear Science and Engineering, 2023
Numerical implementation first requires ν0 from a high-precision Newton-Raphson solver applied to Eq. (3c). High precision for the transport eigenvalue ν0 is crucial to the numerical results that follow. One can show that for c less than 0.05, any purely numerical method of determining ν0 will lose significance without extended precision, including the present method. Neither extended precision nor computer algebra is considered in this work, in order to keep the numerical methods accessible to the reader. For this reason, only c ≥ 0.1 will be considered; otherwise we defer to RM/DOM, DPN, ADM, and MREM. Next, the zeros of the modified Legendre polynomials come from expressing the recurrence for Legendre polynomials in matrix form and determining the matrix eigenvalues (the zeros) when PN is set to zero. The Legendre functions are determined from recurrence or from infinite series, according to the stability of the recurrence relation. The Legendre polynomials come from their stable recurrence relation. Finally, matrix inversions are computed by LU (lower/upper) decomposition.
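As a sketch of the first step only: the C program below runs Newton-Raphson in long double on the classical one-speed dispersion relation 1 = (cν/2) ln((ν + 1)/(ν − 1)), which we assume here as a stand-in for Eq. (3c) (not reproduced in this excerpt); the starting guess ν ≈ sqrt(c/(3(1 − c))) is the standard asymptotic estimate, valid for c near 1:

```c
/* Hedged sketch: Newton-Raphson in long double for the discrete
 * transport eigenvalue nu0 of the classical one-speed dispersion
 * relation  1 = (c*nu/2) * ln((nu+1)/(nu-1)),  nu > 1.
 * This relation is our assumption standing in for Eq. (3c). */
#include <stdio.h>
#include <math.h>

static long double f(long double nu, long double c) {
    return 0.5L * c * nu * logl((nu + 1.0L) / (nu - 1.0L)) - 1.0L;
}
static long double fprime(long double nu, long double c) {
    return 0.5L * c * logl((nu + 1.0L) / (nu - 1.0L))
         - c * nu / (nu * nu - 1.0L);
}

int main(void) {
    long double c = 0.9L;                            /* scattering ratio */
    long double nu = sqrtl(c / (3.0L * (1.0L - c))); /* guess, c near 1 */
    for (int k = 0; k < 50; k++) {
        long double step = f(nu, c) / fprime(nu, c);
        nu -= step;
        if (fabsl(step) < 1e-18L * fabsl(nu)) break; /* converged */
    }
    printf("nu0 = %.18Lf for c = %.2Lf\n", nu, c);   /* ~1.903 for c=0.9 */
    return 0;
}
```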
An elementary algorithm to evaluate trigonometric functions to high precision
Published in International Journal of Mathematical Education in Science and Technology, 2018
The scale and complexity of computer simulations of real-world phenomena are rapidly increasing, driving the need for more precision. One example where high precision was crucial is the ATLAS experiment at the Large Hadron Collider, which verified the existence of the Higgs boson. Moreover, in public-key cryptography, numbers with thousands of digits are needed. Another example is solving linear systems governed by ill-conditioned matrices, which can occur when discretising ill-posed problems; the Hilbert matrix, whose condition number grows exponentially with its size, makes the solution of such a system highly sensitive to errors in the data. Further information and motivation for the need for high precision are presented in [5]. Numerical solutions of some problems achieving rapid convergence to at least 10 decimals are given in [6,7], with one of the examples (problem 2) requiring extended precision. Further techniques for implementing functions with high accuracy are given in [8].
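The Hilbert-matrix sensitivity is easy to demonstrate. The C sketch below (our construction, not taken from [5-8]) builds the 12 × 12 Hilbert system whose exact solution is the all-ones vector, solves it by Gaussian elimination with partial pivoting in double precision, and prints the maximum error, which is large relative to machine epsilon because the condition number of the 12 × 12 Hilbert matrix is near 10^16:

```c
/* Sketch: sensitivity of a Hilbert system in double precision.
 * H[i][j] = 1/(i+j+1); b = H * (1,...,1)^T, so the exact solution
 * is all ones. The computed error reflects the conditioning. */
#include <stdio.h>
#include <math.h>
#define N 12

int main(void) {
    static double a[N][N + 1];               /* augmented system [H | b] */
    for (int i = 0; i < N; i++) {
        double row = 0.0;
        for (int j = 0; j < N; j++) {
            a[i][j] = 1.0 / (i + j + 1);
            row += a[i][j];
        }
        a[i][N] = row;                        /* b = row sums of H */
    }
    /* Gaussian elimination with partial pivoting. */
    for (int k = 0; k < N; k++) {
        int p = k;
        for (int i = k + 1; i < N; i++)
            if (fabs(a[i][k]) > fabs(a[p][k])) p = i;
        for (int j = k; j <= N; j++) {
            double t = a[k][j]; a[k][j] = a[p][j]; a[p][j] = t;
        }
        for (int i = k + 1; i < N; i++) {
            double m = a[i][k] / a[k][k];
            for (int j = k; j <= N; j++) a[i][j] -= m * a[k][j];
        }
    }
    /* Back substitution; error against the exact solution (all ones). */
    double x[N], err = 0.0;
    for (int i = N - 1; i >= 0; i--) {
        double s = a[i][N];
        for (int j = i + 1; j < N; j++) s -= a[i][j] * x[j];
        x[i] = s / a[i][i];
        if (fabs(x[i] - 1.0) > err) err = fabs(x[i] - 1.0);
    }
    printf("n = %d, max |x_i - 1| = %.3e\n", N, err);
    return 0;
}
```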
Linear systems arising in interior methods for convex optimization: a symmetric formulation with bounded condition number
Published in Optimization Methods and Software, 2022
Alexandre Ghannad, Dominique Orban, Michael A. Saunders
For , numerous positive and negative eigenvalues appear to lie outside their theoretical bounds. However, this misleading effect is due to accumulated rounding errors in the eigs() function. To verify our claim, we ran the eigenvalue computation in extended precision using the Matlab Symbolic Math Toolbox with variable-precision arithmetic (vpa) set to 64 digits. (By default, Matlab uses 16 digits in double precision; 64 digits corresponds to octuple precision.) Because eigs() does not accept vpa input, we computed the eigenvalues using eig(). The resulting eigenvalues and bounds are shown in the bottom plot of Figure 4.
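A toy C analogue of this verification idea (the paper's actual computation uses Matlab's eig() under vpa, which we do not reproduce) shows how working precision alone can move computed eigenvalues: the same symmetric 2 × 2 matrix is diagonalized in float and in long double, and in float its two distinct eigenvalues collapse to a single value:

```c
/* Toy analogue of the verification above (the paper uses Matlab's
 * eig() with 64-digit vpa, not C). The symmetric matrix
 *   A = [1, 1e-9; 1e-9, 1]
 * has exact eigenvalues 1 +/- 1e-9. In float the two eigenvalues
 * collapse to 1.0; in long double they stay distinct, illustrating
 * how finite precision alone can distort computed eigenvalues. */
#include <stdio.h>
#include <math.h>

int main(void) {
    /* float (single precision): eigenvalues via the 2x2 closed form */
    float af = 1.0f, bf = 1e-9f, cf = 1.0f;
    float mf = 0.5f * (af + cf);
    float rf = sqrtf(0.25f * (af - cf) * (af - cf) + bf * bf);
    printf("float:       %.12e  %.12e\n",
           (double)(mf + rf), (double)(mf - rf));

    /* long double (extended precision where the platform provides it) */
    long double al = 1.0L, bl = 1e-9L, cl = 1.0L;
    long double ml = 0.5L * (al + cl);
    long double rl = sqrtl(0.25L * (al - cl) * (al - cl) + bl * bl);
    printf("long double: %.12Le  %.12Le\n", ml + rl, ml - rl);
    return 0;
}
```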