Introduction
Published in Dale A. Anderson, John C. Tannehill, Richard H. Pletcher, Munipalli Ramakanth, Vijaya Shankar, Computational Fluid Mechanics and Heat Transfer, 2020
Dale A. Anderson, John C. Tannehill, Richard H. Pletcher, Munipalli Ramakanth, Vijaya Shankar
Other contributions were made in algorithm development dealing with the efficiency of the numerical techniques. Both multigrid and preconditioning techniques were introduced to improve the convergence rate of iterative calculations. The multigrid approach was first applied to elliptic equations by Fedorenko (1962, 1964) and was later extended to the equations of fluid mechanics by Brandt (1972, 1977). At the same time, strides in applying reduced forms of the Euler and Navier–Stokes equations were being made. Murman and Cole (1971) made a major contribution in solving the transonic small-disturbance equation by applying type-dependent differencing to the subsonic and supersonic portions of the flow field. The thin-layer Navier–Stokes equations have been extensively applied to many problems of interest, and the paper by Pulliam and Steger (1978) is representative of these applications. Also, the parabolized Navier–Stokes (PNS) equations were introduced by Rudman and Rubin (1968), and this approximate form of the Navier–Stokes equations has been used to solve many supersonic viscous flow fields. The correct treatment of the streamwise pressure gradient when solving the PNS equations was examined in detail by Vigneron et al. (1978a), and a new method of limiting the streamwise pressure gradient in subsonic regions was developed and is in prominent use today.
Fundamentals of Computational Fluid Dynamics Modeling and Its Applications in Food Processing
Published in C. Anandharamakrishnan, S. Padma Ishwarya, Essentials and Applications of Food Engineering, 2019
C. Anandharamakrishnan, S. Padma Ishwarya
The mesh can be finer or coarser depending on the problem. In general, the coarser the mesh, the faster the computation; the finer the mesh (i.e., the more elements), the more accurate the discretization. Mesh refinement is often done on a trial-and-error basis. For instance, the mesh size is decreased by 50% and the simulation is rerun. The two solutions are compared, and if they are similar, the initial mesh configuration is considered appropriate for the defined geometry. However, if there are considerable differences between the two, a more refined mesh needs to be adopted and the analysis repeated until convergence is established. Convergence refers to arriving at a solution that is close to the exact solution within a predefined error tolerance or some other specified criterion. The solution is said to be mesh convergent if there is no significant change in the results upon further mesh refinement. To obtain an efficient model, it is important to arrive at a mesh that satisfies mesh convergence but is no finer than necessary, since mesh refinement comes at the expense of longer computation times. The same trial-and-error approach can be applied to the choice of element type as well. A schematic of the refine-and-compare loop is sketched below.
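As a rough illustration of this procedure, the following Python sketch automates the refine-and-compare loop. It assumes a hypothetical `solve_on_mesh(h)` that runs the simulation at characteristic element size `h` and returns a scalar quantity of interest; the halving step, the 1% relative tolerance, and the refinement cap are placeholder choices, not values from the text.

```python
def mesh_convergence_study(solve_on_mesh, h0, rel_tol=0.01, max_refinements=6):
    """Halve the element size until the quantity of interest stabilizes."""
    h = h0
    previous = solve_on_mesh(h)
    for _ in range(max_refinements):
        h /= 2.0                      # refine: decrease element size by 50%
        current = solve_on_mesh(h)
        change = abs(current - previous) / max(abs(previous), 1e-30)
        if change < rel_tol:          # results no longer change significantly
            return h, current         # mesh convergence reached: stop refining
        previous = current
    raise RuntimeError("no mesh convergence within the allowed refinements")
```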
Observation-based design of geo-engineering projects with emphasis on optimization of tunnel support systems and excavation sequences
Published in Xia-Ting Feng, Rock Mechanics and Engineering, 2017
M. Sharifzadeh, M. Ghorbani, S. Yasrobi
It is not straightforward to formulate a general criterion for choosing the most convenient algorithm for back analysis. It can be observed, however, that inverse techniques are particularly convenient when dealing with a relatively large number of unknown parameters and a finite element mesh with a small number of nodal variables, whereas direct procedures are preferable when a few parameters are back analyzed using large finite element meshes (Cividini & Gioda, 2003). In either case, the convergence behavior depends strongly on the number of unknown parameters, the quality of their initial guesses, and the optimization strategy chosen. The direct method can yield an unsatisfactory solution, especially when Young's modulus and Poisson's ratio are to be identified simultaneously (Swoboda et al., 1999).
Privacy-Preserving with Zero Trust Computational Intelligent Hybrid Technique to English Education Model
Published in Applied Artificial Intelligence, 2023
Subsequently, the algorithm enters a loop in which each iteration updates the cluster centers and the prior probabilities according to rules specific to the chosen algorithm or technique. Over the iterations, the cluster centers are refined so that they better represent the underlying data distribution, while the prior probabilities, which indicate the likelihood of each data point belonging to a particular cluster, are updated to improve their accuracy. The iterative process continues until a predefined termination condition is met; this condition can be based on factors such as the number of iterations performed, the convergence of the updated values, or the achievement of a desired level of accuracy. Once the termination condition is satisfied, the algorithm halts, and the final cluster centers and prior probabilities are returned. A generic sketch of such a loop is given below.
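Since the article leaves the update rules abstract, the following Python sketch shows one generic instance of such a loop (a soft-clustering, EM-style scheme). The function name `soft_cluster`, the weighting parameter `beta`, and the convergence tolerance are illustrative assumptions, not the paper's hybrid technique.

```python
import numpy as np

def soft_cluster(X, k, beta=1.0, tol=1e-6, max_iter=100):
    """Generic iterative clustering: alternate membership and parameter updates."""
    rng = np.random.default_rng(0)
    centers = X[rng.choice(len(X), k, replace=False)]  # initial cluster centers
    priors = np.full(k, 1.0 / k)                       # uniform prior probabilities
    for _ in range(max_iter):
        # Membership step: weights from squared distances and current priors.
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        w = priors * np.exp(-beta * d2)
        w /= w.sum(axis=1, keepdims=True)
        # Update step: re-estimate centers and priors from the memberships.
        new_centers = (w[:, :, None] * X[:, None, :]).sum(axis=0) / w.sum(axis=0)[:, None]
        new_priors = w.mean(axis=0)
        # Termination: stop once the cluster centers stop moving.
        shift = np.linalg.norm(new_centers - centers)
        centers, priors = new_centers, new_priors
        if shift < tol:
            break
    return centers, priors, w
```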
Error bound analysis for split weak vector mixed quasi-variational inequality problems in fuzzy environment
Published in Applicable Analysis, 2023
Nguyen Van Hung, Vo Minh Tam, Donal O'Regan
Gap functions are useful in studying solution methods, existence conditions, and the stability of solutions for optimization-related problems, since they simplify the computational aspects. The concept of a gap function was first introduced by Auslender [1] to transform a variational inequality into an equivalent optimization problem. Building on Auslender's gap function [1], Fukushima [2] introduced the regularized gap function for a variational inequality. Based on the idea of Fukushima [2], the regularized gap function of Moreau–Yosida type was introduced by Yamashita and Fukushima [3], who also established so-called error bounds for variational inequalities using regularized gap functions. An error bound is an upper estimate of the distance between an arbitrary feasible point and the solution set of a certain problem; it plays a vital role in analyzing the rate of convergence of algorithms for solving such problems. Motivated by Yamashita and Fukushima [3], regularized gap functions and error bounds have been investigated for various kinds of optimization-related problems; we refer the reader to [4–14] and the references therein for a more detailed discussion.
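For orientation, the two classical definitions can be recalled for a variational inequality VI$(F,K)$: find $x^{*}\in K$ such that $\langle F(x^{*}),\,y-x^{*}\rangle\geq 0$ for all $y\in K$. These are the standard textbook forms; the symbol $\alpha$ for the regularization parameter is a notational choice here.

\[
g(x)=\sup_{y\in K}\,\langle F(x),\,x-y\rangle \qquad \text{(Auslender [1])}
\]
\[
g_{\alpha}(x)=\max_{y\in K}\Bigl\{\langle F(x),\,x-y\rangle-\frac{\alpha}{2}\,\lVert x-y\rVert^{2}\Bigr\},\quad \alpha>0 \qquad \text{(Fukushima [2])}
\]

Both functions are nonnegative on $K$ and vanish exactly at the solutions of VI$(F,K)$; the quadratic regularization term makes $g_{\alpha}$ finite-valued and, for smooth $F$, differentiable, which is what makes error-bound analysis tractable.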
An Efficient Cavitation Model for Compressible Fluid Film Bearings
Published in Tribology Transactions, 2021
Thomas Ransegnola, Farshid Sadeghi, Andrea Vacca
In most applications, a reasonable guess can be generated as the initial solution to promote fast convergence. This could come from either a simplified analytical solution or an interpolation of boundary conditions. Though this decreases the computation time, it is not required, and the algorithm is capable of handling an initial estimate far from the true solution. If a reasonable guess cannot be generated by other means and the initial guess is found to cause poor startup of the algorithm, however, it is recommended that the reader perform preliminary iterations without gradients, such as Gauss–Seidel sweeps $x^{(k+1)} = L^{-1}\bigl(b - U x^{(k)}\bigr)$ on the system $Ax = b$, to generate a closer estimate, where $L$ contains the lower triangular components of $A$ including the diagonal, and $U = A - L$ contains the strictly upper triangular components.
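A minimal sketch of such gradient-free preliminary iterations, assuming a generic linear system $Ax = b$ with the Gauss–Seidel splitting described above (this is the textbook method, not the authors' specific implementation):

```python
import numpy as np

def gauss_seidel_warm_start(A, b, x0, sweeps=20):
    """A few Gauss-Seidel sweeps to move a rough initial guess closer to the
    solution of A x = b before handing it to a gradient-based solver.

    Splitting: A = L + U, where L is the lower triangle of A including the
    diagonal and U is the strictly upper triangle, so each sweep solves
    L x_new = b - U x_old by forward substitution (done in place below).
    """
    n = len(b)
    x = np.array(x0, dtype=float)
    for _ in range(sweeps):
        for i in range(n):
            # Newly updated entries x[:i] are used immediately (Gauss-Seidel),
            # while x[i+1:] still holds values from the previous sweep.
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - sigma) / A[i, i]
    return x
```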