Optical Flow
Published in Yun-Qing Shi, Huifang Sun, Image and Video Compression for Multimedia Engineering, 2019
The optical-flow field is a dense 2-D distribution of apparent velocities of intensity patterns moving in the image plane, while the 2-D motion field can be understood as the perspective projection of 3-D motion in the scene onto the image plane. The two are different and coincide only under certain circumstances. In practice, however, they are closely related, because image sequences are usually the only data available for motion analysis. Hence, motion analysis can only deal with the optical flow rather than the 2-D motion field itself. The aperture problem in motion analysis refers to the ambiguity that arises when motion is viewed through an aperture: the only motion observable from a local measurement is the component orthogonal to the underlying moving contour. This is another manifestation of the ill-posed nature of optical-flow computation. In general, motion analysis from image sequences is an inverse problem, and inverse problems are ill-posed. Fortunately, low-level computational vision problems are only mildly ill-posed, so lowering the noise in the image data can significantly reduce errors in flow determination.
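To make the aperture problem concrete, a sketch of the standard brightness-constancy derivation (not stated explicitly in this excerpt) shows why only the normal component is observable:

\[
I(x + u\,\delta t,\; y + v\,\delta t,\; t + \delta t) = I(x, y, t)
\;\Rightarrow\;
I_x u + I_y v + I_t = 0 ,
\]

which is one linear equation in the two unknowns $(u, v)$. Only the flow component along the gradient direction $\mathbf{n} = \nabla I / \lVert \nabla I \rVert$ (i.e. orthogonal to the contour) is determined,

\[
v_\perp = (u, v) \cdot \mathbf{n} = -\frac{I_t}{\lVert \nabla I \rVert} ,
\]

while the component along the contour remains unconstrained; additional assumptions (e.g. smoothness) are needed to resolve it.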
Active vision systems and catadioptric vision systems for robotic applications
Published in João Manuel R. S. Tavares, R. M. Natal Jorge, Computational Modelling of Objects Represented in Images, 2018
To perform binocular tracking of targets, an original approach to vergence control was applied (Batista et al. 1997). The required movements are decomposed into version movements and vergence movements. Tracking is performed based on the motion field/optical flow. Optical flow from both the left and right images is used as the measurement. Considering a symmetric configuration for the fixation geometry, it can be proved that vergence can be controlled using the retinal flow disparities (see figure 5 for the geometry and variables). The goal is to keep the target in the center of both images. The target moves in the vergence plane. Let us define a 3D coordinate system located at the midpoint of the baseline, the cyclopean coordinate system. Let $\bar{P} = (X, Y, Z)$ be the coordinates of the target in the cyclopean coordinate system. In this coordinate system the target moves with velocity: $\bar{V} = -\bar{\Omega} \times \bar{P} - \bar{t}$
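As a minimal numerical sketch of this rigid-motion velocity model (the function and variable names are illustrative, not from Batista et al. 1997):

import numpy as np

def target_velocity(omega, P, t):
    """Apparent velocity of a target at cyclopean coordinates P, given
    head rotation rate omega and translation rate t: V = -omega x P - t."""
    return -np.cross(omega, P) - t

# Example: head rotating about the vertical axis while translating along Z.
omega = np.array([0.0, 0.1, 0.0])    # rad/s
t     = np.array([0.0, 0.0, 0.05])   # m/s
P     = np.array([0.2, 0.0, 1.5])    # target position in the cyclopean frame, m
print(target_velocity(omega, P, t))  # target velocity in the cyclopean frame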
Methods for Estimating the Optical Flow on Fluids and Deformable River Streams: A Critical Survey
Published in Panagiotis Tsakalides, Athanasia Panousopoulou, Grigorios Tsagkatakis, Luis Montestruque, Smart Water Grids, 2018
Konstantinos Bacharidis, Konstantia Moirogiorgou, George Livanos, Andreas Savakis, Michalis Zervakis
The probabilistic formulation leads to a global motion-field estimate, since the final MAP formulation relates the likelihood models of all image pixels and yields a highly dense motion field. However, Chang’s scheme may also be adapted to derive local motion estimates for pixel blocks, by using the displacement probabilities $A_i$ in Eq. 10.10 for each neighborhood. In this form, the local motion estimate is associated with the pixel position, in the candidate neighborhood of the subsequent frame, that has the highest displacement probability $A_i$. The difference between the global and local schemes is that the former provides smoother and more robust flow estimates, while the latter loses overall detail by restricting the estimates to an average block level. On the other hand, near object boundaries the local scheme provides more accurate results for occluded regions and boundary motion, making it ideal for particle tracking.
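A minimal sketch of the local variant described above: for each block, score every candidate displacement in a search window and keep the one with the highest probability. The Gaussian intensity-difference likelihood below is an assumption standing in for the survey's actual probability model $A_i$:

import numpy as np

def local_map_displacement(prev, curr, top, left, block=8, search=4, sigma=10.0):
    """Pick the candidate displacement with the highest likelihood for one block.
    The Gaussian likelihood of the block intensity difference is an illustrative
    stand-in for the displacement probabilities A_i of the MAP scheme."""
    ref = prev[top:top+block, left:left+block].astype(float)
    best, best_logp = (0, 0), -np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y+block > curr.shape[0] or x+block > curr.shape[1]:
                continue  # candidate falls outside the frame
            cand = curr[y:y+block, x:x+block].astype(float)
            # log-likelihood of this candidate displacement
            logp = -np.sum((ref - cand) ** 2) / (2 * sigma ** 2)
            if logp > best_logp:
                best_logp, best = logp, (dy, dx)
    return best  # local motion estimate for this block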
Application of optical flow methods to aerated skimming flows above triangular and trapezoidal step cavities
Published in Journal of Hydraulic Research, 2019
The mean flow deformations and turbulence quantities were derived from raw 8-bit image sequences using a local optical flow technique. A variety of optical flow techniques were adopted by previous studies of high-speed air–water flows (e.g. Bung & Valero, 2016a, 2016b, 2016c), and a further validation study was performed by Zhang and Chanson (2018). The optical flow is defined as the apparent motion field between two consecutive images, and its physical meaning depends on the projective nature of the corresponding moving objects in 3D camera space. Existing algorithms attempt to recover the optical flow from spatial and temporal derivatives, and may be classified into local (e.g. Farneback, 2003; Lucas & Kanade, 1981) and global (e.g. Horn & Schunck, 1981) methods. The present study adopts the Farneback (2003) method, which, despite being a local technique, is capable of estimating the optical flow over the full image. Farneback (2003) proposed that the intensity information in the neighbourhood of a pixel might be modelled with a quadratic polynomial: $f_1(\mathbf{x}) = \mathbf{x}^{\mathrm{T}} A_1 \mathbf{x} + \mathbf{b}_1^{\mathrm{T}} \mathbf{x} + c_1$, where $\mathbf{x}$ is the pixel coordinate vector in a local coordinate system, $A_1$ is a symmetric matrix, $\mathbf{b}_1$ is a vector and $c_1$ is a scalar. After a shift by $\mathbf{d}$, the displaced neighbourhood may be obtained by transforming the initial approximation: $f_2(\mathbf{x}) = f_1(\mathbf{x} - \mathbf{d}) = \mathbf{x}^{\mathrm{T}} A_1 \mathbf{x} + (\mathbf{b}_1 - 2 A_1 \mathbf{d})^{\mathrm{T}} \mathbf{x} + \mathbf{d}^{\mathrm{T}} A_1 \mathbf{d} - \mathbf{b}_1^{\mathrm{T}} \mathbf{d} + c_1 = \mathbf{x}^{\mathrm{T}} A_2 \mathbf{x} + \mathbf{b}_2^{\mathrm{T}} \mathbf{x} + c_2$, and the displacement is then solved by equating the coefficients of $\mathbf{x}$: $\mathbf{b}_2 = \mathbf{b}_1 - 2 A_1 \mathbf{d}$, i.e. $\mathbf{d} = -\tfrac{1}{2} A_1^{-1} (\mathbf{b}_2 - \mathbf{b}_1)$. In practice, the displacement $\mathbf{d}$ may be assumed slowly varying and is solved for a neighbourhood of $\mathbf{x}$. Large displacements may be treated with a multi-resolution technique (i.e. image pyramid) (e.g. Adelson, Anderson, Bergen, Burt, & Ogden, 1984; Bung & Valero, 2016a; Burt & Adelson, 1983), in which the $\mathbf{d}$ obtained from successively subsampled images is used as an a priori estimate of the true optical flow. Note that the accurate estimation of $\mathbf{d}$ hinges on a negligible degree of luminance change for the region under consideration between two successive frames.
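A minimal sketch of this pipeline using OpenCV's implementation of the Farneback (2003) method; the file names and parameter values below are illustrative, not the study's actual processing settings:

import cv2
import numpy as np

# Two consecutive 8-bit grayscale frames (file names are placeholders).
prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# Dense Farneback flow with a 3-level image pyramid to handle large
# displacements; poly_n and poly_sigma control the local quadratic
# polynomial expansion described above.
flow = cv2.calcOpticalFlowFarneback(
    prev, curr, None,
    pyr_scale=0.5,   # each pyramid level is half the previous resolution
    levels=3,        # number of pyramid levels
    winsize=15,      # neighbourhood size for the polynomial expansion
    iterations=3,    # refinement iterations per pyramid level
    poly_n=5,
    poly_sigma=1.1,
    flags=0,
)

u, v = flow[..., 0], flow[..., 1]   # per-pixel displacement d = (u, v) in pixels
speed = np.sqrt(u**2 + v**2)        # apparent motion magnitude per pixel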