Geodesy
Published in Basudeb Bhatta, Global Navigation Satellite Systems, 2021
Azimuthal projections result from projecting a spherical surface onto a plane (Figure 9.11). This projection is also known as a planar or zenithal projection. Conceptually, a flat sheet of paper touches the globe at a single point, and the lines of latitude and longitude are projected onto that plane. The projection is usually tangent to the globe at one point, but it may be secant as well. The point of contact may be the North Pole, the South Pole, a point on the equator, or any point in between. This point specifies the aspect and is the focus of the projection; the focus is identified by a central longitude and a central latitude.
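As an illustrative sketch (not taken from the chapter), the polar, equatorial, and oblique aspects can all be handled by one azimuthal equidistant formula on a spherical Earth; the function name, example coordinates, and spherical-Earth radius below are assumptions made for the example.

```python
import math

EARTH_RADIUS_M = 6371000.0  # assumed mean spherical Earth radius

def azimuthal_equidistant(lat_deg, lon_deg, lat0_deg, lon0_deg, radius=EARTH_RADIUS_M):
    """Project (lat, lon) onto a plane tangent at the focus (lat0, lon0).

    Returns planar (x, y) in metres; the distance from the origin equals the
    great-circle distance to the focus, and the direction equals the azimuth.
    """
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    lat0, lon0 = math.radians(lat0_deg), math.radians(lon0_deg)
    dlon = lon - lon0

    # Angular great-circle distance c between the focus and the point
    cos_c = math.sin(lat0) * math.sin(lat) + math.cos(lat0) * math.cos(lat) * math.cos(dlon)
    c = math.acos(max(-1.0, min(1.0, cos_c)))
    if c == 0.0:
        return 0.0, 0.0
    k = c / math.sin(c)  # scale factor of the equidistant projection

    x = radius * k * math.cos(lat) * math.sin(dlon)
    y = radius * k * (math.cos(lat0) * math.sin(lat)
                      - math.sin(lat0) * math.cos(lat) * math.cos(dlon))
    return x, y

# Example: polar aspect (focus at the North Pole) applied to a mid-latitude point
print(azimuthal_equidistant(22.57, 88.36, 90.0, 0.0))
```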
Post extraction inversion slice imaging for 3D velocity map imaging experiments
Published in Molecular Physics, 2021
Felix Allum, Robert Mason, Michael Burt, Craig S. Slater, Eleanor Squires, Benjamin Winter, Mark Brouard
Measuring an entire three-dimensional velocity distribution, as opposed to a planar projection, presents clear advantages in experimental geometries lacking an axis of cylindrical symmetry in the plane of the detector, for which the inverse Abel transform is not appropriate. Such experiments include crossed molecular beam scattering and photofragment angular momentum polarisation experiments employing non-collinear pump and probe laser polarisations. Here, we demonstrate that three-dimensional velocity distributions, and thus photofragment anisotropy parameters, can be reliably extracted directly from a set of PEISI imaging data, making no assumptions about the symmetry of the system and without fitting the data to any basis set. In terms of potential applications to studies of photofragment angular polarisation, these results suggest that the desired alignment parameters could be extracted from measurements in a single experimental geometry, circumventing the need to record data for multiple pump and probe polarisation geometries.
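For context, the photofragment anisotropy parameter is conventionally defined through I(θ) ∝ 1 + β P2(cos θ). The sketch below is not the authors' PEISI analysis; it only illustrates, on synthetic data with the recoil axis assumed along z, how β could be recovered from the polar angles of a 3D velocity distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_photofragment_angles(beta, n=200_000):
    """Draw polar angles theta from I(theta) ∝ 1 + beta * P2(cos theta)
    by rejection sampling (a synthetic stand-in for a measured 3D distribution)."""
    cos_theta = rng.uniform(-1.0, 1.0, size=4 * n)
    p2 = 0.5 * (3.0 * cos_theta**2 - 1.0)
    weights = 1.0 + beta * p2
    keep = rng.uniform(0.0, weights.max(), size=cos_theta.size) < weights
    return np.arccos(cos_theta[keep][:n])

def fit_beta(theta):
    """Least-squares estimate of beta from a histogram of polar angles.

    The solid-angle factor sin(theta) is divided out so the histogram
    approximates I(theta) up to an overall normalisation.
    """
    counts, edges = np.histogram(theta, bins=60, range=(0.0, np.pi))
    centres = 0.5 * (edges[:-1] + edges[1:])
    intensity = counts / np.sin(centres)
    p2 = 0.5 * (3.0 * np.cos(centres) ** 2 - 1.0)
    # Model: intensity ≈ A * (1 + beta * P2); linear in (A, A*beta)
    design = np.column_stack([np.ones_like(p2), p2])
    coeffs, *_ = np.linalg.lstsq(design, intensity, rcond=None)
    return coeffs[1] / coeffs[0]

theta = sample_photofragment_angles(beta=1.3)
print(f"recovered beta ≈ {fit_beta(theta):.2f}")  # should be close to 1.3
```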
Urban design and public transportation – public spaces, visual proximity and Transit-Oriented Development (TOD)
Published in Journal of Urban Design, 2020
Visual experience between a building and a transit stop involves a sequence of viewsheds (Cullen 1961, 1967). A viewshed is an urban space within which visual acuity is clear. It combines elements of the visual world and the visual field (as defined by Gibson 1986). The visual world is like a sphere around a person and is clear everywhere, while the visual field is the area within the field of view of both eyes; the field of view is clear in the centre (foveal vision) and vague in the periphery (Figure 2a). The viewshed is a circle (a planar projection of the visual world) in the foveal plane with a radius of 100–200 m. Within it, the viewer identifies affordances (Gibson 1986), including transit stops, entrances to buildings and shops, storefronts, people to communicate with, and so on.
A new multi-object tracking pipeline based on computer vision techniques for mussel farms
Published in Journal of the Royal Society of New Zealand, 2023
Dylon Zeng, Ivy Liu, Ying Bi, Ross Vennell, Dana Briscoe, Bing Xue, Mengjie Zhang
The planar projection of an image is achieved using a homography matrix. Mathematically, the transformation between two homogeneous 3D vectors p and q is expressed as q ∼ Hp (equality up to a non-zero scale), where H is a homography matrix with 8 degrees of freedom (Baer 2005). Four pairs of matching feature points are required to determine H. The most common detector-descriptor for image registration is SIFT (Lowe 1999), which uses image gradient magnitudes to describe and match feature points. Other alternatives include SURF (Bay et al. 2006), AKAZE (Alcantarilla and Solutions 2011), and BRISK (Leutenegger et al. 2011).
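As a sketch of such a registration step (not the authors' implementation), SIFT keypoints, a ratio-test match, and RANSAC homography estimation could be chained with OpenCV roughly as follows; the file names and thresholds are placeholders.

```python
import cv2
import numpy as np

# Placeholder inputs: two overlapping frames of the same scene
img_ref = cv2.imread("frame_ref.png", cv2.IMREAD_GRAYSCALE)
img_mov = cv2.imread("frame_mov.png", cv2.IMREAD_GRAYSCALE)

# 1. Detect keypoints and compute SIFT descriptors in both images
sift = cv2.SIFT_create()
kp_ref, des_ref = sift.detectAndCompute(img_ref, None)
kp_mov, des_mov = sift.detectAndCompute(img_mov, None)

# 2. Match descriptors and keep unambiguous matches (Lowe's ratio test)
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des_mov, des_ref, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

# 3. Estimate the 8-DOF homography H from at least 4 matched point pairs,
#    using RANSAC to reject outlier correspondences
src = np.float32([kp_mov[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp_ref[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, ransacReprojThreshold=3.0)

# 4. Warp the moving image into the reference frame (the planar projection)
h, w = img_ref.shape
registered = cv2.warpPerspective(img_mov, H, (w, h))
cv2.imwrite("registered.png", registered)
```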