Transmission electron microscopy and diffraction
Published in D. Campbell, R.A. Pethrick, J.R. White, Polymer Characterization, 2017
D. Campbell, R.A. Pethrick, J.R. White
where the summations are taken over all the chosen combinations of hkl. Clearly, it must be arranged that Σtᵢ(hkl) ≪ t₀ if all images are to be successfully recorded. The only adjustment available is the magnification (M), and low-magnification operation is chosen to improve the chance of fulfilling this condition. An added advantage of low-magnification imaging is the correspondingly large field of view, which is especially valuable when working at sub-visual levels, as this increases the chance of including features of interest in the area recorded. The chief disadvantage of low-magnification recording is that the resolution becomes limited by the characteristics of the detection system and is worse than the electron-optical resolving power. Finally, inspection of expression (9.12) shows that if dark-field images are required from n diffracted beams of roughly equal intensity, then the maximum operating magnification must be M₁/√n, where M₁ is the maximum magnification at which a single image can be recorded.
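A minimal sketch of that magnification budget, under the assumption (implied but not stated in this excerpt) that the exposure time per image grows as M², so that recording n dark-field images within the single-image budget caps the magnification at M₁/√n; the function name and the example numbers are illustrative only:

```python
import math

def max_dark_field_magnification(m1: float, n_beams: int) -> float:
    """Cap on magnification when n dark-field images of comparable intensity
    must share the exposure budget of a single image recorded at m1.
    Assumes the exposure time per image scales as M**2, so n * M**2 <= m1**2."""
    return m1 / math.sqrt(n_beams)

# Illustrative values: if one dark-field image can be recorded at 20,000x,
# four beams of similar intensity limit operation to about 10,000x.
print(max_dark_field_magnification(20_000, 4))  # 10000.0
```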
Digital Image Processing Systems
Published in Scott E. Umbaugh, Digital Image Processing and Analysis, 2017
Another important parameter of an imaging device is the field of view (FOV). The FOV is the amount of the scene that the imaging device actually “sees,” that is, the angle of the cone of directions from which the device will create the image. Note that the FOV depends not only on the focal length of the lens, but also on the size of the imaging sensor. Figure 2.2-7 shows that the FOV can be defined as FOV = 2φ, where φ = tan⁻¹(d/2f), d is the diagonal size of the image sensor, and f is the focal length of the lens. Thus, for a fixed-size image sensor, a wider FOV requires a lens with a shorter focal length. A lens with a very short focal length compared to the image sensor size is called a wide-angle lens. The three basic types of lenses are: (1) wide-angle (short focal length, FOV > 45°); (2) normal (medium focal length, FOV 25° to 45°); and (3) telephoto (long focal length, FOV < 25°).
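A short sketch of the FOV relation above; the function names and the 43.3 mm / 50 mm example values are my own, while the lens-type thresholds are the ones quoted in the excerpt:

```python
import math

def field_of_view_deg(sensor_diagonal_mm: float, focal_length_mm: float) -> float:
    """FOV = 2*phi, where phi = arctan(d / (2*f))."""
    phi = math.atan(sensor_diagonal_mm / (2.0 * focal_length_mm))
    return 2.0 * math.degrees(phi)

def lens_type(fov_deg: float) -> str:
    """Classify the lens using the FOV thresholds given in the excerpt."""
    if fov_deg > 45.0:
        return "wide-angle"
    if fov_deg >= 25.0:
        return "normal"
    return "telephoto"

# Example: a sensor with a 43.3 mm diagonal paired with a 50 mm lens.
fov = field_of_view_deg(43.3, 50.0)
print(round(fov, 1), lens_type(fov))  # ~46.8 degrees, just over the wide-angle threshold
```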
Data Registration
Published in S. Sitharama Iyengar, Richard R. Brooks, Distributed Sensor Networks, 2016
Richard R. Brooks, Jacob Lamb, Lynne L. Grewe, Juan Deng
The limited fields of view inherent in conventional cameras restrict their application. Researchers have investigated methods for increasing the field of view for years [Nayar 97], [Rees 70]. Recently, there have been a number of implementations of cameras that use conic mirrors to address this issue [Nayar 97]. Catadioptric systems combine refractive and reflective elements in a vision system [Danilidis 00]. The properties of these systems have been well studied in the literature on telescopes. The first panoramic-field camera was proposed in [Rees 70]. Recently, Nayar proposed a compact catadioptric imaging system using folded geometry that provides an effective reflecting surface geometrically equivalent to a paraboloid [Nayar 99]. This system uses orthographic projection rather than perspective projection.
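The difference between the two projection models can be made concrete with a small sketch; the point coordinates and focal length below are arbitrary illustrations, not values from the cited systems:

```python
import numpy as np

def perspective_project(point_xyz: np.ndarray, focal_length: float) -> np.ndarray:
    """Perspective projection: image coordinates shrink with depth (divide by z)."""
    x, y, z = point_xyz
    return np.array([focal_length * x / z, focal_length * y / z])

def orthographic_project(point_xyz: np.ndarray) -> np.ndarray:
    """Orthographic projection: depth is discarded, so image size is independent of distance."""
    x, y, _ = point_xyz
    return np.array([x, y])

p = np.array([1.0, 0.5, 4.0])
print(perspective_project(p, 1.0))  # [0.25  0.125] -- depends on the depth z = 4
print(orthographic_project(p))      # [1.  0.5]     -- unchanged if the point moves along z
```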
Cloud assisted Internet of things intelligent transportation system and the traffic control system in the smart city
Published in Journal of Control and Decision, 2023
Figure 6 shows the surveillance-based traffic prediction. In this analysis, cameras are assumed to be mounted on street lights and surrounding structures, so camera heights and viewing angles vary. Images from cameras that are not installed on street lights must therefore be transformed to a standardised view. The main principle of CRITIC is that, while priority roads are underutilised, cars may temporarily be granted access to them to alleviate congestion on the road. Cameras at varying heights view the scene from different angles. In the standard setting, the cameras mounted vertically on the street lights are set to cover a particular field of view (FOV). The picture and the FOV are then re-adjusted for the other camera configurations of varying heights and viewing angles. For this reason, experimental runs over many predefined setups would create a convertible mapping profile, and the resulting transforming ratio is then used to measure real vehicle length and width. The proposed CIoT-ITS aims to reduce vehicle energy utilisation, minimise traffic congestion, enhance prediction, support green transportation and traffic management, improve performance, and minimise the cost ratio.
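The excerpt does not give the transforming ratio explicitly; the following is only a rough pinhole-model sketch of how a pixel-to-metre ratio might be derived from camera height and tilt, with all parameter values invented for illustration:

```python
import math

def metres_per_pixel(camera_height_m: float, tilt_from_vertical_deg: float,
                     focal_length_px: float) -> float:
    """Approximate ground resolution at the image centre for a downward-looking
    pinhole camera: scale = (distance along the optical axis to the ground) / f."""
    tilt = math.radians(tilt_from_vertical_deg)
    ground_distance_m = camera_height_m / math.cos(tilt)
    return ground_distance_m / focal_length_px

def pixel_to_real(length_px: float, ratio_m_per_px: float) -> float:
    """Convert a measured pixel extent to an approximate real-world length."""
    return length_px * ratio_m_per_px

# Illustrative camera: 8 m above the road, tilted 30 degrees from vertical,
# focal length of 1400 pixels; a vehicle spanning 780 pixels in the image.
ratio = metres_per_pixel(8.0, 30.0, 1400.0)
print(round(pixel_to_real(780.0, ratio), 2))  # ~5.15 m estimated vehicle length
```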
Image-based prescribed performance visual servoing control of a QUAV with hysteresis quantised input
Published in International Journal of Systems Science, 2023
Jiannan Chen, Fuchun Sun, Changchun Hua
Considering the vast potential market value of QUAVs, many researchers have made efforts to address the IBVS-based control problem of QUAVs. However, three major problems need to be overcome in actual applications. The first is that, owing to the underactuation and nonlinearity of QUAVs, IBVS-based control of a QUAV is considerably difficult. The second is the limited field of view (FOV), which results from the physical structure of the actual camera lens. The last is that the force and moment produced by the motors of QUAVs are non-smooth because of the hardware constraints of the motors. These problems are very challenging and are also the motivation of this research.
Rotational shot put: a phase analysis of current kinematic knowledge
Published in Sports Biomechanics, 2022
Michael Schofield, John B. Cronin, Paul Macadam, Kim Hébert-Losier
Field of view (i.e., along the movement plane) and camera resolution are important for digitisation accuracy; thus, camera distance and focal length need to be considered (Wilson et al., 1999). Dividing the field of view by the camera resolution provides an indication of the range of movement within a single pixel. In competition, camera placement is often restricted to the grandstand, which can be 70–100 m away (Schaa, 2010). In such circumstances, focal length can be used to modulate the field of view. Camera resolution, field of view and focal length are important parameters to report in order to compare kinematic data or understand the variance in reported values (Wilson et al., 1999). Few authors have reported camera distance (Čoh & Štuhec, 2005; Čoh et al., 2008; Schaa, 2010) and resolution (Coh & Jost, 2005; Čoh & Štuhec, 2005; Čoh et al., 2008; Lipovesk et al., 2011). Schaa (2010) reported camera distances of 70–100 m, making telephoto lenses necessary; however, focal length and field of view were unreported. Čoh and Štuhec (2005) and Coh and Jost (2005) reported the distance of the overhead camera and its resolution, but not for the other two cameras. Reported camera resolution has been 720 × 576 pixels (Coh & Jost, 2005; Čoh & Štuhec, 2005; Čoh et al., 2008; Lipovesk et al., 2011), which can lead to kinematic noise at the velocities reported in shot put, depending on the field of view. Taking a field of view of 5 m with a 720-pixel camera (assuming square pixels) gives 1.44 cm of movement per pixel. At a frame rate of 60 Hz and an actual speed of 10 m/s (values above this are commonly reported in shot put), reported values from digitisation can theoretically range from 8.2 to 11.8 m/s due to a 5 cm range of movement within consecutive pixels.
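A minimal sketch of the digitisation-error arithmetic, assuming a ±1 pixel error at each endpoint of the inter-frame displacement (the error model is my assumption; the 1.44 cm pixel size, 60 Hz frame rate, and 10 m/s speed are the values quoted above):

```python
def velocity_bounds(pixel_size_cm: float, frame_rate_hz: float,
                    true_speed_m_s: float) -> tuple[float, float]:
    """Range of speeds that digitisation could report when each endpoint of the
    displacement between consecutive frames is mislocated by up to one pixel."""
    dt_s = 1.0 / frame_rate_hz
    true_step_cm = true_speed_m_s * 100.0 * dt_s   # true displacement per frame
    error_cm = 2.0 * pixel_size_cm                 # +/- 1 pixel at each endpoint
    low = (true_step_cm - error_cm) / 100.0 / dt_s
    high = (true_step_cm + error_cm) / 100.0 / dt_s
    return low, high

# Values quoted in the excerpt: 1.44 cm per pixel, 60 Hz, true speed of 10 m/s.
print(velocity_bounds(1.44, 60.0, 10.0))  # approximately (8.3, 11.7) m/s
```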