Introduction to Computer Vision and Basic Concepts of Image Formation
Published in Manas Kamal Bhuyan, Computer Vision and Image Processing, 2019
It is important to know the location of a point of a real-world scene in the image plane. This can be determined by the geometry of the image formation process. The physics of light determines the brightness of a point in the image plane as a function of surface illumination and surface reflectance properties. The visual perception of scenes depends on illumination to visualize objects. The concept of image formation can be clearly understood from the principles of radiometry. Radiometry is the science of measuring light in any portion of the electromagnetic spectrum. All light measurements are considered radiometry, whereas photometry is the subset of radiometry weighted for the human eye's response. Radiometry is the measurement of electromagnetic radiation, primarily optical, whereas photometry quantifies camera/eye sensitivity. We will now briefly discuss some important radiometric quantities.
Solar Spectral Measurements
Published in Frank Vignola, Joseph Michalsky, Thomas Stoffel, Solar and Infrared Radiation Measurements, 2019
Frank Vignola, Joseph Michalsky, Thomas Stoffel
Photometry is the measurement of light that the human eye can sense. Although there have been slight modifications to the definition, the Commission Internationale de l’Eclairage (CIE) defined the basic luminosity function for daytime (photopic) vision for the average human eye in 1924 (CIE, 1926). Figure 7.4 illustrates how eye response varies with wavelength with a peak near 555 nm. Scotopic response is the wavelength response of the eye in darkened conditions. The response distribution with wavelength is similar but more skewed, and it shifts to shorter wavelengths with the peak response near 507 nm. Since the interest is mostly in daylight applications, the focus will be only on the photopic response that most photometers are designed to measure.
Radiometry and Photometry
Published in Vasudevan Lakshminarayanan, Hassen Ghalila, Ahmed Ammar, L. Srinivasa Varadharajan, Understanding Optics with Python, 2018
Vasudevan Lakshminarayanan, Hassen Ghalila, Ahmed Ammar, L. Srinivasa Varadharajan
Photometry refers to the measurement of light as per the ability of the human visual system to detect photons. Hence, photometry deals specifically with the visible spectrum of light, namely, 400–700 nm. Photometric quantities are simply radiometric quantities multiplied by a weighting factor that is dependent on the wavelength of the light as well as the state of adaptation of the human visual system. The weighting function depicts the efficiency of photon capture by the light or dark adapted eye in sensing the different wavelengths. This weighting function of wavelength is called the luminous efficiency function or the luminosity curve or the visibility function of the eye. For the light adapted (photopic) condition, this function is denoted by the symbol Vλ and for the dark adapted (scotopic) condition, Vλ* (Figure 4.3).
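The relationship described above can be sketched numerically: a photometric quantity is obtained by weighting a radiometric spectrum with V(λ) and scaling by the defined peak photopic efficacy of 683 lm/W. In this minimal sketch the V(λ) samples are coarse approximations to the CIE photopic curve, and the example spectrum is hypothetical.

```python
# Coarse samples of the CIE photopic luminous efficiency function V(lambda);
# wavelength (nm) -> approximate V(lambda). Values peak near 555 nm.
V_PHOTOPIC = {
    450: 0.038, 500: 0.323, 550: 0.995, 600: 0.631, 650: 0.107, 700: 0.004,
}

def luminous_flux(spectral_power, step_nm=1.0):
    """Rectangular-rule sum of spectral radiant power (W/nm) weighted by
    V(lambda), scaled by 683 lm/W to give luminous flux in lumens."""
    weighted = sum(power * V_PHOTOPIC.get(wl, 0.0)
                   for wl, power in spectral_power.items())
    return 683.0 * weighted * step_nm

# Hypothetical example: equal radiant power at each sampled wavelength,
# on a 50 nm grid. Power near 550 nm dominates the photometric result.
flux = luminous_flux({wl: 0.01 for wl in V_PHOTOPIC}, step_nm=50.0)
```

Note how the same radiant power at 550 nm contributes roughly nine times more luminous flux than at 650 nm, mirroring the shape of the luminosity curve.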
A novel context-aware system to improve driver’s field of view in urban traffic networks
Published in Journal of Intelligent Transportation Systems, 2022
A. Nourbakhshrezaei, M. Jadidi, M. R. Delavar, B. Moshiri
There are many computer vision-based methods for motion detection, each with its own benefits and shortcomings. Since our system is designed to respond in real time, the speed of object detection is vital. Also, since the intention is to detect only moving objects and unexpected obstacles, the platform does not need to detect fixed objects without any motion, such as trees, traffic lights, and parked bikes and cars. In fact, the aim of this system is to send a notification to the vehicles about the moving objects in their surroundings. The background subtraction technique is used in this system due to its compatibility with a static, fixed camera, which is the case in our study. The basic idea of the background subtraction technique is to detect objects in motion by subtracting the current image, pixel by pixel, from the background image. Usually, the Reference Image (RI) is obtained by averaging the initial few frames over time. However, in this research, an image without any object was used as the RI, as presented in Figure 7a. It is worth mentioning that there are different methods to determine the RI; the approach should be chosen by considering accuracy, precision, and speed. Taking a no-traffic frame as the RI is one of the best approaches, with high accuracy, when the camera is fixed. Moreover, since for each location the reference image needs to be determined only once, it does not change the computational complexity of the process. A random frame differs from the RI whenever the random frame contains a moving object. In order to improve the quality of the foreground region detection or to remove noise, post-processing operations such as morphological dilation or erosion are used. Although many variations of the background subtraction technique exist, a pixel at location (x, y) in the current frame is considered foreground if |f(x, y) − RI(x, y)| > T, where f is the current frame and T is a threshold. First, the original color frame was converted to grayscale (Figure 7c).
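The pixelwise foreground test described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the threshold value and the toy reference/frame arrays are assumptions, and a real pipeline would add the morphological post-processing mentioned in the text.

```python
import numpy as np

def foreground_mask(frame, reference, threshold=30):
    """Label pixel (x, y) as foreground when |f(x, y) - RI(x, y)| > T.
    frame and reference are grayscale uint8 arrays of the same shape."""
    # Cast to a signed type so the subtraction cannot wrap around
    diff = np.abs(frame.astype(np.int16) - reference.astype(np.int16))
    return diff > threshold

# Toy example: a uniform "no-traffic" reference and a frame containing
# one simulated moving object (hypothetical values).
reference = np.full((4, 4), 100, dtype=np.uint8)
frame = reference.copy()
frame[1:3, 1:3] = 200  # 2x2 bright blob stands in for a moving object
mask = foreground_mask(frame, reference)  # True only inside the blob
```

The cast to a signed integer type before subtracting is the one subtlety: differencing `uint8` arrays directly would wrap around modulo 256 and corrupt the mask.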
Grayscale images are used in computer vision because fewer channels and smaller data enable researchers to run more complex processes in a shorter time. Converting a color image to a grayscale image is not unique. One well-known strategy for the conversion is inspired by photometry, the science of measuring light that is visible to human eyes; the conversion yields the relative luminance of each pixel. Many methods exist to convert RGB images to grayscale, such as the average method and the weighted method. Equation 2 defines the weighted method; if all weights are made equal, it reduces to the average method (Equation 3). In this paper we use the average method. For each pixel i
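Both conversion strategies can be sketched together. The paper's Equations 2 and 3 are not reproduced in this excerpt, so the weighted variant below uses the common ITU-R BT.601 luma coefficients as an illustrative stand-in; with equal weights of 1/3 it reduces to the average method the paper adopts.

```python
import numpy as np

def to_grayscale(rgb, method="average"):
    """Convert an HxWx3 RGB image to a single-channel grayscale image.
    'weighted' uses the BT.601 luma weights (illustrative choice);
    'average' weights each channel equally by 1/3."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    if method == "weighted":
        return 0.299 * r + 0.587 * g + 0.114 * b
    return (r + g + b) / 3.0

# Hypothetical single-pixel image to compare the two methods
pixel = np.array([[[90, 120, 60]]], dtype=np.uint8)
avg = to_grayscale(pixel)               # (90 + 120 + 60) / 3
lum = to_grayscale(pixel, "weighted")   # green weighted most heavily
```

The weighted result is brighter here because the pixel is green-dominant and the luma weights emphasize green, to which the eye is most sensitive.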