Focal Plane Arrays
Published in Antoni Rogalski, Zbigniew Bielecki, Detection of Optical Signals, 2022
Antoni Rogalski, Zbigniew Bielecki
The term “focal plane array” (FPA) refers to an assemblage of thousands to millions of individual detector picture elements (“pixels”) located at the focal plane of an imaging system. Although the definition could include 1D (“linear”) arrays as well as 2D arrays, it is frequently applied to the latter. Usually, the optics part of an optoelectronic imaging device is limited to focusing the image onto the detector array. These so-called “staring arrays” are scanned electronically, usually using circuits integrated with the arrays. The architecture of detector-readout assemblies has assumed a number of forms, which are described in detail, for example, in References 1–4. Readout integrated circuits (ROICs) provide functions such as pixel deselection, per-pixel antiblooming, subframe imaging, and output preamplification, and may include yet other functions.
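To make these ROIC functions concrete, the following minimal Python sketch simulates a readout path that applies subframe imaging, per-pixel antiblooming, and output preamplification to a 2D staring array. All names and numbers here (the read_out helper, the 512 × 512 format, the full-well and gain values) are illustrative assumptions, not parameters from the text.

```python
import numpy as np

def read_out(frame, window=None, full_well=50_000, gain=0.005):
    """Simulate a simple ROIC read of a 2D detector array.

    frame     -- 2D array of collected photoelectrons per pixel
    window    -- optional (row0, row1, col0, col1) subframe to read
    full_well -- antiblooming clamp: charge above this level is discarded
    gain      -- output preamplifier conversion (volts per electron)
    """
    if window is not None:
        r0, r1, c0, c1 = window
        frame = frame[r0:r1, c0:c1]          # subframe imaging
    clamped = np.minimum(frame, full_well)   # per-pixel antiblooming
    return clamped * gain                    # output preamplification

# Example: a 512x512 staring array with one strongly over-exposed pixel
rng = np.random.default_rng(0)
pixels = rng.poisson(10_000, size=(512, 512)).astype(float)
pixels[100, 200] = 1e7                       # would bloom without the clamp
signal = read_out(pixels, window=(64, 192, 128, 256))
print(signal.shape, signal.max())            # (128, 128), clamped to 250.0 V-equivalent
```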
Focal Plane Arrays
Published in Antoni Rogalski, 2D Materials for Infrared and Terahertz Detectors, 2020
The term “focal plane array” (FPA) refers to an assemblage of individual detector picture elements (“pixels”) located at the focal plane of an imaging system. Although the definition could include 1D (“linear”) arrays as well as 2D arrays, it is most frequently applied to the latter. Usually, the optics part of an optoelectronic imaging device is restricted to focusing the image onto the detector array. These so-called “staring arrays” are scanned electronically, usually by circuits integrated with the arrays. The architecture of detector-readout assemblies has assumed a number of forms, which have been described in detail [1–4]. Readout integrated circuits (ROICs) provide functions such as pixel deselection, per-pixel antiblooming, subframe imaging, and output preamplification, and may include yet other functions.
Infrared devices and techniques
Published in John P. Dakin, Robert G. W. Brown, Handbook of Optoelectronics, 2017
Antoni Rogalski, Krzysztof Chrzanowski
The term “focal plane array” (FPA) refers to an assemblage of individual detector picture elements (“pixels”) located at the focal plane of an imaging system. Although the definition could include one-dimensional (“linear”) arrays as well as two-dimensional (2D) arrays, it is frequently applied to the latter. Usually, the optics part of an optoelectronic imaging device is limited to focusing the image onto the detector array. These so-called staring arrays are scanned electronically, usually using circuits integrated with the arrays. The architecture of detector-readout assemblies has assumed a number of forms. Readout integrated circuits (ROICs) provide functions such as pixel deselection, per-pixel antiblooming, subframe imaging, and output preamplification, and may include yet other functions. IR imaging systems that use 2D arrays belong to the so-called second-generation systems.
Tunable liquid crystal microlens array with negative and positive optical powers based on a self-assembled polymer convex array
Published in Liquid Crystals, 2021
Yingying Xue, Zuowei Zhou, Miao Xu, Hongbo Lu
The first approach to tune the focal length of the LCMLA is to rotate the polariser. Figure 5(a) shows a schematic of the experimental setup employed for testing the focusing properties of the LCMLA. A He-Ne laser beam (λ ~ 633 nm) was used to irradiate the sample after expansion by a beam expander (not shown here); the light intensity was controlled by a neutral-density (ND) filter, and the polarisation direction of the incident light was set parallel to the rubbing direction. In this state, the effective refractive index (neff) of the LC was close to ne, so that ne > np and the LCMLA performed as a negative lens. By adjusting the positions of the imaging lens and the Charge Coupled Device (CCD) camera, the light spot array in the 2D image (Figure 5(c)) and the sharp peak array in the 3D image (Figure 5(d)) implied that the sensor of the CCD camera (BC106N-VIS, 350–1100 nm, Thorlabs) was at the focal plane of the LCMLA. By rotating the polariser by 90°, i.e. setting the polarisation direction perpendicular to the rubbing direction (Figure 5(b)), the light spot array in the 2D image and the peak array in the 3D image vanished, as shown in Figure 5(e, f), respectively. This occurred because the effective refractive index (neff) of the LC was then no; since no < np, the LCMLA behaved as a positive lens. It is noteworthy that this approach did not provide continuous tuning from negative to positive optical power (or vice versa).
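The sign flip between the two polariser settings follows from single-interface lens optics: the power of each polymer/LC lenslet scales with the difference between the polymer index and the effective LC index seen by the light. The Python sketch below reproduces this behavior under a thin-lens, single-interface approximation; the index values (E7-like ne and no, a polymer np lying between them) and the radius of curvature are assumptions for illustration, not values from the paper.

```python
# Illustrative parameters (assumptions, not taken from the paper): an
# E7-like liquid crystal and a polymer whose index lies between n_o and n_e.
n_e, n_o, n_p = 1.74, 1.52, 1.56
R = 50e-6  # assumed radius of curvature of one polymer lenslet, in meters

def lenslet_focal_length(n_lc, n_polymer=n_p, radius=R):
    """Thin-lens focal length of the single curved polymer/LC interface.
    Power = (n_polymer - n_lc) / radius, with the sign convention chosen so
    that f > 0 (converging lens) when the polymer is optically denser
    than the LC, matching the behavior reported in the text."""
    return radius / (n_polymer - n_lc)

for label, n_lc in (("parallel to rubbing (n_eff ~ n_e)", n_e),
                    ("perpendicular to rubbing (n_eff ~ n_o)", n_o)):
    f = lenslet_focal_length(n_lc)
    kind = "positive" if f > 0 else "negative"
    print(f"{label}: f = {f*1e3:+.3f} mm -> {kind} lens")
```

At intermediate polariser angles the light splits between the two LC eigenmodes rather than experiencing a single intermediate index, which is consistent with the observation that this approach does not tune the optical power continuously.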
Driving circuitry of a full-frame area array charge-coupled device (CCD) supporting multiple output modes and electronic image motion compensation
Published in Instrumentation Science & Technology, 2020
When a charge-coupled device aerial camera captures an aerial photograph, the high speed of the aircraft produces relative motion between the camera and the target throughout the exposure time, so the image of the target shifts on the focal plane; this shift is called image motion. Image motion causes the images of objects to overlap one another, degrading the photograph through smearing of objects with blurred edges, grayscale distortion, and reduced contrast and resolution.[1] Image motion may be separated into the forward image motion induced by the forward flight of the aircraft, the random image motion produced by attitude variations caused by pitch, yaw, and roll of the aircraft, image motion due to aircraft components such as propellers and engine blocks, image motion due to the camera platform, vibrational image motion caused by the camera itself during operation or impact, and vibrational image motion induced by fluctuations in the airflow. In a vertically photographed aerial image, the forward image motion is approximately one order of magnitude larger than all of the other types of image motion, and therefore the primary consideration with this type of camera is compensation of the forward image motion.[4]
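A standard first-order estimate makes the dominance of forward image motion concrete: during the exposure the ground moves V·t, and this displacement is projected onto the focal plane by the imaging scale f/H. The sketch below uses assumed example numbers (f = 150 mm, V = 120 m/s, H = 3000 m, 5 ms exposure), not figures from the article, and yields a smear of tens of micrometres, i.e. several CCD pixels.

```python
def forward_image_motion(focal_len_m, ground_speed_mps, altitude_m, exposure_s):
    """First-order forward image motion on the focal plane of a nadir
    (vertically pointing) aerial camera: the ground moves V*t during the
    exposure, scaled onto the focal plane by the factor f/H."""
    return focal_len_m * ground_speed_mps * exposure_s / altitude_m

# Assumed example numbers: f = 150 mm, V = 120 m/s, H = 3000 m, t = 5 ms
d = forward_image_motion(0.150, 120.0, 3000.0, 0.005)
print(f"image motion during exposure: {d*1e6:.1f} um")  # ~30 um, several pixels
```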
Simultaneous multi-dimensional spatial frequency modulation imaging
Published in International Journal of Optomechatronics, 2020
Nathan Worts, John Czerski, Jason Jones, Jeffrey J. Field, Randy Bartels, Jeff Squier
So how is SPIFI implemented? As we will show in this article, it is surprisingly straightforward, and existing systems can be readily adapted to exploit the SPIFI method. First, this imaging technique utilizes a cylindrical lens to focus the excitation beam to a line, effectively creating a light sheet. The light sheet is then incident on the SPIFI mask. The mask is fabricated in-house, as we will show in later sections, and is the heart of the SPIFI method. The mask is located at the focal plane of the cylindrical lens and is spun at a constant rate. The pattern on the mask is specifically designed so that it attenuates (modulates) the intensity of each resolvable pixel across the light sheet at a unique frequency that increases linearly across the mask. This enables single-element detection. An alternative description is to note that as the mask spins, it transmits a spatial frequency that varies as a function of time. The mask acts as a time-varying diffractive element: light sheets emerge after the mask, represented in this article by the three diffracted orders (−1, 0, +1). Finally, these diffracted orders are image-relayed in a 4-f configuration to the focal plane of the imaging system with the magnification needed to achieve the highest resolution and the desired field of view.
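The frequency-encoding idea can be captured in a few lines: if pixel x along the light sheet is modulated at a unique frequency f(x) = f0 + Δf·x, a single-element detector records the sum of all modulated intensities, and the line image reappears as the magnitude of the detector signal's Fourier spectrum sampled at f(x). The 1D Python sketch below demonstrates this principle; all parameters (128 pixels, the base frequency and increment, the toy scene) are illustrative assumptions, not values from the article.

```python
import numpy as np

# Minimal 1-D SPIFI sketch: each resolvable pixel along the light sheet is
# amplitude-modulated at a unique, linearly increasing frequency, so a
# single-element detector encodes the whole line image in its spectrum.
n_pix = 128
x = np.arange(n_pix)
scene = np.zeros(n_pix)
scene[[20, 64, 100]] = (1.0, 0.5, 0.8)   # toy line image (three bright points)

f0, df = 1000.0, 20.0                    # Hz: base frequency and per-pixel step
freqs = f0 + df * x                      # unique modulation frequency per pixel
fs, T = 8 * freqs.max(), 1.0             # oversampled rate and record length
n = int(fs * T)
t = np.arange(n) / fs

# Single-element detector: sum of all modulated pixel intensities
signal = ((1 + np.cos(2 * np.pi * np.outer(freqs, t))) / 2
          * scene[:, None]).sum(axis=0)

# Demodulation: the line image appears at f(x) in the spectrum
spectrum = np.abs(np.fft.rfft(signal)) / n
fax = np.fft.rfftfreq(n, 1 / fs)
recovered = np.interp(freqs, fax, spectrum) * 4   # sample spectrum at f(x)
print("recovered peaks at pixels:", np.nonzero(recovered > 0.2)[0])  # [20 64 100]
```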