Discrete Time Control Systems
Published in Jitendra R. Raol, Ramakalyan Ayyagari, Control Systems, 2020
Jitendra R. Raol, Ramakalyan Ayyagari
Shannon’s sampling theorem states that “if a signal contains no frequency higher than ωc rad/sec, then it is completely characterized by the values of the signal measured at instants of time separated by T = π/ωc sec” [2]. To avoid aliasing, the sampling frequency should be reasonably greater than the Nyquist rate, which is twice the highest frequency component of the original continuous-time signal; if the sampling rate is less than twice the input frequency, the output frequency will differ from the input frequency, a phenomenon known as aliasing. The output frequency is then called the aliased frequency, and the corresponding period the aliased period. The overlapping of the high-frequency components with the fundamental component in the frequency spectrum is sometimes also referred to as frequency folding; the frequency ωs/2 is often known as the folding frequency, and ωc is called the Nyquist frequency. A low sampling rate therefore tends to have an adverse effect on closed-loop control system (CLCS) stability, so one often has to select a sampling frequency much higher than the theoretical minimum; in practice, sampling the original signal at 4–5 times the minimum requirement is a reasonable choice.
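As a quick numerical check of the folding described above, the following Python sketch (the function name and the 60 Hz / 100 Hz numbers are our own illustration, not from the text) computes the frequency at which an under-sampled sinusoid appears:

```python
def aliased_frequency(f_signal, f_sample):
    """Frequency at which a sampled sinusoid of frequency f_signal appears.
    It equals f_signal only when f_sample exceeds twice f_signal."""
    # Fold f_signal into the baseband [0, f_sample / 2].
    return abs(f_signal - f_sample * round(f_signal / f_sample))

# A 60 Hz component sampled at 100 Hz (below the 120 Hz Nyquist rate)
# folds down to 40 Hz; sampled at 500 Hz it is preserved at 60 Hz.
print(aliased_frequency(60.0, 100.0))   # 40.0 (aliased frequency)
print(aliased_frequency(60.0, 500.0))   # 60.0 (no aliasing)
```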
Hardware and Implementation
Published in Naim A. Kheir, Systems Modeling and Computer Simulation, 2018
Sajjan G. Shiva, Mahmoud Mohadjer
The basic problem in discrete-time processing of analog signals is how to sample a continuous signal so that it can be reconstructed from its samples after processing. The number of samples taken from an analog signal per unit time (the sample rate) directly affects the performance of a digital system and the cost of the design. The sampling theorem developed by Shannon and Nyquist states that to recover a signal from its samples, the sample rate must be at least twice the signal bandwidth. This minimum sample rate prevents distortion caused by aliasing (overlapping of the frequency spectrum) (see Katz, 1981). In practice, however, the theoretical lower bound on the sampling rate is not sufficient to achieve a specified time response, mainly because of plant dynamics and open-loop system behavior between the samples. Increasing the sample rate makes the discrete signal approximate its corresponding analog signal more closely. In practical applications, a sampling rate of 3 to 4, or even as high as 20, times the bandwidth of the analog signal is not uncommon (Franklin, 1980). The performance factors that provide a lower limit to the sample rate are as follows:
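As a rough illustration of why the theoretical minimum rate is not sufficient between samples, the following Python sketch (our own, using a zero-order hold and illustrative numbers, not taken from the text) compares a sinusoid with its sample-and-hold reconstruction at several multiples of the Nyquist rate:

```python
import numpy as np

def zoh_error(f_signal, rate_multiple, duration=1.0):
    """Worst-case error between a sinusoid and its zero-order-hold
    reconstruction when sampled at rate_multiple * (2 * f_signal)."""
    fs = rate_multiple * 2.0 * f_signal
    t_fine = np.linspace(0.0, duration, 20001)
    analog = np.sin(2 * np.pi * f_signal * t_fine)
    # Sample, then hold each sample until the next one (zero-order hold).
    sample_times = np.arange(0.0, duration, 1.0 / fs)
    samples = np.sin(2 * np.pi * f_signal * sample_times)
    held = samples[np.searchsorted(sample_times, t_fine, side="right") - 1]
    return np.max(np.abs(analog - held))

# 1x is the theoretical minimum (Nyquist rate); the inter-sample error
# shrinks only as the rate is pushed well above that minimum.
for m in (1, 4, 10, 20):
    print(f"{m:2d}x Nyquist rate -> max hold error {zoh_error(1.0, m):.3f}")
```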
Optics Components and Electronic Equipment
Published in Vadim Backman, Adam Wax, Hao F. Zhang, A Laboratory Manual in Biophotonics, 2018
Vadim Backman, Adam Wax, Hao F. Zhang
When selecting a digitizer, specific parameters must be considered for both types in order to correctly sample the target signals. These parameters are bandwidth, sampling rate, dynamic range, and equivalent noise level. The bandwidth describes the frequency range of the input signal that can be digitized or displayed; to properly capture the target signal, the digitizer must have a bandwidth larger than the maximum frequency of the signal, and the price of a digitizer typically rises with its bandwidth. The maximum sampling rate is another key parameter to be considered. The Nyquist theorem requires that, to digitally sample an analog signal, the sampling rate be at least twice the maximum frequency component. In practice, however, it is better to sample at four or five times the maximum frequency to confidently recover the signal. For example, if a light modulation has a frequency of 40 MHz, the digitizer is expected to have a sampling frequency of 150–200 MS/s.
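A minimal sketch of this rate selection, with an illustrative helper function of our own (the function name and the safety factor argument are not from the text):

```python
def required_sample_rate(f_max_hz, safety_factor=4.0):
    """Return the Nyquist rate (2 * f_max) and a practical rate
    (safety_factor * f_max) following the 4-5x rule of thumb above."""
    return 2.0 * f_max_hz, safety_factor * f_max_hz

# A 40 MHz light modulation: the Nyquist rate is 80 MS/s, and a 4-5x
# rule of thumb gives roughly 160-200 MS/s, in line with the range
# quoted above.
for factor in (4.0, 5.0):
    nyq, practical = required_sample_rate(40e6, factor)
    print(f"Nyquist: {nyq/1e6:.0f} MS/s, {factor:.0f}x rule: {practical/1e6:.0f} MS/s")
```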
Non-parametric modelling and simulation of spatiotemporally varying geo-data
Published in Georisk: Assessment and Management of Risk for Engineered Systems and Geohazards, 2022
Yu Wang, Yue Hu, Kok-Kwang Phoon
The Nyquist–Shannon sampling theorem (e.g. Shannon 1949; Proakis and Manolakis 1988) is a classical sampling theorem in signal processing that establishes a sufficient condition for perfectly recovering the underlying signal from discretely sampled data points. It states that a signal (e.g. spatiotemporally varying geo-data in this study) can be properly recovered only if it contains no frequency components higher than one-half of the sampling frequency. Mathematically, this is expressed as fs ≥ 2fmax, where fs and fmax are respectively the sampling frequency and the maximum frequency in the underlying signal. If this criterion is not satisfied, the reconstruction is subject to distortion, which is often termed “aliasing” (e.g. Harris 2004).
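A small Python sketch of the criterion check (the function name, the 0.1 m sampling interval, and the frequency values are hypothetical, chosen only for illustration):

```python
def satisfies_nyquist(sampling_interval, f_max):
    """Check fs >= 2 * f_max for data sampled at a uniform interval
    (in time or space); f_max is the highest frequency component
    assumed to be present in the underlying signal."""
    fs = 1.0 / sampling_interval
    return fs >= 2.0 * f_max

# Hypothetical example: a soil property profile measured every 0.1 m
# gives fs = 10 cycles/m, so components up to 5 cycles/m are recoverable.
print(satisfies_nyquist(0.1, 4.0))   # True  -> no aliasing expected
print(satisfies_nyquist(0.1, 8.0))   # False -> reconstruction would alias
```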
Exponential approximation of multivariate bandlimited functions from average oversampling
Published in Applicable Analysis, 2022
We are concerned with the case when only finitely many sample data are available. Looking at the Shannon series in (1), let us assume that we have the finite sample data of some f. Naturally, one tends to truncate the Shannon series (1) as a way of approximately reconstructing f. This turns out to be the optimal reconstruction method in the worst-case scenario [10,11]. However, this method suffers from a slow approximation rate [12–15]. A dramatic improvement of the approximation rate can be achieved by using oversampling data, that is, samples taken more densely than the Nyquist rate requires. Through a change of variables if necessary, we assume that the bandwidth is strictly less than π and that functions are sampled at the integer points, which constitutes oversampling.
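To make the truncation idea concrete, here is a small Python sketch (our own illustration, not the method developed in the paper) that reconstructs a bandlimited function of bandwidth below π from finitely many integer samples by truncating the Shannon series; the test function and the truncation level n = 30 are arbitrary choices:

```python
import numpy as np

def truncated_shannon(samples, n, t):
    """Approximate f(t) from samples f(j), |j| <= n, by truncating the
    Shannon series f(t) = sum_j f(j) * sinc(t - j) for bandwidth pi."""
    j = np.arange(-n, n + 1)
    return np.sum(samples * np.sinc(t - j))   # np.sinc(x) = sin(pi*x)/(pi*x)

# A bandlimited test function with bandwidth 0.8*pi < pi, so sampling
# at the integers is (slight) oversampling.
f = lambda t: np.sinc(0.8 * t)
n = 30
samples = f(np.arange(-n, n + 1).astype(float))
for t in (0.3, 2.7, 9.5):
    print(t, f(t), truncated_shannon(samples, n, t))
```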
Research on compressive sensing of strong earthquake signals for earthquake early warning
Published in Geomatics, Natural Hazards and Risk, 2021
Jiening Xia, Yuanxiang Li, Yuxiu Cheng, Juan Li, Shasha Tian
Focusing on the long-term transmission and storage of massive data in earthquake early warning systems, we find that compressive sensing theory, which has become popular in recent years, is well suited to this application. The Nyquist sampling theorem states that, to capture a signal of a given bandwidth perfectly, analog-to-digital conversion must be carried out at a sampling rate of at least twice that bandwidth. Compressive sensing theory states that if a signal is sparse in some orthogonal basis, it can be sampled at a rate much lower than the Nyquist sampling rate and still be reconstructed with high probability. Based on compressive sensing theory, we propose a transmission architecture for signal acquisition in earthquake monitoring and early warning systems, which addresses the data compression problems faced during acquisition and transmission in earthquake early warning systems and provides new application directions.
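The sub-Nyquist recovery idea can be sketched with a generic, textbook-style example (this is not the transmission architecture proposed in the paper; the Gaussian measurement matrix, sparsity level, and the greedy orthogonal matching pursuit solver below are our own illustrative choices):

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: recover a k-sparse x from y = A @ x.
    A simple greedy stand-in for the l1-type solvers used in practice."""
    residual, support = y.copy(), []
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        # Least-squares fit on the selected support, then update residual.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

# Toy example: a length-256 signal with 8 nonzeros, observed through
# only 64 random Gaussian measurements (far fewer than Nyquist sampling
# of the full signal would require).
rng = np.random.default_rng(0)
n, m, k = 256, 64, 8
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_hat = omp(A, A @ x, k)
print("max reconstruction error:", np.max(np.abs(x - x_hat)))
```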