Fundamentals of EEG Signals
Published in Narayan Panigrahi, Saraju P. Mohanty, Brain Computer Interface, 2022
Narayan Panigrahi, Saraju P. Mohanty
An EEG is a continuous analog electrical signal in time, x(t). The process of converting an analog signal into a digital signal is called sampling, or digitization, of the signal. For example, if the time interval between samples is Δt and the number of samples in a particular segment of the collected data is M, then successive samples of the EEG signal x(t) can be denoted by x(k·Δt), where k = 0, 1, 2, …, M−1. These sample values of x(t) at the time instants 0·Δt, 1·Δt, 2·Δt, …, (M−1)·Δt are called the digitized values of the signal. The entire sampled EEG record can be represented by x(k), and x(k) can be divided into a series of sequential epochs (segments of equal duration). These epochs may overlap each other or be contiguous.
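As a minimal sketch of this epoching step (assuming NumPy, with an illustrative sampling rate, record length, and epoch length that are not taken from the chapter), the digitized signal x(k) can be split into contiguous or overlapping epochs as follows:

```python
import numpy as np

# Illustrative parameters (not from the text): 256 Hz sampling, 10 s of data
fs = 256                                # samples per second, so dt = 1/fs
dt = 1.0 / fs
M = 10 * fs                             # number of samples in the segment
k = np.arange(M)
x = np.sin(2 * np.pi * 10 * k * dt)     # stand-in for the sampled EEG x(k*dt)

def epochs(x, epoch_len, overlap=0):
    """Split x(k) into equal-length epochs; overlap=0 gives contiguous epochs."""
    step = epoch_len - overlap
    starts = range(0, len(x) - epoch_len + 1, step)
    return np.stack([x[s:s + epoch_len] for s in starts])

contiguous = epochs(x, epoch_len=2 * fs)                # 2-s epochs, no overlap
overlapping = epochs(x, epoch_len=2 * fs, overlap=fs)   # 2-s epochs, 50% overlap
print(contiguous.shape, overlapping.shape)              # (5, 512) and (9, 512)
```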
The Dawn of the SAR Mosaics Era
Published in Gianfranco (Frank) De Grandi, Elsa Carla De Grandi, Spatial Analysis for Radar Remote Sensing of Tropical Forests, 2021
Gianfranco (Frank) De Grandi, Elsa Carla De Grandi
Indeed, the major problem with 8-bit quantization of the SAR image is its impact on the first-order statistical properties of the compressed image rather than on the noise level. Bit quantization of a discrete digital signal is equivalent to a nonlinear transformation in which, for each quantization bin, the linear transfer function between the input and output signals is replaced by a unit step function centered on the bin. Therefore, the statistical properties of the signal after the quantizer will be modified. In general, if a random variable (RV) X is characterized by a PDF f_X(x), the RV after a transformation y = T(x) will have a modified PDF f_Y(y). Therefore, statistical estimators that rely on the PDF of the RV, such as the one-point mean value and variance, will give different results if applied before or after the quantization.
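The effect can be illustrated with a minimal sketch, assuming NumPy and a gamma-distributed stand-in for the SAR amplitude data (the chapter's actual image statistics are not reproduced here): an 8-bit staircase quantizer is applied and the one-point estimators are compared before and after.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-in for SAR amplitude data: a gamma-distributed RV.
x = rng.gamma(shape=4.0, scale=25.0, size=1_000_000)

# 8-bit quantizer: a staircase (unit-step) transfer function with 256 bins,
# each sample replaced by the centre of its quantization bin.
lo, hi = 0.0, x.max()
step = (hi - lo) / 256
y = np.clip(np.floor((x - lo) / step), 0, 255) * step + step / 2

# One-point estimators applied before and after the quantizer differ,
# because the PDF of the signal has been modified.
print("mean before/after:", x.mean(), y.mean())
print("var  before/after:", x.var(), y.var())
```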
Data Conversion Process
Published in Michael Olorunfunmi Kolawole, Electronics, 2020
There is a strong motivation for exploring a novel frontier of data processing that could benefit from cutting-edge miniature and power-efficient nanostructured silicon photonic devices. A recent example is the photonic accelerator (PAXEL), a processor that can process time-serial data in either an analog or a digital fashion on a real-time basis [1]. Data processing is a way of converting data into a machine-readable form using a predefined sequence of operations. Communications signals can be analog or digital, and information can be transmitted using analog or digital signals. Analog signals change continuously in time (or frequency), while digital signals are discrete in time and amplitude. The interchangeability of the two representations allows conversion processes to be developed without loss of detail. The challenge is to achieve a high sampling rate and high conversion accuracy in the presence of component mismatch, nonlinearity errors, and noise. Although the electronic circuits required to perform this conversion can be quite complex, the basic idea is fairly simple. This chapter explains the basic concepts of data conversion and their inherent errors, as well as the choice of converter type, which strongly influences the architecture of the overall system; these fundamentals underpin the continuing revolution in information technology and communication systems.
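A minimal sketch of an ideal analog-to-digital conversion and its inherent quantization error is shown below; the sampling rate, bit depth, and test tone are hypothetical choices for illustration, not values from the chapter.

```python
import numpy as np

# Hypothetical setup: sample a 1 kHz analog tone at 48 kHz, quantize to 8 bits.
fs = 48_000          # sampling rate in Hz
n_bits = 8
full_scale = 1.0     # input range is [-1, +1)

t = np.arange(0, 0.01, 1 / fs)                 # 10 ms of sample instants
analog = 0.9 * np.sin(2 * np.pi * 1_000 * t)   # continuous-time signal, sampled

# Ideal uniform quantizer: map each sample to one of 2**n_bits output codes.
levels = 2 ** n_bits
codes = np.clip(np.round((analog + full_scale) / (2 * full_scale) * (levels - 1)),
                0, levels - 1).astype(int)
reconstructed = codes / (levels - 1) * 2 * full_scale - full_scale

# The residual is the inherent quantization error of the conversion.
error = analog - reconstructed
snr_db = 10 * np.log10(np.mean(analog ** 2) / np.mean(error ** 2))
print(f"quantization-limited SNR ~ {snr_db:.1f} dB "
      f"(ideal full-scale 8-bit sine: 6.02*N + 1.76 ~ 49.9 dB)")
```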
A sampling theory for non-decaying signals in mixed Lebesgue spaces
Published in Applicable Analysis, 2022
Yang Han, Bei Liu, Qingyue Zhang
The sampling theorem is the theoretical basis of modern pulse-code modulation communication and one of the most basic tools in signal analysis. It is widely used in many fields, such as digital signal processing and wireless communication. Sampling refers to the process of taking values of a time-continuous signal at certain time intervals during the conversion of an analog signal into a digital signal. The classical Shannon sampling theorem [1–4] was extended from the spaces of band-limited functions to the more general shift-invariant subspaces [5–7], and many mathematicians have studied sampling in shift-invariant subspaces [8–12]. Classical sampling theory requires the input signal to be square-integrable, so it cannot be applied to signals that do not decay, or that even grow, at infinity.
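For reference, a standard statement of the classical Shannon sampling theorem mentioned here (the notation is generic and not taken from the paper) reads:

```latex
% Classical Shannon sampling theorem: if f \in L^2(\mathbb{R}) is band-limited
% to [-\pi/T, \pi/T], then f is completely determined by its samples f(kT) and
% can be reconstructed by sinc interpolation:
\[
  f(t) \;=\; \sum_{k \in \mathbb{Z}} f(kT)\,
  \operatorname{sinc}\!\Bigl(\frac{t - kT}{T}\Bigr),
  \qquad
  \operatorname{sinc}(t) = \frac{\sin(\pi t)}{\pi t}.
\]
```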