The Mastering Studio
Published in Evren Göknar, Major Label Mastering, 2020
The primary consideration for main speakers in mastering is that they reproduce full-frequency audio the Mastering Engineer can relate to accurately, so that the finished project translates well to other playback systems. Most main speakers are either two-way (a woofer and a tweeter, with a crossover network in the cabinet separating the two frequency bands) or three-way (three separate drivers with two crossovers separating the three bands). The extra driver(s) should add definition and detail in the corresponding frequency range, but there can be issues at the crossover frequencies to listen for. Some audiophile speakers add a further tweeter, making them four-way designs. A subwoofer (or even two) is often added so that the lowest octave and any sub-harmonic frequencies are adequately represented. Additionally, as I discuss later in this section, speakers are either passive (with separate power amplifiers) or active (with amplifiers built into the speaker cabinet). Selecting and testing main speakers is a critical aspect of an effective mastering studio setup.
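As an illustrative aside (not from the text): the crossover frequencies mentioned above are set by component values chosen against driver impedance. A minimal sketch of sizing a first-order passive two-way crossover, assuming nominal 8-ohm drivers and a hypothetical 2 kHz crossover point:

```python
import math

def first_order_crossover(f_c, z_woofer, z_tweeter):
    """Component values for a first-order (6 dB/octave) passive two-way
    crossover: a series inductor low-passes the woofer branch, and a
    series capacitor high-passes the tweeter branch."""
    L = z_woofer / (2 * math.pi * f_c)        # inductance in henries
    C = 1 / (2 * math.pi * f_c * z_tweeter)   # capacitance in farads
    return L, C

# Hypothetical two-way: 8-ohm drivers crossed over at 2 kHz
L, C = first_order_crossover(2000, 8.0, 8.0)
# roughly a 0.64 mH series inductor and a 9.9 uF series capacitor
```

Real crossover networks are usually higher-order and account for the drivers' actual (frequency-dependent) impedance, but the sketch shows why each extra band adds components whose transition regions the engineer should listen for.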
Monitoring
Published in Roey Izhaki, Mixing Audio, 2017
The low-level, low-power line output of our desk or computer needs to be amplified to the much more powerful speaker level in order for the mechanical components of a loudspeaker to move. There is always a driving amplifier in the signal chain before the loudspeaker drivers, and it can either be built into the cabinet or live as an external unit. A loudspeaker with no integrated amplifier is known as a passive speaker and must be fed a speaker-level signal from an external amplifier. Most multiway speakers contain a crossover within the cabinet. In passive speakers, the crossover is itself passive (it contains no active components) and is designed to operate at speaker level. External amplifiers have a huge influence on the overall sound. The NS10s, for example, can sound distinctly different when driven by different makes with different power ratings. There is critical interaction between an amplifier and a loudspeaker (they are, essentially, one system), and the amplifier determines many aspects of the overall sound, such as transient response, low-frequency reproduction, and distortion. Matching an amplifier to a loudspeaker is never an easy affair.
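To put rough numbers on the line-level-to-speaker-level step described above, here is a back-of-the-envelope sketch. The +4 dBu professional line-level reference (about 1.228 V RMS) and the 100 W into 8 ohms target are illustrative assumptions, not values from the text:

```python
import math

def amp_voltage_gain_db(p_out_watts, load_ohms, line_level_vrms=1.228):
    """Voltage gain (in dB) an amplifier needs to drive `load_ohms`
    to `p_out_watts` from a +4 dBu (~1.228 V RMS) line-level input."""
    v_out = math.sqrt(p_out_watts * load_ohms)  # from P = V^2 / R
    return 20 * math.log10(v_out / line_level_vrms)

# Hypothetical target: 100 W into an 8-ohm passive speaker
gain = amp_voltage_gain_db(100, 8)  # on the order of 27 dB of voltage gain
```

The point of the arithmetic is the scale: a line output delivers around a volt into a high-impedance input, while a power amplifier must swing tens of volts into a few ohms, which is why the amplifier stage dominates transient and low-frequency behaviour.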
Speakers and amps
Published in Trev Wilkins, Access All Areas, 2012
A speaker is very similar in construction to a microphone (in fact, you can use a speaker as a microphone, but we won't go there), as it has the same basic components: a diaphragm and a means of translating electrical impulses into vibrations (or the other way round for a microphone). A dynamic microphone, the type most used in live sound, works as its diaphragm vibrates in sympathy with the sound it receives; that is to say, it vibrates at the same frequencies. This vibration is passed on through a physical attachment to a coil; as the coil moves past a magnet, it generates a small electrical current that we can think of as an electrical imprint of the sound. After amplification, a speaker uses a coil to receive the incoming signal (a larger version of that sound imprint) and, in conjunction with a magnet, translates it into vibrations. These are made audible by a cone, usually made of stiff paper or fabric, moving back and forth to move the air at the correct frequencies. The coil is usually attached to the cone, which in turn is flexibly attached to a framework (the chassis) that holds the magnet. Speakers are generally circular, with larger diameters often used to reproduce lower frequencies and smaller ones for higher frequencies, although this isn't always the case.
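The coil-and-magnet arrangement described above works in both directions through two reciprocal relations: driving a current i through the coil produces a motor force F = B·l·i (speaker), while moving the coil at velocity v generates an EMF e = B·l·v (microphone). A minimal sketch, with a hypothetical Bl product chosen only for illustration:

```python
# Reciprocal moving-coil transduction:
#   speaker action:    force  F = B * l * i
#   microphone action: EMF    e = B * l * v
# B is magnetic flux density in the gap (tesla), l is the length of
# coil wire in the gap (metres); their product Bl is the "motor strength".

def motor_force(B, l, i):
    """Force (newtons) on the voice coil carrying current i (amperes)."""
    return B * l * i

def induced_emf(B, l, v):
    """EMF (volts) generated when the coil moves at velocity v (m/s)."""
    return B * l * v

# Hypothetical driver with Bl = 6 T*m: 1 A of drive current gives 6 N
# of force on the cone; moving the coil at 0.5 m/s generates 3 V.
F = motor_force(1.0, 6.0, 1.0)
e = induced_emf(1.0, 6.0, 0.5)
```

The symmetry of the two functions is exactly why the chapter can say a speaker and a dynamic microphone are the same device run in opposite directions.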
Optical laser microphone for human-robot interaction: speech recognition in extremely noisy service environments
Published in Advanced Robotics, 2022
Takahiro Fukumori, Chengkai Cai, Yutao Zhang, Lotfi El Hafi, Yoshinobu Hagiwara, Takanobu Nishiura, Tadahiro Taniguchi
To investigate the effectiveness of the optical laser microphone for this task, we conducted the following three experiments:
Experiment 1: We first compared the performance of different objects as laser irradiation targets in a recording studio.
Experiment 2: We compared the performance of the optical laser microphone and a conventional shotgun microphone for different positions of speakers, microphones, and irradiated objects in a recording studio.
Experiment 3: We compared the performance of the optical laser microphone and a conventional shotgun microphone in a domestic environment.
Detection of Affective States of the Students in a Blended Learning Environment Comprising of Smartphones
Published in International Journal of Human–Computer Interaction, 2021
Subrata Tikadar, Samit Bhattacharya
However, it is cumbersome for a teacher to manually detect the states of individual students, especially when the class is large (a hundred to a few hundred students). Doing so may consume a major portion of the limited time in a lecture session, which can affect the flow of lecture delivery and, consequently, the learning outcome. A computational model for automatic detection of affective states could help the teacher greatly in this scenario. The idea for a real-time affect-detection model originated in the teaching experience of one of the authors, who used to teach the basics of data structures and programming to first-year undergraduate students (of engineering, mathematics, and science). Each class consisted of nearly 250 students. Lectures were delivered with the help of slides, a projector, and a microphone (with speakers placed around the lecture hall). Despite these arrangements, there was not much interaction between the teacher and the students, except for those sitting near the teacher (front rows), and consequently it was difficult for the teacher to personally monitor the affective states of all the students (which may influence their engagement and classroom activities). As a result, many students were effectively left out of the teaching process and their learning outcomes were poor (as revealed during evaluation). There can be many potential factors behind the poor learning outcomes; one of them can be attributed to the teacher's lack of awareness of the students' affective states. Lacking this knowledge, the teacher could not intervene in a timely manner to address the difficulties the students faced.
Given this background, we thought that an automatic technique such as a computational model might help in detecting the states of the students, so that the teacher can intervene in real time to help them, to the extent possible, toward a better learning experience and outcome.
Instantaneous Frequency Selective Filtering Using Ensemble Empirical Mode Decomposition
Published in IETE Journal of Research, 2022
Rinki Gupta, Arun Kumar, Rajendar Bahl
Since the IF is a single-valued function of time, the definition in (3) is meaningful only for monocomponent signals, whose frequency modulation can be described by a single function of time. For a multicomponent signal containing more than one such component, the instantaneous parameters are defined for each constituent monocomponent. The empirical mode decomposition (EMD) algorithm adaptively decomposes a signal into a sum of zero-mean amplitude- and frequency-modulated components known as intrinsic mode functions (IMFs), on which meaningful IFs may be defined [2]. The use of the EMD algorithm and its variants has been demonstrated in several applications, such as time–frequency representation [3], geophysics [4], signal denoising [5], biomedical signal processing [6,7], and speech signal processing [8,9]. In [3], frequency translation of the signal spectrum is used to achieve the desired frequency resolution in the time–frequency representation of a multicomponent signal using the EMD algorithm, which is then applied to a bat echolocation signal. Chen et al. [4] decomposed frequency slices of the input signal to obtain smoothly varying frequency components, to which the 1D non-stationary seislet transform is applied to form a representation, namely the EMD-seislet transform. They demonstrated its application in attenuating random noise from an actual seismic signal. In [5], the authors demonstrate signal denoising by discarding the noise-dominant IMFs obtained by decomposing the noisy signal with the EMD algorithm. In [6], the authors process or eliminate IMFs of a noisy electrocardiogram to reduce power-line interference. Villa et al. applied ensemble empirical mode decomposition (EEMD) to the electrical cochlear response and used sample entropy to identify the IMFs arising from noise in the signal [7]. In [8], the authors propose an intrinsic mode function cepstral coefficient, extracted from the IMFs of a speech signal, to predict whether the speaker has Parkinson's disease.
In [9], the authors use the IF and IA estimated from the IMFs of an audio signal recorded with a single microphone to separate the speech signals of two speakers. In this paper, we develop IF-selective filters and demonstrate their application in speech signal analysis.
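The IF referenced in (3), the scaled derivative of the phase of an analytic signal, can be illustrated on a synthetic monocomponent chirp. This is a generic phase-difference sketch, not the authors' IF-selective filter, and all signal parameters are invented for illustration:

```python
import cmath
import math

def instantaneous_frequency(z, dt):
    """Estimate the IF (Hz) of an analytic signal from the phase
    difference of consecutive samples: IF = (1/2*pi) * d(phase)/dt.
    Valid while the per-sample phase step stays below pi."""
    return [cmath.phase(z[i + 1] * z[i].conjugate()) / (2 * math.pi * dt)
            for i in range(len(z) - 1)]

# Synthetic monocomponent linear chirp as an analytic signal:
# phase(t) = 2*pi*(f0*t + 0.5*k*t^2), so the true IF is f0 + k*t.
f0, k, dt, n = 50.0, 100.0, 1e-3, 1000   # hypothetical parameters
z = [cmath.exp(2j * math.pi * (f0 * t + 0.5 * k * t * t))
     for t in (i * dt for i in range(n))]

f_inst = instantaneous_frequency(z, dt)
# near t = 0.5 s the estimate tracks the true IF, f0 + k*0.5 = 100 Hz
```

For a multicomponent signal this estimator breaks down, which is precisely why EMD is used first: each IMF is (ideally) monocomponent, so a phase-based IF is meaningful on it.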