Sound and signals in music technology and digital audio
Published in Ross Kirk and Andy Hunt, Digital Sound Processing for Music and Multimedia, 2013
Baron Fourier recognised this in his famous statement that a stationary wave (i.e. one whose successive cycles have the same shape) can be resolved into harmonically related components. So-called ‘Fourier analysis’ is a mathematical technique which resolves a composite waveform into its frequency components; that is, it transforms a time-domain signal into its frequency-domain equivalent. Various computer programs used in digital audio and music technology carry out this transformation, examples being the Fast Fourier Transform (FFT) and the phase vocoder.
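As a concrete illustration of this idea, the sketch below (an assumption for this page, not taken from the chapter) builds a stationary waveform from a 200 Hz fundamental plus two harmonics and uses NumPy's FFT to resolve it back into those harmonically related components; the sample rate and amplitudes are chosen only for the example.

```python
import numpy as np

# Assumed illustration: a stationary waveform whose cycles repeat exactly,
# built from a fundamental and two harmonically related components.
sr = 8000                       # sample rate in Hz (chosen for the example)
f0 = 200                        # fundamental frequency in Hz
t = np.arange(sr) / sr          # one second of samples

x = (1.00 * np.sin(2 * np.pi * f0 * t)
     + 0.50 * np.sin(2 * np.pi * 2 * f0 * t)
     + 0.25 * np.sin(2 * np.pi * 3 * f0 * t))

# Fast Fourier Transform: time-domain signal -> frequency-domain equivalent.
spectrum = np.fft.rfft(x)
freqs = np.fft.rfftfreq(len(x), d=1 / sr)
amplitudes = np.abs(spectrum) / (len(x) / 2)

# The three largest components come back at 200, 400 and 600 Hz,
# with amplitudes of roughly 1.0, 0.5 and 0.25.
for k in np.argsort(amplitudes)[-3:][::-1]:
    print(f"{freqs[k]:.0f} Hz  amplitude {amplitudes[k]:.2f}")
```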
Making Sounds with Digital Electronics
Published in Martin Russ, Sound Synthesis and Sampling, 2012
Implementing large numbers of filters in analogue circuitry is expensive, so analogue vocoders tend to have a restricted number of filters, whereas digital vocoders can offer much finer resolution. Digital vocoders can also extract additional information about the audio signals in the bands; the ‘phase vocoder’ is one example. It can work with narrow, high-resolution bands and can output both amplitude and phase information, which improves the processing quality and enhances the creative possibilities for altering musical signals.
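A minimal sketch of the analysis side of such a scheme follows, assuming NumPy only and illustrative frame and hop sizes: a short-time Fourier transform splits the input into many narrow bands and keeps both the amplitude and the phase of each band, which is the extra per-band information a phase vocoder retains compared with a plain channel vocoder.

```python
import numpy as np

def stft_analysis(x, frame_size=1024, hop=256):
    """Return per-frame (amplitude, phase) arrays for each frequency band."""
    window = np.hanning(frame_size)
    frames = []
    for start in range(0, len(x) - frame_size + 1, hop):
        spectrum = np.fft.rfft(x[start:start + frame_size] * window)
        # Amplitude *and* phase per band: the additional information
        # a phase vocoder extracts from each narrow band.
        frames.append((np.abs(spectrum), np.angle(spectrum)))
    return frames

# Example: analyse one second of a 440 Hz tone (values assumed).
sr = 44100
t = np.arange(sr) / sr
frames = stft_analysis(np.sin(2 * np.pi * 440 * t))
print(f"{len(frames)} frames, {len(frames[0][0])} bands per frame")
```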
Conducting the in-between: improvisation and intersubjective engagement in soundpainted electro-acoustic ensemble performance
Published in Digital Creativity, 2018
Importantly, processing types are also chosen to act as a dual of potential ensemble action, allowing interactions between ensemble and Soundpainter to be enacted purely in the realm of movement/sound gestural interaction (with no processing), or to be ‘transferred’ into the realm of shared-signal processing. Two key ways this occurs are through spectral/temporal freezing and record/playback. Within Soundpainting there exists a gesture called stab-freeze, which first primes a given group through an open-hand gesture and then enacts the freeze moment (through a fist-in-hand motion); this might manifest as a short repeated fragment or a continuous freezing of the sound being played at the moment of the gesture. Within this project, a second freeze gesture has been added, enacted by the Soundpainter holding out their arm to the side and making a fist. Depending on the active processing module, this will either loop a short grain of the last content played or enact a phase vocoder-based spectral freeze. These two gesture actions then become duals of one another, with both conductor and ensemble knowing, through the given priming gesture, whether the human or the machine system is being engaged as the active agent of change within the system.

Similarly, the record/play gesture asks the ensemble to segment and loop their memory of a sequence of Soundpainting gestures, playing back their own interpretations on loop. In this project, electromyogram data are used to recognize hand-waving (in/out) gestures that start and stop the looping of MYO data output. In so doing, the Soundpainter asks the machine agent to record and play back its understanding of a gesture sequence, which may be mapped into continuous control of sound processing. In either looped context, human or machine agent, the performer/system can modify their own engagement within the loop through interactions with the Soundpainter.

The shifting interactions between human ensemble and mediating machine agent can further play out over time, through the use of memory gestures which ask the ensemble to remember and recall content, used in tandem with gestures that address the explicit machine memory, which manifests through capture and playback of a performer’s sound.
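For concreteness, the sketch below shows one common way a phase vocoder-based spectral freeze of the kind mentioned above can be realised; this is an assumed, NumPy-only illustration rather than the project's own implementation, with frame size, hop and input chosen purely as examples. The magnitude spectrum of the captured frame is held constant while each bin's phase keeps advancing at that bin's centre frequency, and the resynthesised frames are overlap-added into a sustained output.

```python
import numpy as np

def spectral_freeze(frozen_frame, n_frames, frame_size=2048, hop=512):
    """Sustain one captured frame by holding its magnitudes and rotating its phases."""
    window = np.hanning(frame_size)
    spectrum = np.fft.rfft(frozen_frame * window)
    magnitude = np.abs(spectrum)
    phase = np.angle(spectrum)

    # Expected phase advance per hop at each bin's centre frequency.
    phase_inc = np.arange(len(spectrum)) * 2 * np.pi * hop / frame_size

    out = np.zeros(hop * (n_frames - 1) + frame_size)
    for i in range(n_frames):
        frame = np.fft.irfft(magnitude * np.exp(1j * phase), n=frame_size)
        out[i * hop:i * hop + frame_size] += frame * window
        phase += phase_inc          # keep the phases rotating each hop
    return out

# Hypothetical use: freeze the last analysis frame of some captured audio.
sr = 44100
captured = np.sin(2 * np.pi * 330 * np.arange(sr) / sr)   # stand-in for live input
frozen = spectral_freeze(captured[-2048:], n_frames=200)
print(f"{len(frozen) / sr:.2f} s of frozen sound")
```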