Machine Learning in Acoustic DSP
Published in Francis F. Li, Trevor J. Cox, Digital Signal Processing in Audio and Acoustical Engineering, 2019
Music information retrieval (MIR) may be viewed as a branch of machine audition, i.e. the use of computers to listen to music, transcribe it back to a musical score, and perform musical analyses such as theme, mood, and tonal feature analysis. MIR draws on audio signal analysis, machine learning and pattern recognition, musicology, and psychoacoustics, and is therefore a truly multidisciplinary area of study.
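To illustrate the audio-analysis side of MIR described here, the sketch below extracts chroma features, a common basis for tonal and key analysis, and derives a crude tonal-centre estimate. It is a minimal example assuming the librosa library and a local file "song.wav"; it is not drawn from the chapter itself.

```python
# Minimal sketch of tonal feature extraction for MIR, assuming the
# librosa library is installed and "song.wav" exists locally.
import numpy as np
import librosa

# Load the audio as a mono waveform at librosa's default sample rate.
y, sr = librosa.load("song.wav")

# Chroma features fold spectral energy into the 12 pitch classes,
# a standard starting point for tonal and key analysis.
chroma = librosa.feature.chroma_stft(y=y, sr=sr)

# Averaging over time gives a rough pitch-class profile for the piece;
# the strongest bin hints at a tonal centre (a very crude key estimate).
profile = chroma.mean(axis=1)
pitch_classes = ["C", "C#", "D", "D#", "E", "F",
                 "F#", "G", "G#", "A", "A#", "B"]
print("Estimated tonal centre:", pitch_classes[int(np.argmax(profile))])
```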
Image Is All for Music Retrieval: Interactive Music Retrieval System Using Images with Mood and Theme Attributes
Published in International Journal of Human–Computer Interaction, 2023
Jeongeun Park, Minchae Kim, Ha Young Kim
Music information retrieval (MIR) systems in the music streaming market generally depend on searches using text, voice, or melody/beat audio samples (Casey et al., 2008; Lee & Hu, 2022; Typke et al., 2005). Text-based searches retrieve music from an input query using metadata such as the song title or artist. Voice-based searches are similar, except that a speech-to-text step converts the voice query into text. Text- and voice-based searches can be detailed and exhaustive, but they are inconvenient in that text must be entered manually, and the error rate of natural language processing on voice input tends to be high (Yeh & Liu, 2011). Moreover, it is difficult to search for a target song when the user does not know key information such as the artist or song title.
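The text-based search described above amounts to matching a query against song metadata. The following sketch shows one simple way this could work; the catalogue, field names, and substring-matching rule are illustrative assumptions, not the mechanisms of the systems cited.

```python
# Illustrative sketch of a text-based metadata search, as described above.
# The catalogue structure and matching rule are assumptions for this example.
from dataclasses import dataclass

@dataclass
class Track:
    title: str
    artist: str

CATALOGUE = [
    Track("Blue in Green", "Miles Davis"),
    Track("Green Onions", "Booker T. & the M.G.'s"),
    Track("So What", "Miles Davis"),
]

def search(query: str) -> list[Track]:
    """Return tracks whose title or artist contains the query (case-insensitive)."""
    q = query.lower()
    return [t for t in CATALOGUE
            if q in t.title.lower() or q in t.artist.lower()]

# A voice-based search would simply prepend a speech-to-text step,
# then feed the transcribed text into the same function.
print(search("green"))
```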
Artificial Intelligence in Music Education
Published in International Journal of Human–Computer Interaction, 2023
Recent trends in music education are associated with AI. Among the increasingly popular options, chatbots can be integrated successfully into online learning programs, yet this practice is not yet common. More progress has been made toward incorporating Web 3.0 technologies such as semantic search engines (Chen et al., 2021; Sabadash et al., 2018). Large international companies are now developing and mass-producing intelligent robots for industrial and everyday use (Huang & Yu, 2021). AI-assisted apps and robots can understand users' speaking intentions and interact with them through their own neural networks (Lupker & Turkel, 2021).

A gradual shift toward AI-enabled teaching has been seen in infant music education (Lupker & Turkel, 2021). In brief, automated music players have become emotion-aware: they identify a child's emotions from his or her speech and play functional music from a personalized library. How this library is created is itself interesting. The robots use music information retrieval (MIR) algorithms to extract information from digital music audio for automated technical analysis; this information is then classified according to the independent characteristics of each composition (Guo et al., 2020). In doing so, the AI assembles a unique collection of materials suitable for early music education (Schwartz, 2014). Such robots act as in-home music teachers (Zhang, 2020). Drawing on children's living habits, they use specific pitches and rhythms to accompany daily life and develop musical intelligence (Dan et al., 2003). AI can do more than this, however (Blackwell et al., 2021), and many new and innovative methods of teaching music with AI systems have emerged (Zhang & Lu, 2021).
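The extract-then-classify MIR pipeline this excerpt describes could look roughly like the sketch below: summarize each recording as an audio feature vector, then train a classifier over a labelled library. It assumes librosa and scikit-learn; the MFCC features, mood labels, and file names are illustrative and are not the method of Guo et al. (2020).

```python
# Rough sketch of the extract-then-classify MIR pipeline described above,
# assuming librosa and scikit-learn are installed. Features, labels, and
# file names are illustrative assumptions.
import numpy as np
import librosa
from sklearn.neighbors import KNeighborsClassifier

def features(path: str) -> np.ndarray:
    """Summarize a recording as the mean of its MFCC frames."""
    y, sr = librosa.load(path)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return mfcc.mean(axis=1)

# Hypothetical labelled library: file names and mood labels are made up.
training = [("calm_1.wav", "calm"), ("calm_2.wav", "calm"),
            ("lively_1.wav", "lively"), ("lively_2.wav", "lively")]

X = np.stack([features(path) for path, _ in training])
y_labels = [label for _, label in training]

clf = KNeighborsClassifier(n_neighbors=1).fit(X, y_labels)

# Classify a new piece so the player can match it to a detected emotion.
print(clf.predict(features("new_song.wav").reshape(1, -1)))
```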
The way you listen to music: effect of swiping direction and album arts on adoption of music streaming application
Published in Behaviour & Information Technology, 2021
A great deal of previous research into music has focused on music information retrieval (e.g. Lamere 2008; Sturm 2014; Byrd and Simonsen 2015), which describes how algorithms compile large bodies of data, extract musical characteristics from audio, and produce recommendations for listeners (Lange and Frieler 2018). Music Information Retrieval (MIR) is a rapidly growing interdisciplinary research area that encompasses computer science and information retrieval, musicology and music theory, audio and digital signal processing, cognitive science, library science, publishing, and law (Futrelle and Downie 2003). MIR has prioritised several research areas: representation, the presentation of music materials in digital form (Naveda and Leman 2010; Rossetti and Manzolli 2019); indexing, the association of databases with music materials to ease retrieval (Kelly 2010; Shen et al. 2019); compression, efficient audio encoding using compression technologies (Cilibrasi, Vitányi, and Wolf 2004; Louboutin and Meredith 2016); user interface design, easy interfaces for searching and finding musical materials in a collection (Wilkie, Holland, and Mulholland 2010; Xambó et al. 2017); metadata, the descriptive and contextual information managed through a MIR system (Mandel and Ellis 2008; Long, Bonjack, and Kalwara 2019); intellectual property rights, the ownership and distribution of music materials through content providers (Thibeault 2012; Zhang 2018); and musical analysis, the organisation of musical compositions and the fulfilment of musicologists' needs through MIR (Brown and Smaragdis 2004; Bhalke, Rao, and Bormane 2016). Our current research is most closely aligned with user interface design, but goes further, since we combine both design (album art presented in the interface) and touch (swiping direction) in the experience of music streaming applications.
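As a concrete illustration of the recommendation side mentioned at the start of this excerpt, the sketch below ranks catalogue tracks by cosine similarity between audio feature vectors. The feature vectors and toy catalogue are assumed inputs (e.g. MFCC or chroma summaries extracted beforehand); this is not the algorithm of any work cited above.

```python
# Illustrative sketch of content-based recommendation: rank catalogue
# tracks by cosine similarity of their audio feature vectors. The
# vectors themselves are assumed to have been extracted already.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def recommend(query: np.ndarray,
              catalogue: dict[str, np.ndarray],
              k: int = 3) -> list[str]:
    """Return the k catalogue tracks most similar to the query vector."""
    ranked = sorted(catalogue,
                    key=lambda name: cosine(query, catalogue[name]),
                    reverse=True)
    return ranked[:k]

# Hypothetical 4-dimensional feature vectors for a toy catalogue.
rng = np.random.default_rng(0)
catalogue = {f"track_{i}": rng.normal(size=4) for i in range(10)}
print(recommend(rng.normal(size=4), catalogue, k=3))
```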