Earlier this week we had the chance to catch up with Ajay Srinivasamurthy, an Applied Scientist at Amazon in Bengaluru who is also an active member of the Music Information Retrieval community both in India and abroad. Ajay took us through his research interests, some of his past and current projects, and the career path that has brought him to a point where he is able to carry out research in both Speech Processing and Music Information Retrieval. He was also kind enough to point us to a whole bunch of interesting online resources related to his research, which we have listed at the bottom of this post, so make sure you don’t miss those if his area of work is something that interests you as well!

This is the first of many such conversations we plan to have with those in India engaged professionally with Music Technology. We hope that our readers will have much to learn from the experiences of these individuals and that this will help them gain valuable insights into the field and inspire them to shape their own careers in the future.

So without further ado, let’s read what Ajay had to say!

MTC – India: Could you introduce yourself to our readers?

Ajay: I am a machine learning researcher and a music enthusiast currently based out of Bengaluru, India. My research interests include machine learning and signal processing for speech, audio and music. I am currently an Applied Scientist at Amazon India working on speech technologies for Alexa, after a short stint as a postdoctoral researcher at the Idiap Research Institute, Martigny, Switzerland. I also volunteer at IIT Dharwad, where I teach short courses on music technology and help build a music technology community at IIT Dharwad and in the surrounding region.

I obtained a PhD from Universitat Pompeu Fabra, Barcelona, Spain, working at the Music Technology Group (MTG) led by Prof. Xavier Serra. Before joining the MTG, I was a research assistant at the Georgia Tech Center for Music Technology, Atlanta, USA, where I worked with Dr. Parag Chordia. I have a Master’s in Signal Processing from the Dept. of Electrical Communication Engineering, Indian Institute of Science, Bangalore, India.

I am a fan of percussion instruments and I am learning the Carnatic classical percussion instrument “Mridangam”. I also love to play the Morsing (Jew’s harp).

You can find out more about me on my website: http://www.ajaysrinivasamurthy.in

MTC – India: When it comes to Music Technology, what have your areas of interest been? What would you like to talk to us about today?

Ajay: Within Music Technology, I primarily work on Music Information Research (MIR) for audio, which aims to extract musically meaningful information from audio music recordings. Hence, the object of interest is an audio recording, from which we extract descriptors along different musical dimensions. For music listeners, this has applications in music discovery, which aims to organize large music collections in relevant ways and provide enriched interaction with them. For musicologists, it has applications in computational musicology, by providing tools for the systematic analysis of large-scale music collections. In music education, learners and teachers can benefit from these descriptors to supplement learning and make music practice more effective.

My main interest is the extraction of rhythmic descriptors from audio recordings of Indian art music. Rhythm in Indian art music is sophisticated and layered, with hierarchical rhythmic and metrical structures. These structures provide a basis for rhythmic patterns, with significant scope for improvisation in performance. My work revolves around automatic methods to extract and describe these metrical structures and rhythmic patterns from audio music recordings. A short summary of my PhD work, along with data, code, examples and additional resources, is available here: http://www.ajaysrinivasamurthy.in/phd-thesis
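
To give a flavour of what extracting a basic rhythmic descriptor from a recording looks like in practice, here is a minimal sketch using the Essentia library (listed under Resources at the end of this post). It estimates the tempo and beat positions of a recording; the filename is a placeholder, and this off-the-shelf beat tracker only illustrates the kind of analysis involved, not the specific methods developed in Ajay’s research.

# Minimal sketch: estimate tempo and beat positions with Essentia's
# off-the-shelf beat tracker. 'my_recording.mp3' is a placeholder path.
import essentia.standard as es

# Load the recording as a mono signal (44.1 kHz by default)
audio = es.MonoLoader(filename='my_recording.mp3')()

# RhythmExtractor2013 returns the overall tempo (BPM), beat positions
# (in seconds), a confidence value, additional tempo estimates and
# the intervals between consecutive beats
rhythm_extractor = es.RhythmExtractor2013(method="multifeature")
bpm, beats, confidence, _, beat_intervals = rhythm_extractor(audio)

print("Estimated tempo: %.1f BPM" % bpm)
print("First few beat positions (s):", beats[:5])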

MTC – India: Are you doing this/have you done this by yourself or with a team? Could you tell us about the team you’re working/have worked with?

Ajay: Over the past several years, I have worked with different research groups on these problems. I primarily worked at the MTG during my PhD days, with Prof. Xavier Serra leading our efforts, collaborating with research teams at IIT Madras (led by Prof. Hema Murthy), IIT Bombay (led by Prof. Preeti Rao) and Bogazici University in Istanbul, Turkey (with Prof. Ali Taylan Cemgil and Dr. Andre Holzapfel, who is now at KTH, Sweden).

At MTG, I was a part of the CompMusic project funded by the European Research Council to develop computational methods for the study of different music traditions around the world. From India, we focused on the art music traditions of Carnatic and Hindustani music, developing tools and techniques for automatic melodic, rhythmic and semantic analysis of these music cultures.

MTC – India: Whom does/did your work aim to reach? What impact has it had?

Ajay: Our work aimed to reach the listeners of these music cultures by providing them with tools and interfaces for enhanced interaction with music. An example is Dunya (https://dunya.compmusic.upf.edu/), which is both a music listening platform and a set of tools to access, organize and analyze the music collections of interest in the CompMusic project.

While there was isolated MIR work on Indian music before, CompMusic provided a significant boost to a structured, MIR-focused study of Indian art music by building data sources and formulating relevant problems within the field, through a collaboration of engineers, musicians, musicologists, music labels and music listeners around the world.

Specifically, my work on rhythmic analysis of Indian art music has been a part of the suite of tools we developed for Indian art music. The emphasis of my work (and that of CompMusic) has been on open research, where data, tools and code are available under open source licenses and can be built upon further. The data-driven nature of my work has led to the creation of several annotated research corpora for rhythm-related computational research in Carnatic music. These datasets can be used for building machine learning models for rhythm analysis of Indian art music.

Through tech transfer, we have been able to polish our research efforts into proofs of concept through the CAMUT (https://compmusic.upf.edu/camut) project. Other colleagues have taken the research further towards commercialization through spinoffs such as MusicMuni (www.musicmuni.com) in the music education space.

MTC – India: Are you open to collaborating with others on your work? Have you already established any collaborations?

Ajay: My experience has been that collaborative efforts are necessary for any significant progress in a research field. The field of MIR itself is quite interdisciplinary, and it needs significant inputs from engineers, musicologists, musicians and listeners to identify and solve any relevant problem.

I have continued collaborations with the MTG, IIT Madras, IIT Bombay and IIT Dharwad, but I am open to building further collaborations to work on interesting problems around Indian music. Any opportunity for collaboration with musicians, musicologists and engineers is welcome.

MTC – India: What are some exciting questions in your field of work? How can someone contribute to it?

Ajay: The fields of MIR and automatic rhythm analysis have barely scratched the surface of what can be accomplished. There are several open questions in research and enquiry, and several market needs that can potentially be tapped for commercial exploitation. Being a new and upcoming field, there is significant scope to contribute with more data and with the application of recent AI advances to music, such as deep learning and semantic analysis.

Within automatic rhythm analysis, the extraction and description of rhythm patterns will help to further content-based music recommendation. Evaluation of rhythmic patterns to provide automatic feedback on performance can help music students improve their skills and supplement teacher feedback. Tracking long metrical cycles in Hindustani music, modeling expressive timing, modeling the variety of patterns played and analyzing improvisatory passages are some other challenging problems waiting to be addressed.

To contribute to some of these problems, an interest in Music Technology is the primary need. It is difficult for one person to have all the music and engineering skills to build something useful, so it is essential to form cross-disciplinary collaborations, build a community and work on relevant problems within it.

MTC – India: How, in your view, has the Indian Music Technology scene changed over the past 10 years? And what are your thoughts about where we’re headed in the coming years?

Ajay: The Indian music technology scene has evolved both in size and influence over the decade since I first got exposed to it. There has been a significant musicology community for many decades, and hence music technology, in particular MIR, has been able to learn from and use that body of knowledge to formulate relevant problems. Early work in music technology research was isolated and discontinuous, while the last ten years have seen a consolidation of efforts at various universities in India. The CompMusic project itself has added several hundred hours of data and a large body of research, opening up several relevant research problems.

In addition, the community of musicians and music listeners has opened up to the use of technology to enhance their experience. Information technology solutions have opened up the community to several online forums that discuss music concerts and concepts, and to the use of technology tools for disseminating music-related information. Online streaming services have improved access to music. These services are now looking to enhance their offerings with personalization and recommendation, both of which will need MIR tools and content-based analysis of audio. Startups in the MIR space (e.g. Sensibol, MusicMuni) are now using advanced machine learning techniques for music listening and education.

With a significant community working on music technology in India, available data and tools, and interest from the industry in developing these technologies, there is now enough of a support system for sustained research and development of music technologies. It is now possible for an enthusiast to pick up a relevant problem and start working on it with the mentorship of the music technology community in India.

With a growing music audience in India, I see the field expanding to address technology needs specific to India, with applications around personalization, music education and music appreciation. Music technology research in India is expected to build on all the data and tools to explore MIR techniques for Indian music and build music AI applications targeted towards Indian audiences. A deeper understanding of, and tools for, the computational musicology of Indian music might help us build solutions specific to Indian needs.

In general, I see good potential in music perception and cognition research for Indian music, which might help us build relevant technologies in addition to adding to our scientific enquiry. Further, I would love to see music technology applied to the conservation and growth of the rich musical heritage of India, with all its art and folk music traditions.

Resources

The resources below point mainly to communities (forums, research groups, organizations, companies), data (audio, metadata) and technologies (tools, libraries, products, courses) around music technology:

Communities:

compmusic-friends (https://compmusic.upf.edu/node/5): Join if you wish to stay up to date on efforts in CompMusic.

rasikas.org (www.rasikas.org): A forum of Carnatic music listeners discussing music concerts and concepts.

Data:

MusicBrainz (https://musicbrainz.org/): A community curated encyclopedia of editorial metadata of commercially produced music.

CompMusic datasets and corpora for Indian art music research, including datasets for automatic rhythm analysis: https://compmusic.upf.edu/corpora and https://compmusic.upf.edu/datasets

Technologies:

Dunya (https://dunya.compmusic.upf.edu/): Dunya comprises the music corpora and related software tools that have been developed as part of the CompMusic project.

Saraga (https://compmusic.upf.edu/node/356): A music appreciation application for lovers of Indian art music.

Essentia (http://essentia.upf.edu): An open-source library and set of tools for audio and music analysis, description and synthesis.

Audio Signal Processing for Music Applications (https://www.coursera.org/learn/audio-signal-processing): An open online course on audio signal processing methodologies that are specific to music and of use in real applications.