We are at a special moment in technology, where the ease of creating, distributing and collaborating has allowed tremendous creative minds to emerge from their living rooms. Are you a musician, developer, artist, producer, designer, researcher, or none of the above but deeply motivated by the intersection of music and technology? Then this is for you! Do join us.

In this edition of the music tech meet-up, four speakers will talk about their experiences building art and music generation systems driven by machine learning, in both industry and academia. The speakers will also demonstrate some recent developments in the field of AI-based art and music generation. It is an exciting time for new media artists, with tools like machine learning and programming seamlessly available to the creative minds of today. Just as the intelligent brushes of Photoshop augmented the abilities of the traditional painter, music now has a whole new world of possibilities.

We look forward to seeing you at the event! Register (for free) here.

Venue: 91 Springboard, Salarpuria Tower – 1, 7th Block, Koramangala

Date & Time: December 29th, 2018 (16:00-18:00 IST) 

Inquiries: +91 8197238177 (Sid)

Programme

* 16:00 – Networking

* 16:10 – Introduction

* 16:15 – Talk by Harshit on “Making art using AI: The evolution of the cyborg artist”

* 16:45 – Talk by Srikanth on “Music generation using ML at Jukedeck”

* 17:20 – Combined talk by Albin and Manaswi on “Intelligent Music Production”

* 17:40 – More networking

* 18:00 – Continue discussions at a nearby bar or cafe with those interested

About the Speakers

Harshit Agarwal [website]

Harshit is a new media artist and human-computer interaction (HCI) researcher. Through his artwork, he creates experiences for people to explore and express themselves with seemingly distant technologies like artificial intelligence/machine learning, drones, digital fabrication, sensors and augmented reality, and in the process invites people to reflect upon and re-evaluate their relationship with technology. Often, these artworks are tools to study how technology can blend with and enhance human creative expression. Much of his work focuses on the interplay between human and machine imaginations and intentions, spanning virtual and physical embodiments.

Harshit is a graduate of the Fluid Interfaces group at the MIT Media Lab and the Indian Institute of Technology (IIT Guwahati). He has carried out art residencies at various places to develop his practice in diverse cultural contexts, including at the Art Center Nabi (Seoul), the Museum of Tomorrow (Rio de Janeiro) and Kakehi-Lab (Tokyo/Yokohama). His works have been exhibited at premier art festivals and museums around the world, such as the Ars Electronica Festival, Tate Modern, the Asia Culture Center (at the Otherly Spaces/Knowledge exhibition curated by Kazunao Abe-san), the QUT Art Museum (Why the Future Still Needs Us exhibition), the Museum of Tomorrow, Alt-AI (at the School for Poetic Computation, NYC), the Art Center Nabi, Laval Virtual, the BeFantastic Festival (Bangalore, India) and ISEA. His works have also been extensively covered in international media. Alongside this, he has published several research papers on creation tools at human-computer interaction conferences, including SIGGRAPH, UIST, UbiComp, TEI, IUI and IDC.

Srikanth Cherla [website]

Srikanth is a Machine Learning Researcher at Jukedeck, where he contributes to the design and development of an AI music composer that employs a range of computational techniques to automatically generate music in different moods and styles. He was awarded a doctorate (PhD) in Computer Science in July 2016 by City, University of London, under the supervision of Artur Garcez and Tillman Weyde. His research involved developing novel neural-network-based machine learning models, as well as using existing ones to learn temporal patterns in musical scores and to classify non-musical data. He received a master’s degree (MSc) from the Music Technology Group at Universitat Pompeu Fabra and holds a bachelor’s degree (B.Tech.) in Computer Science and Engineering from the International Institute of Information Technology – Hyderabad.

Srikanth previously worked at Siemens Corporate Technology – India as a Research Engineer (2007–10) on human action recognition in video and event detection in environmental audio, among other video and audio analysis topics. He was a Research Assistant (2011–2012) at the Technologies for Acoustics and Audio Processing (TAAP) lab at Simon Fraser University, where he worked on digital waveguide synthesis techniques for the tenor saxophone. He also did a brief internship at PMC Technologies (2011), during which he assisted with work on regression methods for failure prediction in manufacturing units in the semiconductor industry.

He enjoys playing the guitar and has been playing mostly rock and heavy metal music for several years now as a hobby. He also holds a Grade 6 certification in Electric Guitar awarded by Rock School.

Manaswi Mishra [website]

Manaswi Mishra is a music technology researcher currently exploring Music Information Retrieval techniques for augmented learning of musical instruments (IIT Bombay). As a graduate student at the Music Technology Group, Barcelona, he researches data-driven methods for generating new timbres and textures of sound. He has spent a year at the Center for Computer Research in Music and Acoustics, Stanford, and has also worked as a researcher at Shazam (CA) and AdoriLabs (Bangalore). With an undergraduate degree in Engineering Physics from IIT Madras, his interests span physical modelling of sound and numerical synthesis to human-computer interaction, signal processing and computational creativity. Manaswi is also an active musician, with various audio-visual projects blending deep learning, creative coding and the arts.

Albin Correya [website]

Based in Barcelona, Spain, Albin is an interdisciplinary researcher working at the intersection of music and technology. His research interests centre on applying knowledge from Audio Signal Processing, Music Information Retrieval, Machine Learning, Natural Language Processing and Human-Computer Interaction to audio and music production environments.

Albin currently works as a research engineer at the Music Technology Group, Barcelona, where he investigates and develops algorithms for the automatic identification of cover song versions in collaboration with the German music start-up Flits. He previously worked at the French music streaming company Deezer at their Paris HQ. He holds an MSc degree in Sound & Music Computing from Universitat Pompeu Fabra, Barcelona, and a Bachelor’s degree in Computer Science from Mahatma Gandhi University, Kerala. His work has also been featured at various international music tech conferences and hackathons, such as Sónar+D 2017 (Barcelona), Ableton Loop 2017 (Berlin), HAMR@ISMIR 2018 (Paris), BnF Hackday (Paris) and Music Hackathon Bulgaria (Sofia). He is also an active music producer and multi-instrumentalist, and his compositions have been featured in award-winning documentaries and movies (IMDb). He is a great fan of open science and actively contributes to various open community initiatives around the world.