AXEL CHEMLA-ROMEU-SANTOS
I was born in 1993 and grew up in Paris, France. After a double undergraduate degree in Engineering Sciences & Music Theory, I specialized in acoustics and graduated from IRCAM with training in signal processing, computer science, and acoustics. I then took part in several artistic projects, obtained a Business Foundations Certificate @INSEAD, and entered the CRR93 to study electro-acoustic music.
Following my passion, namely exploring sound jointly through music, sound design, and science, I obtained a PhD grant split between the Università degli Studi di Milano (LIM) and Paris-Sorbonne (IRCAM), leading me to investigate the use of deep learning for sound generation since 2016. During this work I focused mainly on spectral variational auto-encoders and various ways of using and shaping their internal generative latent space (through perceptual, symbolic, and temporal criteria), raising artistic and design questions that pushed me towards a research & creation process. After publishing my thesis I was involved in several projects (MIDI generation, chord extraction), and continued my investigations into creative machine learning as a post-doctoral researcher in the ACIDS team, searching for ways to drive generative models towards extrapolative rather than interpolative or repetitive setups. I have also put great effort into developing and shaping accessible, usable machine learning techniques, working notably with Antoine Caillon to embed neural audio synthesis into artist-friendly environments (VST, Max, Pure Data, and maybe more soon!).
In parallel with my academic career, I have also pursued musical activities, notably through composition (both music and theatre), music production (including mixing and mastering), and performance (in several bands and personal projects). Notably, I have composed music for the Théâtre de la Suspension company since 2016, and I am co-composer in the electronic music trio Daim™, which explores media saturation, maximalism, and sonic experimentation within a dance-music aesthetic (and from which the collective w.lfg.ng emerged).
In 2022 I also organized and performed in several events around music and generative audio models: the acids workshops alpha & beta, allowing a totally free musical encounter between programmers, composers, and an audience around these techniques, as well as a joint show with circus performers involving artificial intelligence, real-time synthesis, and motion capture (together with the IRCAM-ISMM team). I also compose and experiment with these novel techniques myself, as with my musical piece aletheia, which was accepted at AIMC2022 and performed at both the Cirque Electrique (Paris) and the Grey Space in the Middle (The Hague). Indeed, it is essential to me to explore these techniques with artistic motivations, both because they can truly yield unheard sounds and musical setups, and because doing so reveals characteristics (wanted or unwanted) that would disappear entirely under standard use.
