The widespread availability of equipment and processes for creating sound spaces in digital audio has been accompanied by a musical and technical convergence toward certain ‘mainstream’ spatialization processes based on the acoustic modeling of point sources in a reverberant environment. But the question arises of how far this approach to sound spatialization embraces and grasps sound space, both conceptually and physically. In fact, a number of approaches, such as the spatial sound synthesis and spatial sound processing we demonstrate in this paper, have shown great generative and expressive potential. These approaches have led to experiments with new representations and interfaces that go beyond the usual Euclidean framework for manipulating sound spaces. The generalization of this research led us to design the ERC Advanced Grant project G3S, which combines machine learning and sound spatialization, and which we outline here in terms of generativity, operative representation, description of spatiality, and exploration interfaces.

Available on Zenodo