Yiying Tang


© Yiying Tang - All Rights Reserved

Adaptive Acoustics

AI-generated Concert Hall

According to Darrell M. West's report, artificial intelligence has three qualities: intentionality, intelligence, and adaptability. This thesis project explores the adaptability of acoustic design in architecture. We propose a human/AI collaborative design process focused on creating spaces that alter and accentuate the acoustic qualities of architecture.

Architecture is a field highly dependent on a system of visual orders. In line with this point of view, we begin by asking whether the visual appeal of architecture has overshadowed the other qualities and criteria by which architectural design may be experienced. One such undervalued and often overlooked criterion is sound.

Within our project, we explore how sound can influence the spaces we design through an AI-driven platform for architectural-acoustic experimentation. Based on an initial dataset of 2,000 concert hall interiors, our neural network is trained to generate its own interpretation of acoustic space by adapting existing volumes into acoustic forms. This AI-driven tool can help shape the early stages of design for concert halls and acoustic music spaces. Through this process, we are poised to consider opportunities for collaboration and shared authorship between ourselves and artificial intelligence.

After studying spaces with different acoustical performances, our goal is to create a generator: given any input mesh volume together with spatial and acoustic parameters, our neural network transforms it into a feasible acoustic mesh model.

First, we established two datasets: one of concert hall interior meshes, the other of spatial and acoustic parameters. We paired them up and, after training, obtained a trained neural network. We then ran an optimization framework on the network to produce AI-generated concert halls with acoustical features. Finally, we placed the generated halls into simulation software to measure their feasibility.
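The pairing-and-training step can be sketched in miniature. The snippet below is only an illustration of the idea, not the project's actual Graph CNN: a tiny two-layer network (in plain NumPy) learns a mapping from parameter vectors to per-vertex mesh displacements on synthetic paired data. All shapes, names, and the synthetic dataset are assumptions for demonstration.

```python
import numpy as np

# Illustrative stand-in for the mesh/parameter training step.
# The real project trains a Graph CNN on 2,000 concert hall interiors;
# here a toy MLP maps acoustic parameters to vertex displacements.
rng = np.random.default_rng(0)
N_VERTS, N_PARAMS, HIDDEN = 64, 3, 32

# Synthetic paired dataset: parameter vectors -> flattened displacements.
params = rng.normal(size=(200, N_PARAMS))
true_W = rng.normal(size=(N_PARAMS, N_VERTS * 3)) * 0.1
displacements = params @ true_W  # hypothetical "ground truth" pairs

W1 = rng.normal(size=(N_PARAMS, HIDDEN)) * 0.1
W2 = rng.normal(size=(HIDDEN, N_VERTS * 3)) * 0.1

def forward(x):
    h = np.tanh(x @ W1)       # hidden activations
    return h, h @ W2          # predicted vertex displacements

losses = []
lr = 0.05
for _ in range(300):
    h, pred = forward(params)
    err = pred - displacements
    losses.append(float((err ** 2).mean()))
    # Backpropagation through the two layers.
    gW2 = h.T @ err / len(params)
    gh = err @ W2.T * (1 - h ** 2)
    gW1 = params.T @ gh / len(params)
    W2 -= lr * gW2
    W1 -= lr * gW1

print(f"loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

The trained network then plays the role of the generator: new parameter vectors produce new displacement fields, which the optimization framework refines.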

This is our dataset and the weighting of its parameters. Using a controlled-variable method, we ranked the impact factors and found that mesh form, mesh count, and volume have the greatest influence.
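A controlled-variable (one-factor-at-a-time) ranking can be sketched as follows. The score function, parameter names, and weights here are entirely hypothetical stand-ins for the project's real simulation and dataset; the point is only the method of varying one parameter around a baseline and ranking parameters by the change they cause.

```python
# Hypothetical acoustic score standing in for the real simulator.
def acoustic_score(mesh_form, mesh_count, volume, absorption):
    return 3.0 * mesh_form + 2.0 * mesh_count + 1.5 * volume + 0.2 * absorption

baseline = {"mesh_form": 1.0, "mesh_count": 1.0, "volume": 1.0, "absorption": 1.0}
delta = 0.5

# Vary one parameter at a time; measure the swing in the score.
impact = {}
for name in baseline:
    lo = dict(baseline); lo[name] -= delta
    hi = dict(baseline); hi[name] += delta
    impact[name] = abs(acoustic_score(**hi) - acoustic_score(**lo))

ranking = sorted(impact, key=impact.get, reverse=True)
print(ranking)  # most -> least influential
```

With these invented weights the ranking comes out mesh form, mesh count, volume, absorption, mirroring the ordering reported above.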

For our final results, we tested our neural network on an unusual site for a concert hall: an urban context. We chose two sites, one for a small concert hall of around 300 people, the other for a large concert hall of around 2,000 people.

These are the 44 results and simulations for our two sites, produced with different input mesh shapes, mesh counts, and spatial and acoustic parameters. Some results perform well; others are less functional. The ellipse shape makes it easier for the network to reshape the output mesh. In the acoustic simulations, the behavior of sound waves is visualized through simulated particles, and the range of results demonstrates how sound travels within our forms.
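The particle idea behind such simulations can be shown with a toy example. The sketch below, a simplification and not the dedicated simulation software used in the project, bounces sound "particles" specularly off the walls of a rectangular room and counts how many pass near a listener; the room dimensions, source, and listener positions are invented.

```python
import numpy as np

# Toy 2D particle simulation: rays leave a source, reflect off the
# walls of a rectangular room, and we count arrivals at a listener.
rng = np.random.default_rng(1)
ROOM_W, ROOM_H = 20.0, 10.0            # hypothetical room, metres
source = np.array([2.0, 5.0])
listener = np.array([15.0, 5.0])

hits = 0
for _ in range(500):                    # 500 sound particles
    pos = source.copy()
    ang = rng.uniform(0, 2 * np.pi)     # random emission direction
    vel = np.array([np.cos(ang), np.sin(ang)])
    for _ in range(200):                # time steps
        pos = pos + vel * 0.2
        if pos[0] < 0 or pos[0] > ROOM_W:   # reflect off side walls
            vel[0] = -vel[0]
            pos[0] = np.clip(pos[0], 0, ROOM_W)
        if pos[1] < 0 or pos[1] > ROOM_H:   # reflect off floor/ceiling
            vel[1] = -vel[1]
            pos[1] = np.clip(pos[1], 0, ROOM_H)
        if np.linalg.norm(pos - listener) < 0.5:
            hits += 1
            break

print(f"{hits} of 500 particles reached the listener")
```

A real acoustic simulator additionally models absorption, diffusion, and arrival times against the generated 3D meshes, which is what distinguishes the well-performing halls from the less functional ones.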

Plan and section drawings are generated through a second neural network, a 2D style transfer model, which uses existing concert hall plans and section drawings to project an additional layer of information onto the plans and sections drawn from four selected models generated by our Graph CNN.
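The core of the style-transfer idea is matching Gram matrices of feature maps, which capture the texture statistics of the reference drawings. The sketch below illustrates only that loss term, with random arrays standing in for CNN activations; it is not the project's actual style-transfer model.

```python
import numpy as np

def gram(features):
    # features: (channels, height, width) -> (channels, channels)
    # Gram matrix of channel correlations, the usual style statistic.
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def style_loss(feat_a, feat_b):
    # Mean squared difference between the two Gram matrices.
    return float(((gram(feat_a) - gram(feat_b)) ** 2).mean())

rng = np.random.default_rng(0)
style_feat = rng.normal(size=(8, 16, 16))    # e.g. existing hall drawing
content_feat = rng.normal(size=(8, 16, 16))  # e.g. Graph CNN section

print(style_loss(style_feat, style_feat))    # identical features: zero loss
print(style_loss(style_feat, content_feat))  # mismatched textures: positive
```

Minimizing this loss (together with a content term) pushes the generated plan or section toward the drawing conventions of the reference set.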

Interior renderings give a sense of how a concert hall generated by the Graph CNN works.

This thesis project is, to our knowledge, the first 3D-to-3D neural network used in architecture. Our AI-driven tool can help influence the early stages of design for concert halls and acoustic spaces in the future.

Design Team: Yiying Tang, Maksim Drapey, Yubei Song