The science of acoustics is an old one: Galileo Galilei, Marin Mersenne, Robert Hooke and Félix Savart all contributed to the establishment of acoustics as a science. After the advent of calculus, the general wave equation was derived by the French mathematician and scientist Jean Le Rond d'Alembert. Hermann von Helmholtz then advanced the understanding of the mechanisms of hearing and the psychophysics of sound. Psychoacoustics remains important today, with applications in mechanical maintenance, noise assessment, emotional analysis, and more. It was against this background that, during the early 1900s, the American physicist Wallace Clement Sabine made his pioneering contributions; he is regarded as the founder of scientific architectural acoustics. More than 100 years later, scientists have established correlations for most of the acoustic criteria in use.
Acoustic research is applied in many fields, with architecture and urban construction among the most important. Early church designs already accounted for the influence of sound reflection, reverberation time, and other factors on the audience. Nowadays, sound-absorbing materials are in common use, and glass barriers serve as good sound insulators.
This paper aims to summarize much of the theoretical background of acoustics and examine a theoretical auditorium in the shape of a quarter of an ellipsoid and its acoustical properties.
2. Theoretical Background
2.1. Variables and Definition
The variables and their definitions are listed in Table 1.
The following assumptions are made: 1) the fluid is an ideal fluid, so there is no mechanical energy loss in the medium; 2) the medium is continuously distributed; 3) the medium is compressible; 4) the medium is homogeneous and in steady state in the absence of a sound wave; 5) the amplitude of the changing variables is very small compared to their values in the absence of a sound wave.
Table 1. Variables and definition.
Further, sound propagation is isentropic, i.e. the pressure changes adiabatically from the static value $P_0$ to $P_0 + p$, where $p$ is the sound pressure. When a wave travels from one medium to another, the two mediums remain in contact at the boundary.
From these assumptions, we next develop the fundamentals of acoustics. The following is a condensed version of the acoustic fundamentals put forth in the cited literature, where more explanation and detailed proofs can be found.
2.3. Harmonic Plane Waves in 1D
A harmonic plane wave is an ideal waveform whose wavefronts are all planes. Looking at harmonic plane waves in 1D, the sound pressure can be written in the separated form

$p(x, t) = p(x) e^{j\omega t},$

where $p(x)$ is a function of $x$ alone. Substituting this into the 1D wave equation for pressure, $\frac{\partial^2 p}{\partial x^2} = \frac{1}{c^2}\frac{\partial^2 p}{\partial t^2}$, gives

$\left( \frac{d^2 p(x)}{dx^2} + \frac{\omega^2}{c^2} p(x) \right) e^{j\omega t} = 0.$

Since the exponential factor $e^{j\omega t}$ is never zero, the above equation simplifies to

$\frac{d^2 p(x)}{dx^2} + \frac{\omega^2}{c^2} p(x) = 0.$

By introducing the wave number $k = \omega / c$, the above equation can take another form:

$\frac{d^2 p(x)}{dx^2} + k^2 p(x) = 0.$

A general solution to this differential equation is

$p(x) = A e^{-jkx} + B e^{jkx},$

where $A$ and $B$ are to be determined by boundary conditions. If considering both time and space,

$p(x, t) = A e^{j(\omega t - kx)} + B e^{j(\omega t + kx)}.$

Here, the first term denotes a wave propagating in the positive $x$-direction while the second term denotes a wave propagating in the negative $x$-direction.

To simplify the calculation, we consider only a forward-propagating wave, or one which follows

$p(x, t) = A e^{j(\omega t - kx)}.$

Given the boundary condition $p(0, t) = p_a e^{j\omega t}$, where $p_a$ is the pressure amplitude, the above formula can be transformed:

$p(x, t) = p_a e^{j(\omega t - kx)}.$

Taking the real part,

$p(x, t) = p_a \cos(\omega t - kx).$

From the equation of motion, $\rho_0 \frac{\partial v}{\partial t} = -\frac{\partial p}{\partial x}$, it can be found that $v = \frac{p}{\rho_0 c}$ for a forward-travelling wave. Applying this to 1D plane waves,

$v(x, t) = \frac{p_a}{\rho_0 c} e^{j(\omega t - kx)}.$

Thus it can be found that velocity has the same phase as sound pressure.
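The relations above are easy to check numerically. The following is an illustrative sketch, not part of the original study; all numeric values (frequency, amplitude, air properties) are assumed:

```python
import numpy as np

# Assumed values: a 1 kHz plane wave in air at roughly standard conditions.
rho0, c = 1.21, 343.0        # density [kg/m^3], speed of sound [m/s]
f = 1000.0                   # frequency [Hz]
omega = 2 * np.pi * f
k = omega / c                # wave number k = omega / c

pa = 1.0                     # pressure amplitude [Pa]
x, t = 0.25, 1e-3            # an arbitrary point and instant

# Complex representation of the forward-propagating wave
p = pa * np.exp(1j * (omega * t - k * x))

# From the 1D equation of motion, v = p / (rho0 * c)
v = p / (rho0 * c)

# Velocity has the same phase as pressure, so p/v is real and equals rho0*c
assert abs(np.angle(p) - np.angle(v)) < 1e-12
assert abs(p / v - rho0 * c) < 1e-9
```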
2.4. Characteristic Impedance and Wave Impedance
The characteristic impedance of sound is an intrinsic property of a medium and is defined as $Z_c = \rho_0 c$. For water in standard conditions, the characteristic impedance is about 1.5 × 10^6 Pa·s/m, while for air in standard conditions it is about 420 Pa·s/m.
Wave impedance is defined as the ratio of sound pressure to the velocity of the mass point, or

$Z_s = \frac{p}{v}.$

For plane waves, wave impedance is equal to characteristic impedance, or

$Z_s = \rho_0 c,$

meaning that its value is constant in space and time.
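The two characteristic impedances quoted above follow directly from $Z_c = \rho_0 c$; a quick numerical check with assumed standard-condition property values:

```python
# Characteristic impedance Z = rho * c for air and water.
# Property values are assumed standard-condition figures for illustration.
rho_air, c_air = 1.21, 343.0       # kg/m^3, m/s
rho_water, c_water = 998.0, 1500.0

Z_air = rho_air * c_air            # roughly 4.2e2 Pa*s/m
Z_water = rho_water * c_water      # roughly 1.5e6 Pa*s/m

assert 400 < Z_air < 430
assert 1.4e6 < Z_water < 1.6e6
```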
3. Energy Flux Density and Sound Intensity of a Plane Wave
Since the instantaneous energy flux density is $I = pv$, its time-averaged value in a plane wave can be found by

$\bar{I} = \frac{1}{T}\int_0^T pv \, dt = \frac{p_a^2}{2\rho_0 c}.$
An important thing to note is that one cannot simply multiply the complex representations of sound pressure and mass-point velocity; the real parts must be used. It should also be noted that the energy flux density is a vector pointing in the same direction as the velocity of the mass point.
Intensity can also be calculated directly from the complex representations by

$\bar{I} = \frac{1}{2}\operatorname{Re}(p v^*),$

where $v^*$ denotes the complex conjugate of $v$.
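The caution above, averaging the product of real parts rather than multiplying the complex representations directly, can be verified numerically. A sketch with assumed values:

```python
import numpy as np

# Assumed values: plane wave in air, amplitude 2 Pa at 500 Hz.
rho0, c, pa, f = 1.21, 343.0, 2.0, 500.0
omega = 2 * np.pi * f
t = np.linspace(0, 1 / f, 10000, endpoint=False)  # one full period

p = pa * np.exp(1j * omega * t)          # pressure at x = 0
v = p / (rho0 * c)                       # in-phase mass-point velocity

I_timeavg = np.mean(np.real(p) * np.real(v))   # average of real-part product
I_formula = pa**2 / (2 * rho0 * c)             # closed form for a plane wave
I_complex = 0.5 * np.real(p * np.conj(v))[0]   # (1/2) Re(p v*)

# All three agree; naively averaging p*v without conjugation would not.
assert abs(I_timeavg - I_formula) < 1e-9 * I_formula
assert abs(I_complex - I_formula) < 1e-12
```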
3.1. Transmission between Mediums: Boundary Conditions
Before examining the transmission of sound between mediums, the boundary conditions between the two mediums need to be examined. First, we choose an infinitely thin layer between the two mediums. Since the layer is infinitely thin, its mass approaches 0, meaning no net force should be applied on it. Therefore, the pressure exerted on the layer from either side should be the same. In the absence of sound waves, the pressure of the medium on either side is the same, so the sound pressure is the same at the boundary for any two materials, or

$p_1 = p_2,$

where the subscripts 1 and 2 denote the two different mediums.
We assumed that the two mediums are always in contact at the boundary, meaning that the normal velocity in the two mediums is the same, or

$v_{1n} = v_{2n},$

where the subscripts 1 and 2 again denote the two mediums and $n$ denotes the normal component of the velocity.
3.2. Reflection and Transmission: Normal Incidence
We define x = 0 as the boundary between two semi-infinite mediums, with the sound wave traveling from Medium 1 to Medium 2. Using the equation for sound pressure in harmonic plane waves, the sound pressure in the two mediums can be written:

$p_1 = p_{1a} e^{j(\omega t - k_1 x)} + p'_{1a} e^{j(\omega t + k_1 x)},$
$p_2 = p_{2a} e^{j(\omega t - k_2 x)} + p'_{2a} e^{j(\omega t + k_2 x)},$

where the first term in each equation denotes the forward-propagating component and the second term denotes the backward-propagating component. Because there is no backward-propagating component of the sound wave in the second medium, $p'_{2a} = 0$.

At the boundary, the two boundary conditions listed in the last section must be satisfied:

$p_1 \big|_{x=0} = p_2 \big|_{x=0}, \quad v_1 \big|_{x=0} = v_2 \big|_{x=0}.$

As noted in the section titled "Harmonic Plane Waves in 1D", $v_1$ and $v_2$ can be represented using sound pressure:

$v_1 = \frac{1}{\rho_1 c_1} \left( p_{1a} e^{j(\omega t - k_1 x)} - p'_{1a} e^{j(\omega t + k_1 x)} \right), \quad v_2 = \frac{p_{2a}}{\rho_2 c_2} e^{j(\omega t - k_2 x)}.$

Substituting 0 in for x and applying the boundary conditions, the following equalities can be found:

$p_{1a} + p'_{1a} = p_{2a}, \quad \frac{p_{1a} - p'_{1a}}{\rho_1 c_1} = \frac{p_{2a}}{\rho_2 c_2}.$

Defining the coefficient of reflection for pressure to be $r_p = p'_{1a} / p_{1a}$ and denoting $\rho_1 c_1$ as $Z_1$ and $\rho_2 c_2$ as $Z_2$, we can solve for it by using the two boundary conditions:

$r_p = \frac{Z_2 - Z_1}{Z_2 + Z_1}.$

Similarly, we define the coefficient of transmission for pressure as $t_p = p_{2a} / p_{1a}$, and solve for it by:

$t_p = \frac{2 Z_2}{Z_1 + Z_2}.$

We can also find the coefficients of reflection and transmission of velocity, $r_v$ and $t_v$, as

$r_v = -r_p = \frac{Z_1 - Z_2}{Z_1 + Z_2}, \quad t_v = \frac{2 Z_1}{Z_1 + Z_2}.$

Also, the coefficients of reflection and transmission of intensity, $r_I$ and $t_I$, can be found:

$r_I = r_p^2 = \left( \frac{Z_2 - Z_1}{Z_2 + Z_1} \right)^2, \quad t_I = \frac{4 Z_1 Z_2}{(Z_1 + Z_2)^2}.$

It can be seen that $r_I + t_I = 1$, meaning intensity is conserved under normal incidence. It can also easily be proven that the transmission and reflection of intensity are the same when traveling from Medium 2 to Medium 1.
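These normal-incidence coefficients are straightforward to evaluate; a minimal sketch for an air-to-water boundary (property values assumed for illustration):

```python
# Pressure and intensity reflection/transmission coefficients at normal
# incidence from air (Medium 1) into water (Medium 2). Values assumed.
Z1 = 1.21 * 343.0        # characteristic impedance of air [Pa*s/m]
Z2 = 998.0 * 1500.0      # characteristic impedance of water [Pa*s/m]

r_p = (Z2 - Z1) / (Z2 + Z1)          # pressure reflection coefficient
t_p = 2 * Z2 / (Z1 + Z2)             # pressure transmission coefficient
r_I = r_p**2                         # intensity reflection coefficient
t_I = 4 * Z1 * Z2 / (Z1 + Z2)**2     # intensity transmission coefficient

# Intensity is conserved at normal incidence: r_I + t_I = 1
assert abs(r_I + t_I - 1.0) < 1e-12
# Nearly all incident energy is reflected at an air-water boundary
assert r_I > 0.998
```

The large impedance mismatch is why sound crosses an air-water interface so poorly: over 99.8% of the incident intensity is reflected.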
3.3. Reflection and Transmission: Oblique Incidence
To find the coefficient of reflection and transmission of oblique incidence, we set up the following model (Figure 1).
The incident ray meets the boundary between the two mediums at the origin ($x = 0$, $z = 0$), with the boundary lying in the plane $z = 0$. Its incidence angle is $\theta_i$; a part of the ray, $p_r$, reflects with angle of reflection $\theta_r$, while the rest, $p_t$, is transmitted into Medium 2 with angle $\theta_t$.

From the wave equations for pressure, we know that each wave satisfies the Helmholtz equation

$\nabla^2 p + k^2 p = 0.$

The boundary conditions in this case are

$(p_i + p_r) \big|_{z=0} = p_t \big|_{z=0}, \quad (v_{in} + v_{rn}) \big|_{z=0} = v_{tn} \big|_{z=0}.$

The solutions of each of the waves to the Helmholtz equation are

$p_i = p_{ia} e^{j[\omega t - k_1 (x \sin\theta_i - z \cos\theta_i)]}, \quad p_r = p_{ra} e^{j[\omega t - k_1 (x \sin\theta_r + z \cos\theta_r)]}, \quad p_t = p_{ta} e^{j[\omega t - k_2 (x \sin\theta_t - z \cos\theta_t)]},$

where $k_1 = \omega / c_1$ and $k_2 = \omega / c_2$, taking Medium 1 to occupy $z > 0$. Therefore, the total pressure in Medium 1 is $p_1 = p_i + p_r$, while the pressure in Medium 2 is $p_2 = p_t$.

The normal velocity in Medium 1 can be expressed as

$v_{1n} = \frac{\cos\theta_i}{\rho_1 c_1} p_i - \frac{\cos\theta_r}{\rho_1 c_1} p_r.$

Meanwhile, the normal velocity in Medium 2 can be expressed as

$v_{2n} = \frac{\cos\theta_t}{\rho_2 c_2} p_t.$
Figure 1. The coefficient of reflection and transmission of oblique incidence.
Applying our two boundary conditions at $z = 0$,

$p_{ia} e^{-j k_1 x \sin\theta_i} + p_{ra} e^{-j k_1 x \sin\theta_r} = p_{ta} e^{-j k_2 x \sin\theta_t},$
$\frac{\cos\theta_i}{\rho_1 c_1} p_{ia} e^{-j k_1 x \sin\theta_i} - \frac{\cos\theta_r}{\rho_1 c_1} p_{ra} e^{-j k_1 x \sin\theta_r} = \frac{\cos\theta_t}{\rho_2 c_2} p_{ta} e^{-j k_2 x \sin\theta_t}.$

The two equations listed above need to be satisfied for every value of x along the boundary. Therefore, the exponents must match, giving

$\theta_i = \theta_r, \quad \frac{\sin\theta_i}{c_1} = \frac{\sin\theta_t}{c_2},$

where the second equality is also Snell's Law for sound waves.

We further get, by substituting in x = 0,

$p_{ia} + p_{ra} = p_{ta}, \quad \frac{\cos\theta_i}{\rho_1 c_1} (p_{ia} - p_{ra}) = \frac{\cos\theta_t}{\rho_2 c_2} p_{ta}.$

The reflection and transmission coefficients can then be calculated by

$r_p = \frac{p_{ra}}{p_{ia}} = \frac{Z_2 \cos\theta_i - Z_1 \cos\theta_t}{Z_2 \cos\theta_i + Z_1 \cos\theta_t}, \quad t_p = \frac{p_{ta}}{p_{ia}} = \frac{2 Z_2 \cos\theta_i}{Z_2 \cos\theta_i + Z_1 \cos\theta_t}.$

Another way to represent this is by defining the effective impedances $Z_1' = Z_1 / \cos\theta_i$ and $Z_2' = Z_2 / \cos\theta_t$; using a method similar to the last section, we can get

$r_p = \frac{Z_2' - Z_1'}{Z_2' + Z_1'}, \quad t_p = \frac{2 Z_2'}{Z_1' + Z_2'}.$

Similar to the last section, the intensity coefficients are

$r_I = r_p^2 = \left( \frac{Z_2' - Z_1'}{Z_2' + Z_1'} \right)^2, \quad t_I = \frac{4 Z_1' Z_2'}{(Z_1' + Z_2')^2},$

with $r_I + t_I = 1$.
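The oblique-incidence coefficients, together with Snell's law, can be sketched the same way. Values are assumed for illustration; note that for air-to-water the incidence angle must stay below the critical angle, about 13°:

```python
import numpy as np

# Oblique-incidence coefficients via the effective impedances
# Z1' = Z1/cos(theta_i), Z2' = Z2/cos(theta_t). Property values assumed.
rho1, c1 = 1.21, 343.0       # Medium 1 (air)
rho2, c2 = 998.0, 1500.0     # Medium 2 (water)
theta_i = np.radians(10.0)   # incidence angle, below the critical angle

# Snell's law for sound: sin(theta_i)/c1 = sin(theta_t)/c2
sin_t = np.sin(theta_i) * c2 / c1
assert sin_t <= 1.0, "beyond the critical angle: total reflection"
theta_t = np.arcsin(sin_t)

Z1e = rho1 * c1 / np.cos(theta_i)    # effective impedance of Medium 1
Z2e = rho2 * c2 / np.cos(theta_t)    # effective impedance of Medium 2

r_p = (Z2e - Z1e) / (Z2e + Z1e)
t_p = 2 * Z2e / (Z1e + Z2e)
r_I = r_p**2
t_I = 4 * Z1e * Z2e / (Z1e + Z2e)**2

# Intensity normal to the boundary is conserved
assert abs(r_I + t_I - 1.0) < 1e-12
assert 0 < theta_t < np.pi / 2
```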
4. Model Definition
After examining the theoretical background of acoustics, we now examine a specific auditorium design and its acoustical effects. This is done using the COMSOL application. Here, we will examine an auditorium in the shape of a quarter of an ellipsoid, as this type of design, with a narrow, low stage and a wider, taller audience platform, is more common nowadays in auditorium design. We first introduce some parameters (Table 2).
Table 2. Global parameters used to define the auditorium and its study.
Here, the first three variables denote the semi-axes of the auditorium in the x, y, and z-direction respectively. The next two denote the dimensions of the door of the auditorium, which is to be made from glass. Then, length_stand, height_stand, and width_stand denote, respectively, the length, height, and width of the audience platform. f0 denotes the frequency of the sound to test for, and is chosen because it is a common frequency made by human sounds. Then, a receiver is specified on the audience platform, located at (receiver_x, receiver_y, receiver_z). Because the application can only test for a specified number of rays, that number is denoted as number_rays. The variables volume_total and source_receiver_dist denote, respectively, the total volume of the auditorium and the distance between the source and the receiver.
Next, to build the physical setup of the auditorium, we build an ellipsoid with semi-axes as denoted above and use two blocks to represent the other three-quarters of the ellipsoid (Figure 2, Figure 3); we then take the difference to obtain a quarter of an ellipsoid. The door is located in the plane of y = 0 and has its center lying on the plane x = 0. The stand is located on an x-y plane with z-coordinate equal to height_stand and starts 0.5 m away from the entrance. Similarly, the stage is located on another x-y plane with z-coordinate equal to height_stage. The power of the sound is set to 1 mW because this is roughly the loudness desirable for a large auditorium such as this one. The stage is represented simply by a rectangle with part of it extending outside of the auditorium, but since all the wave propagation will be located within the auditorium, the parts that extend outside can be ignored. The absorption coefficient of the floor, the wall, and the glass (i.e. the door) as a function of frequency is as follows (Table 3, Figure 4).
5. Results and Discussions
5.1. Transmission Time and Loss
To examine the acoustics of the auditorium, we first look at how well sound is transmitted (Figure 5).
We can see in Figure 6 that the sound reaches the front of the audience with an intensity of roughly 75 dB, 0.02 seconds after the sound is emitted. This is about 10 dB below the level at the point of emission.
Figure 2. The completed auditorium as viewed from a y-z point of view.
Figure 3. The completed auditorium as viewed from an x-y point of view.
Figure 4. The absorption coefficient of the different materials as a function of frequency.
Figure 5. The sound intensity level at time 0.01 s after the emission of the signal.
Figure 6. The sound intensity level at time 0.02 s after the emission of the signal.
Table 3. The absorption coefficients of different materials for different frequencies.
After 0.05 seconds, the sound wave has reached roughly the middle of the auditorium, with sound intensity levels between 60 and 80 dB. An interesting phenomenon is that the sound has formed two distinct waves, with the strongest parts of the sound staying near the roof (Figure 7). This may be good in that the sound may be preserved for a longer time while the audience is not assaulted with loud sounds. It may also help the audience members towards the back to be able to hear the sound at similar loudness as the front. However, this phenomenon may also be bad because the signal strength at the audience level may be lower than ideal.
After 0.1 seconds, the sound wave has reached the back of the audience with loudness around 50 to 70 dB (Figure 8). The audience members towards the back hear the signal at around 25 dB lower than the audience members in the front, assuming no amplifiers are used. As with Figure 7, the strongest parts of the sound are inclined towards the top and the sides of the auditorium. This may work better if the other side of the hemi-ellipsoid is also built as part of the auditorium—the round-shape roof in essence serves as a funnel, allowing audience members at the back to hear a louder sound while also preventing the audience members towards the front from hearing sounds that are too loud. As it is, the phenomenon is not fully exploited, to the audience’s detriment.
Figure 7. The sound intensity level at time 0.05 s after the emission of the signal.
Figure 8. The sound intensity level at time 0.1 s after the emission of the signal.
5.2. Reverberation Time

A classical concern in architectural acoustics is the measurement of reverberation time, which is defined as the time needed for sound intensity to decrease to 10^{-6} of its original strength, i.e. for the sound intensity level to decrease by 60 dB. It has been suggested that a higher reverberation time reduces the intelligibility of speech, while a lower reverberation time makes the sounds made in the auditorium sound "dead"; a reverberation time of around 2 seconds is considered good for large halls, while the optimal reverberation time for small halls is around 1 second. However, Newman (1974) suggests that multipurpose auditoriums have optimal reverberation times of around 1.6 to 1.8 seconds, with extreme limits between 1.4 and 1.9 seconds.
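A rough cross-check on such figures is Sabine's classical formula, $T_{60} \approx 0.161 V / A$, where $V$ is the room volume and $A = \sum_i S_i \alpha_i$ is the total absorption. The ray-tracing study below does not use this formula directly; the sketch here uses an assumed room, not the auditorium of this paper:

```python
def sabine_rt60(volume_m3, surface_absorptions):
    """Sabine estimate T60 = 0.161 * V / A, where A = sum(S_i * alpha_i)
    is the total absorption in m^2 sabins."""
    A = sum(S * alpha for S, alpha in surface_absorptions)
    return 0.161 * volume_m3 / A

# Illustrative room (assumed numbers): a 4000 m^3 hall with floor, wall,
# and ceiling areas paired with assumed absorption coefficients.
surfaces = [(600.0, 0.30),   # floor: area [m^2], absorption coefficient
            (900.0, 0.10),   # walls
            (600.0, 0.05)]   # ceiling

t60 = sabine_rt60(4000.0, surfaces)
# Total absorption A = 180 + 90 + 30 = 300 m^2 sabins
assert abs(t60 - 0.161 * 4000.0 / 300.0) < 1e-12
assert 2.1 < t60 < 2.2
```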
First, looking at the auditorium as a whole, we can see that after 1.5 seconds the signal persists, but most of it is below 20 dB (Figure 9). This means that the reverberation time of the auditorium is roughly 1.5 s, which lies slightly below the optimal range. It also means that the auditorium as currently designed may not be the best auditorium for music.
Audience members sitting near the front first receive sound 0.02 seconds after the sound has been made, at a level of around 70 to 75 dB. The reverberation time for them is therefore the time at which the level they receive has dropped to about 10 to 15 dB, minus 0.02 seconds.
Figure 10 shows the sound intensity level 1.4 seconds after the emission of the signal. It can be seen that near the front of the audience, the sound intensity levels are distributed between roughly 15 and 25 dB. This is about 50 dB weaker than the sound the audience members near the front first receive. Therefore, it is likely that the reverberation time for audience members near the front is slightly longer than 1.4 seconds (Figure 10, Figure 11).
From this result, we can see that the signal for the audience members at the front has decayed to roughly 10 - 15 dB, which means the reverberation time for the audience members at the front is about 1.4 seconds. This may be slightly short, and the quick decay of the tones means the design of the auditorium could perhaps be improved.
Figure 9. The sound intensity level at time 1.5 s after the emission of the signal.
Figure 10. The sound intensity level at time 1.4 s after the emission of the signal.
Figure 11. The sound intensity level 1.44 s (as interpolated from 1.4 and 1.5) after the emission of the signal.
Audience members at the back first receive sound 0.1 seconds after the emission of the signal (Figure 8), at roughly 60 dB. The reverberation time for this region is therefore determined from the time at which it effectively stops receiving sound.
From Figure 12, we can see that sound is still present at the back of the audience 1.85 seconds after the emission of the signal. Meanwhile, it is not present in Figure 13, or 1.9 seconds after the emission of the signal. This means that reverberation time near the back of the audience is roughly 1.9 seconds. This is at the upper end of the suggested optimal reverberation time, meaning that the back of the audience, paradoxically, is perhaps better suited as a place to listen to music in the current design of the auditorium. The initial signal reaches it at around 60 dB, which is still a good loudness.
The other parts of the audience stand should have reverberation times lying between 1.4 and 1.9 seconds, which is a good but improvable range. The large range means that the different parts of the auditorium may have very different sound quality, so it should probably be improved upon.
However, it should be mentioned that the finding of 1.9 seconds as the reverberation time for the back of the audience is based on the formal definition of reverberation time, which may not be the most suitable in this case: even at 1.5 seconds, the sound intensity level at the back rarely rises above 20 dB, the loudness of a whisper or rustling leaves. This means that the perceived reverberation time is probably less than 1.9 seconds, which is a more satisfactory conclusion.
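In measurement practice, one rarely waits for the full 60 dB drop; instead, the slope of the level-versus-time decay is fitted over a limited range and extrapolated to -60 dB. A minimal sketch of that procedure, using synthetic decay data (assumed values, not the simulated levels above):

```python
import numpy as np

# Estimate T60 by fitting the decay slope and extrapolating to -60 dB
# (the T30-style approach). The decay below is synthetic: an ideal
# linear decay of 40 dB/s starting from an assumed 85 dB.
t = np.linspace(0, 1.5, 16)          # seconds, 0.1 s steps
level = 85.0 - 40.0 * t              # sound level in dB

# Fit over the -5 dB to -35 dB portion of the decay, as is conventional
mask = (level <= 85.0 - 5.0) & (level >= 85.0 - 35.0)
slope, _ = np.polyfit(t[mask], level[mask], 1)

t60 = -60.0 / slope                  # time to decay by 60 dB
assert abs(t60 - 1.5) < 1e-6         # 60 dB / (40 dB/s) = 1.5 s
```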
Figure 12. The sound intensity level 1.85 seconds (as interpolated from 1.8 and 1.9) after the emission of the signal.
Figure 13. The sound intensity level 1.9 seconds after the emission of the signal.
5.3. Disruption to the Performers
The sound signal is reflected back to the performers on stage, and this may be problematic for the performers as they get disrupted by the sounds they made before.
After 0.6 seconds, the reflected sound at the stage still has a loudness of about 40 - 50 dB (Figure 14). This may be a problem for faster pieces of music, in which the notes may be spaced less than 0.6 seconds apart. This may confuse the performer (or performers) as they try to sort out which sound is new and which is the reflection of a previous sound.
Figure 15 shows that the reflected sound at the stage decays to only about 20 - 30 dB one second after the sound is emitted. This is good news for slower pieces of music, for which a one-second delay is less significant.
5.4. Analysis of the Differences between Different Frequencies
For Figures 8-15, the frequency depicted is invariably 16 kHz. However, as seen in Figure 4, the absorption of the three materials differs across frequencies. Although this does not affect transmission time and loss (Figures 5-8), reverberation and the disruption of the performers are very much affected.
Figure 14. The sound intensity level near the stage 0.6 s after the emission of the signal.
Figure 15. The sound intensity level near the stage one second after the emission of the signal.
It is obvious from Figure 16 that 1.5 seconds does not represent the reverberation time for a 500 Hz signal. In fact, even after two seconds, the sound intensity level over the majority of the auditorium, including the audience area and the stage, is still between 30 and 50 dB, which is a very strong signal. The high density of the plotted points means that very few of the rays have decayed below the intensity threshold, also supporting the fact that the leftover sound at this point is still very loud. In fact, as seen in Figure 17, it is only after 4.6 seconds that the signal becomes insignificant, i.e. the reverberation time of the auditorium at 500 Hz is roughly 4.6 seconds.
However, it can be seen that even 4.6 seconds after the emission of the signal, some parts of the sound still reach 40 - 50 dB (Figure 17). It is possible that these represent places of constructive interference, but the more likely explanation, made more plausible by how many points remain (each representing one of 10,000 initial rays), is that the reverberation time is slightly longer than 4.6 seconds. This presents the problem that different frequencies have very different sound intensity levels at any given time until roughly five seconds after emission. Since almost all sounds comprise more than one frequency, this is a serious problem.

For speech, the fact that the low-frequency components persist much longer than the high-frequency components makes it much harder to articulate the sound well. As stated earlier, the long reverberation time itself makes it hard to articulate speech clearly, and the difference between high- and low-frequency components makes the problem worse. For music and bands, the persistence of low-frequency sound means that the lower-pitched instruments must play at slower paces, with notes separated by long time intervals, which limits the music itself.

One way to address this would be to increase the size of the auditorium: a larger auditorium would increase the absorption of the sound. However, this is not an ideal solution, as absorption would increase for all frequencies, with the higher-frequency components absorbed even faster; the reverberation time of the high-frequency components would thus decrease considerably, and the disparity would remain. A better way would be to increase the floor's absorption of low-frequency components.
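The spread in reverberation time across frequencies can be illustrated with Sabine-style estimates. The absorption coefficients and room dimensions below are assumed for illustration only; they are not the measured values of Table 3 or the auditorium's actual geometry:

```python
# How frequency-dependent absorption drives the spread in reverberation
# time, using Sabine's estimate T60 = 0.161 * V / (S * alpha).
# All numbers below are assumed for illustration.
V = 5000.0                           # assumed room volume [m^3]
S = 2000.0                           # assumed total surface area [m^2]
alpha = {500: 0.08, 16000: 0.35}     # assumed mean absorption by frequency [Hz]

t60 = {f: 0.161 * V / (S * a) for f, a in alpha.items()}

# Weak absorption at 500 Hz yields a several-fold longer reverberation
# time than at 16 kHz, mirroring the disparity observed in the figures.
assert t60[500] > 3 * t60[16000]
```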
Referring back to Figure 4, it can be found that one of the main problems with absorption is that the floor absorbs much less of the low-frequency components than high-frequency components. This is probably the main reason for the difference in reverberation times for low-frequency and high-frequency components. The way to solve this would be to build the floor from a material which absorbs the low-frequency components much better than the current material.
For disruption of the performers, we see that the time required for the leftover sound at the stage to decrease to roughly 25 - 35 dB is about 2.25 seconds for the 500 Hz signal (Figure 18). In comparison, for the 16 kHz signal, the loudness at the stage decayed to 20 - 30 dB within one second. Thus, the different absorption across frequencies is also very apparent for the performers, who will hear the low-frequency components for much longer than the high-frequency components. This may interrupt their performance, further motivating a floor material with higher absorption of low-frequency components.
5.5. Impulse Response of the Receiver
After looking at the auditorium in general, we look at the specific receiver which we have defined in the model definition section.
Figure 16. The sound intensity level of a 500 Hz signal 1.5 seconds after emission.
Figure 17. The sound intensity level of a 500 Hz signal 4.6 seconds after emission.
Figure 18. The sound intensity level of a 500 Hz signal 2.25 seconds (as interpolated from 2.2 and 2.3 seconds) after the emission of the signal.
From Figure 19, we can see that the sound pressure level, which is roughly equivalent to the sound intensity level, varies considerably across frequencies from 0 to 2000 Hz. This confirms the previous conclusion that absorption differs greatly between frequencies, and that this may interfere with the performers on stage.
Having looked at the impulse response across frequencies, we now examine the change in sound pressure over time for 500 Hz and 16 kHz impulses.
Figure 20 is the juxtaposition of Figure 21 upon Figure 22, and it shows that the impulse response of the 16 kHz signal is much stronger but decays much faster than the 500 Hz signal (note the different time scales of the three graphs).
Figure 19. The sound pressure level of different frequencies for the receiver.
Figure 20. The sound pressure against time for different frequencies for the receiver.
Figure 21. The sound pressure against time for a 16 kHz signal.
Figure 22. The sound pressure against time for a 500 Hz signal.
In fact, the strength of the 16 kHz signal can reach a magnitude about 10 times stronger than that of the 500 Hz signal. This means that for a piece of music, the high-frequency components may be much louder than the low-frequency components, perhaps causing the low-frequency components and instruments to be ignored. This is an undesirable effect, and needs to be fixed. Another problem with the impulse responses is that the 16 kHz signal—i.e. the high-frequency components—decay much faster than the 500 Hz signal—i.e. the low-frequency components. This means that audience members will hear the low-pitch sounds for much longer than the high-pitch sounds, and this may cause confusion and reduce the general quality of the music. Also, for speech, the low-frequency components remain for much longer, making it hard for the speaker to articulate well.
In conclusion, the theoretical auditorium outlined above is well-suited to music dominated by high frequencies, but much less desirable for speech or for music involving both high- and low-frequency components. This is probably due to the very different absorption coefficients of the floor at low and high frequencies, and one remedy would be to build the floor from a material that absorbs low frequencies much better than the current one. Also, the advantage of the ellipsoidal shape of the auditorium could be more fully exploited by building half of an ellipsoid rather than a quarter.
References

[1] Poveda-Martínez, P. and Ramis-Soriano, J. (2020) A Comparison between Psychoacoustic Parameters and Condition Indicators for Machinery Fault Diagnosis Using Vibration Signals. Applied Acoustics, 166, Article ID: 107364.
[2] Choi, W. and Pate, M.B. (2017) An Evaluation and Comparison of Two Psychoacoustic Loudness Models Used in Low-Noise Ventilation Fan Testing. Building and Environment, 120, 41.
[3] Kwon, G., Jo, H. and Kang, Y.J. (2018) Model of Psychoacoustic Sportiness for Vehicle Interior Sound: Excluding Loudness. Applied Acoustics, 136, 16-25.
[4] Gruber, L.N., Janowsky, D.S., Mandell, A.J., et al. (1984) A Psychoacoustic Effect upon Mood and Its Relation to Affective Instability. Comprehensive Psychiatry, 25, 106-112.
[5] Nowoświat, A., Olechowska, M. and Marchacz, M. (2020) The Effect of Acoustical Remedies Changing the Reverberation Time for Different Frequencies in a Dome Used for Worship: A Case Study. Applied Acoustics, 160, Article ID: 107143.
[6] Çakir, O., Sevinç, Z. and Emre Ilal, M. (2019) Characterization of Noise in Eating Establishments Based on Psychoacoustic Parameters. Applied Mechanics and Materials, 887, 539-546.
[7] Segura-Garcia, J., Navarro-Ruiz, J.M., Perez-Solano, J.J., et al. (2018) Spatio-Temporal Analysis of Urban Acoustic Environments with Binaural Psycho-Acoustical Considerations for IoT-Based Applications. Sensors, 18, 690.
[8] Elicio, L. and Martellotta, F. (2015) Acoustics as a Cultural Heritage: The Case of Orthodox Churches and of the "Russian Church" in Bari. Journal of Cultural Heritage, 16, 912-917.
[9] Cirillo, E. and Martellotta, F. (2009) Acoustics of Apulian-Romanesque Churches: Correlations between Architectural and Acoustic Parameters. Building Acoustics, 10, 55-76.
[10] Girón, S., Galindo, M. and Gómez-Gómez, T. (2020) Assessment of the Subjective Perception of Reverberation in Spanish Cathedrals. Building and Environment, 171, Article ID: 106656.
[11] Shtrepi, L. and Prato, A. (2020) Towards a Sustainable Approach for Sound Absorption Assessment of Building Materials: Validation of Small-Scale Reverberation Room Measurements. Applied Acoustics, 165, Article ID: 107304.
[12] Karmann, C., Bauman, F.S., Raftery, P., Schiavon, S., Frantz, W.H. and Roy, K.P. (2017) Cooling Capacity and Acoustic Performance of Radiant Slab Systems with Free-Hanging Acoustical Clouds. Energy & Buildings, 138, 676-686.
[13] Liu, M.Z., Wittchen, K.B. and Heiselberg, P.K. (2015) Control Strategies for Intelligent Glazed Façade and Their Influence on Energy and Comfort Performance of Office Buildings in Denmark. Applied Energy, 145, 43-51.
[14] Kim, K., Kim, B.S. and Park, S. (2006) Analysis of Design Approaches to Improve the Comfort Level of a Small Glazed-Envelope Building during Summer. Solar Energy, 81, 39-51.
[15] Meissner, M. (2017) Acoustics of Small Rectangular Rooms: Analytical and Numerical Determination of Reverberation Parameters. Applied Acoustics, 120, 111-119.
[16] Ziarko, B. (2018) Adaptation of Courtyards into Covered and Glazed Atriums and Its Impact on the Level of Acoustic Comfort inside—Case Study. Technical Transactions, 115, 133-140.