WO2009075926A1 - System and method for sound system simulation - Google Patents
System and method for sound system simulation
- Publication number: WO2009075926A1 (PCT application PCT/US2008/077628; US2008077628W)
- Authority: WO (WIPO, PCT)
- Prior art keywords: reverberation time, audio, measured, user, predicted
- Prior art date
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S7/00—Indicating arrangements; Control arrangements, e.g. balance control
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H1/00—Details of electrophonic musical instruments
- G10H1/0091—Means for obtaining special acoustic effects
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10K—SOUND-PRODUCING DEVICES; METHODS OR DEVICES FOR PROTECTING AGAINST, OR FOR DAMPING, NOISE OR OTHER ACOUSTIC WAVES IN GENERAL; ACOUSTICS NOT OTHERWISE PROVIDED FOR
- G10K15/00—Acoustics not otherwise provided for
- G10K15/08—Arrangements for producing a reverberation or echo sound
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S7/00—Indicating arrangements; Control arrangements, e.g. balance control
- H04S7/30—Control circuits for electronic adaptation of the sound field
- H04S7/305—Electronic adaptation of stereophonic audio signals to reverberation of the listening space
Definitions
- design system and simulation system are used interchangeably and refer to systems that allow a user to build a model of at least a portion of a venue, arrange sound system components around or within the venue, and calculate one or more measures characterizing an audio signal generated by the sound system components.
- the design system or simulation system may also simulate the audio signal generated by the sound system components thereby allowing the user to hear the audio simulation.
- reverberation can have a significant impact on the speech intelligibility of a venue. More importantly, listeners can distinguish very slight differences in reverberation that cannot be predicted using current ab initio simulation tools.
- Current sound system design-only systems can adequately predict sound coverage patterns or speech intelligibility coverage patterns for a modeled venue and sound system. These coverage patterns, however, are fairly coarse relative to the resolution of the human ear and cannot give the listener a realistic simulation of the modeled venue; the simulation the user hears may therefore differ substantially from the experience in the actual venue, an unpleasant surprise for a listener who assumed the simulation was accurate.
- the venue may still be modeled and a range of reverberation times provided, allowing the user to listen across that range and gain an appreciation of the possible listening experiences of the venue.
- the modeled venue may already exist and measured reverberation times for the existing venue may be available to the modeler.
- the modeler may enter the measured reverberation times for the existing venue into the simulation system and have the system automatically adjust the model to match the measured reverberation times.
- the adjusted model generates a simulation that more closely matches what the user would experience in the existing venue and allows the user to make a more precise evaluation of the modeled sound system.
- the reverberation characteristics of a venue may be viewed as having three regimes: an early reflections period, an early reverberant field period, and a late decaying tail period.
- the reverberant characteristics of the early reflections period are generally determined by characteristics such as the locations of audio sources, geometry of the venue, acoustic absorption of the venue surfaces, and the location of the listener.
- the reverberant characteristics of the early reverberant field period are generally determined by characteristics such as the scattering surfaces of the venue.
- the reverberant characteristics of the late decaying tail period are substantially determined by a reverberation time, RT, characterizing an exponential decay.
- a reverberation time characteristic is the RT60 time, which is the time it takes the reverberation in the late decaying tail period to decay by 60 dB.
- Other measures of the reverberation characteristic of the late decaying tail period may be used following the teachings described herein.
- the reverberation time, RT60, may be estimated from the absorption coefficient and area of each surface characterizing the venue using, for example, the Sabine equation.
- a listener is typically more sensitive to the reverberant characteristics of the late decaying tail period than the reverberant characteristics of the early reflections or early reverberant field periods.
- Matching a predicted reverberation time to a measured reverberation time gives the listener a more realistic simulation of the venue.
- Matching of the predicted reverberation time to the measured reverberation time may be accomplished by adjusting the acoustic absorption coefficient, hereinafter referred to as the absorption coefficient, of one or more surfaces of the modeled venue.
- the absorption coefficient is adjusted so that the predicted reverberation time value for the late decaying tail period matches the measured reverberation time value of the venue closely enough that the difference between the two values is barely perceived, if at all, by the listener.
- the absorption coefficient of a material may be frequency dependent.
- the audio spectrum is preferably discretized into one or more frequency bands and a predicted reverberation time value for each band is estimated using the absorption coefficient values corresponding to the associated band.
- Adjusting the absorption coefficients of the materials in the venue to match the reverberation time values also affects the reverberation characteristics of the early reflections and/or early reverberant field periods.
- the inventors have discovered, however, that adjustments to the absorption coefficient of the materials may be done such that the differences in the reverberation characteristics of the early reflections and early reverberant field periods arising from the adjustments are typically not noticeable by the listener.
- adjustments to the absorption coefficients are determined by a prioritized list of materials that are ranked according to a surface-area-weighted reflection coefficient.
- the modeled venue may contain more than one surface associated with the same material; to rank the materials, the total surface area associated with each material is used to calculate the index.
- the surface area associated with the m-th material is the sum of the surface areas of all surfaces associated with that material. If a ray-tracing method is used to predict a portion of the reverberation, the surface area associated with the m-th material is instead weighted according to the number of ray impingements on its surfaces (see the sketch below), where A(m) is the total surface area associated with the m-th material, Atot is the total surface area of the venue, n(i) is the number of impingements on the i-th surface, the sum is taken over all surfaces associated with the m-th material, and ntot is the total number of ray impingements.
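The equation referred to above is not reproduced in this text. A plausible reconstruction from the variable definitions, shown together with the ranking index described earlier (taking the reflection coefficient of the m-th material in band j as 1 - α(m,j)), is given below; both forms are assumptions rather than the published formulas:

```latex
\text{ranking index:}\quad \mu(m,j) = A(m)\,\bigl(1 - \alpha(m,j)\bigr)
\qquad
\text{ray-impingement weighting:}\quad A_{w}(m) = A_{tot}\,\frac{\sum_{i \in m} n(i)}{n_{tot}}
```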
- Adjustments to the absorption coefficient of the materials on the prioritized list are made according to the index of each material.
- the material with the largest index is adjusted first and if the adjustment to that material is sufficient to match the predicted reverberation time value to the measured reverberation time value, the remaining materials on the prioritized list are not adjusted.
- the magnitude of the adjustment may be limited by a pre-determined maximum adjustment value, MAV. If the material with the largest index is adjusted by its MAV and the reverberation time values still do not match, the material with the next largest index is adjusted up to its MAV, and so on, until either the values match or all the materials in the prioritized list have been adjusted by their respective MAVs.
- if the reverberation time values still do not match after all the materials have been adjusted, the system may alert the user to the mismatch and ask the user to allow an increase in the MAV.
- the MAV is selected to limit a change in the sound pressure level of a sound wave reflected by the surface.
- the MAV may be determined from an equation in which MaxDelta is the maximum change in the SPL of the reflected wave and α(i,j) is the absorption coefficient for the i-th surface in the j-th frequency band; a plausible form is sketched below.
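The published MAV equation is likewise not reproduced here. One form consistent with limiting the change in reflected SPL to MaxDelta, offered only as an assumption, is:

```latex
\mathrm{MAV}(i,j) = \bigl(1 - \alpha(i,j)\bigr)\left(1 - 10^{-\mathrm{MaxDelta}/10}\right)
```

This caps the adjustment so that the energy reflected by the surface, which is proportional to 1 - α(i,j), changes by no more than MaxDelta decibels.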
- MaxDelta may be set to a value in a closed range of 0.01 to 2 dB, preferably in a closed range of 0.1 to 1 dB, and more preferably in a closed range of 0.25 to 1 dB.
- the adjusted absorption coefficient may be clipped to ensure that the absorption coefficient is within the closed range of zero to one.
- a ranking based on the index described above enables the system to use the smallest adjustment to the absorption coefficient to match the reverberation time values while reducing the effects on the early reflections and early reverberant field periods arising from the adjustment to the absorption coefficient.
- Selecting a material having the largest surface area generally has the greatest effect on the reverberation time but also tends to affect the early reflection patterns from the material's surfaces.
- the change in the early reflection patterns may be reduced by selecting a surface with the lowest absorption coefficient or, equivalently, the highest reflection coefficient.
- Fig. 7 is a flowchart illustrating an exemplar process for matching predicted reverberation times to measured reverberation times.
- the audio spectrum is discretized into one or more frequency bands and the reverberation time is individually matched within each frequency band.
- the width of the frequency band may be selected by the user depending on a desired accuracy or on the available material data and preferably is between three octaves and one-tenth octave and more preferably is within a closed range of one octave to one-third octave wide.
- for each frequency band, the predicted reverberation time is compared to the measured reverberation time for that band.
- the reverberation times are considered matched if the absolute value of the difference between the predicted reverberation time and the measured reverberation time is less than or equal to a pre-defined value.
- the reverberation times are considered matched when the predicted reverberation time is within a pre-defined value of the measured reverberation time.
- if the reverberation times match, the process proceeds to the next frequency band, as indicated in step 720.
- the pre-defined value may be a user-defined value or a system-defined constant based on, for example, psycho-acoustic data.
- the predefined value may be selected such that the difference between the predicted and measured reverberation times is not perceptible by a listener.
- the predefined value may be less than 0.5 seconds, preferably less than 0.1 seconds, and more preferably less than or equal to about 0.05 seconds.
- if the reverberation times do not match, the absorption coefficient, α, of one or more materials may be adjusted such that the predicted reverberation time value matches the measured reverberation time value, as indicated in step 740.
- the magnitude of an adjustment, Δα, may be limited by a pre-defined maximum adjustment value, MAV, to limit the change to a material's α and, if necessary, to apportion the required adjustment over all the materials. If the maximum allowed adjustment to the first material is not sufficient to match the reverberation time values, the α of the second material is adjusted, and so on, until the α of every material has been adjusted by its respective MAV, as indicated in step 730.
- a new predicted reverberation time value is estimated based on the adjusted α of the materials, as indicated in step 750.
- the predicted reverberation time value is given by Sabine's equation, RT(j) = 0.161·V / ( Σ_i A(i)·α(i,j) + A'·Δα'(j) ), where RT(j) is the predicted reverberation time for the j-th frequency band, V is the volume in cubic meters, A(i) is the surface area, in square meters, of the i-th surface, α(i,j) is the absorption coefficient of the i-th surface in the j-th frequency band, A' is the surface area, in square meters, of the selected surface, and Δα'(j) is the change in absorption coefficient in the j-th frequency band of the material associated with the selected surface.
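A minimal sketch, in Python, of the matching loop described in steps 720 through 750 of Fig. 7. The Material data structure, the function names, and the specific MAV formula are illustrative assumptions; only the overall flow (per-band comparison against a tolerance, prioritized MAV-limited adjustment, clipping of α to the range zero to one, and re-prediction with Sabine's equation) follows the description above.

```python
from dataclasses import dataclass, field

@dataclass
class Material:
    name: str
    area: float                  # total surface area associated with the material, square meters
    alpha: dict = field(default_factory=dict)   # absorption coefficient per frequency band
    locked: bool = False         # locked materials are never adjusted

def sabine_rt(volume_m3, materials, band):
    """Predicted reverberation time (seconds) in one band via Sabine's equation."""
    total_absorption = sum(m.area * m.alpha[band] for m in materials)
    return 0.161 * volume_m3 / total_absorption

def max_adjustment(alpha, max_delta_db=0.5):
    """Assumed MAV: caps the change in the SPL of a reflected wave at max_delta_db."""
    return (1.0 - alpha) * (1.0 - 10.0 ** (-max_delta_db / 10.0))

def match_band(volume_m3, materials, band, rt_measured, tol=0.05):
    """Adjust absorption coefficients, largest-index material first, until the
    predicted RT is within tol seconds of the measured RT or every MAV is spent."""
    # Prioritized list: index = surface area times reflection coefficient, largest first.
    ranked = sorted((m for m in materials if not m.locked),
                    key=lambda m: m.area * (1.0 - m.alpha[band]), reverse=True)
    for m in ranked:
        if abs(sabine_rt(volume_m3, materials, band) - rt_measured) <= tol:
            return True
        # Change in this material's alpha that would hit the measured RT exactly ...
        target_absorption = 0.161 * volume_m3 / rt_measured
        current_absorption = sum(x.area * x.alpha[band] for x in materials)
        needed = (target_absorption - current_absorption) / m.area
        # ... limited to the material's MAV, then clipped so alpha stays in [0, 1].
        mav = max_adjustment(m.alpha[band])
        m.alpha[band] = min(1.0, max(0.0, m.alpha[band] + max(-mav, min(mav, needed))))
    return abs(sabine_rt(volume_m3, materials, band) - rt_measured) <= tol
```

In the process of Fig. 7, match_band would be run once per frequency band, moving to the next band when it succeeds and alerting the user when it returns False, the case in which every material has been adjusted by its MAV and the times still do not match.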
- Absorption coefficients may be modified to account for various occupancy levels of the modeled venue. For example, an absorption coefficient for a floor surface where the audience may sit may be modified depending on whether the surface is partially or fully covered by the audience or is empty.
- Fig. 8 illustrates a window that may be displayed to the user to show the status of the reverberation time matching process. In some embodiments, a wizard may be used to guide the user through the matching process.
- the window 800 includes a list box 820 displaying the reverberation time for each frequency band and a list control box 810 that allows the user to select a frequency width for the matching process.
- the user has selected a one octave frequency band and has entered the measured reverberation time values for each octave band in the list box 820.
- the window 800 includes a plot area 830 where the measured and predicted reverberation time values are displayed as a function of frequency, as indicated by lines 840 and 850, respectively. The plots of the measured and predicted reverberation time values allow the user to quickly see the mismatches between the measured and predicted reverberation time values.
- the wizard displays a list of materials associated with the surfaces in the modeled venue as shown, for example, in Fig. 9.
- a table 910 is displayed listing each material 930, the absorption coefficient for the material at each frequency band 940, and the total surface area of each material in the modeled venue 950.
- a check box 920 next to each material allows the user to lock the absorption coefficients for that material. If the material is locked, the absorption coefficients for the locked material are not adjusted during the matching process.
- a user may lock a material when, for example, the user has measured absorption coefficient values for the material and is confident in their accuracy.
- the reverberation time matching process is executed and the results displayed to the user as shown, for example, in Fig. 10.
- the measured reverberation time values are plotted as a function of frequency 1040 along with the new predicted reverberation time values 1050 to allow the user to graphically review the matching.
- the user may select another tab 1020, 1030 to view the matching results in different formats. For example, the user may select tab 1020 to view the differences between the measured and predicted reverberation time values in text form. If the user selects tab 1030, the user may review the adjustments to the material absorption coefficients made during the matching process.
- the material adjustment table 1110, shown in Fig. 11, displays the adjustments to the material absorption coefficients made during the matching process.
- the material adjustment table 1110 displays a list of materials 1120, a list of surface areas associated with the material 1140, and the adjustments made to each absorption coefficient 1130.
- materials that have been adjusted are indicated in the materials list 1120.
- the adjustments portion 1130 of the table 1110 may be color-coded to indicate upward or downward adjustments to the absorption coefficient values. Materials that were locked show zero adjustments across the frequency spectrum such as "Brick - Bare" in Fig. 11.
- Fig. 12 displays the change in reflection strength of a selected material caused by the matching process.
- a window 1200 displays a list box 1210 listing the adjusted materials and a plot display area 1220 that shows a reflection strength as a function of frequency for the material selected in the list box 1210.
- a plot 1230 of the reflection strength of the selected material, mineral board in this example, is displayed in the plot display area 1220.
- Plot 1230 indicates that at 1000 Hz, a ray reflecting from the mineral board is about 1.5 dB louder than a ray reflecting from an unadjusted mineral board.
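The displayed change in reflection strength can plausibly be related to the absorption adjustment as follows; this relation is an assumption consistent with the MAV discussion above, not a formula stated in the text:

```latex
\Delta L(j) = 10 \log_{10}\frac{1 - \alpha_{\mathrm{adj}}(j)}{1 - \alpha_{\mathrm{orig}}(j)}\ \text{dB}
```

A positive ΔL, such as the roughly 1.5 dB shown at 1000 Hz, corresponds to an adjustment that lowered the material's absorption coefficient in that band.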
- the user may undo the matching process by pressing the "Back" button or the user may accept the matching by pressing the "Finish" button 1290.
- the adjusted absorption coefficients are used for subsequent calculations in place of the original default absorption coefficient values.
- Embodiments of the systems and methods described above comprise computer components and computer-implemented steps that will be apparent to those skilled in the art.
- portions of the audio engine, model manager, user interface, and audio player may be implemented as computer-implemented steps stored as computer-executable instructions on a computer-readable medium such as, for example, floppy disks, hard disks, optical disks, flash ROMs, nonvolatile ROM, flash drives, and RAM.
- the computer-executable instructions may be executed on a variety of processors such as, for example, microprocessors, digital signal processors, gate arrays, etc.
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Acoustics & Sound (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Stereophonic System (AREA)
- Circuit For Audible Band Transducer (AREA)
Abstract
A sound system design/simulation system provides a more realistic simulation of an existing venue by matching a measured reverberation characteristic of the existing venue and adjusting one or more acoustic parameters characterizing the model such that a predicted reverberation characteristic substantially matches the measured reverberation characteristic.
Description
System and Method for Sound System Simulation
BACKGROUND
[0001] This disclosure relates to systems and methods for sound system design and simulation. As used herein, design system and simulation system are used interchangeably and refer to systems that allow a user to build a model of at least a portion of a venue, arrange sound system components around or within the venue, and calculate one or more measures characterizing an audio signal generated by the sound system components. The design system or simulation system may also simulate the audio signal generated by the sound system components thereby allowing the user to hear the audio simulation.
SUMMARY
[0002] A sound system design/simulation system provides a more realistic simulation of an existing venue by matching a measured reverberation characteristic of the existing venue and adjusting one or more acoustic parameters characterizing the model such that a predicted reverberation characteristic substantially matches the measured reverberation characteristic.
[0003] One embodiment of the present invention is directed to an audio simulation system comprising: a model manager configured to enable a user to build a 3-dimensional model of a venue and place and aim one or more loudspeakers in the model; a user interface configured to associate a material with a surface in the 3-dimensional model and to receive at least one measured reverberation time value; an audio engine configured to adjust an absorption coefficient of the material such that a predicted reverberation time value matches the at least one measured RT value; and an audio player generating at least two acoustic signals simulating an audio program played over the one or more loudspeakers in the model, the simulated audio program based on the adjusted absorption coefficient. In an aspect, the predicted reverberation time value matches the at least one measured reverberation time value to within 0.5 seconds. In another aspect, the predicted
reverberation time value matches the at least one measured reverberation time value to within 0.05 seconds. In another aspect, each material is characterized by an index and adjusted according to its index. In a further aspect, the index is a product of a surface area associated with the material and a reflection coefficient of the material. In a further aspect, the absorption coefficient of the material is adjusted according to a surface area associated with the material. In a further aspect, the absorption coefficient of the material is adjusted according to a reflection coefficient of the material. In another aspect, the at least one measured reverberation time value is an RT60 value.
[0004] Another embodiment of the present invention is directed to an audio simulation method comprising: providing an audio simulation system including a model manager, an audio engine, and an audio player; receiving at least one measured reverberation time; and matching a predicted reverberation time to the at least one measured reverberation time. In an aspect, the predicted reverberation time is within 0.5 seconds of the measured reverberation time. In another aspect, the predicted reverberation time is within 0.1 seconds of the measured reverberation time. In another aspect, an absolute value of a difference between the predicted reverberation time and the measured reverberation time is less than about 0.05 seconds. In another aspect, the step of matching further comprises adjusting a material characteristic such that the predicted reverberation time matches the at least one measured reverberation time. In a further aspect, the material characteristic is an absorption coefficient of a material. In a further aspect, the absorption coefficient of a material is adjusted according to a prioritized list of materials, each material in the prioritized list characterized by an index. In another aspect, the index is proportional to a product of a surface area of the material and a reflection coefficient of the material.
[0005] Another embodiment of the present invention is directed to an audio simulation system comprising: a user interface configured to receive at least one measured reverberation time of a venue; an audio engine configured to predict a reverberation time of the venue based on at least one absorption coefficient of a material associated with a surface of the venue; means for adjusting the at least one absorption coefficient such that the predicted reverberation time matches the at least
one measured reverberation time; and an audio player generating at least two acoustic signals simulating an audio program played in the venue, the simulated audio program based on the at least one absorption coefficient.
[0006] Another embodiment of the present invention is directed to a computer-readable medium storing computer-executable instructions for performing a method comprising: providing an audio simulation system including a model manager, an audio engine, and an audio player; receiving at least one measured reverberation time of a venue; and adjusting an absorption coefficient of a material associated with a surface of the venue such that a predicted reverberation time based on the adjusted absorption coefficient matches the at least one measured reverberation time.
BRIEF DESCRIPTION OF THE DRAWINGS
[0007] Fig. 1 is a diagram illustrating an architecture for an interactive sound system design system.
[0008] Fig. 2 illustrates a display portion of a user interface of the system shown in Fig. 1.
[0009] Fig. 3 illustrates a detailed view of a modeling window in the display portion of Fig. 2.
[0010] Fig. 4 illustrates a detailed view of a detail window in the display portion of Fig. 2.
[0011] Fig. 5 illustrates a detailed view of a data window in the display portion of Fig. 2.
[0012] Fig. 6a illustrates a detailed view of the data window with an MTF tab selected.
[0013] Fig. 6b displays exemplar MTF plots indicative of typical speech intelligibility problems.
[0014] Fig. 7 is a flowchart illustrating a reverberation matching process.
[0015] Fig. 8 illustrates a data window prior to the matching process of Fig. 7.
[0016] Fig. 9 illustrates another data window prior to the matching process of Fig. 7.
[0017] Fig. 10 illustrates a data window after the matching process of Fig. 7.
[0018] Fig. 11 illustrates another data window after the matching process of Fig. 7.
[0019] Fig. 12 illustrates another data window after the matching process of Fig. 7.
DETAILED DESCRIPTION
[0020] Fig. 1 illustrates an architecture for an interactive sound system design system. The design system includes a user interface 110, a model manager 120, an audio engine 130 and an audio player 140. The model manager 120 enables the user to build a 3-dimensional model of a venue, select venue surface materials, and place and aim one or more loudspeakers in the model. A property database 124 stores the acoustic properties of materials that may be used in the construction of the venue. An audio database 126 stores the acoustic properties of loudspeakers and other audio components that may be used as part of the designed sound system. Variables characterizing the venue or the acoustic space 122 such as, for example, temperature, humidity, background noise, and percent occupancy may be stored by the model manager 120.
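A minimal sketch of one way the four components and two databases could be wired together in code; the class and attribute names below are illustrative assumptions, not the actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class ModelManager:
    """3-D venue model: surfaces with materials, loudspeaker placement, and acoustic-space
    variables such as temperature, humidity, background noise, and percent occupancy."""
    property_db: dict = field(default_factory=dict)    # material name -> acoustic properties
    audio_db: dict = field(default_factory=dict)       # loudspeaker model -> acoustic data
    surfaces: list = field(default_factory=list)
    loudspeakers: list = field(default_factory=list)
    space_variables: dict = field(default_factory=dict)

class AudioEngine:
    """Estimates sound measures (coverage, reverberation time, intelligibility) from the model."""
    def __init__(self, model: ModelManager):
        self.model = model

class AudioPlayer:
    """Generates at least two acoustic signals that auralize the modeled venue."""
    def __init__(self, engine: AudioEngine):
        self.engine = engine

class UserInterface:
    """Front end through which the user edits the model and listens to the simulation."""
    def __init__(self, model: ModelManager, engine: AudioEngine, player: AudioPlayer):
        self.model, self.engine, self.player = model, engine, player
```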
[0021] The audio engine 130 estimates one or more sound qualities or sound measures of the venue based on the acoustic model of the venue managed by the model manager 120 and the placement of the audio components. The audio engine 130 may estimate the direct and/or indirect sound field coverage at any location in the venue and may generate one or more sound measures characterizing the modeled venue using methods and measures known in the acoustic arts.
[0022] The audio player 140 generates at least two acoustic signals that preferably give the user a realistic simulation of the designed sound system in the actual venue. The user may select an audio program that the audio player uses as a source input for generating the at least two acoustic signals that simulate what a listener in the venue would hear. The at least two acoustic signals may be generated by the audio player by filtering the selected audio program according to the predicted direct and reverberant characteristics of the modeled venue predicted by the audio engine. The audio player 140 allows the designer to hear how an audio program would sound in the venue, preferably before construction of the venue begins. This allows the designer to make changes to the selection of materials and/or surfaces during the initial design phase of the venue where changes can be implemented at low cost relative to the cost of retrofitting these same changes after construction of the venue. The auralization of the modeled venue provided by the audio player also enables the client and designer to hear the effects of different sound systems in the venue and allows the client to justify, for example, a more expensive sound system when there is an audible difference between sound systems. An example of an audio player is described in U.S. Patent No. 5,812,676 issued September 22, 1998.
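A minimal sketch of the kind of filtering the audio player might perform: the selected audio program is convolved with a predicted binaural impulse response, built here from discrete predicted arrivals plus a reverberant tail, to produce two output channels. The impulse-response synthesis shown is a placeholder assumption; the patent does not specify the player's internal algorithm.

```python
import numpy as np

def synth_impulse_response(arrivals, rt60_s, fs=48000, length_s=2.0):
    """Toy impulse response: predicted discrete arrivals (delay in seconds, linear gain)
    followed by an exponentially decaying noise tail whose decay rate is set by RT60."""
    n = int(length_s * fs)
    ir = np.zeros(n)
    for delay_s, gain in arrivals:            # direct arrival and early reflections
        idx = int(delay_s * fs)
        if idx < n:
            ir[idx] += gain
    t = np.arange(n) / fs
    tail = np.random.randn(n) * 10.0 ** (-3.0 * t / rt60_s)   # -60 dB after rt60_s seconds
    return ir + 0.05 * tail

def auralize(program, left_ir, right_ir):
    """Filter a mono audio program with left/right impulse responses to get two channels."""
    return np.convolve(program, left_ir), np.convolve(program, right_ir)
```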
[0023] Examples of interactive sound system design systems are described in co-pending U.S. patent application no. 10/964,421 filed October 13, 2004. Procedures and methods used by the audio engine to calculate coverage, speech intelligibility, etc., may be found in, for example, K. Jacob et al., "Accurate Prediction of Speech Intelligibility without the Use of In-Room Measurements," J. Audio Eng. Soc., Vol. 39, No. 4, pp. 232-242 (April 1991). Auralization methods implemented by the audio player may be found in, for example, M. Kleiner et al., "Auralization: Experiments in Acoustical CAD," Audio Engineering Society Preprint #2990, Sept. 1990.
[0024] Fig. 2 illustrates a display portion of a user interface of the system shown in Fig. 1. In Fig. 2, the display 200 shows a project window 210, a modeling window 220, a detail window 230, and a data window 240. The project window 210 may be used to open existing design projects or start a new design project. The project window 210 may be closed to expand the modeling window 220 after a project is opened.
[0025] The modeling window 220, detail window 230, and the data window 240 simultaneously present different aspects of the design project to the user and are linked such that data changed in one window is automatically reflected in changes in the other windows. Each window can display different views characterizing an aspect of the project. The user can select a specific view by selecting a tab control associated with the specific view.
[0026] Fig. 3 illustrates an exemplar modeling window 220. In Fig. 3, control tabs 325 may include a Web tab, a Model tab, a Direct tab, a Direct+Reverb tab, and a Speech tab. The Web tab provides a portal for the user to access the Web to, for example, access plug-in software components or download updates from the Web. The Model tab enables the user to build and view a model. The model may be displayed in a 3-dimensional perspective view that can be rotated by the user. In Fig. 3, the model tab 326 has been selected and displays the model in a plan view in a display area 321 and shows the locations of user selectable speakers 328, 329 and listeners 327.
[0027] The Direct, Direct+Reverb, and Speech tabs estimate and display coverage patterns for the direct field, the direct+reverb field, and a speech intelligibility field. The coverage area may be selected by the user. The coverage patterns are preferably overlaid over a portion of the displayed model. The coverage patterns may be color-coded to indicate high and low areas of coverage or the uniformity of coverage. The direct field is estimated based on the SPL at a location generated by the direct signal from each of the speakers in the modeled venue. The direct+reverb field is estimated based on the SPL at a location generated by both the direct signal and the reflected signals from each of the speakers in the modeled venue. A statistical model of reverberation may be used to model the higher order reflections and may be incorporated into the estimated direct+reverb field. The speech intelligibility field displays the speech transmission index (STI) over the portion of the displayed model. The STI is described in K.D. Jacob et al., "Accurate Prediction of Speech Intelligibility without the Use of In-Room Measurements," J. Audio Eng. Soc., Vol. 39, No. 4, pp. 232-242 (April 1991); Houtgast, T. and Steeneken, H. J. M., "Evaluation of Speech Transmission Channels by Using Artificial Signals," Acoustica, Vol. 25, pp. 355-367 (1971); "Predicting Speech Intelligibility in Rooms from the Modulation Transfer Function. I. General Room Acoustics," Acoustica, Vol. 46, pp. 60-72 (1980); and the international standard "Sound System Equipment - Part 16: Objective Rating of Speech Intelligibility by Speech Transmission Index," IEC 60268-16.
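A sketch of the energy summation that a direct or direct-plus-reverb coverage estimate typically involves: per-loudspeaker SPL contributions at a listening point are converted to intensities, summed, and converted back to decibels. The on-axis, inverse-square model below is a simplifying assumption for illustration, not the engine's actual prediction method.

```python
import math

def direct_spl(sensitivity_db, power_w, distance_m):
    """On-axis direct-field SPL from one loudspeaker: sensitivity (dB SPL at 1 W, 1 m)
    plus 10*log10(power) minus inverse-square spreading loss."""
    return sensitivity_db + 10.0 * math.log10(power_w) - 20.0 * math.log10(distance_m)

def combine_spl(levels_db):
    """Energy-sum several SPL contributions (direct and/or reflected) at one location."""
    return 10.0 * math.log10(sum(10.0 ** (level / 10.0) for level in levels_db))

# Example: the same listening point covered by two loudspeakers at 8 m and 15 m.
total_db = combine_spl([direct_spl(96.0, 50.0, 8.0), direct_spl(96.0, 50.0, 15.0)])
```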
[0028] Fig. 4 shows an exemplar detail window 230. In Fig. 4, the property tab 426 is shown selected. Other control tabs 425 may include a Simulation tab, a Surfaces tab, a Loudspeakers tab, a Listeners tab, and an EQ tab.
[0029] When the Simulation tab is selected, the detail window displays one or more input controls that allow the user to specify a value or select from a list of values for a simulation parameter. Examples of simulation parameters include a frequency or frequency range encompassed by the coverage map, a resolution characterizing the granularity of the coverage map, and a bandwidth displayed in the coverage map. The user may also specify one or more surfaces in the model for display of the acoustic prediction data.
[0030] The Surfaces, Loudspeakers, and Listeners tabs allow the user to view the properties of the surfaces, loudspeakers, and listeners, respectively, placed in the model and allow the user to quickly change one or more parameters characterizing a surface, loudspeaker, or listener. The Properties tab allows the user to quickly view, edit, and modify a parameter characterizing an element such as a surface or loudspeaker in the model. A user may select an element in the modeling window and have the parameter values associated with that element displayed in the detail window. Any change made by the user in the detail window is reflected in an updated coverage map, for example, in the modeling window.
[0031] When selected, the EQ tab enables the user to specify an equalization curve for one or more selected loudspeakers. Each loudspeaker may have a different equalization curve assigned to the loudspeaker.
[0032] Fig. 5 shows an exemplar data window 240 with a Time Response tab 526 selected. Other control tabs 525 may include a Frequency Response tab, a Modulation Transfer Function (MTF) tab, a Statistics tab, a Sound Pressure Level (SPL) tab, and a Reverberation Time (RT60) tab. The Frequency Response tab
displays the frequency response at a particular location selected by the user. The user may position a sample cursor in the coverage map displayed in the modeling window 220 and the frequency response at that location is displayed in the data window 240. The MTF tab displays a normalized amount of modulation preserved as a function of the frequency at a particular location selected by the user. The Statistics tab displays a histogram indicating the uniformity of the coverage data in the selected coverage map. The histogram preferably plots a normalized occurrence of a particular SPL against the SPL value. The mean and standard deviations may be displayed on the histogram as color-coded lines. The SPL tab displays the room frequency response as a function of frequency. A color-coded line representing the mean SPL at each frequency may be displayed in the data window along with color-coded lines representing a background noise level and/or a house curve, which represents the desired room frequency response. A shaded band may surround the mean SPL line to indicate a standard deviation from the mean. The RT60 tab displays the reverberation time as a function of frequency. The reverberation time is typically the RT60 time although other measures characterizing the reverberation decay may be used. The RT60 time is defined as the time required for the reverberation to exponentially decay by 60 dB. The user may choose to display the average absorption data as a function of frequency instead of the reverberation time.
[0033] In Fig. 5, a time response plot is displayed in the data window 240. The time response plot shows a signal strength or SPL along the vertical axis, the elapsed time on the horizontal axis and indicates the arrival of acoustic signals at a user-selected location. The vertical spikes or pins shown in Fig. 5 represent an arrival of a signal at a sampling location from one of the loudspeakers in the design. The arrival may be a direct arrival 541 or an indirect arrival that has been reflected from one or more surfaces in the model. In a preferred embodiment, each pin may be color-coded to indicate a direct arrival, a first order arrival representing a signal that has been reflected from a single surface 542, a second order arrival representing a signal that has been reflected from two surfaces 543, and higher order arrivals. A reverberant field envelope 545 may be estimated and displayed in the time response plot. An example of how the reverberant field envelope may be
estimated is described in K.D. Jacob, "Development of a New Algorithm for Predicting the Speech Intelligibility of Sound Systems," presented at the 83rd Convention of the Audio Engineering Society, New York, NY (1987).
[0034] A user may select a pin shown in Fig. 5 and have the path of the selected pin displayed in the modeling window 220. The user may then make a modification to the design in the detail window 240 and see how the modification affects the coverage displayed in the modeling window 220 or how the modification affects a response in the data window. For example, a user can quickly and easily adjust a delay for a loudspeaker using a concurrent display of the modeling window 220, the data window 240, and the detail window 230. In this example, the user may adjust the delay for a loudspeaker to provide the correct localization for a listener located at the sample position. Listeners tend to localize sound based on the first arrival that they hear. If the listener is positioned closer to a second loudspeaker located farther away from an audio source than a first loudspeaker, they will tend to localize the source to the second loudspeaker and not to the audio source. If the second loudspeaker is delayed such that the audio signal from the second loudspeaker arrives after the audio signal from the first loudspeaker, the listener will be able to properly localize the sound.
[0035] The user can select the proper delays by displaying in the data window the direct arrivals in the time response plot. The user can select a pin representing one of the direct arrivals to identify the source of the selected direct arrival in the modeling window, which displays the path of the selected direct arrival from one of the loudspeakers in the model. The user can then adjust the delay of the identified loudspeaker in the detail window so that the first direct arrival the listener hears is from the loudspeaker closest to the audio source.
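To make the arithmetic behind this delay adjustment concrete, the minimal sketch below computes the delay for the loudspeaker nearer the listener so that the direct sound from the loudspeaker nearer the audio source arrives first. The positions, the speed-of-sound value, and the `headroom_ms` safety margin are illustrative assumptions, not values taken from this document.

```python
import math

SPEED_OF_SOUND_M_PER_S = 343.0  # assumed room-temperature value


def delay_for_localization(listener, source_side_speaker, delayed_speaker, headroom_ms=2.0):
    """Delay (ms) for the loudspeaker nearer the listener so that the
    loudspeaker nearer the audio source is heard first at the listener.

    All positions are (x, y, z) tuples in meters; headroom_ms is an
    illustrative extra margin, not a value specified in the document.
    """
    t_source_side = math.dist(listener, source_side_speaker) / SPEED_OF_SOUND_M_PER_S
    t_delayed = math.dist(listener, delayed_speaker) / SPEED_OF_SOUND_M_PER_S
    # The nearer loudspeaker must wait at least until the farther one arrives.
    return max(0.0, t_source_side - t_delayed) * 1000.0 + headroom_ms


if __name__ == "__main__":
    listener = (20.0, 5.0, 1.2)
    main_speaker = (0.0, 5.0, 4.0)    # near the audio source, far from the listener
    fill_speaker = (18.0, 5.0, 3.0)   # near the listener, to be delayed
    print(f"delay = {delay_for_localization(listener, main_speaker, fill_speaker):.1f} ms")
```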
[0036] The concurrent display of both the model and coverage field in the modeling window, a response characteristic such as time response in the data window, and a property characteristic such as loudspeaker parameters in the detail window enables the user to quickly identify a potential problem, try various fixes, see the result of these fixes, and select the desired fix.
[0037] Removing objectionable time arrivals is another example where the concurrent display of the model, response, and property characteristics enables the user to quickly identify and correct a potential problem. Generally, arrivals that arrive more than 100 ms after the direct arrival and are more than 10 dB above the reverberant field may be noticed by the listener and may be unpleasant to the listener. The user can select an objectionable time arrival from the time response plot in the data window and see the path in the modeling window to identify the loudspeaker and surfaces associated with the selected path. The user can select one of the surfaces associated with the selected path and modify or change the material associated with the selected surface in the detail window and see the effect in the data window. The user may re-orient the loudspeaker by selecting the loudspeaker tab in the detail window and entering the changes in the detail window or the user may move the loudspeaker to a new location by dragging and dropping the loudspeaker in the modeling window.
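A minimal filter over the time-response data illustrates the 100 ms / 10 dB rule of thumb just described. The arrival data layout and the use of a single reverberant-field level are simplifying assumptions of this sketch.

```python
def flag_objectionable_arrivals(arrivals, direct_time_ms, reverberant_level_db,
                                late_threshold_ms=100.0, excess_db=10.0):
    """Return the arrivals a listener may find objectionable: those arriving
    more than late_threshold_ms after the direct arrival and more than
    excess_db above the reverberant field level.

    `arrivals` is assumed to be a list of (time_ms, level_db) pairs taken
    from the time response plot; this layout is illustrative only.
    """
    return [
        (t_ms, level_db)
        for t_ms, level_db in arrivals
        if t_ms - direct_time_ms > late_threshold_ms
        and level_db - reverberant_level_db > excess_db
    ]
```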
[0038] Fig. 6a shows the data window with the MTF tab 626 selected. The Modulation Transfer Function (MTF) gives a normalized amount of modulation preserved as a function of modulation frequency for a given octave band. A discussion of the MTF is presented in K.D. Jacob, "Development of a New Algorithm for Predicting the Speech Intelligibility of Sound Systems," presented at the 83rd Convention of the Audio Engineering Society, New York, NY (1987); Houtgast, T. and Steeneken, H. J. M., "Evaluation of Speech Transmission Channels by Using Artificial Signals," Acoustica, Vol. 25, pp. 355-367 (1971) and "Predicting Speech Intelligibility in Rooms from the Modulation Transfer Function. I. General Room Acoustics," Acoustica, Vol. 46, pp. 60-72 (1980); and the international standard "Sound System Equipment - Part 16: Objective Rating of Speech Intelligibility by Speech Transmission Index," IEC 60268-16. In Fig. 6a, the MTFs for octave bands corresponding to 125 Hz 650, 1 kHz 660, and 8 kHz 670 are shown for clarity, although other octave bands may be displayed. In an ideal situation, an MTF substantially equal to one indicates that modulation of the voice box of a human speaker generating the speech is substantially preserved and therefore the speech intelligibility should be ideal. In a real-world situation, however, the MTF may drop significantly below the ideal and indicate possible speech intelligibility problems.
[0039] Fig. 6b displays exemplar MTF plots that may indicate the source of a speech intelligibility problem. In Fig. 6b, the MTF corresponding to the 1 kHz MTF 660 shown in Fig. 6a is re-displayed to provide a comparison to the other MTF plots. The MTF labeled 690 in Fig. 6b illustrates an MTF that may be expected if background noise significantly affects the speech intelligibility of the modeled space. When background noise is a significant contributor to poor speech intelligibility, the MTF is significantly reduced independent of the modulation frequency as illustrated in Fig. 6b by comparing the MTF labeled 690 to the MTF labeled 660. When reverberation is a significant contributor to poor speech intelligibility, the MTF is reduced at higher modulation frequencies where the rate of reduction of the MTF increases as the reverberation times increase as illustrated by the MTF labeled 693 in Fig. 6b. The MTF labeled 696 in Fig. 6b illustrates an effect of late-arriving reflections on the MTF. A late-arriving reflection is manifested in the MTF by a notch 697 located at a modulation frequency that is inversely proportional to the time delay of the late-arriving reflection.
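The trends in Figs. 6a and 6b can be reproduced with the closed-form modulation transfer factor for an ideal exponential reverberant decay plus stationary noise found in the Houtgast and Steeneken literature cited above: m(F) = [1 + (2πF·RT60/13.8)^2]^(-1/2) · [1 + 10^(-SNR/10)]^(-1). The sketch below evaluates that textbook expression; the document does not state that its audio engine uses it, and late-arriving reflections (the notch 697) fall outside this simple model.

```python
import math


def mtf_exponential_decay(mod_freq_hz, rt60_s, snr_db):
    """Modulation transfer factor for a purely exponential reverberant decay
    combined with stationary background noise (closed form from the cited
    Houtgast & Steeneken literature); shown only to illustrate the trends in
    Figs. 6a and 6b, not as the patent's own computation."""
    reverberation_term = 1.0 / math.sqrt(1.0 + (2.0 * math.pi * mod_freq_hz * rt60_s / 13.8) ** 2)
    noise_term = 1.0 / (1.0 + 10.0 ** (-snr_db / 10.0))
    return reverberation_term * noise_term


if __name__ == "__main__":
    for f_hz in (0.63, 1.25, 2.5, 5.0, 10.0):  # typical STI modulation frequencies
        print(f_hz, round(mtf_exponential_decay(f_hz, rt60_s=2.0, snr_db=15.0), 3))
```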
[0040] As Fig. 6b illustrates, reverberation can have a significant impact on the speech intelligibility of a venue. More importantly, listeners can distinguish very slight differences in reverberation that cannot be predicted using current ab initio simulation tools. Current sound system design-only systems can adequately predict sound coverage patterns or speech intelligibility coverage patterns for a modeled venue and sound system. These coverage patterns, however, are fairly coarse relative to the human ear and cannot give the listener a realistic simulation of the modeled venue. In such a situation, the simulation of the modeled venue that the user experiences may be substantially different from what the user experiences when in the actual venue. The difference may be an unpleasant surprise to the listener who assumed that the simulation of the modeled venue was accurate and would closely match the experience in the actual venue. If the venue has not been built, the venue may still be modeled and a range of reverberation times provided. In this way, the user may still listen to a range of reverberation times and gain an appreciation of a range of possible listening experiences of the venue.
[0041] In many situations, the modeled venue may already exist and measured reverberation times for the existing venue may be available to the modeler. In such
situations, the modeler may enter the measured reverberation times for the existing venue into the simulation system and have the system automatically adjust the model to match the measured reverberation times. The adjusted model generates a simulation that more closely matches what the user would experience in the existing venue and allows the user to make a more precise evaluation of the modeled sound system.
[0042] The reverberation characteristics of a venue may be viewed as having three regimes: an early reflections period, an early reverberant field period, and a late decaying tail period. The reverberant characteristics of the early reflections period are generally determined by characteristics such as the locations of audio sources, geometry of the venue, acoustic absorption of the venue surfaces, and the location of the listener. The reverberant characteristics of the early reverberant field period are generally determined by characteristics such as the scattering surfaces of the venue. The reverberant characteristics of the late decaying tail period are substantially determined by a reverberation time, RT, characterizing an exponential decay. An example of a reverberation time characteristic is the RT60 time, which is the time it takes the reverberation in the late decaying tail period to decay by 60 dB. Other measures of the reverberation characteristic of the late decaying tail period may be used following the teachings described herein. The reverberation time, RT60, may be estimated from the absorption coefficient and area of each surface characterizing the venue using, for example, the Sabine equation.
[0043] The inventors have discovered that a listener is typically more sensitive to the reverberant characteristics of the late decaying tail period than the reverberant characteristics of the early reflections or early reverberant field periods. Matching a predicted reverberation time to a measured reverberation time gives the listener a more realistic simulation of the venue. Matching of the predicted reverberation time to the measured reverberation time may be accomplished by adjusting the acoustic absorption coefficient, hereinafter referred to as the absorption coefficient, of one or more surfaces of the modeled venue. The absorption coefficient is adjusted such that the predicted reverberation time value for the late decaying tail period matches the measured reverberation time value of the venue such that the difference
between the predicted reverberation time value and measured reverberation time value is barely perceived, if at all, by the listener.
[0044] The absorption coefficient of a material may be frequency dependent. The audio spectrum is preferably discretized into one or more frequency bands and a predicted reverberation time value for each band is estimated using the absorption coefficient values corresponding to the associated band. Adjusting the absorption coefficients of the materials in the venue to match the reverberation time values also affects the reverberation characteristics of the early reflections and/or early reverberant field periods. The inventors have discovered, however, that adjustments to the absorption coefficient of the materials may be done such that the differences in the reverberation characteristics of the early reflections and early reverberant field periods arising from the adjustments are typically not noticeable by the listener.
[0045] In some embodiments, adjustments to the absorption coefficients are determined by a prioritized list of materials that are ranked according to a surface-area-weighted reflection coefficient. For example, the materials may be ranked according to an index, ε(i,j) = A(i)(1-α(i,j)), where ε(i,j) is the index for the i-th surface in the j-th frequency band, A(i) is the surface area of the i-th surface, α(i,j) is the absorption coefficient for the i-th surface in the j-th frequency band, and (1-α(i,j)) is a reflection coefficient for the i-th surface in the j-th frequency band. The modeled venue may contain one or more surfaces associated with the same material; to rank the materials, the total surface area associated with each material is used to calculate the index ε.
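A minimal sketch of building the prioritized list from the index ε = A·(1 − α), aggregating the total surface area per material as described above. The surface data layout is an assumption made for illustration, not the patent's data model.

```python
from collections import defaultdict


def prioritized_materials(surfaces, band_j):
    """Rank materials by the surface-area-weighted reflection index
    epsilon = A * (1 - alpha) for frequency band `band_j`, largest first.

    `surfaces` is assumed to be a list of dicts with keys "material",
    "area" (m^2) and "alpha" (a per-band list of absorption coefficients).
    """
    total_area = defaultdict(float)
    alpha_of = {}
    for surface in surfaces:
        total_area[surface["material"]] += surface["area"]   # total area per material
        alpha_of[surface["material"]] = surface["alpha"][band_j]
    index = {m: total_area[m] * (1.0 - alpha_of[m]) for m in total_area}
    # Large, acoustically reflective materials come first in the prioritized list.
    return sorted(index.items(), key=lambda item: item[1], reverse=True)
```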
[0046] If a diffuse sound field is assumed, the surface area associated with the m-th material is the sum of the areas of all surfaces associated with the m-th material. If a ray tracing method is used to predict a portion of the reverberation, the surface area associated with the m-th material is weighted according to the number of ray impingements on the surfaces associated with the m-th material and is given by the equation:

A(m) = Atot (Σ n(i)) / ntot    (1)

where A(m) is the total surface area associated with the m-th material, Atot is the total surface area of the venue, n(i) is the number of impingements on the i-th surface and the sum is taken over all surfaces associated with the m-th material, and ntot is the total number of ray impingements.
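Under the ray-tracing variant, the per-material area can be sketched directly from the relationship reconstructed above as equation (1):

```python
def impingement_weighted_area(total_venue_area_m2, hits_on_material, total_hits):
    """Ray-tracing weighting of the per-material area, following the
    relationship reconstructed above as equation (1): the venue's total
    surface area scaled by the material's share of ray impingements."""
    return total_venue_area_m2 * (hits_on_material / float(total_hits))
```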
[0047] Adjustments to the absorption coefficient of the materials on the prioritized list are made according to the index of each material. The material with the largest index is adjusted first, and if the adjustment to that material is sufficient to match the predicted reverberation time value to the measured reverberation time value, the remaining materials on the prioritized list are not adjusted. The magnitude of the adjustment may be limited by a pre-determined maximum adjustment value, MAV. If the material with the largest index is adjusted by its MAV and the reverberation time values still do not match, the material with the next largest index is adjusted up to its MAV, and so on, until all the materials in the prioritized list have been adjusted by their respective MAVs. If all the materials in the prioritized list have been adjusted by their MAVs and the reverberation time values still do not match, the system may alert the user to the mismatch and ask the user to allow an increase in the MAV. In some embodiments, the MAV is selected to limit a change in the sound pressure level of a sound wave reflected by the surface. The MAV may be determined by the equation:
MAV = (1 - 10^(-MaxDelta/10))(1 - α(i,j))    (2)
where MaxDelta is the maximum change in the SPL of the reflected wave and α(i,j) is the absorption coefficient for the i-th surface in the j-th frequency band. MaxDelta may be set to a value in a closed range of 0.01 to 2 dB, preferably in a closed range of 0.1 to 1 dB, and more preferably in a closed range of 0.25 to 1 dB. The adjusted absorption coefficient may be clipped to ensure that the absorption coefficient remains within the closed range of zero to one.
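A sketch of the limit computation, using equation (2) as reconstructed above together with the clipping described in the text. The default `max_delta_db` value of 0.5 dB is simply one value inside the preferred ranges listed, chosen here for illustration.

```python
def max_adjustment_value(alpha, max_delta_db=0.5):
    """Maximum absorption-coefficient adjustment that keeps the change in the
    reflected sound pressure level within max_delta_db dB, per equation (2)
    as reconstructed above."""
    return (1.0 - 10.0 ** (-max_delta_db / 10.0)) * (1.0 - alpha)


def clip_alpha(alpha):
    """Keep an adjusted absorption coefficient within the closed range [0, 1]."""
    return min(1.0, max(0.0, alpha))


if __name__ == "__main__":
    # A fairly reflective material (alpha = 0.1) may move by roughly 0.1 before
    # the reflected level changes by more than 0.5 dB.
    print(round(max_adjustment_value(0.1), 3))
```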
[0048] A ranking based on the index described above enables the system to use the smallest adjustment to the absorption coefficient to match the reverberation time values while reducing the effects on the early reflections and early reverberant field periods arising from the adjustment to the absorption coefficient. Selecting a
material having the largest surface area generally has the greatest effect on the reverberation time but also tends to affect the early reflection patterns from the material's surfaces. The change in the early reflection patterns may be reduced by selecting a surface with the lowest absorption coefficient or, equivalently, the highest reflection coefficient.
[0049] Use of the prioritized list is not required, however, and other methods of adjusting the absorption coefficients may be used as long as the changes to the early reflections and early reverberant field periods caused by the reverberation time matching are not perceptible to the user.
[0050] Fig. 7 is a flowchart illustrating an exemplar process for matching predicted reverberation times to measured reverberation times. The audio spectrum is discretized into one or more frequency bands and the reverberation time is individually matched within each frequency band. The width of the frequency band may be selected by the user depending on the desired accuracy or on the available material data; it is preferably between three octaves and one-tenth octave wide, and more preferably within a closed range of one octave to one-third octave wide. After the reverberation time for each band has been matched, the process exits as shown in step 710.
[0051] Within each band, the predicted reverberation time for that band is compared to the measured reverberation time for that band. The reverberation times are considered matched if the absolute value of the difference between the predicted reverberation time and the measured reverberation time is less than or equal to a pre-defined value; in other words, the reverberation times are considered matched when the predicted reverberation time is within the pre-defined value of the measured reverberation time. If the reverberation times match, the process proceeds to the next frequency band, as indicated in step 720. The pre-defined value may be a user-defined value or a system-defined constant based on, for example, psycho-acoustic data. The pre-defined value may be selected such that the difference between the predicted and measured reverberation times is not perceptible by a listener. For example, the pre-defined value may be less than 0.5 seconds, preferably less than 0.1 seconds, and more preferably less than or equal to about 0.05 seconds.
[0052] If the difference between the predicted and measured reverberation time values is greater than the pre-defined value, the absorption coefficient, α, of one or more materials may be adjusted such that the predicted reverberation time value matches the measured reverberation time value, as indicated in step 740. In some embodiments, the magnitude of an adjustment, δα, may be limited by a pre-defined maximum adjustment value, MAV, to limit the change to a material's α and to apportion the required adjustment over all the materials, if necessary. If the maximum allowed adjustment to the first material is not sufficient to match the reverberation time values, the α of the second material is adjusted, and so on, until the α of every material has been adjusted by its MAV, as indicated in step 730.
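The per-band loop of steps 720 through 750 can be sketched as below. The material data layout, the caller-supplied `predict_rt` callable, and the simplification of always stepping by the full MAV (a fuller implementation would solve equation (3) for the exact δα needed and cap it at the MAV) are assumptions of this sketch, not the patent's implementation.

```python
def match_band(measured_rt_s, predict_rt, materials, band_j,
               tolerance_s=0.05, max_delta_db=0.5):
    """Greedy per-band reverberation time matching following the flow in the
    text: adjust the highest-ranked unlocked material first, limited by its
    MAV, then move down the prioritized list until the band matches or every
    material has been adjusted.

    Assumptions (not from the document): `materials` is a list of dicts with
    keys "area" (m^2), "alpha" (per-band absorption coefficients) and
    "locked"; predict_rt(materials, band_j) is a caller-supplied estimator,
    e.g. a Sabine calculation.
    """
    ranked = sorted(
        (m for m in materials if not m.get("locked", False)),
        key=lambda m: m["area"] * (1.0 - m["alpha"][band_j]),  # index epsilon
        reverse=True,
    )
    for material in ranked:
        diff_s = predict_rt(materials, band_j) - measured_rt_s
        if abs(diff_s) <= tolerance_s:
            return True                         # band is matched
        alpha = material["alpha"][band_j]
        mav = (1.0 - 10.0 ** (-max_delta_db / 10.0)) * (1.0 - alpha)
        step = mav if diff_s > 0 else -mav      # RT too long -> add absorption
        material["alpha"][band_j] = min(1.0, max(0.0, alpha + step))
    return abs(predict_rt(materials, band_j) - measured_rt_s) <= tolerance_s
```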
[0053] A new predicted reverberation time value is estimated based on the adjusted α of the materials, as indicated in step 750. The predicted reverberation time value is given by Sabine's equation:
RT(j) = 0.161V / (Σ A(i)α(i,j) + A'δα'(j))    (3)
where RT(j) is the predicted reverberation time for the j-th frequency band, V is the volume in cubic meters, A(i) is the surface area, in square meters, of the i-th surface, α(i,j) is the absorption coefficient of the i-th surface in the j-th frequency band, A' is the surface area, in square meters, of the selected surface, and δα'(j) is the change in absorption coefficient in the j-th frequency band of the material associated with the selected surface. Absorption coefficients may be modified to account for various occupancy levels of the modeled venue. For example, an absorption coefficient for a floor surface where the audience may sit may be modified depending on whether the surface is partially or fully covered by the audience or is empty.
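A direct transcription of equation (3) as reconstructed above; the 0.161 metric Sabine constant is assumed from the standard form of the equation, and the surface list layout is illustrative.

```python
def sabine_rt(volume_m3, surfaces, band_j, adjusted_area_m2=0.0, delta_alpha=0.0):
    """Predicted reverberation time for frequency band `band_j`, per equation
    (3) as reconstructed above:
    RT(j) = 0.161 * V / (sum_i A(i) * alpha(i, j) + A' * dalpha'(j)).

    `surfaces` is assumed to be a list of (area_m2, alpha_per_band) pairs.
    """
    total_absorption = sum(area * alpha[band_j] for area, alpha in surfaces)
    total_absorption += adjusted_area_m2 * delta_alpha
    return 0.161 * volume_m3 / total_absorption


if __name__ == "__main__":
    # A rough 2400 m^3 hall: floor, ceiling, and walls with single-band coefficients.
    surfaces = [(400.0, [0.02]), (400.0, [0.30]), (1200.0, [0.10])]
    print(round(sabine_rt(2400.0, surfaces, band_j=0), 2), "s")   # about 1.56 s
```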
[0054] If the new reverberation time value still does not match the measured reverberation time value after all the materials have been adjusted by their maximum allowed adjustment, the remaining difference is displayed to the user, and the user is presented with an option to repeat the process shown in Fig. 7 with a larger MAV. If the user selects this option, the process is repeated for bands that still have mismatched reverberation time values but with a larger MAV.
[0055] Fig. 8 illustrates a window that may be displayed to the user to show the status of the reverberation time matching process. In some embodiments, a wizard may be used to guide the user through the matching process. The window 800 includes a list box 820 displaying the reverberation time for each frequency band and a list control box 810 that allows the user to select a frequency width for the matching process. In the example shown in Fig. 8, the user has selected a one octave frequency band and has entered the measured reverberation time values for each octave band in the list box 820. The window 800 includes a plot area 830 where the measured and predicted reverberation time values are displayed as a function of frequency, as indicated by lines 840 and 850, respectively. The plots of the measured and predicted reverberation time values allow the user to quickly see the mismatches between the measured and predicted reverberation time values.
[0056] When the user selects the next button in window 800, the wizard displays a list of materials associated with the surfaces in the modeled venue as shown, for example, in Fig. 9. In Fig. 9, a table 910 is displayed listing each material 930, the absorption coefficient for the material at each frequency band 940, and the total surface area of each material in the modeled venue 950. A check box 920 next to each material allows the user to lock the absorption coefficients for that material. If the material is locked, the absorption coefficients for the locked material are not adjusted during the matching process. A user may lock a material when, for example, the user has measured absorption coefficient values for the material and is confident in their accuracy.
[0057] When the user selects the next button in Fig. 9, the reverberation time matching process is executed and the results displayed to the user as shown, for example, in Fig. 10. In Fig. 10, the measured reverberation time values are plotted as a function of frequency 1040 along with the new predicted reverberation time values 1050 to allow the user to graphically review the matching. The user may select another tab 1020, 1030 to view the matching results in different formats. For example, the user may select tab 1020 to view the differences between the measured and predicted reverberation time values in text form. If the user selects tab 1030, the user may review the adjustments to the material absorption coefficients made during the matching process.
[0058] Fig. 11 displays the adjustments to the material absorption coefficients made during the matching process. The material adjustment table 1110 displays a list of materials 1120, a list of surface areas associated with each material 1140, and the adjustments made to each absorption coefficient 1130. Materials that have been adjusted are indicated in the materials list 1120. The adjustments portion 1130 of the table 1110 may be color-coded to indicate upward or downward adjustments to the absorption coefficient values. Materials that were locked show zero adjustments across the frequency spectrum, such as "Brick - Bare" in Fig. 11.
[0059] Fig. 12 displays the change in reflection strength of a selected material caused by the matching process. In Fig. 12, a window 1200 displays a list box 1210 listing the adjusted materials and a plot display area 1220 that shows a reflection strength as a function of frequency for the material selected in the list box 1210. For example, in Fig. 12, 5/8" mineral board has been selected and a plot 1230 of the reflection strength from the mineral board is displayed in the plot display area 1220. Plot 1230 indicates that at 1000 Hz, a ray reflecting from the mineral board is about 1.5 dB louder than a ray reflecting from an unadjusted mineral board. The user may undo the matching process by pressing the "Back" button or the user may accept the matching by pressing the "Finish" button 1290. When the user presses the "Finish" button, the adjusted absorption coefficients are used for subsequent calculations in place of the original default absorption coefficient values.
[0060] Embodiments of the systems and methods described above comprise computer components and computer-implemented steps that will be apparent to those skilled in the art. For example, it should be understood by one of skill in the art that portions of the audio engine, model manager, user interface, and audio player may be implemented as computer-implemented steps stored as computer- executable instructions on a computer-readable medium such as, for example, floppy disks, hard disks, optical disks, Flash ROMS, nonvolatile ROM, flash drives, and RAM. Furthermore, it should be understood by one of skill in the art that the computer-executable instructions may be executed on a variety of processors such as, for example, microprocessors, digital signal processors, gate arrays, etc. For ease of exposition, not every step or element of the systems and methods described
above is described herein as part of a computer system, but those skilled in the art will recognize that each step or element may have a corresponding computer system or software component. Such computer system and/or software components are therefore enabled by describing their corresponding steps or elements (that is, their functionality), and are within the scope of the present invention.
[0061] Having thus described at least illustrative embodiments of the invention, various modifications and improvements will readily occur to those skilled in the art and are intended to be within the scope of the invention. Accordingly, the foregoing description is by way of example only and is not intended as limiting. The invention is limited only as defined in the following claims and the equivalents thereto.
Claims
1. An audio simulation system comprising: a model manager configured to enable a user to build a 3-dimensional model of a venue and place and aim one or more loudspeakers in the model; a user interface configured to associate a material with a surface in the 3- dimensional model and to receive at least one measured reverberation time value; an audio engine configured to adjust an absorption coefficient of the material such that a predicted reverberation time value matches the at least one measured RT value; and an audio player generating at least two acoustic signals simulating an audio program played over the one or more loudspeakers in the model, the simulated audio program based on the adjusted absorption coefficient.
2. The audio simulation system of claim 1 wherein the predicted reverberation time value matches the at least one measured reverberation time value to within 0.5 seconds.
3. The audio simulation system of claim 1 wherein the predicted reverberation time value matches the at least one measured reverberation time value to within 0.05 seconds.
4. The audio simulation system of claim 1 wherein each material is characterized by an index and adjusted according to its index.
5. The audio simulation system of claim 4 wherein the index is a product of a surface area associated with the material and a reflection coefficient of the material.
6. The audio simulation system of claim 4 wherein the absorption coefficient of the material is adjusted according to a surface area associated with the material.
7. The audio simulation system of claim 4 wherein the absorption coefficient of the material is adjusted according to a reflection coefficient of the material.
8. The audio simulation system of claim 1 wherein the at least one measured reverberation time value is an RT60 value.
9. An audio simulation method comprising: providing an audio simulation system including a model manager, an audio engine, and an audio player; receiving at least one measured reverberation time; and matching a predicted reverberation time to the at least one measured reverberation time.
10. The simulation method of claim 9 wherein the predicted reverberation time is within 0.5 seconds of the measured reverberation time.
11. The simulation method of claim 9 wherein an absolute value of a difference between the predicted reverberation time and the measured reverberation time is less than about 0.05 seconds.
12. The simulation method of claim 9 wherein the step of matching further comprises adjusting a material characteristic such that the predicted reverberation time matches the at least one measured reverberation time.
13. The simulation method of claim 12 wherein the material characteristic is an absorption coefficient of a material.
14. The simulation method of claim 13 wherein the absorption coefficient of a material is adjusted according to a prioritized list of materials, each material in the prioritized list characterized by an index.
15. The simulation method of claim 9 wherein the index is proportional to a product of a surface area of the material and a reflection coefficient of the material.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP08859469.2A EP2232891B1 (en) | 2007-12-12 | 2008-09-25 | System and method for sound system simulation |
JP2010538002A JP5255067B2 (en) | 2007-12-12 | 2008-09-25 | System and method for speech system simulation |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/954,539 | 2007-12-12 | ||
US11/954,539 US8150051B2 (en) | 2007-12-12 | 2007-12-12 | System and method for sound system simulation |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2009075926A1 (en) | 2009-06-18 |
Family
ID=40377672
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2008/077628 WO2009075926A1 (en) | 2007-12-12 | 2008-09-25 | System and method for sound system simulation |
Country Status (4)
Country | Link |
---|---|
US (2) | US8150051B2 (en) |
EP (1) | EP2232891B1 (en) |
JP (1) | JP5255067B2 (en) |
WO (1) | WO2009075926A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2471089A (en) * | 2009-06-16 | 2010-12-22 | Focusrite Audio Engineering Ltd | Audio processing device using a library of virtual environment effects |
Families Citing this family (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8150051B2 (en) * | 2007-12-12 | 2012-04-03 | Bose Corporation | System and method for sound system simulation |
US8396226B2 (en) * | 2008-06-30 | 2013-03-12 | Costellation Productions, Inc. | Methods and systems for improved acoustic environment characterization |
US8989882B2 (en) | 2008-08-06 | 2015-03-24 | At&T Intellectual Property I, L.P. | Method and apparatus for managing presentation of media content |
US20120038827A1 (en) * | 2010-08-11 | 2012-02-16 | Charles Davis | System and methods for dual view viewing with targeted sound projection |
DE102011001605A1 (en) * | 2011-03-28 | 2012-10-04 | D&B Audiotechnik Gmbh | Method and computer program product for calibrating a public address system |
EP2830043A3 (en) * | 2013-07-22 | 2015-02-18 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Method for Processing an Audio Signal in accordance with a Room Impulse Response, Signal Processing Unit, Audio Encoder, Audio Decoder, and Binaural Renderer |
US9723419B2 (en) | 2014-09-29 | 2017-08-01 | Bose Corporation | Systems and methods for determining metric for sound system evaluation |
US9607629B2 (en) * | 2014-10-27 | 2017-03-28 | Bose Corporation | Frequency response display |
US9734686B2 (en) * | 2015-11-06 | 2017-08-15 | Blackberry Limited | System and method for enhancing a proximity warning sound |
US10375498B2 (en) | 2016-11-16 | 2019-08-06 | Dts, Inc. | Graphical user interface for calibrating a surround sound system |
US11102603B2 (en) | 2019-05-28 | 2021-08-24 | Facebook Technologies, Llc | Determination of material acoustic parameters to facilitate presentation of audio content |
CN115862665B (en) * | 2023-02-27 | 2023-06-16 | 广州市迪声音响有限公司 | Visual curve interface system of echo reverberation effect parameters |
DE102023105669A1 (en) | 2023-03-07 | 2024-09-12 | Ralph Kessler | Device for acoustic modification of interior acoustics with integrated sound absorption element and sound detection device |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO1986002791A1 (en) * | 1984-10-22 | 1986-05-09 | Northwestern University | Spatial reverberation |
EP0593228A1 (en) * | 1992-10-13 | 1994-04-20 | Matsushita Electric Industrial Co., Ltd. | Sound environment simulator and a method of analyzing a sound space |
US5812676A (en) | 1994-05-31 | 1998-09-22 | Bose Corporation | Near-field reproduction of binaurally encoded signals |
EP1647909A2 (en) * | 2004-10-13 | 2006-04-19 | Bose Corporation | System and method for designing sound systems |
Family Cites Families (32)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US3111186A (en) * | 1959-07-17 | 1963-11-19 | Bell Telephone Labor Inc | Automatic measurement of reverberation time |
US3535453A (en) * | 1967-05-15 | 1970-10-20 | Paul S Veneklasen | Method for synthesizing auditorium sound |
GB8403509D0 (en) * | 1984-02-10 | 1984-03-14 | Barnett P W | Acoustic systems |
JPH03194599A (en) * | 1989-12-25 | 1991-08-26 | Fujita Corp | Room acoustic simulation system |
JP3039667B2 (en) * | 1990-02-16 | 2000-05-08 | 大成建設株式会社 | Reverberation time control device |
US5263019A (en) * | 1991-01-04 | 1993-11-16 | Picturetel Corporation | Method and apparatus for estimating the level of acoustic feedback between a loudspeaker and microphone |
JP3152818B2 (en) * | 1992-10-13 | 2001-04-03 | 松下電器産業株式会社 | Sound environment simulation experience device and sound environment analysis method |
US5424487A (en) * | 1992-10-21 | 1995-06-13 | Kabushiki Kaisha Kawai Gakki Seisakusho | Sound effect-creating device |
JPH06161477A (en) * | 1992-11-20 | 1994-06-07 | Nippon Telegr & Teleph Corp <Ntt> | Virtual environment generating device |
JPH0732700U (en) * | 1993-11-18 | 1995-06-16 | 株式会社河合楽器製作所 | Soundproof performance confirmation device |
US5884436A (en) * | 1995-05-09 | 1999-03-23 | Lear Corporation | Reverberation room for acoustical testing |
JP2924781B2 (en) * | 1996-03-25 | 1999-07-26 | ヤマハ株式会社 | Reverberation generation method and reverberation generation device |
JP3282995B2 (en) * | 1997-12-04 | 2002-05-20 | 戸田建設株式会社 | Real sound data generation device for sound insulation performance evaluation, sound insulation performance evaluation device, and information storage medium |
JP2000284788A (en) * | 1999-03-29 | 2000-10-13 | Yamaha Corp | Method and device for generating impulse response |
US6895378B1 (en) * | 2000-09-22 | 2005-05-17 | Meyer Sound Laboratories, Incorporated | System and method for producing acoustic response predictions via a communications network |
JP2002123262A (en) * | 2000-10-18 | 2002-04-26 | Matsushita Electric Ind Co Ltd | Device and method for simulating interactive sound field, and recording medium with recorded program thereof |
US7783054B2 (en) * | 2000-12-22 | 2010-08-24 | Harman Becker Automotive Systems Gmbh | System for auralizing a loudspeaker in a monitoring room for any type of input signals |
US20030007648A1 (en) * | 2001-04-27 | 2003-01-09 | Christopher Currell | Virtual audio system and techniques |
JP4726174B2 (en) * | 2001-09-28 | 2011-07-20 | 大成建設株式会社 | Indoor acoustic design method |
US7096169B2 (en) * | 2002-05-16 | 2006-08-22 | Crutchfield Corporation | Virtual speaker demonstration system and virtual noise simulation |
JP3956804B2 (en) * | 2002-08-23 | 2007-08-08 | ヤマハ株式会社 | Sound environment prediction data creation method, sound environment prediction data creation program, and sound environment prediction system |
WO2004097350A2 (en) * | 2003-04-28 | 2004-11-11 | The Board Of Trustees Of The University Of Illinois | Room volume and room dimension estimation |
JP2005321661A (en) * | 2004-05-10 | 2005-11-17 | Kenwood Corp | Information processing system, information processor, information processing method, and program for improving sound environment |
JP4234174B2 (en) * | 2004-06-30 | 2009-03-04 | パイオニア株式会社 | Reverberation adjustment device, reverberation adjustment method, reverberation adjustment program, recording medium recording the same, and sound field correction system |
TWI245258B (en) * | 2004-08-26 | 2005-12-11 | Via Tech Inc | Method and related apparatus for generating audio reverberation effect |
US8284947B2 (en) * | 2004-12-01 | 2012-10-09 | Qnx Software Systems Limited | Reverberation estimation and suppression system |
JP2007003989A (en) * | 2005-06-27 | 2007-01-11 | Asahi Kasei Homes Kk | Sound environment analysis simulation system |
JP4668118B2 (en) * | 2006-04-28 | 2011-04-13 | ヤマハ株式会社 | Sound field control device |
US7805286B2 (en) * | 2007-11-30 | 2010-09-28 | Bose Corporation | System and method for sound system simulation |
US8150051B2 (en) * | 2007-12-12 | 2012-04-03 | Bose Corporation | System and method for sound system simulation |
US8396226B2 (en) * | 2008-06-30 | 2013-03-12 | Costellation Productions, Inc. | Methods and systems for improved acoustic environment characterization |
US20100119075A1 (en) * | 2008-11-10 | 2010-05-13 | Rensselaer Polytechnic Institute | Spatially enveloping reverberation in sound fixing, processing, and room-acoustic simulations using coded sequences |
2007
- 2007-12-12 US US11/954,539 patent/US8150051B2/en active Active
2008
- 2008-09-25 JP JP2010538002 patent/JP5255067B2/en not_active Expired - Fee Related
- 2008-09-25 WO PCT/US2008/077628 patent/WO2009075926A1/en active Application Filing
- 2008-09-25 EP EP08859469.2A patent/EP2232891B1/en not_active Not-in-force
2012
- 2012-03-30 US US13/434,906 patent/US9716960B2/en active Active
Also Published As
Publication number | Publication date |
---|---|
JP5255067B2 (en) | 2013-08-07 |
EP2232891B1 (en) | 2019-04-10 |
US8150051B2 (en) | 2012-04-03 |
US20090154716A1 (en) | 2009-06-18 |
US20120183151A1 (en) | 2012-07-19 |
JP2011507030A (en) | 2011-03-03 |
EP2232891A1 (en) | 2010-09-29 |
US9716960B2 (en) | 2017-07-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8150051B2 (en) | System and method for sound system simulation | |
US7805286B2 (en) | System and method for sound system simulation | |
US9264834B2 (en) | System for modifying an acoustic space with audio source content | |
Vorländer et al. | Virtual reality for architectural acoustics | |
CN107770717B (en) | Generating binaural audio by using at least one feedback delay network in response to multi-channel audio | |
CN111065041B (en) | Generating binaural audio by using at least one feedback delay network in response to multi-channel audio | |
US11516614B2 (en) | Generating sound zones using variable span filters | |
Jeon et al. | Objective and subjective assessment of sound diffuseness in musical venues via computer simulations and a scale model | |
Morales et al. | Receiver placement for speech enhancement using sound propagation optimization | |
Tommasini et al. | A computational model to implement binaural synthesis in a hard real-time auditory virtual environment | |
Haeussler et al. | Crispness, speech intelligibility, and coloration of reverberant recordings played back in another reverberant room (Room-In-Room) | |
US20230209302A1 (en) | Apparatus and method for generating a diffuse reverberation signal | |
Meacham et al. | Auralization of a hybrid sound field using a wave-stress tensor based model | |
Queiroz et al. | AcMus: an open, integrated platform for room acoustics research | |
Tahvanainen | Analysis and perception of the seat-dip effect in concert halls | |
Kimmich et al. | Optimisation of the Orchestra Pit Acoustics in Opera Houses by Acoustical Simulations using the Finite Element Method | |
Bistafa et al. | A survey of the acoustic quality for speech in auditoriums | |
Koya | Predicting the Overall Spatial Quality of Automotive Audio Systems | |
Iglesias Barros | Acoustic characteristics of two portuguese churches built at the end of the 1960’s | |
Oke | Semantic Design Methodologies for Acoustic Modelling Interfaces | |
Begault | Preference versus Reference: Listeners as participants in sound reproduction | |
Goddard | Development of a Perceptual Model for the Trade-off Between Interaural Time and Level Differences for the Prediction of Auditory Image Position | |
Stewart et al. | Hybrid convolution and filterbank artificial reverberation algorithm using statistical analysis and synthesis | |
Jeon et al. | The design goal of acoustic coefficients in concert halls related to reverberation and loudness | |
Beresford | Perceptual effects of spectral magnitude distortions in a multi-channel automotive audio environment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 08859469; Country of ref document: EP; Kind code of ref document: A1 |
| WWE | Wipo information: entry into national phase | Ref document number: 2010538002; Country of ref document: JP |
| WWE | Wipo information: entry into national phase | Ref document number: 2008859469; Country of ref document: EP |
| NENP | Non-entry into the national phase | Ref country code: DE |