WO2010057997A1 - Converter and method for converting an audio signal - Google Patents
Converter and method for converting an audio signal
- Publication number
- WO2010057997A1 WO2010057997A1 PCT/EP2009/065587 EP2009065587W WO2010057997A1 WO 2010057997 A1 WO2010057997 A1 WO 2010057997A1 EP 2009065587 W EP2009065587 W EP 2009065587W WO 2010057997 A1 WO2010057997 A1 WO 2010057997A1
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- transfer function
- signal
- simplified
- channels
- input
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S3/00—Systems employing more than two channels, e.g. quadraphonic
- H04S3/002—Non-adaptive circuits, e.g. manually adjustable or static, for enhancing the sound image or the spatial distribution
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01H—MEASUREMENT OF MECHANICAL VIBRATIONS OR ULTRASONIC, SONIC OR INFRASONIC WAVES
- G01H17/00—Measuring mechanical vibrations or ultrasonic, sonic or infrasonic waves, not provided for in the preceding groups
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S1/00—Two-channel systems
- H04S1/002—Non-adaptive circuits, e.g. manually adjustable or static, for enhancing the sound image or the spatial distribution
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S2420/00—Techniques used stereophonic systems covered by H04S but not provided for in its groups
- H04S2420/01—Enhancing the perception of the sound image or of the spatial distribution using head related transfer functions [HRTF's] or equivalents thereof, e.g. interaural time difference [ITD] or interaural level difference [ILD]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S5/00—Pseudo-stereo systems, e.g. in which additional channel signals are derived from monophonic signals by means of phase shifting, time delay or reverberation
- H04S5/005—Pseudo-stereo systems, e.g. in which additional channel signals are derived from monophonic signals by means of phase shifting, time delay or reverberation of the pseudo five- or more-channel type, e.g. virtual surround
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S7/00—Indicating arrangements; Control arrangements, e.g. balance control
- H04S7/30—Control circuits for electronic adaptation of the sound field
- H04S7/305—Electronic adaptation of stereophonic audio signals to reverberation of the listening space
Definitions
- the invention relates to a method for simplifying a model of an acoustic environment, wherein said model comprises a set of transfer functions, each transfer function corresponding to a set of sound propagation paths between a sound emitting position and a sound receiving position in said acoustic environment.
- the method according to the invention comprises the steps of - calculating a simulated reverberation of a first stimulus emitted at said sound emitting position as received at said sound receiving position by applying said transfer function to said first stimulus;
- the local maxima in the intensity envelope represent significant reverberation components, corresponding to predominant propagation paths in the modelled acoustic environment. Selecting a subset of these local maxima allows them to be matched by a simplified transfer function, which can then be used to simulate the acoustic environment with reduced processing and memory requirements without a perceived reduction of the sound image quality.
- the selected number of local maxima does not exceed a predetermined maximum.
- the maximum size and/or complexity of the transfer function is thus limited in advance, defining the processing and memory requirements that will be necessary to handle the simplified acoustic environment model.
- said selected subset of local maxima is selected from among those lying above a time-intensity attenuation function.
- the time-intensity attenuation function may be an exponential attenuation function.
- the resulting simulated reverberation cannot be distinguished by the human ear from a version where all reverberation components are included, yet the exclusion of reverberation components below said exponential time-intensity attenuation function substantially reduces the processing requirements and complexity.
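A minimal sketch of this selection step, offered only as an illustration and not as the patent's implementation: the exponential attenuation threshold is written as a straight line in dB (a constant decay rate), and the peak list, envelope and decay parameters are hypothetical inputs.

```python
import numpy as np

def select_dominant_peaks(peak_indices, envelope_db, fs,
                          decay_db_per_s=60.0, start_db=0.0, max_peaks=8):
    """Keep only local maxima above an exponential time-intensity attenuation
    function (a straight line in dB), limited to a predetermined maximum count."""
    times = np.asarray(peak_indices) / fs
    threshold_db = start_db - decay_db_per_s * times    # exponential decay is linear in dB
    candidates = [(idx, envelope_db[idx] - thr)          # (peak index, margin above threshold)
                  for idx, thr in zip(peak_indices, threshold_db)
                  if envelope_db[idx] > thr]
    # If more peaks survive than allowed, keep the ones standing out the most.
    candidates.sort(key=lambda c: c[1], reverse=True)
    return sorted(idx for idx, _ in candidates[:max_peaks])
```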
- said simplified transfer function may be applied by convolution with said first stimulus. This provides a particularly faithful simulated reverberation.
- each simplified transfer function is expressed as a combination of a signal delay and a signal attenuation for each selected local maximum.
- the simulation can be carried out by applying these transfer functions to the audio signals in comparatively simple time domain operations, rather than by convolution.
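In this parametric form the simplified transfer function is just a short list of (delay, attenuation) pairs, so applying it amounts to a few shifted, scaled additions. A hedged sketch, assuming delays in seconds and linear gains (the actual representation is not specified in the text):

```python
import numpy as np

def apply_simplified_tf(x, reflections, fs):
    """Apply a simplified transfer function given as (delay_seconds, linear_gain) pairs."""
    x = np.asarray(x, dtype=float)
    max_delay = max(int(round(d * fs)) for d, _ in reflections)
    y = np.zeros(len(x) + max_delay)
    for delay_s, gain in reflections:
        n = int(round(delay_s * fs))
        y[n:n + len(x)] += gain * x       # one delayed, attenuated copy per selected reflection
    return y
```

For example, `apply_simplified_tf(x, [(0.004, 0.8), (0.011, 0.5), (0.023, 0.3)], 48000)` adds three attenuated echoes to the signal `x`.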
- the present invention also relates to a method for converting a first audio stream comprising N input channels into a second audio stream comprising M output channels, comprising the steps of, for each input and output channel: selecting, in a model of an M-channel acoustic environment simplified using the abovementioned method, a simplified transfer function associated with said input channel and output channel; processing at least part of an input signal from said input channel by applying said selected simplified transfer function so as to generate at least part of an output signal for said output channel.
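A sketch of that N-to-M conversion, reusing `apply_simplified_tf` from the previous sketch; `tf_matrix[m][n]` is a hypothetical container holding the simplified transfer function for the pair (input channel n, output channel m):

```python
import numpy as np

def convert_n_to_m(inputs, tf_matrix, fs):
    """inputs: N channel signals; tf_matrix[m][n]: (delay, gain) pairs for pair (n, m)."""
    outputs = []
    for row in tf_matrix:                                  # one row per output channel m
        parts = [apply_simplified_tf(x, row[n], fs) for n, x in enumerate(inputs)]
        out = np.zeros(max(len(p) for p in parts))
        for p in parts:                                    # sum the contributions of all N inputs
            out[:len(p)] += p
        outputs.append(out)
    return outputs
```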
- M may also be lower than or equal to N.
- the selected subsets of local maxima are different for at least two of the M audio channels.
- the method allows a mono signal to be artificially changed into a stereo signal using two transfer functions, by simulating the signal paths from a single source to two receiving points in a room.
- the two transfer functions are subsequently modified in order to select a different set of local maxima.
- the selected sets of reflections are different for all M channels.
- the invention would normally be used to change a multi-channel signal into another multi-channel signal having even more channels, for instance converting a 2D audio signal into a 3D audio signal by calculating additional channels representing an additional layer of elevated speakers. In the extreme, the invention can be used to convert a mono signal into a 3D signal by using a multitude of simplified transfer functions, one for each source-speaker location combination, and by ensuring decorrelation of the M channels, i.e. that the subsets of selected local maxima are different for each of the M channels.
- the output signal comprises an early part and a late part.
- Such a late part normally tends to have few pronounced peaks associated with particular reflections. Consequently, the simplified transfer function may be used to generate only the early part, while the late part may be generated using other approaches, for instance a conventional approach as known from the abovementioned prior art, or an algorithmic reverberation method.
- the early part tends to have several pronounced peaks representing dominant reflections, where the method of the present invention allows removal of the non-dominant reverberation components, thus reducing processing requirements and complexity.
- How the output signal is divided into said early and late parts may be determined according to sound type. Different kinds of sounds have different divisions between the early part having several dominant local maxima and the late part having few or no dominant local maxima.
- the division between early and late part can be adjusted to optimally apply the appropriate method to the appropriate part.
- the late part of the reverberation can be advantageously removed, for instance for news reading.
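A sketch of this split processing, again reusing `apply_simplified_tf` from the earlier sketch; the "algorithmic" late part here is simply exponentially decaying noise, a stand-in for whichever conventional reverberator an implementation would actually use, and the dividing point and decay time are illustrative parameters:

```python
import numpy as np

def render_with_split(x, early_reflections, fs, split_s=0.08,
                      late_rt60_s=0.6, late_gain=0.05, rng=None):
    """Early part from the simplified transfer function, late part algorithmic."""
    early = apply_simplified_tf(x, early_reflections, fs)
    # Late part: exponentially decaying noise tail starting at the dividing point.
    rng = np.random.default_rng(0) if rng is None else rng
    n_late = int(late_rt60_s * fs)
    t = np.arange(n_late) / fs
    tail = late_gain * rng.standard_normal(n_late) * 10.0 ** (-3.0 * t / late_rt60_s)
    late_ir = np.concatenate([np.zeros(int(split_s * fs)), tail])
    late = np.convolve(np.asarray(x, dtype=float), late_ir)
    out = np.zeros(max(len(early), len(late)))
    out[:len(early)] += early                     # combine early and late contributions
    out[:len(late)] += late
    return out
```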
- a converter as claimed is advantageously used in an audio device.
- the audio device can convert, for instance, a 2-dimensional sound into a 3-dimensional sound having more channels than the 2-dimensional sound. This allows the converter to create a 3-dimensional impression from input channels that do not contain information about the third dimension.
- Such an audio device comprising a converter as claimed may advantageously be in a motor vehicle.
- the acoustic environment in a motor vehicle is often less than optimal and different from the acoustic environment in which many audio recordings are made.
- the reverberation can be adjusted using the method and converter of the present invention, allowing optimal reproduction of the sound in the motor vehicle's acoustical environment.
- Figure 1 shows a room with a measurement set up for characterizing the room.
- Figure 2 shows an intensity envelope of a measured impulse response showing the reverberation at the measurement position in the room.
- Figure 3 shows an intensity envelope of a simulated impulse response obtained using a model of the room.
- Figure 4 shows the intensity envelopes of both the measured impulse response and the simulated impulse response.
- Figure 5 shows the intensity envelope of the simulated impulse response after setting some components to zero, leaving only a predetermined number of highest reflections in the reverberation.
- Figure 6 shows a converter for converting N audio channels to M audio channels using a room model.
- Figure 7 shows an audio device comprising the converter.
- Figure 1 shows a room with a measurement setup for characterizing the room.
- an excitation source 2, for instance a loudspeaker, is positioned in the room.
- a measuring device 3 is placed to capture the room's response to the stimulus provided by the excitation source 2.
- the stimulus can be a Time Stretched Pulse (TSP).
- This is basically an exponential sine sweep, which provides several advantages over the older MLS (Maximum Length Sequence) method.
- One technique for obtaining higher S/N ratios involves recording multiple TSPs and then averaging; the ambient noise and the self-noise of the equipment are reduced by 3 dB for every doubling of the number of recordings.
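As a rough check of the 3 dB figure (assuming the sweep is identical in every take while the noise is uncorrelated between takes), averaging K recordings leaves the sweep unchanged but divides the noise power by K:

```latex
\Delta \mathrm{SNR} = 10\log_{10} K
\qquad\Rightarrow\qquad
K \to 2K \ \text{adds}\ 10\log_{10} 2 \approx 3\ \mathrm{dB}.
```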
- a remaining problem of this technique is that speaker-induced distortion will not disappear.
- sweep length will be approximately 10 times as long as the estimated reverberation time of the measured room, typically resulting in a length of 15-80 s. This presumes measuring 10 octaves from start to stop frequency. The sweeps utilized should also be faded in and out to avoid artefacts.
- Another factor with a direct influence on the signal-to-noise ratio is the loudspeaker power compared to the background noise level. It is recommended to calibrate to 85 dBA SPL at a 2 m distance by playing a -20 dBFS band-limited (500-2000 Hz) signal. The sweep then goes out 14 dB louder, at -6 dBFS.
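A minimal sketch of such a measurement sweep, assuming the exponential (logarithmic) sine sweep form popularised by Farina; the start/stop frequencies, sample rate and fade length below are illustrative choices, not values prescribed by the text:

```python
import numpy as np

def exponential_sine_sweep(f_start=20.0, f_stop=20000.0, duration_s=30.0,
                           fs=48000, fade_s=0.5):
    """Logarithmic (exponential) sine sweep with raised-cosine fade in/out."""
    t = np.arange(int(duration_s * fs)) / fs
    rate = np.log(f_stop / f_start)
    # Farina-style instantaneous phase of an exponential sweep.
    phase = 2 * np.pi * f_start * duration_s / rate * (np.exp(t / duration_s * rate) - 1)
    sweep = np.sin(phase)
    n_fade = int(fade_s * fs)
    if n_fade > 0:                                 # fade in/out to avoid switching artefacts
        ramp = 0.5 * (1 - np.cos(np.pi * np.arange(n_fade) / n_fade))
        sweep[:n_fade] *= ramp
        sweep[-n_fade:] *= ramp[::-1]
    return sweep
```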
- Such an impulse is thus provided via the excitation source 2 and sound waves travel along various paths, in figure 1 a first path 4 and a second path 5. Since the first path 4 and the second path 5 have different lengths, the sound waves arrive at different times at the measuring device 3, resulting in a reverberation being captured.
- the reverberation is different for different measurement and excitation positions and based on measured reverberations a model of a room can be established.
- This method is widely known and can be found in "An Optimised Method for Capturing Multidimensional 'Acoustic Fingerprints'" by Ralph Kessler, Audio Engineering Society Convention Paper, presented at the 118th Convention, 2005 May 28-31, Barcelona, Spain, and in publications by Prof. Angelo Farina of Italy.
- the model of the room can be constructed as a set of such transfer functions corresponding to a set of sound emitting positions and sound receiving positions.
- the N input audio channels are convolved with the set of transfer functions, resulting in M audio channels having a sound image that resembles the modelled room.
- Figure 2 shows a measured impulse response showing the reverberation at the measurement position in the room.
- the intensity envelope 20 of the measured impulse response as a function of time is shown, on a logarithmic-linear graph, in figure 2 and comprises several local maxima 21, 22, 23, 24, 25 corresponding to multiple propagation paths in the room. Depending on the characteristics of the room the reflections cause different amounts of delay and attenuation. The peaks 21, 22, 23, 24, 25 in the envelope 20 consequently have different positions and different amplitudes.
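One way to obtain such an intensity envelope and its local maxima from a recorded impulse response, as a sketch only: the Hilbert-magnitude envelope, the smoothing window and the prominence threshold are assumptions, since the text does not prescribe a particular envelope estimator.

```python
import numpy as np
from scipy.signal import hilbert, find_peaks

def intensity_envelope_db(impulse_response, fs, smooth_ms=2.0):
    """Smoothed magnitude envelope of an impulse response, normalised, in dB."""
    env = np.abs(hilbert(impulse_response))
    n = max(1, int(smooth_ms * 1e-3 * fs))
    env = np.convolve(env, np.ones(n) / n, mode="same")   # short moving average
    return 20 * np.log10(env / env.max() + 1e-12)

def local_maxima(envelope_db, min_prominence_db=6.0):
    """Indices of pronounced peaks, i.e. candidate dominant reflections."""
    peaks, _ = find_peaks(envelope_db, prominence=min_prominence_db)
    return peaks
```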
- Figure 3 shows a simulated impulse response obtained using the abovementioned model of the room.
- the intensity envelope 30 of the simulated impulse response is shown in figure 3 and comprises several local maxima 31, 32, 33, 34, 35 that correspond to multiple propagation paths in the modelled room.
- different amounts of delay and attenuation are incorporated into the transfer function.
- the local maxima 31, 32, 33, 34, 35 in the envelope 30 are obtained at the appropriate positions in the reverberation and with different amplitudes, matching the measured impulse response as closely as possible.
- Figure 4 shows both the measured impulse response and the modelled impulse response.
- the intensity envelope 20 of the measured impulse response and the envelope 30 of the calculated impulse response are overlapped for comparison and as can be seen, in this example a good match between the intensity envelopes 20, 30 has been achieved.
- the first local maximum or peak 31 in the calculated envelope 30 corresponds well to the first peak 21 of the measured envelope 20, showing that the transfer function matches the modelled room quite well.
- Figure 5 shows the modelled impulse response after setting some components to zero, leaving only a predetermined number of dominant propagation paths in the reverberation.
- the transfer function is simplified. This simplification is verified by calculating the impulse response using the simplified transfer function and checking whether the resulting impulse response still matches the measured impulse response satisfactorily.
- the criterion for checking the simplified transfer function is that a selected subset of the set of local maxima of the intensity envelope of the measured impulse response is still maintained in the intensity envelope of the simulated impulse response.
- This means that some local maxima can be removed by modifying, i.e. simplifying, the transfer function.
- This is shown in figure 5 in that the first peak 31, second peak 32 and fifth peak 35 are still present in the intensity envelope 30 of the simulated impulse response, while the third peak 33 and the fourth peak 34 are no longer present.
- Figure 5 shows the intensity envelope 20 of the measured impulse response for ease of comparison.
- the number of local maxima in the selected subset 31, 32 and 35 is not higher than a predetermined number, for instance, in the illustrated example, not higher than three. This limits in advance the complexity of the simplified transfer function.
- this selection is carried out by fitting, to the intensity envelope 20, a time-intensity attenuation function 40, as illustrated in Fig. 5, that passes below no more than the predetermined maximum number of local maxima to be selected, and then selecting the local maxima reaching above it, which will be those most clearly perceived by the human ear.
- the simplified transfer function may be expressed as a signal delay and a signal attenuation for each selected local maximum.
- the calculation of the impulse response will thus be possible in a comparatively simple time domain operation, rather than by convolution.
- Figure 6 shows a converter for converting N audio channels to M audio channels using a room model.
- the converter 60 has input channels 64 connected to a processor 61 that can calculate multiple reverberations for various combinations of input channel and output channel.
- the output signals of the processor 61 are provided to output channels 65.
- the transfer functions or parameters for the transfer functions to be used by the processor 61 are provided via a model input block which is arranged to receive information about the model or transfer functions from the parameter input 66.
- said processor calculates, for each input and output channel combination, a convolution of the input signal with the corresponding simplified transfer function.
- when the simplified transfer function is expressed as a combination of a signal delay and a signal attenuation for each selected local maximum, these are applied to the input signal in time domain operations.
- Figure 7 shows a converter for separately processing the early and late part of the reverberation.
- the converter 60 has input channels 64 connected to a divider 70 for dividing the input signals.
- the divided signals are provided to the processor 61, which can calculate multiple reverberations for various combinations of input channel and output channel so as to generate the early part of the output signals.
- the late part is either not generated at all or generated by a separate processor 72, also connected to the divider 70, where the late part is for instance generated in a conventional manner, such as by algorithmic reverberation.
- the output channels are provided by the processors 61 and 72 to the combiner 71, where the resulting early and late parts for each channel are combined into a single output signal to be provided to the outputs 65.
- the transfer functions or parameters for the transfer functions to be used by the processor 61 are provided via a model input block which is arranged to receive information about the model or transfer functions from the parameter input 66.
- Figure 8 shows an audio device comprising the converter.
- the audio device 80 comprises the converter 60, which may be either the converter 60 of figure 6 or that of figure 7.
- the audio device receives N input channels, for instance from an optical disc 81 or a transmission channel (not shown).
- the N input channels are provided to the converter 60 to be converted to M channels.
- the converter needs information about the transfer functions to be used. This information can be embedded in the converter 60 or in the audio device 80, or can be received from an external source. Shown in figure 8 is the situation where the information is retrieved from the optical disc. In such a case the optical disc can comprise both the input channels and the room model information.
- Figure 9 shows the measured impulse response comprising an early part and a late part.
- the early part and the late part are contiguous, but in alternative embodiments they may also be overlapping or spaced apart.
- the processing of the early part and the late part of the reverberation can be divided and treated separately.
- the intensity envelope of the reverberation 20 of figure 2 is shown again in figure 9, but now with the vertical dotted line indicating the dividing point between the early part 21, 22, 23, 24, 25 and the late part 91.
- the dividing point is not fixed in time but is determined based on the type of sound (for instance voice, classical, jazz, pop, etc.) or the type of acoustical environment modelled. In the case of figure 9 the dividing point is chosen to be between the early section, having peaks resulting from distinct dominant reflections of relatively high amplitude, and the late part 91, having a relatively uniform decaying envelope shape without dominant peaks. As is clear from the description, the invention can advantageously be used on the early part 21, 22, 23, 24, 25 with its peaks.
- the late part 91 can be processed using methods and means known from the prior art or can be ignored altogether.
- Figure 10 shows the simulated early part.
- the simulated early part 100 of the impulse response comprises only the dominant peaks 31, 32, 33, 34, 35 as selected using the modelling method of the present invention, similarly to figure 5 but with the late part removed before applying the method of the present invention.
- Figure 11 shows the simulated late part.
- the modelled late part 110 of the impulse response lacks the dominant peaks 31, 32, 33, 34, 35 of the early part, as the early part was removed before the processing, but includes the late part 111.
- Figure 12 shows two channels 120, 121 where the selection of peaks for use in the simplified transfer function of the model were chosen to be different so as to avoid a comb filter effect when played back in a room.
- two identical impulse responses 120, 121 are shown, while in reality the impulse responses will slightly differ for each channel.
- in the first modelled impulse response 120 the second peak 32 has been omitted, while in the second modelled impulse response 121 the fourth peak 34 has been omitted.
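A sketch of that per-channel decorrelation: each output channel starts from the same list of selected reflections and omits a different one, the only requirement suggested by the text being that no two channels keep exactly the same subset (which peak to drop is an arbitrary illustrative choice):

```python
def decorrelated_subsets(reflections, num_channels):
    """Give each channel a slightly different subset of the selected reflections."""
    if len(reflections) < 2:
        return [list(reflections) for _ in range(num_channels)]
    subsets = []
    for ch in range(num_channels):
        drop = 1 + (ch % (len(reflections) - 1))   # never drop the first (direct) peak
        subsets.append([r for i, r in enumerate(reflections) if i != drop])
    return subsets
```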
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Acoustics & Sound (AREA)
- Signal Processing (AREA)
- Stereophonic System (AREA)
- Reverberation, Karaoke And Other Acoustics (AREA)
- Circuit For Audible Band Transducer (AREA)
Priority Applications (7)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DK09801675.1T DK2368375T3 (da) | 2008-11-21 | 2009-11-20 | Converter and method for converting an audio signal |
CA2744429A CA2744429C (en) | 2008-11-21 | 2009-11-20 | Converter and method for converting an audio signal |
CN200980153440.8A CN102334348B (zh) | 2008-11-21 | 2009-11-20 | Converter and method for converting an audio signal |
US13/130,737 US9100767B2 (en) | 2008-11-21 | 2009-11-20 | Converter and method for converting an audio signal |
EP09801675.1A EP2368375B1 (en) | 2008-11-21 | 2009-11-20 | Converter and method for converting an audio signal |
JP2011536881A JP5611970B2 (ja) | 2008-11-21 | 2009-11-20 | Converter and method for converting an audio signal |
HK12107294.4A HK1166908A1 (en) | 2008-11-21 | 2012-07-25 | Converter and method for converting an audio signal |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP08169729.4 | 2008-11-21 | ||
EP08169729 | 2008-11-21 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2010057997A1 true WO2010057997A1 (en) | 2010-05-27 |
Family
ID=42060701
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/EP2009/065587 WO2010057997A1 (en) | 2008-11-21 | 2009-11-20 | Converter and method for converting an audio signal |
Country Status (10)
Country | Link |
---|---|
US (1) | US9100767B2 (ja) |
EP (1) | EP2368375B1 (ja) |
JP (1) | JP5611970B2 (ja) |
KR (1) | KR101646540B1 (ja) |
CN (1) | CN102334348B (ja) |
CA (1) | CA2744429C (ja) |
DK (1) | DK2368375T3 (ja) |
HK (1) | HK1166908A1 (ja) |
TW (1) | TWI524785B (ja) |
WO (1) | WO2010057997A1 (ja) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2560078A (en) * | 2016-12-27 | 2018-08-29 | Harman Int Ind | Control for vehicle sound output |
WO2023287782A1 (en) * | 2021-07-15 | 2023-01-19 | Dolby Laboratories Licensing Corporation | Data augmentation for speech enhancement |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2014085439A (ja) * | 2012-10-22 | 2014-05-12 | Nippon Hoso Kyokai <Nhk> | Impulse response measurement system and impulse response measurement method |
DE102012224454A1 (de) * | 2012-12-27 | 2014-07-03 | Sennheiser Electronic Gmbh & Co. Kg | Generation of 3D audio signals |
WO2015099429A1 (ko) * | 2013-12-23 | 2015-07-02 | Wilus Institute of Standards and Technology Inc. | Audio signal processing method, parameterization device for same, and audio signal processing device |
JP6371167B2 (ja) * | 2014-09-03 | 2018-08-08 | Rion Co Ltd | Reverberation suppression device |
WO2017192972A1 (en) | 2016-05-06 | 2017-11-09 | Dts, Inc. | Immersive audio reproduction systems |
WO2018079254A1 (en) * | 2016-10-28 | 2018-05-03 | Panasonic Intellectual Property Corporation Of America | Binaural rendering apparatus and method for playing back of multiple audio sources |
US10979844B2 (en) | 2017-03-08 | 2021-04-13 | Dts, Inc. | Distributed audio virtualization systems |
US9820073B1 (en) | 2017-05-10 | 2017-11-14 | Tls Corp. | Extracting a common signal from multiple audio signals |
US20220345843A1 (en) * | 2021-04-27 | 2022-10-27 | Apple Inc. | Audio level metering for listener position and object position |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050114121A1 (en) * | 2003-11-26 | 2005-05-26 | Inria Institut National De Recherche En Informatique Et En Automatique | Perfected device and method for the spatialization of sound |
Family Cites Families (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS6253100A (ja) * | 1985-09-02 | 1987-03-07 | Nippon Gakki Seizo Kk | Acoustic characteristic control device |
JPH0748920B2 (ja) * | 1986-05-31 | 1995-05-24 | Fujitsu General Ltd | Sound reproduction device |
JPS63173500A (ja) * | 1987-01-13 | 1988-07-18 | Sony Corp | Car audio device |
DE19545623C1 (de) * | 1995-12-07 | 1997-07-17 | Akg Akustische Kino Geraete | Method and device for filtering an audio signal |
JPH1127800A (ja) * | 1997-07-03 | 1999-01-29 | Fujitsu Ltd | Stereophonic sound processing system |
JP2000059894A (ja) * | 1998-08-12 | 2000-02-25 | Victor Co Of Japan Ltd | Sound image localization device, FIR filter coefficient processing method, and FIR filter computation method |
JP3435141B2 (ja) * | 2001-01-09 | 2003-08-11 | Matsushita Electric Industrial Co Ltd | Sound image localization device, and conference device, mobile phone, audio reproduction device, audio recording device, information terminal device, game machine, and communication and broadcasting system using the sound image localization device |
FR2852779B1 (fr) * | 2003-03-20 | 2008-08-01 | Method for processing an electrical sound signal | |
FR2899424A1 (fr) | 2006-03-28 | 2007-10-05 | France Telecom | Binaural synthesis method taking into account a room effect |
CN101529930B (zh) * | 2006-10-19 | 2011-11-30 | Matsushita Electric Industrial Co Ltd | Sound image localization device, sound image localization system, sound image localization method, program, and integrated circuit |
JP4941106B2 (ja) * | 2007-05-30 | 2012-05-30 | Casio Computer Co Ltd | Resonance sound addition device and resonance sound addition program |
CN101178897B (zh) * | 2007-12-05 | 2011-04-20 | Zhejiang University | Speaker recognition method using the fundamental frequency envelope to eliminate emotional speech |
-
2009
- 2009-11-20 TW TW098139475A patent/TWI524785B/zh not_active IP Right Cessation
- 2009-11-20 US US13/130,737 patent/US9100767B2/en active Active
- 2009-11-20 WO PCT/EP2009/065587 patent/WO2010057997A1/en active Application Filing
- 2009-11-20 CN CN200980153440.8A patent/CN102334348B/zh active Active
- 2009-11-20 KR KR1020117014254A patent/KR101646540B1/ko active IP Right Grant
- 2009-11-20 CA CA2744429A patent/CA2744429C/en active Active
- 2009-11-20 EP EP09801675.1A patent/EP2368375B1/en active Active
- 2009-11-20 DK DK09801675.1T patent/DK2368375T3/da active
- 2009-11-20 JP JP2011536881A patent/JP5611970B2/ja active Active
-
2012
- 2012-07-25 HK HK12107294.4A patent/HK1166908A1/xx not_active IP Right Cessation
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050114121A1 (en) * | 2003-11-26 | 2005-05-26 | Inria Institut National De Recherche En Informatique Et En Automatique | Perfected device and method for the spatialization of sound |
Non-Patent Citations (4)
Title |
---|
"Postmasking ED - Hugo Fastl; Eberhard Zwicker", 1 January 2007, PSYCHOACOUSTICS: FACTS AND MODELS (THIRD EDITION), SPRINGER BERLIN, PAGE(S) 83, ISBN: 9783540231592, XP007912521 * |
AHNERT W ET AL: "EARS AURALIZATION SOFTWARE", JOURNAL OF THE AUDIO ENGINEERING SOCIETY, AUDIO ENGINEERING SOCIETY, NEW YORK, NY, US, vol. 41, no. 11, 1 November 1993 (1993-11-01), pages 894 - 904, XP000514224, ISSN: 1549-4950 * |
BEERENDS J G ET AL: "A PERCEPTUAL AUDIO QUALITY MEASURE BASED ON A PSYCHOACOUSTIC SOUND REPRESENTATION", JOURNAL OF THE AUDIO ENGINEERING SOCIETY, AUDIO ENGINEERING SOCIETY, NEW YORK, NY, US, vol. 40, no. 12, 1 December 1992 (1992-12-01), pages 963 - 978, XP000514954, ISSN: 1549-4950 * |
TSINGOS N ET AL: "Perceptual audio rendering of complex virtual environments", ACM TRANSACTIONS ON GRAPHICS, ACM, US LNKD- DOI:10.1145/1015706.1015710, vol. 23, no. 3, 1 August 2004 (2004-08-01), pages 249 - 258, XP002453152, ISSN: 0730-0301 * |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2560078A (en) * | 2016-12-27 | 2018-08-29 | Harman Int Ind | Control for vehicle sound output |
GB2560078B (en) * | 2016-12-27 | 2021-07-21 | Harman Int Ind | Control for vehicle sound output |
WO2023287782A1 (en) * | 2021-07-15 | 2023-01-19 | Dolby Laboratories Licensing Corporation | Data augmentation for speech enhancement |
Also Published As
Publication number | Publication date |
---|---|
KR20120006480A (ko) | 2012-01-18 |
CN102334348A (zh) | 2012-01-25 |
KR101646540B1 (ko) | 2016-08-08 |
HK1166908A1 (en) | 2012-11-09 |
CA2744429A1 (en) | 2010-05-27 |
US9100767B2 (en) | 2015-08-04 |
CN102334348B (zh) | 2014-12-31 |
TWI524785B (zh) | 2016-03-01 |
US20120070011A1 (en) | 2012-03-22 |
EP2368375B1 (en) | 2019-05-29 |
TW201026105A (en) | 2010-07-01 |
JP5611970B2 (ja) | 2014-10-22 |
JP2012509632A (ja) | 2012-04-19 |
EP2368375A1 (en) | 2011-09-28 |
CA2744429C (en) | 2018-07-31 |
DK2368375T3 (da) | 2019-09-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9100767B2 (en) | Converter and method for converting an audio signal | |
JP5637661B2 (ja) | Method for recording and reproducing a sound source with time-varying directional characteristics | |
CN104641659B (zh) | Loudspeaker device and audio signal processing method | |
JP5533248B2 (ja) | Audio signal processing device and audio signal processing method | |
JP5857071B2 (ja) | Audio system and operating method thereof | |
KR100608025B1 (ko) | Method and apparatus for generating stereo sound for two-channel headphones | |
JP4059478B2 (ja) | Sound field control method and sound field control system | |
CN102972047B (zh) | Method and apparatus for reproducing stereophonic sound | |
US11611828B2 (en) | Systems and methods for improving audio virtualization | |
Farina et al. | Ambiophonic principles for the recording and reproduction of surround sound for music | |
JP2012145962A (ja) | System for extracting and modifying the reverberant content of an audio input signal | |
WO2019229199A1 (en) | Adaptive remixing of audio content | |
US20090052681A1 (en) | System and a method of processing audio data, a program element, and a computer-readable medium | |
JP2012509632A5 (ja) | Converter and method for converting an audio signal | |
CN113170271A (zh) | Method and apparatus for processing a stereo signal | |
US20200059750A1 (en) | Sound spatialization method | |
WO2014203496A1 (ja) | Audio signal processing device and audio signal processing method | |
JP5163685B2 (ja) | Head-related transfer function measurement method, head-related transfer function convolution method, and head-related transfer function convolution device | |
JP2004509544A (ja) | Audio signal processing method for loudspeakers placed close to the ears | |
JP6774912B2 (ja) | Sound image generation device | |
Rosen et al. | Automatic speaker directivity control for soundfield reconstruction | |
AU2015255287B2 (en) | Apparatus and method for generating an output signal employing a decomposer | |
KR20200128671A (ko) | Audio signal processor, system and methods for distributing an ambient signal to a plurality of ambient signal channels |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 200980153440.8 Country of ref document: CN |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 09801675 Country of ref document: EP Kind code of ref document: A1 |
|
DPE1 | Request for preliminary examination filed after expiration of 19th month from priority date (pct application filed from 20040101) | ||
WWE | Wipo information: entry into national phase |
Ref document number: 2744429 Country of ref document: CA Ref document number: 2011536881 Country of ref document: JP |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 20117014254 Country of ref document: KR Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2009801675 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 13130737 Country of ref document: US |