WO2015029205A1 - Sound processing device, sound processing method, and sound processing program - Google Patents
Sound processing device, sound processing method, and sound processing program
- Publication number
- WO2015029205A1 (application PCT/JP2013/073255)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- sound
- sound image
- signal
- equalizer
- transfer function
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S3/00—Systems employing more than two channels, e.g. quadraphonic
- H04S3/02—Systems employing more than two channels, e.g. quadraphonic of the matrix type, i.e. in which input signals are combined algebraically, e.g. after having been phase shifted with respect to each other
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S7/00—Indicating arrangements; Control arrangements, e.g. balance control
- H04S7/30—Control circuits for electronic adaptation of the sound field
- H04S7/307—Frequency adjustment, e.g. tone control
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S7/00—Indicating arrangements; Control arrangements, e.g. balance control
- H04S7/30—Control circuits for electronic adaptation of the sound field
- H04S7/302—Electronic adaptation of stereophonic sound system to listener position or orientation
- H04S7/303—Tracking of listener position or orientation
- H04S7/304—For headphones
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S7/00—Indicating arrangements; Control arrangements, e.g. balance control
- H04S7/30—Control circuits for electronic adaptation of the sound field
- H04S7/305—Electronic adaptation of stereophonic audio signals to reverberation of the listening space
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S2400/00—Details of stereophonic systems covered by H04S but not provided for in its groups
- H04S2400/11—Positioning of individual sound objects, e.g. moving airplane, within a sound field
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S2400/00—Details of stereophonic systems covered by H04S but not provided for in its groups
- H04S2400/13—Aspects of volume control, not necessarily automatic, in stereophonic sound systems
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S2420/00—Techniques used in stereophonic systems covered by H04S but not provided for in its groups
- H04S2420/01—Enhancing the perception of the sound image or of the spatial distribution using head related transfer functions [HRTF's] or equivalents thereof, e.g. interaural time difference [ITD] or interaural level difference [ILD]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S2420/00—Techniques used in stereophonic systems covered by H04S but not provided for in its groups
- H04S2420/07—Synergistic effects of band splitting and sub-band processing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S5/00—Pseudo-stereo systems, e.g. in which additional channel signals are derived from monophonic signals by means of phase shifting, time delay or reverberation
- H04S5/005—Pseudo-stereo systems, e.g. in which additional channel signals are derived from monophonic signals by means of phase shifting, time delay or reverberation of the pseudo five- or more-channel type, e.g. virtual surround
Definitions
- the inventors identified the cause of the failure to reproduce timbre when uniform equalizer processing is applied to an acoustic signal, and found that the sound wave transmission characteristics differ depending on the direction of sound image localization.
- the frequency change of a sound wave localized in one direction may happen to be cancelled by such uniform processing, but the same correction does not match the frequency change of a sound wave localized in another direction; it was found that the reproduced timbre in that case differs from the timbre in the assumed environment, such as the original sound field.
- the sound processing device of the present embodiment is a sound processing device that corrects differences in timbre heard in different environments, and includes an equalizer that adjusts the frequency characteristic so that the frequency characteristic of a sound wave heard in the other environment follows the frequency characteristic of the sound wave when the same sound is heard in one environment. A plurality of the equalizers are provided corresponding to a plurality of sound image signals that are localized in different directions, and each performs a frequency characteristic changing process specific to its corresponding sound image signal.
- the acoustic processing method of the present embodiment is an acoustic processing method for correcting differences in timbre heard in different environments, and includes an adjustment step of adjusting the frequency characteristic so that the frequency characteristic of a sound wave heard in the other environment follows the frequency characteristic of the sound wave when the same sound is heard in one environment. The adjustment step is performed specifically for each of a plurality of sound image signals that are localized in different directions, and performs a frequency characteristic changing process specific to the corresponding sound image signal.
- the equalizers EQ1, EQ2, and EQ3 are, for example, FIR filters or IIR filters.
- the three types of equalizers EQi are an equalizer EQ2 corresponding to a sound image signal localized at the center, an equalizer EQ1 corresponding to a sound image signal localized in front of the left speaker SaL, and an equalizer EQ3 corresponding to a sound image signal localized in front of the right speaker SaR.
- the sound image localization is determined by the sound pressure difference and time difference of sound waves that reach the sound receiving point from the left and right speakers SaL and SaR.
- the sound image signal that is localized in front of the left speaker SaL is output only from the left speaker SaL, with the sound pressure of the right speaker SaR set to zero, so that the sound image is localized there.
- likewise, a sound image signal that is localized in front of the right speaker SaR is output only from the right speaker SaR, with the sound pressure of the left speaker SaL set to zero, so that the sound image is localized there.
- the actual listening environment is the listening environment defined by the positional relationship between the speakers that actually reproduce the acoustic signal and the sound receiving point.
- the assumed listening environment is an environment desired by the user, for example the original sound field, a reference environment defined by ITU-R, an environment recommended by THX, or an environment assumed by a producer such as a mixing engineer; it is defined by the positional relationship between the speakers and the sound receiving point in that environment.
- in the assumed listening environment, the sound wave signal heard by the user's left ear at the sound receiving point is the sound wave signal DeL of the following equation (1), and the sound wave signal heard by the user's right ear at the sound receiving point is the sound wave signal DeR of the following equation (2).
- the output sound of the left speaker SeL reaches the right ear and the output sound of the right speaker SeR also reaches the left ear.
- in the actual listening environment, the sound wave signal heard by the user's left ear at the sound receiving point is the sound wave signal DaL of the following equation (3), and the sound wave signal heard by the user's right ear at the sound receiving point is the sound wave signal DaR of the following equation (4).
- the above equations (1) and (2) in the assumed listening environment can be expressed as the following expression (5), and the above expressions (3) and (4) in the actual listening environment can be expressed as the following expression (6).
- the sound receiving point is assumed to be located on a line orthogonal to the line segment connecting the pair of speakers and passing through the midpoint of the line segment.
- the sound processing device reproduces, in the actual listening environment, the timbre represented by the above formula (5) when the sound image signal localized at the center is heard at the sound receiving point. That is, the equalizer EQ2 has the transfer function H1 represented by the following expression (7), convolves it with the sound image signal A to be localized at the center, and inputs the sound image signal A after convolution of the transfer function H1 equally to both adders 10 and 20.
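The bodies of equations (1) to (7) are not reproduced in this extract. Purely as a hedged reconstruction from the surrounding description, borrowing the path notation CeLL, CeRL, CaLL, CaRL (and so on) that the second embodiment introduces for the assumed (e) and actual (a) environments, the relations for a centered sound image signal A fed equally to both speakers would take roughly the following form; the exact numbering and symbols are assumptions.

```latex
% Hedged reconstruction; symbols follow the Ce**/Ca** path notation of the second embodiment.
\begin{gather*}
D_{eL} = (C_{eLL} + C_{eRL})\,A, \quad D_{eR} = (C_{eLR} + C_{eRR})\,A \\
D_{aL} = (C_{aLL} + C_{aRL})\,A, \quad D_{aR} = (C_{aLR} + C_{aRR})\,A \\
D_{e}  = (C_{eLL} + C_{eRL})\,A, \quad D_{a}  = (C_{aLL} + C_{aRL})\,A \\
H_{1}  = \frac{C_{eLL} + C_{eRL}}{C_{aLL} + C_{aRL}}
\end{gather*}
```

In this sketch the third line uses the symmetry of the sound receiving point (both ears hear the same signal for a centered image), and H1 is the ratio that makes the actual-environment result follow the assumed-environment one.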
- the sound processing device reproduces, in the actual listening environment, the timbres of the above formulas (8) and (9) when the sound image signal localized in front of the left speaker SeL is heard at the sound receiving point. That is, the equalizer EQ1 convolves the transfer function H2 represented by the following equation (12) with the sound image signal A to be heard by the left ear, and convolves the transfer function H3 represented by the following equation (13) with the sound image signal A to be heard by the right ear.
- the equalizer EQ1 that processes the sound image signal localized in front of the left speaker has the transfer functions H2 and H3, convolves them with the sound image signal A at a constant ratio α (0 ≤ α ≤ 1), and inputs the result to the adder 10 that generates the acoustic signal of the left channel.
- the equalizer EQ1 has a transfer function H4 of the following equation (14).
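Equations (8) to (14) are likewise absent from the extract. Under the same assumed notation, a sound image signal A localized in front of the left speaker is output only from that speaker, so a plausible reconstruction is:

```latex
\begin{gather*}
D_{eL} = C_{eLL}\,A, \quad D_{eR} = C_{eLR}\,A \qquad \text{(assumed environment)} \\
D_{aL} = C_{aLL}\,A, \quad D_{aR} = C_{aLR}\,A \qquad \text{(actual environment)} \\
H_{2} = \frac{C_{eLL}}{C_{aLL}}, \quad H_{3} = \frac{C_{eLR}}{C_{aLR}}, \quad
H_{4} = \alpha H_{2} + (1-\alpha)\,H_{3} \quad (0 \le \alpha \le 1)
\end{gather*}
```

In this reading, H4 is the single filter applied in the left-channel branch, blending the left-ear and right-ear corrections at the ratio α.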
- a sound image signal that is localized in front of the right speaker is output, for example, only from the right speaker SeR in the assumed listening environment and only from the right speaker SaR in the actual listening environment.
- the sound wave signal DeL and the sound wave signal DaL heard by the left ear in the assumed and actual listening environments, and the sound wave signal DeR and the sound wave signal DaR heard by the right ear in the assumed and actual listening environments, are expressed by the following equations (15) to (18).
- the equalizer EQ3 that processes the sound image signal localized in front of the right speaker has the transfer functions H5 and H6, convolves them with the sound image signal B at a constant ratio β (0 ≤ β ≤ 1), and inputs the result to the adder 20 that generates the acoustic signal of the right channel.
- the equalizer EQ3 has a transfer function H7 of the following equation (21).
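The right-front case mirrors the left-front one; a hedged reconstruction of equations (15) to (21) for a sound image signal B output only from the right speaker is:

```latex
\begin{gather*}
D_{eL} = C_{eRL}\,B, \quad D_{eR} = C_{eRR}\,B, \quad D_{aL} = C_{aRL}\,B, \quad D_{aR} = C_{aRR}\,B \\
H_{5} = \frac{C_{eRR}}{C_{aRR}}, \quad H_{6} = \frac{C_{eRL}}{C_{aRL}}, \quad
H_{7} = \beta H_{5} + (1-\beta)\,H_{6} \quad (0 \le \beta \le 1)
\end{gather*}
```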
- the inventors measured the impulse responses at the left ear for speaker sets with 30° spread and 60° spread, using a sound image signal whose sound image was localized in front of the left speaker, and calculated the head-related transfer functions.
- the analysis results in the time domain and the frequency domain are shown in FIG. 3.
- the sound image localization of the sound image signal was changed to the center, and the impulse response was recorded in the same way.
- the analysis results in the time domain and frequency domain of that recording are also shown in FIG. 3. In FIG. 3(a) and FIG. 3(b), each upper diagram shows the time domain and each lower diagram shows the frequency domain.
- the frequency characteristic of the impulse response changes when the speaker set is changed. Further, as can be seen from the difference between (a) and (b) of FIG. 3, the degree of change in the frequency characteristic varies depending on the direction of sound image localization.
- the sound processing device is a device that corrects differences in timbre heard in different environments, and includes equalizers EQ1, EQ2, and EQ3 that adjust the frequency characteristics so that the frequency characteristics of the sound waves heard in the other environment follow the frequency characteristics of the sound waves when the same sound is heard in one environment.
- a plurality of equalizers EQ1, EQ2, and EQ3 are provided corresponding to a plurality of sound image signals that are localized in different directions, and perform a specific frequency characteristic changing process on the corresponding sound image signals.
- the sound processing apparatus according to the second embodiment generalizes the timbre correction process for each sound image signal, and performs a timbre correction process specific to a sound image signal having an arbitrary sound image localization direction.
- in the assumed listening environment, the transfer function of the frequency change imparted by the transfer path from the left speaker SeL to the left ear is denoted CeLL, that from the left speaker SeL to the right ear CeLR, that from the right speaker SeR to the left ear CeRL, and that from the right speaker SeR to the right ear CeRR.
- the sound image signal S that is localized in a predetermined direction becomes, in the assumed listening environment, the sound wave signal SeL of the following expression (22) heard by the user's left ear and the sound wave signal SeR of the following expression (23) heard by the user's right ear.
- Fa and Fb are transfer functions for each channel that change the amplitude and delay difference of the sound image signal in order to provide sound image localization in a predetermined direction.
- Fa is the transfer function convolved with the sound image signal S output from the left speaker SeL, and Fb is the transfer function convolved with the sound image signal S output from the right speaker SeR.
- in the actual listening environment, the transfer function of the frequency change imparted by the transfer path from the left speaker SaL to the left ear is denoted CaLL, that from the left speaker SaL to the right ear CaLR, that from the right speaker SaR to the left ear CaRL, and that from the right speaker SaR to the right ear CaRR.
- the sound image signal S that is localized in a predetermined direction becomes, in the actual listening environment, the sound wave signal SaL of the following equation (24) heard by the user's left ear and the sound wave signal SaR of the following equation (25) heard by the user's right ear.
- the above formulas (22) to (25) are generalizations of the above formulas (1) to (4), formulas (8) to (11), and formulas (15) to (18).
- when the transfer function Fa and the transfer function Fb are set to the values used for the center localization case, equations (22) to (25) become equations (1) to (4).
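For reference, the generalized sound wave signals of equations (22) to (25) can be reconstructed from the definitions above; this is again a hedged sketch, and the published equations may group the terms differently.

```latex
\begin{gather*}
S_{eL} = (F_a C_{eLL} + F_b C_{eRL})\,S, \quad S_{eR} = (F_a C_{eLR} + F_b C_{eRR})\,S \\
S_{aL} = (F_a C_{aLL} + F_b C_{aRL})\,S, \quad S_{aR} = (F_a C_{aLR} + F_b C_{aRR})\,S
\end{gather*}
```

In this sketch, Fa = Fb corresponds to the centered case of equations (1) to (4), Fa = 1 and Fb = 0 to the left-front case of equations (8) to (11), and Fa = 0 and Fb = 1 to the right-front case of equations (15) to (18).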
- FIG. 5 is a configuration diagram showing the configuration of the sound processing apparatus based on the above.
- the sound processing apparatus includes equalizers EQ1, EQ2, EQ3, ... EQn corresponding in number to the sound image signals S1, S2, S3, ... Sn, and adders 10, 20, ... corresponding in number to the channels are provided in the stage following the equalizers EQ1, EQ2, EQ3, ... EQn.
- each of the equalizers EQ1, EQ2, EQ3, ... EQn has specific transfer functions H10_i and H11_i that are determined from the base transfer functions H10 and H11 and from the transfer functions Fa and Fb that give the amplitude difference and time difference to the sound image signal S1, S2, S3, ... that it processes.
- the equalizer EQi applies its specific transfer functions H10_i and H11_i to the sound image signal Si, inputs the sound image signal H10_i · Si to the adder 10 of the channel for the left speaker SaL, and inputs the sound image signal H11_i · Si to the adder 20 of the channel for the right speaker SaR.
- the adder 10 connected to the left speaker SaL adds the sound image signals H10_1 · S1, H10_2 · S2, ... H10_n · Sn to generate the acoustic signal output from the left speaker SaL, and outputs it to the left speaker SaL.
- the adder 20 connected to the right speaker SaR adds the sound image signals H11_1 · S1, H11_2 · S2, ... H11_n · Sn to generate the acoustic signal output from the right speaker SaR, and outputs it to the right speaker SaR.
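As an illustration of this signal flow only (not the patent's implementation), the following sketch assumes each transfer function H10_i and H11_i is realized as an FIR impulse response and each sound image signal is a mono sample array; the function name and array handling are hypothetical.

```python
import numpy as np

def equalize_and_mix(sound_image_signals, h10, h11):
    """Hypothetical sketch of the FIG. 5 structure: per-sound-image equalization
    followed by per-channel summation (adders 10 and 20)."""
    # Length of the longest full convolution, so every branch fits in the output.
    n = max(len(s) + max(len(a), len(b)) - 1
            for s, a, b in zip(sound_image_signals, h10, h11))
    left = np.zeros(n)   # adder 10 -> left speaker SaL
    right = np.zeros(n)  # adder 20 -> right speaker SaR
    for s, h10_i, h11_i in zip(sound_image_signals, h10, h11):
        y_left = np.convolve(s, h10_i)    # equalizer EQi, left branch:  H10_i * Si
        y_right = np.convolve(s, h11_i)   # equalizer EQi, right branch: H11_i * Si
        left[:len(y_left)] += y_left
        right[:len(y_right)] += y_right
    return left, right
```

In this sketch, the first embodiment would correspond to n = 3 with, for example, H10_2 = H11_2 = H1 for the centered signal, H10_1 = H4 and H11_1 = 0 for the left-front signal, and H10_3 = 0 and H11_3 = H7 for the right-front signal.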
- the sound image processing apparatus includes, in addition to the equalizers EQ1, EQ2, EQ3, ... EQn according to the first and second embodiments, a sound source separation unit 30i and a sound image localization setting unit 40i.
- the amplitude difference and phase difference between channels may be analyzed, and statistical analysis, frequency analysis, complex analysis, or the like may be performed to detect differences in waveform structure, and a sound image signal in a specific frequency band may be emphasized on the basis of the detection result.
- the first filter 310 is an LC circuit or the like that gives a fixed delay time to the acoustic signal of one channel, so that the acoustic signal of that channel is always delayed with respect to the acoustic signal of the other channel. That is, the first filter delays by longer than any time difference set between the channels for sound image localization. As a result, all sound image components contained in the acoustic signal of the other channel are advanced with respect to all sound image components contained in the acoustic signal of the one channel.
- the coefficient determination circuit 330 treats the error signal e(k) as a function of the coefficient m(k−1), computes a recurrence formula between adjacent terms of the coefficient m(k) that contains the error signal e(k), and searches for the coefficient m(k) that minimizes the error signal e(k). Through this calculation the coefficient determination circuit 330 updates the coefficient m(k) so that it becomes smaller as the time difference between the channels of the acoustic signal becomes larger, and outputs a value close to the minimizing coefficient.
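The extract does not give the recurrence itself. Purely as one common way such an error-minimizing update can be realized (an LMS-style step; the patent's actual recurrence may differ), with hypothetical names:

```python
def update_coefficient(m_prev, x_ref, x_delayed, mu=0.01):
    """One illustrative LMS-style step for the coefficient determination circuit 330.

    x_ref:     current sample of the non-delayed channel (the one multiplied by m).
    x_delayed: current sample of the channel delayed by the first filter 310.
    Returns the updated coefficient m(k) and the error e(k).
    """
    e = x_delayed - m_prev * x_ref   # inter-channel error signal e(k)
    m = m_prev + mu * e * x_ref      # recurrence containing e(k): m(k) = m(k-1) + mu*e(k)*x(k)
    return m, e
```

In such a sketch, m(k) would settle near 1 for components that are common and in phase between the two channels and near 0 for components that are not, which is what would let the synthesis circuit 340 extract a specific sound image component.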
- the synthesis circuit 340 receives the coefficient m(k) from the coefficient determination circuit 330 and the acoustic signals of both channels.
- the synthesis circuit 340 may multiply the acoustic signals of both channels by the coefficient m(k), add them at an arbitrary ratio, and output the resulting specific sound image signal.
- the speaker set connected to the sound processing device may be any speaker set that includes two or more speakers, such as a stereo speaker set or a 5.1-channel speaker set.
- the equalizer EQi may be provided with a transfer function that takes the amplitude difference and the time difference into account. Further, each of the equalizers EQ1, EQ2, EQ3, ..., EQn may prepare a plurality of types of transfer functions corresponding to several speaker set configurations, and the transfer function to apply may be determined according to the user's selection of the speaker set.
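As a sketch only, the selection described above could be held as a simple lookup keyed by the user's speaker-set choice. The keys, the placeholder impulse responses, and the idea of storing one response pair per layout are assumptions for illustration, not the patent's wording.

```python
import numpy as np

# Hypothetical per-layout transfer functions (FIR impulse responses) for one equalizer EQi.
TRANSFER_FUNCTIONS = {
    "stereo_30deg": (np.array([1.0, 0.2]), np.array([0.0])),   # (H10_i, H11_i), placeholder values
    "stereo_60deg": (np.array([0.9, 0.3]), np.array([0.0])),
}

def select_transfer_functions(speaker_set: str):
    """Return the (H10_i, H11_i) pair prepared for the user-selected speaker set."""
    return TRANSFER_FUNCTIONS[speaker_set]
```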
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Signal Processing (AREA)
- Acoustics & Sound (AREA)
- General Physics & Mathematics (AREA)
- Algebra (AREA)
- Mathematical Analysis (AREA)
- Mathematical Optimization (AREA)
- Mathematical Physics (AREA)
- Pure & Applied Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Stereophonic System (AREA)
- Circuit For Audible Band Transducer (AREA)
Abstract
Description
The sound processing device according to the first embodiment will be described in detail with reference to the drawings. As shown in FIG. 1, the sound processing device includes three types of equalizers EQ1, EQ2, and EQ3 on the preceding-stage side and adders 10 and 20 for two channels on the succeeding-stage side, and is connected to a left speaker SaL and a right speaker SaR. The preceding-stage side is the side of the circuit farther from the left speaker SaL and the right speaker SaR. The left speaker SaL and the right speaker SaR are vibration sources that generate sound waves in accordance with signals. The left speaker SaL and the right speaker SaR reproduce, that is, generate, sound waves; the sound waves reach both ears of the listener, and the listener perceives a sound image.
The sound processing device according to the second embodiment will be described in detail with reference to the drawings. The sound processing device according to the second embodiment generalizes the timbre correction process for each sound image signal, and performs a timbre correction process specific to a sound image signal having an arbitrary sound image localization direction.
As shown in FIG. 6, the sound image processing device according to the third embodiment includes, in addition to the equalizers EQ1, EQ2, EQ3, ... EQn according to the first and second embodiments, sound source separation units 30i and sound image localization setting units 40i.
Although embodiments according to the present invention have been described in this specification, these embodiments are presented as examples and are not intended to limit the scope of the invention. Combinations of all or any of the configurations disclosed in the embodiments are also encompassed. The above embodiments can be implemented in various other forms, and various omissions, substitutions, and changes can be made without departing from the scope of the invention. These embodiments and their modifications are included in the scope and gist of the invention, and are likewise included in the inventions described in the claims and their equivalents.
10, 20 Adders
301, 302, 303, ... 30n Sound source separation units
310 First filter
320 Second filter
330 Coefficient determination circuit
340 Synthesis circuit
401, 402, 403, ... 40n Sound image position setting units
SaL Speaker
SaR Speaker
Claims (17)
- A sound processing device that corrects differences in timbre heard in different environments, comprising an equalizer that adjusts a frequency characteristic so that the frequency characteristic of a sound wave heard in the other environment follows the frequency characteristic of the sound wave when the same sound is heard in one environment, wherein a plurality of the equalizers are provided corresponding to a plurality of sound image signals localized in different directions, and each performs a frequency characteristic changing process specific to its corresponding sound image signal.
- The sound processing device according to claim 1, wherein each equalizer has a transfer function specific to each direction of sound image localization and applies the specific transfer function to the corresponding sound image signal.
- The sound processing device according to claim 2, wherein the transfer function of the equalizer is based on the difference generated between channels in order to localize the corresponding sound image signal.
- The sound processing device according to claim 3, wherein the difference between channels is an amplitude difference, a time difference, or both, given between the channels at output according to the direction of sound image localization.
- The sound processing device according to any one of claims 2 to 4, wherein the transfer function of the equalizer is further based on the head-related transfer functions of the sound waves reaching each ear in the one environment and the other environment.
- The sound processing device according to any one of claims 3 to 5, further comprising sound image localization setting means for giving a difference between channels in order to localize a sound image signal, wherein the transfer function of the equalizer is based on the difference given by the sound image localization setting means.
- The sound processing device according to any one of claims 1 to 6, further comprising sound source separation means for separating each sound image component from an acoustic signal containing a plurality of sound image components with different sound image localization directions to generate each sound image signal, wherein the equalizer performs the specific frequency characteristic changing process on the sound image signal generated by the sound source separation means.
- The sound processing device according to claim 7, wherein a plurality of the sound source separation means are provided corresponding to the respective sound image components, and each comprises: a filter that delays one channel of the acoustic signal by a specific time so as to adjust the corresponding sound image component to the same amplitude and phase; coefficient determination means that multiplies one channel of the acoustic signal by a coefficient m, generates an error signal between the channels, and computes a recurrence formula for the coefficient m that includes the error signal; and synthesis means that multiplies the acoustic signal by the coefficient m.
- A sound processing method for correcting differences in timbre heard in different environments, comprising an adjustment step of adjusting a frequency characteristic so that the frequency characteristic of a sound wave heard in the other environment follows the frequency characteristic of the sound wave when the same sound is heard in one environment, wherein the adjustment step is performed specifically for each of a plurality of sound image signals localized in different directions, and performs a frequency characteristic changing process specific to the corresponding sound image signal.
- A sound processing program that causes a computer to realize a function of correcting differences in timbre heard in different environments, the program causing the computer to function as an equalizer that adjusts a frequency characteristic so that the frequency characteristic of a sound wave heard in the other environment follows the frequency characteristic of the sound wave when the same sound is heard in one environment, wherein a plurality of the equalizers are provided corresponding to a plurality of sound image signals localized in different directions, and each performs a frequency characteristic changing process specific to its corresponding sound image signal.
- The sound processing program according to claim 10, wherein each equalizer has a transfer function specific to each direction of sound image localization and applies the specific transfer function to the corresponding sound image signal.
- The sound processing program according to claim 11, wherein the transfer function of the equalizer is based on the difference generated between channels in order to localize the corresponding sound image signal.
- The sound processing program according to claim 12, wherein the difference between channels is an amplitude difference, a time difference, or both, given between the channels at output according to the direction of sound image localization.
- The sound processing program according to any one of claims 11 to 13, wherein the transfer function of the equalizer is further based on the respective transfer functions, in the different environments, of the sound waves reaching each ear in the one environment and the other environment.
- The sound processing program according to any one of claims 12 to 14, further causing the computer to function as a sound image localization setting unit that gives a difference between channels in order to localize a sound image signal, wherein the transfer function of the equalizer is based on the difference given by the sound image localization setting unit.
- The sound processing program according to any one of claims 10 to 16, further causing the computer to function as sound source separation means for separating each sound image component from an acoustic signal containing a plurality of sound image components with different sound image localization directions to generate each sound image signal, wherein the equalizer performs the specific frequency characteristic changing process on the sound image signal generated by the sound source separation means.
- The sound processing program according to claim 16, wherein the sound source separation means is caused to function a plurality of times corresponding to the respective sound image components, and comprises: a filter that delays one channel of the acoustic signal by a specific time so as to adjust the corresponding sound image component to the same amplitude and phase; coefficient determination means that multiplies one channel of the acoustic signal by a coefficient m, generates an error signal between the channels, and computes a recurrence formula for the coefficient m that includes the error signal; and synthesis means that multiplies the acoustic signal by the coefficient m.
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2015533883A JP6161706B2 (ja) | 2013-08-30 | 2013-08-30 | Sound processing device, sound processing method, and sound processing program |
EP13892221.6A EP3041272A4 (en) | 2013-08-30 | 2013-08-30 | Sound processing apparatus, sound processing method, and sound processing program |
CN201380079120.9A CN105556990B (zh) | 2013-08-30 | 2013-08-30 | Sound processing device and sound processing method |
PCT/JP2013/073255 WO2015029205A1 (ja) | 2013-08-30 | 2013-08-30 | Sound processing device, sound processing method, and sound processing program |
US15/053,097 US10524081B2 (en) | 2013-08-30 | 2016-02-25 | Sound processing device, sound processing method, and sound processing program |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2013/073255 WO2015029205A1 (ja) | 2013-08-30 | 2013-08-30 | Sound processing device, sound processing method, and sound processing program |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/053,097 Continuation US10524081B2 (en) | 2013-08-30 | 2016-02-25 | Sound processing device, sound processing method, and sound processing program |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2015029205A1 true WO2015029205A1 (ja) | 2015-03-05 |
Family
ID=52585821
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2013/073255 WO2015029205A1 (ja) | 2013-08-30 | 2013-08-30 | Sound processing device, sound processing method, and sound processing program |
Country Status (5)
Country | Link |
---|---|
US (1) | US10524081B2 (ja) |
EP (1) | EP3041272A4 (ja) |
JP (1) | JP6161706B2 (ja) |
CN (1) | CN105556990B (ja) |
WO (1) | WO2015029205A1 (ja) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104064191B (zh) * | 2014-06-10 | 2017-12-15 | 北京音之邦文化科技有限公司 | Audio mixing method and device |
US9820073B1 (en) | 2017-05-10 | 2017-11-14 | Tls Corp. | Extracting a common signal from multiple audio signals |
JP6988904B2 (ja) * | 2017-09-28 | 2022-01-05 | 株式会社ソシオネクスト | Acoustic signal processing device and acoustic signal processing method |
CN110366068B (zh) * | 2019-06-11 | 2021-08-24 | 安克创新科技股份有限公司 | Audio adjustment method, electronic device, and apparatus |
CN112866894B (zh) * | 2019-11-27 | 2022-08-05 | 北京小米移动软件有限公司 | Sound field control method and device, mobile terminal, and storage medium |
CN113596647B (zh) * | 2020-04-30 | 2024-05-28 | 深圳市韶音科技有限公司 | Sound output device and method for adjusting sound image |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
AUPO099696A0 (en) * | 1996-07-12 | 1996-08-08 | Lake Dsp Pty Limited | Methods and apparatus for processing spatialised audio |
JP2003087899A (ja) * | 2001-09-12 | 2003-03-20 | Sony Corp | Sound processing device |
JP4821250B2 (ja) * | 2005-10-11 | 2011-11-24 | ヤマハ株式会社 | Sound image localization device |
CN101529930B (zh) * | 2006-10-19 | 2011-11-30 | 松下电器产业株式会社 | Sound image localization device, sound image localization system, sound image localization method, program, and integrated circuit |
KR101567461B1 (ko) * | 2009-11-16 | 2015-11-09 | 삼성전자주식회사 | Apparatus for generating a multi-channel sound signal |
JP2013110682A (ja) * | 2011-11-24 | 2013-06-06 | Sony Corp | Acoustic signal processing device, acoustic signal processing method, program, and recording medium |
KR101871234B1 (ko) * | 2012-01-02 | 2018-08-02 | 삼성전자주식회사 | Apparatus and method for generating a sound panorama |
JP2015509212A (ja) * | 2012-01-19 | 2015-03-26 | コーニンクレッカ フィリップス エヌ ヴェ | Spatial audio rendering and encoding |
CN104067632B (zh) * | 2012-01-27 | 2018-04-06 | 共荣工程株式会社 | Directivity control method and device |
CN102711032B (zh) * | 2012-05-30 | 2015-06-03 | 蒋憧 | Sound processing and reproduction device |
-
2013
- 2013-08-30 JP JP2015533883A patent/JP6161706B2/ja active Active
- 2013-08-30 EP EP13892221.6A patent/EP3041272A4/en not_active Ceased
- 2013-08-30 CN CN201380079120.9A patent/CN105556990B/zh active Active
- 2013-08-30 WO PCT/JP2013/073255 patent/WO2015029205A1/ja active Application Filing
-
2016
- 2016-02-25 US US15/053,097 patent/US10524081B2/en active Active
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH08182100A (ja) * | 1994-10-28 | 1996-07-12 | Matsushita Electric Ind Co Ltd | Sound image localization method and sound image localization device |
JP2001224100A (ja) | 2000-02-14 | 2001-08-17 | Pioneer Electronic Corp | Automatic sound field correction system and sound field correction method |
JP2001346299A (ja) * | 2000-05-31 | 2001-12-14 | Sony Corp | Sound field correction method and audio apparatus |
WO2006009004A1 (ja) | 2004-07-15 | 2006-01-26 | Pioneer Corporation | Sound reproduction system |
JP2010021982A (ja) * | 2008-06-09 | 2010-01-28 | Mitsubishi Electric Corp | Sound reproduction device |
WO2013105413A1 (ja) * | 2012-01-11 | 2013-07-18 | ソニー株式会社 | Sound field control device, sound field control method, program, sound field control system, and server |
Non-Patent Citations (1)
Title |
---|
See also references of EP3041272A4 |
Also Published As
Publication number | Publication date |
---|---|
US20160286331A1 (en) | 2016-09-29 |
CN105556990B (zh) | 2018-02-23 |
EP3041272A1 (en) | 2016-07-06 |
EP3041272A4 (en) | 2017-04-05 |
JP6161706B2 (ja) | 2017-07-12 |
US10524081B2 (en) | 2019-12-31 |
CN105556990A (zh) | 2016-05-04 |
JPWO2015029205A1 (ja) | 2017-03-02 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| WWE | Wipo information: entry into national phase | Ref document number: 201380079120.9; Country of ref document: CN |
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 13892221; Country of ref document: EP; Kind code of ref document: A1 |
| ENP | Entry into the national phase | Ref document number: 2015533883; Country of ref document: JP; Kind code of ref document: A |
| NENP | Non-entry into the national phase | Ref country code: DE |
| REEP | Request for entry into the european phase | Ref document number: 2013892221; Country of ref document: EP |
| WWE | Wipo information: entry into national phase | Ref document number: 2013892221; Country of ref document: EP |