WO2006067893A1 - Sound image localization apparatus - Google Patents

Sound image localization apparatus

Info

Publication number
WO2006067893A1
WO2006067893A1 · PCT/JP2005/015724 · JP2005015724W
Authority
WO
WIPO (PCT)
Prior art keywords
sound image
image localization
parameter
listener
target position
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/JP2005/015724
Other languages
English (en)
French (fr)
Japanese (ja)
Inventor
Kazuhiro Iida
Gempo Ito
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Panasonic Holdings Corp
Original Assignee
Matsushita Electric Industrial Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Matsushita Electric Industrial Co Ltd filed Critical Matsushita Electric Industrial Co Ltd
Priority to CN2005800479265A priority Critical patent/CN101116374B/zh
Priority to US11/722,586 priority patent/US20080219454A1/en
Priority to EP05781488A priority patent/EP1830604A1/en
Publication of WO2006067893A1 publication Critical patent/WO2006067893A1/ja
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S3/00 Systems employing more than two channels, e.g. quadraphonic

Definitions

  • The present invention relates to a sound image localization apparatus that localizes a sound image at an arbitrary position in three-dimensional space.
  • Acoustic transfer characteristics can be divided into spatial transfer functions, which represent transfer characteristics due to reflection, diffraction, scattering, and the like at walls and other surfaces, and head-related transfer functions, which represent transfer characteristics due to reflection, diffraction, scattering, and the like by the listener's head and torso.
  • A conventional sound image localization apparatus using this head-related transfer function performs sound image localization either by accurately measuring the listener's own head-related transfer function and reproducing it faithfully, or by using a standard head-related transfer function.
  • FIG. 14 is a block diagram showing a conventional sound image localization apparatus.
  • The conventional sound image localization apparatus comprises a head-related transfer function storage unit 61 that stores, as coefficients of FIR (Finite Impulse Response) filters, a head-related transfer function created for each direction in which the sound image is to be localized, a head-related transfer function selection unit 62 that selects a head-related transfer function based on the target position information for localizing the sound image, and a sound image localization processing unit 63 that performs filter processing based on the selected head-related transfer function.
  • The head-related transfer functions stored in the head-related transfer function storage unit 61 may be those of the listener, or may be standard ones used commonly for all listeners.
  • The input sound source signal is sound-image-localized by convolving it with the head-related transfer function selected on the basis of the input target position information, and the resulting sound image localization signal is output to an audio reproduction device such as headphones or a speaker.
  • In this way, sound image localization can be performed using either the listener's own head-related transfer function or a standard one.
  • Patent Document 1: Japanese Patent Application Laid-Open No. 2000-23299
  • Patent Document 2: Japanese Patent Application Laid-Open No. 2001-16697
  • Non-Patent Document 1: Jens Blauert, Masayuki Morimoto, Toshiyuki Goto, "Spatial Acoustics", Kajima Publishing Co., July 10, 1981
  • In the apparatus of Patent Document 2, because the head-related transfer function is simulated with only a single IIR filter, only one of the peaks (mountains) and dips (valleys) included in the amplitude frequency characteristic of the head-related transfer function can be reproduced, so sound image localization cannot be performed correctly. If, on the other hand, the amplitude frequency characteristic of the head-related transfer function is simulated faithfully, a large number of IIR filters is required, and the required amounts of data and calculation increase, as in the conventional example described above.
  • The present invention has been made to solve these conventional problems, and its object is to provide a sound image localization apparatus that can reduce the required amounts of data and calculation and can easily and correctly localize sound images for many listeners.
  • The sound image localization apparatus of the present invention is configured to process the sound source signal so as to reproduce the structural features of the head-related transfer function corresponding to the input target position, and comprises parameter setting means for setting parameters that reproduce the structural features of the head-related transfer function, and sound image localization processing means for performing sound image localization processing on the sound source signal according to the parameters and outputting a sound image localization signal.
  • The parameter setting means may be configured to set, based on input listener information, the above-mentioned parameters adapted to that listener information.
  • With this configuration, a parameter adapted to the input listener information is set, so sound image localization can easily be performed for more listeners.
  • The listener information may be physical feature information relating to the listener's physical features.
  • The apparatus may further comprise physical feature extraction means that extracts physical feature information of the listener from input information including the listener's physical features and outputs it.
  • With this configuration, the physical feature information is extracted from the input information including the listener's physical features, and a parameter adapted to the extracted physical feature information is set, so sound image localization can easily be performed for more listeners.
  • The information including the listener's physical features may be image information of the listener.
  • the listener information is a head-related transfer function obtained by actual measurement or numerical calculation of a listener.
  • the listener information is configured to be attribute information of a listener.
  • With this configuration, the sound image can be localized correctly and easily for more listeners.
  • the listener information is configured to be information relating to auditory characteristics of the listener.
  • The parameter setting means may hold a function representing the relationship between the target position and the parameter, and calculate the parameter from the input target position using that function.
  • With this configuration, the parameter can easily be set from the target position, and the required amounts of data and calculation can be reduced.
  • The parameter setting means may hold a parameter table storing the parameters corresponding to the target positions, and select from the parameter table the parameter corresponding to the input target position.
  • With this configuration, the parameter can easily be set from the target position, and the required amounts of data and calculation can be reduced.
  • The parameter setting unit may hold a function representing the relationship between the listener information, the target position, and the parameter, and calculate the parameter from the input target position and listener information using that function.
  • With this configuration, the parameter can easily be set from the target position, and the required amounts of data and calculation can be reduced.
  • The parameter setting means may hold a parameter table storing the parameters corresponding to the listener information and the target positions, and select from the parameter table the parameter corresponding to the input target position and listener information.
  • With this configuration, the parameter can easily be set from the target position, and the required amounts of data and calculation can be reduced.
  • The parameter setting unit may obtain the parameter for the target position by interpolation from the parameters for positions close to the target position.
  • This configuration can reduce the amount of data required.
  • The parameter setting means may set parameters for reproducing only selected ones of the peaks, dips, high-band attenuation, and low-band attenuation included in the amplitude frequency characteristic of the head-related transfer function.
  • The parameter setting means may also set parameters for reproducing at least one of the time difference and the level difference between the left and right ears of the head-related transfer function.
  • The sound image localization processing means may include a plurality of IIR filters, with the parameter setting means setting in the IIR filters the parameters for reproducing the peaks, dips, high-band attenuation, and low-band attenuation.
  • The sound image localization processing means may include at least one of a delay and a level adjuster, with the parameter setting means setting the parameter for reproducing the interaural time difference in the delay and the parameter for reproducing the interaural level difference in the level adjuster.
  • The number of structural features of the head-related transfer function to be reproduced may be changed.
  • A program according to the present invention causes a computer to function as parameter setting means for setting at least one of parameters that reproduce only selected ones of the peaks, dips, high-band attenuation, and low-band attenuation included in the amplitude frequency characteristic of the head-related transfer function corresponding to the input target position, parameters that reproduce the time difference between the left and right ears of the head-related transfer function, and parameters that reproduce the level difference between the left and right ears of the head-related transfer function, and as sound image localization processing means for performing sound image localization processing on a sound source signal according to those parameters and outputting a sound image localization signal.
  • By reproducing at least one of the selected peaks, dips, high-band attenuation, and low-band attenuation included in the amplitude frequency characteristic of the head-related transfer function, the interaural time difference, and the interaural level difference, sound image localization can easily be performed correctly, and the required amounts of data and calculation can be reduced.
  • According to the present invention, by reproducing only the structural features of the head-related transfer function corresponding to the input target position, the required amounts of data and calculation can be reduced, and sound images can easily be localized correctly for many listeners.
  • FIG. 1 is a block diagram of a sound image localization apparatus according to a first embodiment of the present invention.
  • FIG. 2 is a diagram showing structural features in amplitude frequency characteristics of head related transfer functions.
  • FIG. 3 is a diagram showing the interaural time difference and interaural level difference of the head-related transfer function.
  • FIG. 4 is a block diagram of a sound image localization processing unit of the sound image localization apparatus according to the first embodiment of the present invention.
  • FIG. 5 is a view showing another method of reproducing peaks and dips in the amplitude frequency characteristics of the sound image localization apparatus in the first embodiment of the present invention.
  • FIG. 6 is a block diagram showing an example using a parameter setting function of the parameter setting unit of the sound image localization apparatus according to the first embodiment of the present invention.
  • FIG. 7 is a block diagram showing an example using a parameter table of a parameter setting unit of the sound image localization apparatus according to the first embodiment of the present invention.
  • FIG. 8 is a block diagram of a sound image localization apparatus in a second embodiment of the present invention.
  • FIG. 9 is a block diagram showing an example using a parameter setting function of the parameter setting unit of the sound image localization apparatus according to the second embodiment of the present invention.
  • FIG. 10 is a block diagram showing an example using a parameter table of a parameter setting unit of the sound image localization apparatus according to the second embodiment of the present invention.
  • FIG. 11 is a block diagram of a sound image localization apparatus in a third embodiment of the present invention.
  • FIG. 12 is a block diagram of a physical feature extraction unit of the sound image localization apparatus in the third embodiment of the present invention.
  • FIG. 13 is a diagram showing the left-right symmetry of head-related transfer functions.
  • FIG. 14 is a block diagram of a conventional sound image localization apparatus.
  • Cues for sound image localization are considered to be contained in the head-related transfer function.
  • Cues related to localization in the front-back and up-down directions are considered to be included in structural features of the amplitude frequency characteristic of the head-related transfer function, such as peaks, dips, and high-frequency or low-frequency attenuation.
  • Cues related to localization in the left-right direction are believed to be included in the interaural time difference (ITD: Interaural Time Difference) and the interaural level difference (ILD: Interaural Level Difference) contained in the head-related transfer function.
  • The present inventors analyzed, for each subject, the structural features of the head-related transfer function that serve as cues for sound image localization in the front-back and up-down directions. As a result, they found that instead of reproducing all the structural features (peaks, dips, high-frequency or low-frequency attenuation) included in the head-related transfer function, reproducing only some of them (for example, five or six) is sufficient to localize the sound image correctly.
  • The localization of the sound image in the left-right direction can be controlled independently of its localization in the front-back and up-down directions by using the ITD and ILD (see, for example, Japanese Patent No. 3388235). Therefore, by applying the ITD and ILD to a signal that reproduces the structural features of the head-related transfer function serving as the above-mentioned cues for front-back and up-down localization, the left-right direction of the sound image can also be controlled.
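The left-right control described above can be sketched in code. The following pure-Python example is illustrative only: the function name, sampling rate, and sign convention (positive ITD/ILD placing the source to the right) are assumptions for this sketch, not values from the patent. It delays and attenuates the far-ear channel to impose an ITD and an ILD on a mono source:

```python
def apply_itd_ild(mono, fs, itd_s, ild_db):
    """Impose an interaural time difference (ITD) and interaural level
    difference (ILD) on a mono signal, returning (left, right) channels.

    Sign convention (an assumption for this sketch): positive itd_s and
    ild_db place the source to the listener's right, so the LEFT (far)
    ear receives the delayed, attenuated copy.
    """
    delay = int(round(abs(itd_s) * fs))       # ITD in whole samples
    gain = 10.0 ** (-abs(ild_db) / 20.0)      # ILD as a linear gain
    far = [0.0] * delay + [x * gain for x in mono]   # delayed and attenuated
    near = list(mono) + [0.0] * delay                # padded to same length
    return (far, near) if itd_s >= 0 else (near, far)

fs = 48000
click = [1.0] + [0.0] * 9                     # 10-sample unit impulse
left, right = apply_itd_ild(click, fs, itd_s=500e-6, ild_db=6.0)
# left is delayed by 24 samples and about 6 dB quieter than right
```

Because the ITD and ILD act per channel after any spectral shaping, this left-right steering composes independently with the peak/dip filtering described for front-back and up-down cues.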
  • FIG. 1 is a view showing a sound image localization apparatus according to a first embodiment of the present invention.
  • The sound image localization apparatus of the present embodiment comprises a parameter setting unit 11 that holds parameters for reproducing the structural features of the head-related transfer function corresponding to each target position and sets the corresponding parameters in the sound image localization processing unit 12 based on the input target position information, and a sound image localization processing unit 12 that performs sound image localization processing on the input sound source signal based on the parameters set by the parameter setting unit 11 and outputs the resulting sound image localization signal to a sound reproduction device (not shown) such as headphones or a speaker.
  • As shown in FIG. 2, for each target position at which the sound image is to be localized, the parameter setting unit 11 holds parameters (center frequency fc, sharpness Q, level L) corresponding to selected structural features, such as peaks, dips, high-band attenuation, and low-band attenuation, of the amplitude frequency characteristic of the standard head-related transfer function (left ear: hl(t), right ear: hr(t)) corresponding to that target position, together with parameters (delay amount and level adjustment amount) reproducing its ITD and ILD.
  • The sound image localization processing unit 12 includes, for the left ear, a plurality of IIR filters 121La to 121Lz that perform filtering reproducing the peaks, dips, high-band attenuation, or low-band attenuation of the head-related transfer function based on the set left-ear center frequency fc, sharpness Q, and level L parameters, a delay 122L that delays the input signal based on the set delay amount, and a level adjuster 123L that adjusts the level of the input signal based on the set level adjustment amount. For the right ear, it likewise includes a plurality of IIR filters 121Ra to 121Rz, a delay 122R, and a level adjuster 123R that perform the corresponding processing based on the right-ear parameters.
  • The parameter setting unit 11 reads out the left-ear and right-ear parameters (center frequency fc, sharpness Q, level L) corresponding to the input target position information and sets the left-ear parameters in the left-ear IIR filters 121La to 121Lz and the right-ear parameters in the right-ear IIR filters 121Ra to 121Rz, so that one parameter set corresponds to one IIR filter. It also sets the left-ear and right-ear delay amounts corresponding to the input target position information, the left-ear delay amount in the delay 122L and the right-ear delay amount in the delay 122R, and likewise sets the left-ear and right-ear level adjustment amounts in the level adjusters 123L and 123R.
  • The IIR filters 121La to 121Lz and 121Ra to 121Rz, the delays 122L and 122R, and the level adjusters 123L and 123R process the sound source signal, divided into a left-ear signal and a right-ear signal, according to the parameters set by the parameter setting unit 11, and output the sound-image-localized left-ear (Lch) and right-ear (Rch) sound image localization signals.
  • In the present embodiment, the head-related transfer function is not reproduced faithfully; only selected ones of its peaks, dips, high-band attenuation, and low-band attenuation are reproduced. The required amounts of data and calculation can therefore be reduced, and sound localization can easily be performed correctly for many listeners.
  • Although one peak or dip is reproduced by one IIR filter in the above description, as shown in FIG. 5, three peaks and two dips can also be reproduced by combining one peak P1 with two dips D1 and D2. That is, five peaks and dips can be reproduced by three IIR filters reproducing P1, D1, and D2, reducing the number of IIR filters used. In this way, a plurality of peaks and dips included in a head-related transfer function can be realized by a combination of a smaller number of IIR filters.
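One common way to realize an IIR section parameterized by a center frequency fc, sharpness Q, and level L is a peaking-EQ biquad, and cascading a few such sections shapes several peaks and dips at once, as described above. The sketch below uses the well-known RBJ audio-EQ-cookbook peaking filter; the particular fc, Q, and level values are hypothetical placeholders, not taken from the patent:

```python
import math

def peaking_biquad(fc, q, level_db, fs):
    """RBJ-cookbook peaking-EQ coefficients: level_db > 0 yields a peak,
    level_db < 0 a dip, at center frequency fc with sharpness q."""
    a = 10.0 ** (level_db / 40.0)
    w0 = 2.0 * math.pi * fc / fs
    alpha = math.sin(w0) / (2.0 * q)
    b = [1.0 + alpha * a, -2.0 * math.cos(w0), 1.0 - alpha * a]
    den = [1.0 + alpha / a, -2.0 * math.cos(w0), 1.0 - alpha / a]
    return [bi / den[0] for bi in b], [di / den[0] for di in den]

def biquad_filter(b, a, x):
    """Direct-form I IIR filtering of sequence x."""
    y, x1, x2, y1, y2 = [], 0.0, 0.0, 0.0, 0.0
    for xn in x:
        yn = b[0]*xn + b[1]*x1 + b[2]*x2 - a[1]*y1 - a[2]*y2
        y.append(yn)
        x2, x1, y2, y1 = x1, xn, y1, yn
    return y

# One hypothetical parameter set per structural feature: a peak P1 plus
# dips D1 and D2, cascaded so three IIR sections shape the spectrum.
fs = 48000
sections = [peaking_biquad(4000.0, 2.0, 8.0, fs),    # P1: +8 dB peak
            peaking_biquad(7000.0, 4.0, -10.0, fs),  # D1: -10 dB dip
            peaking_biquad(11000.0, 4.0, -6.0, fs)]  # D2: -6 dB dip
signal = [1.0] + [0.0] * 255                          # unit impulse
for b, a in sections:
    signal = biquad_filter(b, a, signal)              # cascade the sections
```

A convenient property of the peaking form is unity gain away from fc (its DC gain is exactly 1), so each section perturbs only its own peak or dip, which is why a small cascade can approximate several structural features.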
  • As shown in FIG. 6, the parameter setting unit 11 may have a parameter calculation unit 111 that holds in advance a function representing the relationship between the target position and the parameter values, and may calculate the parameter corresponding to the input target position information with this function.
  • Alternatively, as shown in FIG. 7, the parameter setting unit 11 may have a parameter selection unit 112 that holds in advance a parameter table storing the parameters corresponding to the target positions, and may select from the table the parameter corresponding to the input target position information.
  • The parameter for the target position can also be obtained from the parameters for positions close to the target position using interpolation processing such as the commonly used linear interpolation.
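A minimal sketch of that interpolation step, assuming a table keyed by target azimuth in degrees; the (fc, Q, level) entries are hypothetical placeholders, not measured data:

```python
# Parameter table keyed by target azimuth in degrees; the (fc, Q, level)
# values here are hypothetical placeholders, not from the patent.
param_table = {
    0.0:  (8000.0, 2.0, 6.0),
    30.0: (8600.0, 2.2, 4.0),
    60.0: (9500.0, 2.6, 1.0),
}

def interpolate_params(azimuth, table):
    """Linearly interpolate a parameter tuple between the two tabulated
    target positions that bracket the requested azimuth; clamp outside."""
    keys = sorted(table)
    if azimuth <= keys[0]:
        return table[keys[0]]
    if azimuth >= keys[-1]:
        return table[keys[-1]]
    for lo, hi in zip(keys, keys[1:]):
        if lo <= azimuth <= hi:
            t = (azimuth - lo) / (hi - lo)
            return tuple((1 - t) * a + t * b
                         for a, b in zip(table[lo], table[hi]))

fc, q, level = interpolate_params(15.0, param_table)
# halfway between the 0-degree and 30-degree entries
```

Because only the coarse grid of positions must be stored, interpolation trades a small amount of computation for a much smaller table, which is the data reduction the text describes.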
  • FIG. 8 is a view showing a sound image localization apparatus according to a second embodiment of the present invention. Since this embodiment is configured substantially the same as the above-described first embodiment, the same reference numerals are given to the same components and only the characteristic parts will be described.
  • The parameter setting unit 21 receives, in addition to the target position information, physical feature information that affects sound image localization, such as the size or shape of the listener's head or ears, and determines the parameters to be set in the sound image localization processing unit 12 based on the physical feature information and the target position information.
  • For each value of physical feature information regarding the shape of the pinna and the like (for example, the size of the pinna or the size of the concha cavity) and for each target position at which the sound image is to be localized, the parameter setting unit 21 holds parameters (center frequency fc, sharpness Q, level L) corresponding to selected structural features of the corresponding head-related transfer function. Similarly, for each value of physical feature information regarding the size of the head and the like (for example, the head width seen from the front) and for each target position, it holds parameters (delay amount and level adjustment amount) reproducing the ITD and ILD of the head-related transfer function (left ear: hl(t), right ear: hr(t)).
  • The parameter setting unit 21 reads out the left-ear and right-ear parameters (center frequency fc, sharpness Q, level L) corresponding to the input target position information and the pinna shape and the like, and sets the left-ear parameters in the left-ear IIR filters 121La to 121Lz and the right-ear parameters in the right-ear IIR filters 121Ra to 121Rz, so that one parameter set corresponds to one IIR filter. It also sets the left-ear and right-ear delay amounts corresponding to the input target position information and the head size and the like, the left-ear delay amount in the delay 122L and the right-ear delay amount in the delay 122R, and likewise sets the corresponding level adjustment amounts in the level adjusters 123L and 123R.
  • The IIR filters 121La to 121Lz and 121Ra to 121Rz, the delays 122L and 122R, and the level adjusters 123L and 123R process the sound source signal, divided into a left-ear signal and a right-ear signal, according to the parameters set by the parameter setting unit 21, and output the sound-image-localized left-ear (Lch) and right-ear (Rch) sound image localization signals.
  • In this way, only selected ones of the peaks, dips, high-band attenuation, and low-band attenuation, the structural features of the head-related transfer function corresponding to the listener's physical feature information and the target position, are reproduced, so the required amounts of data and calculation can be reduced and the sound image can easily be localized correctly for many listeners.
  • As shown in FIG. 9, the parameter setting unit 21 may have a parameter calculation unit 211 that holds in advance a function representing the relationship between the values representing the physical features, the target position, and the parameter values, and may calculate the parameters corresponding to the input target position information and physical feature information with this function.
  • Alternatively, as shown in FIG. 10, the parameter setting unit 21 may have a parameter selection unit 212 that holds, for each value representing a physical feature, a parameter table storing the parameters corresponding to the target positions, and may select from the table the parameter corresponding to the input physical feature information and target position information.
  • The parameter for the target position can be obtained from the parameters for positions close to the target position using interpolation processing such as the commonly used linear interpolation.
  • Although the parameters are set based on physical feature information in the present embodiment, they may instead be set based on a head-related transfer function obtained by actual measurement or numerical calculation for the listener.
  • In that case, the peaks, dips, high-band attenuation, and low-band attenuation of the amplitude frequency characteristic, as well as the ITD and ILD, may be extracted from the listener's head-related transfer function, and the parameters may be set based on them.
  • the parameters may be set based on attribute information such as the age and gender of the listener.
  • Alternatively, the parameters may be set based on information on the listener's auditory characteristics, such as the directional bands and hearing acuity described in detail in Non-Patent Document 1.
  • FIG. 11 is a view showing a sound image localization apparatus according to a third embodiment of the present invention. Since this embodiment is configured substantially the same as the above-described second embodiment, the same reference numerals are given to the same components and only the characteristic parts will be described.
  • The sound image localization apparatus of the present embodiment is characterized by a physical feature extraction unit 31 that extracts physical feature information from input information including the listener's physical features and outputs it to the parameter setting unit 21, and by determining the parameters to be set in the sound image localization processing unit 12 based on the physical feature information extracted by the physical feature extraction unit 31 and the target position information.
  • For each value of physical feature information regarding the shape of the pinna and the like (for example, the size of the pinna or the size of the concha cavity) and for each target position at which the sound image is to be localized, the parameter setting unit 21 holds parameters (center frequency fc, sharpness Q, level L) corresponding to selected ones of the structural features, such as peaks, dips, high-band attenuation, and low-band attenuation, included in the corresponding head-related transfer function.
  • Similarly, for each value of physical feature information regarding the size of the head and the like (for example, the head width seen from the front) and for each target position at which the sound image is to be localized, the parameter setting unit 21 holds parameters (delay amount and level adjustment amount).
  • The physical feature extraction unit 31 receives image information such as an image of the ear or of the entire head captured by a camera or the like.
  • Using image recognition techniques such as feature extraction and pattern matching, the image recognition unit 311 of the physical feature extraction unit 31 extracts physical feature information, such as the size of the pinna or head and the shape of the pinna, from the input image information and outputs it to the parameter setting unit 21.
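As a toy illustration of the kind of measurement such an extraction unit might make (the patent does not specify the algorithm, so everything here is a hypothetical stand-in), the sketch below estimates a head width in pixels from a binary frontal silhouette:

```python
# Hypothetical sketch: estimating head width (in pixels) from a binary
# frontal silhouette. The tiny hand-made 0/1 raster stands in for the
# camera image the patent mentions; real extraction would use proper
# feature extraction and pattern matching.
silhouette = [
    [0, 0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0, 0],
    [0, 1, 1, 1, 1, 0],
    [0, 1, 1, 1, 1, 0],
    [0, 0, 1, 1, 0, 0],
]

def head_width_pixels(mask):
    """Widest count of foreground pixels over all rows of the mask."""
    return max(sum(row) for row in mask)

width = head_width_pixels(silhouette)
```

Converted to physical units via the camera geometry, such a measurement could index the parameter table described above; the point of the sketch is only the flow from image information to a scalar physical feature.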
  • The parameter setting unit 21 reads out the left-ear and right-ear parameters (center frequency fc, sharpness Q, level L) corresponding to the input target position information and the extracted pinna shape and the like, and sets the left-ear parameters in the left-ear IIR filters 121La to 121Lz and the right-ear parameters in the right-ear IIR filters 121Ra to 121Rz, one parameter set per IIR filter. It also sets the left-ear and right-ear delay amounts corresponding to the input target position information and the head size and the like, the left-ear delay amount in the delay 122L and the right-ear delay amount in the delay 122R, and likewise sets the corresponding level adjustment amounts in the level adjusters 123L and 123R.
  • The IIR filters 121La to 121Lz and 121Ra to 121Rz, the delays 122L and 122R, and the level adjusters 123L and 123R process the sound source signal, divided into a left-ear signal and a right-ear signal, according to the parameters set by the parameter setting unit 21, and output the sound-image-localized left-ear (Lch) and right-ear (Rch) sound image localization signals.
  • As described above, in the present embodiment, physical feature information is extracted from information, such as image information, that includes the listener's physical features, and parameters adapted to the extracted physical feature information and the target position are set.
  • Sound image localization may also be performed using only the delay and the level adjuster, without setting in the parameter setting unit the center frequency, sharpness, and level representing the peaks, dips, high-band attenuation, and low-band attenuation, and without providing IIR filters in the sound image localization processing unit.
  • Near the median plane, the difference between the amplitude frequency characteristics of the left and right head-related transfer functions is small. Therefore, when localization is required only in the vicinity of the median plane, the same effect can be obtained by providing only one row of IIR filters, without dividing them into left-ear and right-ear filters, and performing the same processing for both ears.
  • Likewise, the same effect can be obtained by setting only one of the ITD and the ILD in the delay or level adjuster of the sound image localization processing unit and performing sound image localization processing.
  • The structural features contained in the head-related transfer function Hr(f; θ) of the right ear at a position at angle θ from the median plane and those contained in the head-related transfer function Hl(f; −θ) of the left ear at the left-right symmetric position can be regarded as substantially identical.
  • Likewise, the structural features contained in the left-ear head-related transfer function Hl(f; θ) and those contained in the right-ear head-related transfer function Hr(f; −θ) at the left-right symmetric position can be regarded as almost identical.
  • Accordingly, in FIG. 13, even if the structural features of Hr(f; θ) are used as those of Hl(f; −θ), and the structural features of Hl(f; θ) are used as those of Hr(f; −θ), equivalent effects can be obtained. The same holds if only the information for the positions in the left half is held.
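The left-right symmetry described above amounts to storing structural features for one side only and mirroring the azimuth when serving the other ear, halving the stored data. A minimal sketch, with a hypothetical feature table and lookup function (names and values are invented for illustration):

```python
def hrtf_features(stored_left, ear, azimuth):
    """Look up HRTF structural features (e.g. peak/dip parameters) using
    left-right symmetry: only left-ear features are stored, and
    Hr(f; theta) is served from Hl(f; -theta). 'stored_left' maps an
    azimuth in degrees (negative = listener's left) to a feature dict."""
    if ear == 'left':
        return stored_left[azimuth]
    # right ear: mirror the azimuth about the median plane
    return stored_left[-azimuth]

# Hypothetical feature table, held only for the left ear
table = {-30: {'dip_hz': 7800}, 0: {'dip_hz': 8200}, 30: {'dip_hz': 8600}}
right_features = hrtf_features(table, 'right', -30)
```

Here the right-ear request at −30° is answered with the left-ear features stored for +30°, exactly the substitution the text describes.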
  • The number of structural features of the head-related transfer function used to perform sound image localization does not need to be kept constant at all times; it may be changed manually or automatically depending on the direction in which the sound image is localized, the amount of processing that can be allocated, or the required accuracy of sound image localization.
  • Even when the amount of processing is limited, degradation of the sound image localization effect can be suppressed as long as the structural elements that play an important role in sound image localization are reproduced.
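A variable-size set of structural features can be realized as a cascade of peak/dip sections, so that dropping less important features simply shortens the cascade. The following sketch (illustrative only; the feature values are invented, and the section formula is the common Audio EQ Cookbook peaking form) evaluates the magnitude response of such a cascade:

```python
import numpy as np

def cascade_response(features, fs, freqs):
    """Magnitude response of a cascade of peak/dip sections, one per
    structural feature (center frequency in Hz, level in dB, sharpness q).
    Using fewer features reduces processing at the cost of detail."""
    w = 2.0 * np.pi * freqs / fs
    z = np.exp(1j * w)
    H = np.ones_like(z)
    for fc, gain_db, q in features:
        a_lin = 10.0 ** (gain_db / 40.0)
        w0 = 2.0 * np.pi * fc / fs
        alpha = np.sin(w0) / (2.0 * q)
        num = (1 + alpha * a_lin) - 2 * np.cos(w0) / z + (1 - alpha * a_lin) / z**2
        den = (1 + alpha / a_lin) - 2 * np.cos(w0) / z + (1 - alpha / a_lin) / z**2
        H *= num / den
    return np.abs(H)

# One dominant pinna dip, versus the same dip plus a secondary peak
coarse = [(8000.0, -12.0, 4.0)]
fine = coarse + [(12000.0, 4.0, 6.0)]
mag = cascade_response(fine, fs=44100, freqs=np.array([8000.0]))
```

The `coarse` list would be used when processing is scarce, `fine` when more accuracy is wanted; only the loop count changes.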
  • The above-described processing may also be performed by other means having the same functions as the IIR filter, delay, and level adjuster used for the sound image localization processing.
  • The above-described processing may also be implemented as a program running on a DSP (Digital Signal Processor) or the like.
  • The parameter setting unit and the physical feature extraction unit may each be used as a sound image localization auxiliary device that sets the parameters for sound image localization, or as a sound image localization information server that provides the parameters for sound image localization via communication or the like.
  • The sound image localization processing unit may be a sound image localization processing device that performs sound image localization processing based on the parameters for sound image localization.
  • It is obvious that a known crosstalk cancellation device may be connected to the sound image localization device of each of the above-described embodiments so that the signals, after crosstalk cancellation processing, are reproduced through loudspeakers or the like.
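For loudspeaker reproduction, crosstalk cancellation can be sketched as inverting, at each frequency, the 2x2 matrix of speaker-to-ear transfer functions; the single-bin toy values below are assumptions for illustration, not measured data or part of the disclosure:

```python
import numpy as np

def crosstalk_canceller(H):
    """Per-frequency 2x2 crosstalk cancellation matrix C = H^-1, where
    H = [[H_LL, H_LR], [H_RL, H_RR]] holds the speaker-to-ear transfer
    functions at one frequency bin. Binaural signals filtered by C reach
    the ears as if the crosstalk paths (H_LR, H_RL) were absent."""
    return np.linalg.inv(H)

# Toy single-bin example: direct paths of gain 1, crosstalk of gain 0.4
H = np.array([[1.0, 0.4],
              [0.4, 1.0]])
C = crosstalk_canceller(H)
# A left-ear-only binaural input arrives at the ears uncorrupted
ear_signals = H @ C @ np.array([1.0, 0.0])
```

In practice this inversion is done per frequency bin with regularization, since H can be nearly singular at some frequencies; the sketch omits that for brevity.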
  • The sound image localization device according to the present invention reduces the required amount of data and computation, and can easily and appropriately localize sound images for many listeners.
  • It is useful for any device that performs sound reproduction or the like, such as mobile phones, sound reproduction devices, sound recording devices, information terminal devices, game machines, conference devices, and communication and broadcasting systems.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Stereophonic System (AREA)
PCT/JP2005/015724 2004-12-24 2005-08-30 Sound image localization device Ceased WO2006067893A1 (ja)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN2005800479265A CN101116374B (zh) 2004-12-24 2005-08-30 Sound image localization device
US11/722,586 US20080219454A1 (en) 2004-12-24 2005-08-30 Sound Image Localization Apparatus
EP05781488A EP1830604A1 (en) 2004-12-24 2005-08-30 Acoustic image locating device

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2004-373883 2004-12-24
JP2004373883 2004-12-24
JP2005161602A JP2006203850A (ja) 2005-06-01 Sound image localization device
JP2005-161602 2005-06-01

Publications (1)

Publication Number Publication Date
WO2006067893A1 true WO2006067893A1 (ja) 2006-06-29

Family

ID=36601504

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2005/015724 Ceased WO2006067893A1 (ja) 2004-12-24 2005-08-30 Sound image localization device

Country Status (4)

Country Link
US (1) US20080219454A1 (en)
EP (1) EP1830604A1 (en)
JP (1) JP2006203850A (ja)
WO (1) WO2006067893A1 (ja)

Families Citing this family (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8116458B2 (en) * 2006-10-19 2012-02-14 Panasonic Corporation Acoustic image localization apparatus, acoustic image localization system, and acoustic image localization method, program and integrated circuit
JP2010272911A (ja) * 2009-05-19 2010-12-02 Nissan Motor Co Ltd Sound information providing device and sound information providing method
JP5540240B2 (ja) * 2009-09-25 2014-07-02 Korg Inc. Acoustic device
JP5672741B2 (ja) * 2010-03-31 2015-02-18 Sony Corporation Signal processing device, signal processing method, and program
JP5891438B2 (ja) * 2012-03-16 2016-03-23 Panasonic IP Management Co., Ltd. Sound image localization device, sound image localization processing method, and sound image localization processing program
RU2591179C2 (ru) 2012-03-23 2016-07-10 Dolby Laboratories Licensing Corporation Method and system for generating a head-related transfer function by linear mixing of head-related transfer functions
US9264812B2 (en) * 2012-06-15 2016-02-16 Kabushiki Kaisha Toshiba Apparatus and method for localizing a sound image, and a non-transitory computer readable medium
JP5651813B1 (ja) 2013-06-20 2015-01-14 Panasonic IP Management Co., Ltd. Audio signal processing device and audio signal processing method
JP6233023B2 (ja) * 2014-01-06 2017-11-22 Fujitsu Ltd Acoustic processing device, acoustic processing method, and acoustic processing program
JP2015211235A (ja) * 2014-04-23 2015-11-24 National Institute of Information and Communications Technology Stereophonic sound reproduction device and program
CN104410939B (zh) * 2014-10-16 2017-12-29 Huawei Technologies Co., Ltd. Sound image orientation processing method and apparatus
JP6613737B2 (ja) * 2015-09-07 2019-12-04 Casio Computer Co., Ltd. Sound source signal conversion device, method, and program
EP3304929B1 (en) 2015-10-14 2021-07-14 Huawei Technologies Co., Ltd. Method and device for generating an elevated sound impression
JP6730568B2 (ja) * 2015-10-28 2020-07-29 National Institute of Information and Communications Technology Stereophonic sound reproduction device and program
CN108370485B (zh) * 2015-12-07 2020-08-25 Huawei Technologies Co., Ltd. Audio signal processing device and method
JP6732464B2 (ja) * 2016-02-12 2020-07-29 Canon Inc. Information processing device and information processing method
JP2020500492A (ja) * 2016-11-13 2020-01-09 EmbodyVR, Inc. Spatially ambient-aware personal audio delivery device
JP6926640B2 (ja) 2017-04-27 2021-08-25 TEAC Corporation Target position setting device and sound image localization device
EP3720148B1 (en) * 2017-12-01 2025-05-21 Socionext Inc. Signal processing device and signal processing method
US10932083B2 (en) * 2019-04-18 2021-02-23 Facebook Technologies, Llc Individualization of head related transfer function templates for presentation of audio content
CN113238189B (zh) * 2021-05-24 2023-03-10 Tsinghua University Sound source identification method and system based on array measurement and sparse prior information
JP7770680B2 (ja) * 2022-02-18 2025-11-17 Chiba Institute of Technology Head-related transfer function generation device, program, and head-related transfer function generation method

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH099398A (ja) * 1995-06-20 1997-01-10 Matsushita Electric Ind Co Ltd Sound image localization device
JPH1042397A (ja) * 1996-07-23 1998-02-13 Kawai Musical Instr Mfg Co Ltd Sound image localization device and stereo sound image widening device
JPH1127800A (ja) * 1997-07-03 1999-01-29 Fujitsu Ltd Stereophonic sound processing system
JPH11220797A (ja) * 1998-02-03 1999-08-10 Sony Corp Headphone device
JP2000324590A (ja) * 1999-05-13 2000-11-24 Mitsubishi Electric Corp Sound reproduction device
JP2001016697A (ja) * 1999-05-29 2001-01-19 Central Res Lab Ltd Method and apparatus for modifying an original head-related transfer function
JP2002281599A (ja) * 2001-03-22 2002-09-27 Victor Co Of Japan Ltd Multi-channel audio reproduction device
JP2003032776A (ja) * 2001-07-17 2003-01-31 Matsushita Electric Ind Co Ltd Reproduction system
JP2003230199A (ja) * 2002-02-04 2003-08-15 Yamaha Corp Virtual speaker amplifier

Family Cites Families (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0747039Y2 (ja) * 1989-05-16 1995-10-25 Yamaha Corporation Headphone listening correction device
JPH06245300A (ja) * 1992-12-21 1994-09-02 Victor Co Of Japan Ltd Sound image localization control device
JP3367625B2 (ja) * 1995-01-26 2003-01-14 Victor Company of Japan, Ltd. Sound image localization control device
DE69637736D1 (de) * 1995-09-08 2008-12-18 Fujitsu Ltd Three-dimensional acoustic processor using linear predictive coefficients
JPH10136497A (ja) * 1996-10-24 1998-05-22 Roland Corp Sound image localization device
US6181800B1 (en) * 1997-03-10 2001-01-30 Advanced Micro Devices, Inc. System and method for interactive approximation of a head transfer function
JPH1127799A (ja) * 1997-07-02 1999-01-29 Sanyo Electric Co Ltd Sound image control device
EP0905933A3 (de) * 1997-09-24 2004-03-24 STUDER Professional Audio AG Method and device for mixing audio signals
TW410527B (en) * 1998-01-08 2000-11-01 Sanyo Electric Co Stereo sound processing device
GB9805534D0 (en) * 1998-03-17 1998-05-13 Central Research Lab Ltd A method of improving 3d sound reproduction
JP4240683B2 (ja) * 1999-09-29 2009-03-18 Sony Corporation Audio processing device
JP4264686B2 (ja) * 2000-09-14 2009-05-20 Sony Corporation In-vehicle sound reproduction device
JP2003153398A (ja) * 2001-11-09 2003-05-23 Nippon Hoso Kyokai <Nhk> Apparatus and method for front-rear sound image localization with headphones
US20050228518A1 (en) * 2002-02-13 2005-10-13 Applied Neurosystems Corporation Filter set for frequency analysis
JP3999689B2 (ja) * 2003-03-17 2007-10-31 International Business Machines Corporation Sound source position acquisition system, sound source position acquisition method, sound reflection element for use in the system, and method of forming the sound reflection element
US6937737B2 (en) * 2003-10-27 2005-08-30 Britannia Investment Corporation Multi-channel audio surround sound from front located loudspeakers
US7680289B2 (en) * 2003-11-04 2010-03-16 Texas Instruments Incorporated Binaural sound localization using a formant-type cascade of resonators and anti-resonators
US7634092B2 (en) * 2004-10-14 2009-12-15 Dolby Laboratories Licensing Corporation Head related transfer functions for panned stereo audio content


Also Published As

Publication number Publication date
JP2006203850A (ja) 2006-08-03
EP1830604A1 (en) 2007-09-05
US20080219454A1 (en) 2008-09-11

Similar Documents

Publication Publication Date Title
JP4694590B2 (ja) Sound image localization device
WO2006067893A1 (ja) Sound image localization device
EP3311593B1 (en) Binaural audio reproduction
JP4584416B2 (ja) Multi-channel audio reproduction apparatus and method for loudspeaker reproduction using position-adjustable virtual sound images
US7231054B1 (en) Method and apparatus for three-dimensional audio display
JP2008522483A (ja) Apparatus and method for reproducing a multi-channel audio input signal with a two-channel output, and recording medium on which a program therefor is recorded
AU2002234849A1 (en) A method and system for simulating a 3D sound environment
WO2002071797A2 (en) A method and system for simulating a 3d sound environment
JP2000152397A (ja) Three-dimensional sound reproduction device for multiple listeners and method thereof
CN106664499A (zh) Audio signal processing device
KR20080060640A (ko) Method and apparatus for two-channel stereophonic sound reproduction considering individual auditory characteristics
JP2009077379A (ja) Stereophonic sound reproduction device, stereophonic sound reproduction method, and computer program
US20200059750A1 (en) Sound spatialization method
JP6515720B2 (ja) Out-of-head localization processing device, out-of-head localization processing method, and program
KR20130080819A (ko) Method and apparatus for localizing multi-channel sound signals
JP4744695B2 (ja) Virtual sound source device
WO1999035885A1 (fr) Sound image localization device (Dispositif de localisation d'images sonores)
Sunder Binaural audio engineering
JPWO2000045619A1 (ja) Virtual sound source device and acoustic device using the same
JPH09327099A (ja) Sound reproduction device
JP2004128854A (ja) Sound reproduction device
JP2008502200A (ja) Wide stereo reproduction method and apparatus
DK180449B1 (en) A method and system for real-time implementation of head-related transfer functions
KR20050012085A (ko) Method and apparatus for three-dimensional stereophonic sound reproduction
Klunk Spatial Evaluation of Cross-Talk Cancellation Performance Utilizing In-Situ Recorded BRTFs

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS KE KG KM KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NG NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU LV MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 11722586

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 2005781488

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 200580047926.5

Country of ref document: CN

WWP Wipo information: published in national office

Ref document number: 2005781488

Country of ref document: EP