WO2005025270A1 - Design tool for a sound image control device, and sound image control device - Google Patents
- Publication number
- WO2005025270A1 (application PCT/JP2004/013091)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- head
- sound
- image control
- control device
- transfer function
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S1/00—Two-channel systems
- H04S1/002—Non-adaptive circuits, e.g. manually adjustable or static, for enhancing the sound image or the spatial distribution
- H04S1/005—For headphones
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S5/00—Pseudo-stereo systems, e.g. in which additional channel signals are derived from monophonic signals by means of phase shifting, time delay or reverberation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R5/00—Stereophonic arrangements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S5/00—Pseudo-stereo systems, e.g. in which additional channel signals are derived from monophonic signals by means of phase shifting, time delay or reverberation
- H04S5/02—Pseudo-stereo systems, e.g. in which additional channel signals are derived from monophonic signals by means of phase shifting, time delay or reverberation of the pseudo four-channel type, e.g. in which rear channel signals are derived from two-channel stereo signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S7/00—Indicating arrangements; Control arrangements, e.g. balance control
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S2420/00—Techniques used stereophonic systems covered by H04S but not provided for in its groups
- H04S2420/01—Enhancing the perception of the sound image or of the spatial distribution using head related transfer functions [HRTF's] or equivalents thereof, e.g. interaural time difference [ITD] or interaural level difference [ILD]
Definitions
- the present invention relates to a sound image control device that uses a sound converter such as a speaker or a headphone to localize a sound image at a position other than where the sound converter exists, and a design tool for the sound image control device.
- a sound converter such as a speaker or a headphone
- This head-related transfer function is a function that represents how the sound is transmitted to the ear when the speaker (sound source) is emitting sound.
- HRTF head-related transfer function
- FIG. 1 (a) is a diagram showing an example of a conventional method for obtaining a head-related transfer function by actual measurement.
- head-related transfer functions are conventionally measured using subjects, or dummy heads (measuring mannequins with standard head dimensions), in an anechoic chamber free of reflections from walls and floors. In Fig. 1 (a), a measurement speaker is installed at a position about 1 m away from the dummy head, and the transfer functions from the speaker to the left and right ears of the dummy head are measured.
- Microphones are installed in the ears (at the ear canal entrances) of the dummy head, and these microphones receive measurement impulses generated from the speaker.
- A is the response of the ear far from the speaker (distal ear response)
- S is the response of the ear near the speaker (proximal ear response).
- FIG. 1 (b) is a block diagram showing a configuration of a conventional sound image control device.
- this sound image control device modifies the measured head related transfer function as shown in Fig. 1 (a) using time-domain and frequency-domain signal processing.
- for a head larger than the standard size, the resonance frequencies in the frequency response characteristics of the proximal ear response and the distal ear response are lowered according to the size ratio,
- and the interaural delay time is lengthened according to the same ratio, to accommodate individual differences among listeners.
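The conventional delay-time correction described above treats the head as a simple sphere. As a minimal sketch, assuming the common Woodworth spherical-head approximation of the interaural time difference (the formula and the 8.75 cm standard radius are illustrative assumptions, not figures from this document), the delay scales linearly with head radius:

```python
import math

def interaural_delay(head_radius_m, azimuth_deg, speed_of_sound=343.0):
    """Woodworth spherical-head approximation of the interaural time
    difference (ITD) for a source at the given azimuth (0 deg = front).
    This is an illustrative model, not the patent's formulation."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound) * (theta + math.sin(theta))

# Scaling the delay for a head 10 % larger than the assumed standard size:
standard_itd = interaural_delay(0.0875, 90.0)       # standard radius ~8.75 cm
larger_itd = interaural_delay(0.0875 * 1.1, 90.0)   # delay grows by the same 1.1 ratio
```

Because the ITD is linear in the radius, scaling the head size by a ratio scales the delay by exactly that ratio, which is the essence of the conventional correction.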
- Japanese Patent Application Laid-Open No. 2001-16697 page 9
- FIG. 2 is a diagram showing a conventional example of calculating head-related transfer functions for a plurality of sound sources using a three-dimensional head model expressed on a computer.
- the three-dimensional shape of the head such as a dummy head, is taken into the computer and used as a head model.
- Each intersection of the mesh drawn on the surface of the head model is called a “node”.
- Each of these nodes is specified by three-dimensional coordinates.
- the head-related transfer function is obtained when the sound source is placed at 0 °, 30 °, 60 °, and 90 ° with respect to the right ear of the head model.
- by calculating the potential at each node with the sound source placed at 0°, at 30°, at 60°, and at 90°,
- the head-related transfer functions for sound sources placed in the 0°, 30°, 60°, and 90° directions can be calculated.
- the conventional transfer function correction based on head size adjusts the interaural delay time by treating the head as a simple sphere.
- consequently, the change in frequency characteristics due to interference between sounds cannot be reproduced, and individual differences in the effect of sound image control cannot be reduced.
- the present invention has been made to solve the above-described problems, and has as its object to accurately obtain a large variety of transfer functions for changes in azimuth, elevation, and distance under identical conditions.
- the first transfer function, which indicates the transfer characteristic of sound from the sound source to a sound receiving point on the head, is filtered so as to realize the second transfer function from a target sound source, at a position different from that of the sound source, to the same sound receiving point.
- since the head-related transfer function is calculated by a computer, sound generation from an ideal point sound source and completely omnidirectional sound reception, both of which are impossible with actual measurement, can be realized, and the transfer functions can be calculated accurately. This makes more accurate sound image localization possible.
- since the entrance of the ear canal or the eardrum is used as the sounding point, a highly accurate transfer function is obtained even when the acoustic transducer is near the head, so accurate sound image localization is obtained even when such a transducer is used.
- the characteristic function is calculated based on a plurality of types of head models in which the size of each part of the head is different
- the characteristic function storage means stores a characteristic function for each of these types
- the sound image control device further includes element input means for receiving, from a listener, an input of an element that determines one of the types
- the second transfer function generating means generates the second transfer function using the characteristic function corresponding to the type so determined. The listener can therefore adapt the device to the interference of sound caused by head size and to individual differences in ear canal shape by inputting the element that best matches the shape of his or her head, so that individual differences in the effect of sound image control can be reduced.
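A minimal sketch of how such element inputs could select a stored characteristic function, assuming a simple lookup table keyed by listener-reported head elements (all key and set names here are illustrative, not taken from this document):

```python
# Hypothetical bank of precomputed characteristic-function sets, keyed by
# the listener-input elements that determine the head-model type.
CHARACTERISTIC_FUNCTIONS = {
    ("narrow", "short"): "E_set_narrow_short",
    ("narrow", "long"): "E_set_narrow_long",
    ("wide", "short"): "E_set_wide_short",
    ("wide", "long"): "E_set_wide_long",
}

def select_characteristic_function(head_width, ear_length):
    """Return the stored characteristic-function set matching the
    listener's input elements; fall back to a default type when the
    combination is not classified."""
    return CHARACTERISTIC_FUNCTIONS.get((head_width, ear_length),
                                        "E_set_default")

chosen = select_characteristic_function("wide", "short")
```

The later embodiments classify listeners into e.g. 27 or 9 types; the same pattern scales by adding more keys per element.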
- the present invention can be realized not only as such a design tool and sound image control device, but also as a design method and a sound image control method having the characteristic means of the design tool and the sound image control device as steps, or as a program causing a computer to execute those steps. It goes without saying that such a program can be distributed via a recording medium such as a CD-ROM or a transmission medium such as the Internet.
- FIG. 1 (a) is a diagram showing an example of a conventional method for obtaining a head-related transfer function by actual measurement.
- FIG. 1 (b) is a block diagram showing a configuration of a conventional sound image control device.
- FIG. 2 is a diagram showing a conventional example of calculating head-related transfer functions for a plurality of sound sources using a three-dimensional head model expressed on a computer.
- FIG. 3 (a) is a diagram showing an example of an actual dummy head for calculating a head-related transfer function.
- Fig. 3 (b) is a front view of the head model.
- FIG. 4 (a) is an enlarged front view of the right pinna of the head model according to the first embodiment.
- FIG. 4B is an enlarged top view of the right pinna of the head model according to the first embodiment.
- FIG. 5 is a diagram illustrating an example of a method of calculating a head-related transfer function according to the first embodiment.
- Fig. 6 (a) is a diagram showing a calculation model of the transfer function from the position of the acoustic transducer to the ear canal entrance.
- Fig. 6 (b) is a diagram showing a calculation model of the transfer function from the position of the target sound image to the ear canal entrance.
- FIG. 7 is a basic block diagram of a sound image control device using a correction filter.
- FIG. 8 is a diagram illustrating an example in which a listener uses a portable device equipped with an acoustic converter for sound image control using the calculation method according to the first embodiment.
- FIG. 9 (a) is a graph showing frequency characteristics of the transfer function H1 and the transfer function H4.
- FIG. 9 (b) is a graph showing the frequency characteristics of the transfer function H2 and the transfer function H3.
- FIG. 9 (c) is a graph showing the frequency characteristic of the transfer function H5.
- Fig. 9 (d) is a graph showing the frequency characteristic of the transfer function H6.
- FIG. 10 (a) is a graph showing the frequency characteristic of the characteristic function E1.
- FIG. 11 is a diagram showing a calculation model of a transfer function from the acoustic transducer to the ear canal entrance of the sound image control device according to the second embodiment.
- FIG. 12 is a diagram showing basic blocks of a sound image control device using a transfer function obtained from the relationship shown in FIG.
- FIG. 13 (a) is a front view of the right pinna of the head model 3, and FIG. 13 (b) is a top view of the right pinna of the head model 3.
- FIG. 14 is a diagram illustrating an example of a calculation model of a transfer function from the acoustic transducer of the sound image control device to the eardrum using the head model 3 of FIG.
- FIG. 15 is a diagram showing an example of a calculation model of a transfer function from the eardrum to the sound receiving point 10 defined on the target sound source 11.
- FIG. 16 is a diagram showing basic blocks of a sound image control device using the transfer functions H11 to H16 obtained from the relationships shown in FIGS. 14 and 15.
- FIG. 17 is a diagram illustrating an example of a calculation model of a transfer function from the acoustic transducer to the eardrum of the sound image control device according to the fourth embodiment.
- Fig. 18 shows the basic blocks of a sound image controller using the transfer functions H17, H18, H15, and H16 obtained from the relationship shown in Fig. 17.
- FIG. 19 (a) is a front view of head model 30 for obtaining the transfer function of the sound image control device according to the fifth embodiment.
- FIG. 19 (b) is a side view of the head model 30.
- FIG. 20 is a perspective view showing the size of another part of the head model.
- Figure 21 is a graph showing the variation in ear length and tragus spacing between men and women.
- FIG. 22 is a table showing a specific classification of a population to which the sound image control device according to the sixth embodiment is provided.
- FIG. 23 shows a block diagram configured to switch the correction filter characteristics based on the average value of the population and the specific classification.
- FIG. 24 (a) is a table showing examples of head models M51 to M59 classified into a group having a head width w1.
- FIG. 24 (b) is a table showing examples of head models M61 to M69 classified into a group having a head width of w2.
- FIG. 24 (c) is a table showing examples of head models M71 to M79 classified into a group whose head width is w3.
- Fig. 25 is a block diagram of a configuration that switches the correction filter characteristics corresponding to the head model according to the specific classification into the 27 types shown in Figs. 24 (a) to (c).
- FIG. 26 (a) is a front view showing the pinna part in detail.
- FIG. 26 (b) is a top view showing the pinna in detail.
- FIG. 27 is a table illustrating still another example of the specific classification of the population to which the sound image control device according to the seventh embodiment is provided.
- FIG. 28 shows a block diagram in which the correction filter characteristic corresponding to the head model is switched according to the specific classification classified into nine types as shown in FIG.
- FIG. 29 is a diagram illustrating a processing procedure in the sound image control device when a set of potential data for a plurality of types of head models is stored in the sound image control device.
- FIG. 30 is a diagram showing an example of a characteristic function setting procedure in a case where the sound image control device of the present invention, or an audio device including it, is provided with a setting input unit that receives setting inputs of a plurality of elements for determining the type of a head model.
- FIG. 31 is a diagram illustrating an example of a procedure when a listener performs a setting input while listening to sound from a speaker in the sound image control device including the setting input unit illustrated in FIG. 30.
- FIG. 32 is a diagram showing an example of assisting input of the setting input unit shown in FIG. 31 from an image of a person's face taken by a mobile phone.
- Figure 33 is a diagram showing an example in which input assistance is performed based on a photograph of the pinna, to compensate for the difficulty of capturing the shape of the ear in a normal portrait photograph taken from the front.
- FIG. 34 is a diagram illustrating an example in which a stereo camera, or two photographs of the same ear, is used to capture the ear three-dimensionally.
- FIG. 35 is a diagram illustrating an example of a processing procedure in a case where the sound image control device or the acoustic device including the sound image control device holds the characteristic function of the correction filter for each element input and set.
- Fig. 36 shows an example in which a mobile phone equipped with a sound image control device transmits data input from a setting input unit, etc., to a server on the Internet and receives optimal parameters based on the transmitted data.
- Figure 37 shows an example of a case where a mobile phone equipped with a sound image control device sends image data captured by a built-in camera, etc., to a server on the Internet and receives supply of optimal parameters based on the transmitted image data.
- FIG. 38 is a diagram illustrating an example of a case where a mobile phone or the like including a sound image control device includes a display unit that displays individual elements of a listener for parameter setting.
- Fig. 39 (a) is a graph showing the waveform and phase characteristics of the transfer function obtained by the simulation used in Embodiments 1 to 8.
- Fig. 39 (b) is a graph showing the waveform and phase characteristics of the transfer function obtained by actual measurement as in the past.
- the sound image control device obtains transfer functions by the boundary element method, using a calculation model in which a three-dimensional head model of human shape is expressed on a computer and the positions of the sound source and the sound receiving point are interchanged.
- the boundary element method itself is introduced in detail in Non-Patent Document 1.
- using this boundary element method, calculations such as those shown in "The Acoustical Society of Japan, Autumn Meeting 2001, Lecture Papers (pp. 403-404)" (hereinafter referred to as "Non-Patent Document 2") can be performed.
- in Non-Patent Document 2, a real model corresponding to a three-dimensional model represented on a computer is manufactured with high accuracy, and the transfer function from a sound source to the entrance of the ear canal measured with the real model shows good agreement with the result calculated by the boundary element method.
- in that comparison the frequency range was limited to 7.3 kHz or less; however, it is clear that by increasing the accuracy of the computer model and making the intervals between nodes smaller, measured and numerically calculated results will agree over the entire human audible band.
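The relation between node spacing and the highest frequency the mesh can resolve can be sketched with the common rule of thumb of about six elements per acoustic wavelength (the six-element figure and the 343 m/s sound speed are assumptions here, not values stated in this document):

```python
def max_valid_frequency(node_spacing_m, elements_per_wavelength=6, c=343.0):
    """Highest frequency (Hz) a boundary-element mesh can resolve,
    under the common ~6-elements-per-wavelength rule of thumb."""
    return c / (elements_per_wavelength * node_spacing_m)

def required_spacing(f_max_hz, elements_per_wavelength=6, c=343.0):
    """Node spacing (m) needed to reach a target upper frequency."""
    return c / (elements_per_wavelength * f_max_hz)

# Covering the full audible band (20 kHz) would need roughly 2.9 mm spacing:
spacing_20k = required_spacing(20000.0)
```

This makes concrete why shrinking the node interval extends the validated 7.3 kHz range toward the full audible band.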
- FIG. 3 shows a head model for obtaining a transfer function in the sound image control device according to the first embodiment.
- Figure 3 (a) is a diagram showing an example of an actual dummy head for calculating the HRTF.
- the actual dummy head shown in Fig. 3 (a) is accurately measured in three dimensions using a laser scanner or the like.
- the head model is constructed based on data from magnetic resonance images and X-ray computed tomography machines in the medical field.
- FIG. 3 (b) shows a front view of the head model obtained in this manner, and details of the right pinna of the head indicated by a broken line in this figure are shown below.
- the potential is calculated for each node of the mesh on the head model shown in FIG. 3 (b).
- FIG. 4 (a) is an enlarged front view of the right pinna of the head model according to the first embodiment,
- and FIG. 4 (b) is an enlarged top view of the same part.
- the bottom surfaces of the left and right ear canal entrances 1 and 2 and the entire bottom of the head model are closed by lids.
- FIG. 5 is a diagram illustrating an example of a method of calculating a head-related transfer function according to the first embodiment.
- by the principle of reciprocity, the same head-related transfer function is obtained even if the sounding point and the sound receiving point are interchanged.
- one sound source is installed at each of the left and right ear canal entrances of the head model. Since the sound sources are fixed at the left and right ear canal entrances, the potential at each node needs to be calculated only once per sound source, that is, only twice in total.
- the sound receiving point corresponding to a microphone is moved to the desired azimuth, elevation, and distance centered on the head model,
- and the transfer function from each ear canal entrance, which is a sounding point, to that receiving point is calculated.
- the transfer function for each new position of the sound receiving point can be calculated simply by synthesizing the sound pressure from the potentials already determined for each node.
- the sound pressure on the sphere can be calculated at one time by using the boundary element method.
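The synthesis step can be sketched as follows: once the boundary element solve has produced a potential (here modeled as a complex source strength) at every surface node, the pressure at any new exterior receiving point is a weighted sum of free-space Green's function contributions, so no new boundary solve is needed per receiving point. This is an illustrative monopole-summation sketch under that assumption, not the patent's exact formulation:

```python
import numpy as np

def synthesize_pressure(node_xyz, node_strength, receiver_xyz, freq_hz, c=343.0):
    """Sum free-space Green's function contributions from every surface
    node (complex strengths assumed to come from a prior boundary-element
    solve) to obtain the complex pressure at one exterior receiving point."""
    k = 2.0 * np.pi * freq_hz / c                       # acoustic wavenumber
    r = np.linalg.norm(node_xyz - receiver_xyz, axis=1) # node-to-receiver distances
    green = np.exp(-1j * k * r) / (4.0 * np.pi * r)     # e^{-jkr} / (4 pi r)
    return np.sum(node_strength * green)
```

Moving the receiving point only changes the distance vector `r`, which is why this post-processing is orders of magnitude cheaper than recomputing the node potentials.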
- Fig. 6 (a) shows the calculation model of the transfer function from the position of the acoustic transducer to the entrance of the ear canal
- Fig. 6 (b) shows the calculation model of the transfer function from the position of the target sound image to the entrance of the ear canal.
- the head model 3 is the same as the head model shown in FIG. 3 (b).
- sounding point 4 is the sounding point defined at the left ear canal entrance of the head model 3,
- and sounding point 5 is the sounding point defined at the right ear canal entrance of the head model 3.
- the sound receiving points 6 and 7 are sound receiving points such as microphones defined in the acoustic transducers 8 and 9 installed near the head model 3.
- the acoustic transducer 8 and the sound receiving point 6 are located near the left ear canal of the head model 3, and the acoustic transducer 9 and the sound receiving point 7 are located near the right ear canal of the head model 3.
- the transfer function from sounding point 4 to sound receiving point 6 is H1
- the transfer function from sounding point 4 to sound receiving point 7 is H3
- the transfer function from sounding point 5 to sound receiving point 6 is H2
- the transfer function from sounding point 5 to sound receiving point 7 is H4
- sound receiving point 10 is a sound receiving point defined on the target sound source 11, which is a virtual acoustic transducer.
- the transfer function from sounding point 4 to sound receiving point 10 is H5, and the transfer function from sounding point 5 to sound receiving point 10 is H6.
- the number of nodes of the head model 3 of the first embodiment is 15052, and the calculation time for sound pressure synthesis from the node potentials turned out to be only about 1/100 of the potential calculation time.
- when the sound pressure at sounding point 4 is defined as amplitude "1" and phase "0",
- the sound pressure at sound receiving point 6 gives the transfer function H1,
- and the transfer functions H3 and H5 are obtained from the sound pressures at sound receiving points 7 and 10.
- the same sound pressure is then defined for sounding point 5, and the transfer functions H2, H4, and H6 are obtained from the sound pressures at sound receiving points 6, 7, and 10.
- FIG. 7 is a basic block diagram of a sound image control device using a correction filter.
- the correction filter 13 and the correction filter 14 are used for the acoustic converter 8 and the acoustic converter 9 to realize the sound image of the target sound source 11 by using filtering.
- if the characteristic of the correction filter 13 is E1 and the characteristic of the correction filter 14 is E2, the following (Equation 1) holds from the condition that the transfer function from the input terminal 12 to each ear canal entrance is equal to the transfer function from the target sound source 11 to the same entrance.
- therefore, the characteristic function E1 and the characteristic function E2 are obtained by the following (Equation 2), which is a rearrangement of (Equation 1).
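The equation images are not reproduced in this text; a sketch, assuming (Equation 1) takes the standard crosstalk-cancellation form H1·E1 + H3·E2 = H5 and H2·E1 + H4·E2 = H6 (consistent with the definitions of H1 to H6 above, but a reconstruction nonetheless), solves this 2x2 system at every frequency bin:

```python
import numpy as np

def correction_filters(H1, H2, H3, H4, H5, H6, eps=1e-12):
    """Solve, at every frequency bin, the assumed (Equation 1) system
        H1*E1 + H3*E2 = H5
        H2*E1 + H4*E2 = H6
    for the correction-filter spectra E1 and E2, via Cramer's rule.
    All inputs are complex spectra sampled at the same bins."""
    det = H1 * H4 - H2 * H3
    det = np.where(np.abs(det) < eps, eps, det)  # guard near-singular bins
    E1 = (H4 * H5 - H3 * H6) / det
    E2 = (H1 * H6 - H2 * H5) / det
    return E1, E2
```

The singularity guard is an added practical detail: near frequencies where the two transducer paths become indistinguishable, the determinant collapses and unregularized filters would demand enormous gain.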
- since each transfer function H1 to H6 is a set of complex values at the discrete frequencies obtained by numerical calculation, the characteristic functions E1 and E2 can be used in the frequency domain: the signal at the input terminal 12 is first transformed into the frequency domain by a fast Fourier transform (FFT), multiplied by the characteristic function E1 and the characteristic function E2, transformed back by an inverse fast Fourier transform (IFFT), and output to the acoustic transducer 8 and the acoustic transducer 9.
- FFT fast Fourier transform
- IFFT inverse fast Fourier transform
- alternatively, each transfer function H1 to H6 can first be converted to a time-domain response by IFFT, and the characteristic functions E1 and E2 can then be realized as filter characteristics in the time domain by using a time-domain design technique such as that disclosed in Japanese Patent No. 2548103 (hereinafter referred to as "Patent Document 2").
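A minimal sketch of this time-domain alternative: converting a characteristic-function spectrum to a causal FIR filter by inverse FFT, a circular shift, and windowing. The tap count and window choice are illustrative assumptions, not details taken from Patent Document 2:

```python
import numpy as np

def spectrum_to_fir(E_half, n_taps=512):
    """Turn a one-sided complex spectrum (rfft layout, length N/2+1)
    into an approximately causal FIR filter: inverse FFT, shift the
    response to the middle of the window, then taper the edges."""
    h = np.fft.irfft(E_half)               # time-domain impulse response
    h = np.roll(h, n_taps // 2)[:n_taps]   # delay so the filter is causal
    return h * np.hanning(n_taps)          # reduce truncation ripple

def filter_signal(x, h):
    """Apply the FIR correction filter by direct convolution."""
    return np.convolve(x, h)
```

The half-length delay trades added latency for causality; shorter windows reduce latency at the cost of low-frequency accuracy.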
- in this way, the sound image produced by the signal at the input terminal 12 can be reliably localized at the position of the target sound source 11.
- FIG. 8 is a diagram illustrating an example in which a listener uses a portable device equipped with an acoustic converter for sound image control using the calculation method according to the first embodiment.
- broken line 16 is a straight line connecting the left and right ear canal entrances, that is, the sounding points 4 and 5.
- the dashed line 17 is a straight line that passes through the center 15 of the head and indicates an azimuth angle of 0°.
- An alternate long and short dash line 18 is a straight line connecting the centers of the acoustic transducers 8 and 9 and the center 15 of the head.
- the acoustic transducer 8 is located at a distance of 0.4 m from the center 15 of the head, at an azimuth angle of 10° and an elevation angle of 20°, and the acoustic transducer 9 is located symmetrically at an azimuth angle of -10° and an elevation angle of 20°.
- the target sound source 11 is located at an azimuth angle of 90°, an elevation angle of 15°, and a distance of 0.2 m from the center 15 of the head.
- FIG. 9 is a diagram showing calculation examples under the conditions shown in FIG. 8. In FIG. 8, since the acoustic transducers 8 and 9 are placed symmetrically with respect to the head model 3, the transfer functions H1 and H4, and the transfer functions H2 and H3, each have the same frequency characteristics.
- FIG. 9A is a graph showing frequency characteristics of the transfer function H1 and the transfer function H4.
- FIG. 9B is a graph showing frequency characteristics of the transfer function H2 and the transfer function H3.
- FIG. 9C is a graph showing the frequency characteristic of the transfer function H5.
- FIG. 9D is a graph showing the frequency characteristic of the transfer function H6.
- FIG. 10 is a graph showing the frequency characteristics of the characteristic functions E 1 and E 2 obtained from the transfer functions H 1 to H 6 obtained as shown in FIG.
- FIG. 10 (a) is a graph showing the frequency characteristics of the characteristic function E1.
- FIG. 10 (b) is a graph showing the frequency characteristics of the characteristic function E2.
- consequently, even when the acoustic transducers 8 and 9 and the target sound source 11 are close to the head, the listener can clearly perceive the sound image of the target sound source 11, and correct sound image localization is obtained.
- sound images can be localized at a plurality of target sound source positions if the same number of combinations of the correction filters 13 and 14 are prepared.
- when the target sound source moves, the characteristics of the correction filters for the relevant directions and distances can be switched in sequence along the movement path.
- since the potential at each node is already known, the transfer function and the correction filter characteristics for a new position can be obtained in an extremely short time by sound pressure synthesis from those potentials.
- in conventional transfer function measurement, a speaker or microphone placed close to the head disturbs the sound field; in the numerical calculation, the sizes of the sounding point and the sound receiving point can be ignored, so transfer functions close to the head are also determined with high accuracy, and accurate sound image control is obtained from the correction filter characteristics calculated from them.
- FIG. 11 is a diagram showing a calculation model of a transfer function from the acoustic transducer to the ear canal entrance of the sound image control device according to the second embodiment.
- the same components as those in FIG. 6 are denoted by the same reference numerals, and description thereof will be omitted.
- a calculation model is shown in which the acoustic transducer 8 and the acoustic transducer 9 are installed near the two ears of the head model 3, corresponding to so-called headphone listening. In this arrangement, the sound pressure generated at the sound receiving point 7 of the acoustic transducer 9 by the sounding point 4 installed in the left ear canal can be ignored.
- the transfer function H7 for the acoustic transducer 8 is obtained as the sound pressure at the sound receiving point 6, and the transfer function H8 for the acoustic transducer 9 is obtained as the sound pressure at the sound receiving point 7.
- FIG. 12 is a diagram showing basic blocks of a sound image control device using a transfer function obtained from the relationship shown in FIG.
- the correction filters 13 and 14 realize the sound image of the target sound source 11 through the acoustic transducer 8 and the acoustic transducer 9; the characteristic of the correction filter 13 is E3 and the characteristic of the correction filter 14 is E4.
- (Equation 3) holds from the condition that the transfer function from the input terminal 12 to each ear canal entrance (left ear canal entrance 1, right ear canal entrance 2) is equal to the transfer function from the target sound source 11 to the same ear canal entrance.
- thus, the transfer function from the target sound source 11 is reproduced at the entrances of the listener's ear canals, so that an accurate sound image can be localized at the position of the target sound source 11.
- in the third embodiment, the transfer function to the target sound source is obtained by setting the sounding point at the eardrum, and localization of a more accurate sound image will be described.
- FIG. 13 is a diagram showing a more detailed 3D shape of the right pinna of the head model 3.
- FIG. 13 (a) is a front view of the right pinna of the head model 3
- FIG. 13 (b) is a top view of the right pinna of the head model 3.
- the eardrum 23 is reached from the ear canal entrance 2 via the ear canal 21.
- the third embodiment is the same as the first embodiment except that, on both the left and right sides of the head model 3, the inner end of the ear canal is closed by the eardrum.
- FIG. 14 is a diagram illustrating an example of a calculation model of a transfer function from the acoustic transducer of the sound image control device to the eardrum using the head model 3 of FIG.
- an eardrum 22 is formed at the end of the left ear canal 20, and a pronunciation point 4 is defined on the eardrum 22.
- An eardrum 23 is formed at the end of the right ear canal 21, and a sounding point 5 is defined on the eardrum 23.
- the transfer functions to the sound receiving points 6 and 7 defined in the acoustic transducers 8 and 9 shown in FIG. 6 (a) are calculated.
- the transfer function from sounding point 4 to sound receiving point 6 is H11
- the transfer function from sounding point 4 to sound receiving point 7 is H12
- the transfer function from sounding point 5 to sound receiving point 6 is H13, and the transfer function from sounding point 5 to sound receiving point 7 is H14.
- FIG. 15 is a diagram showing an example of a calculation model of a transfer function from the eardrum to the sound receiving point 10 defined on the target sound source 11.
- the transfer function from the sounding point 4 to the sound receiving point 10 is H15
- the transfer function from the sounding point 5 to the sound receiving point 10 is H16.
- FIG. 16 is a diagram showing the basic blocks of a sound image control device using the transfer functions H11 to H16 obtained from the relationships shown in FIGS. 14 and 15.
- As before, the characteristics of the correction filter 13 and the correction filter 14 are obtained from (Equation 5) as a characteristic E11 and a characteristic E12, respectively.
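Equation 5 itself is not reproduced in this extract. Assuming the standard two-speaker inverse-filter formulation (the signals reaching the two eardrums through the transducer paths H11 to H14 must equal those arriving from the target source via H15 and H16), the per-frequency computation of E11 and E12 can be sketched as below; the function name and the matrix arrangement are assumptions, not taken from the patent.

```python
import numpy as np

def correction_filters(H11, H12, H13, H14, H15, H16):
    """Per-frequency correction characteristics E11, E12 (hypothetical sketch).

    Each argument is a complex array over frequency bins. The sketch
    assumes the signals reaching the eardrums through transducers 8 and 9
    (paths H11..H14) must equal those from the target source (H15, H16).
    """
    E11 = np.empty_like(H11)
    E12 = np.empty_like(H11)
    for k in range(len(H11)):
        # Solve the 2x2 system  [H11 H13; H12 H14] @ [E11; E12] = [H15; H16]
        A = np.array([[H11[k], H13[k]],
                      [H12[k], H14[k]]])
        b = np.array([H15[k], H16[k]])
        E11[k], E12[k] = np.linalg.solve(A, b)
    return E11, E12
```

In a real design the inversion would also need regularization where the 2x2 matrix is ill-conditioned; that refinement is omitted here.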
- In Embodiment 2, sounding points were set at the left and right ear canal entrances of the head model 3, and sound image localization during listening with headphones was described.
- In Embodiment 4, the sounding points are defined on the eardrums of the head model 3, and sound image localization during listening with headphones is described.
- FIG. 17 is a diagram illustrating an example of a calculation model of a transfer function from the acoustic transducer to the eardrum of the sound image control device according to the fourth embodiment.
- The same components as those in the preceding figures are given the same reference numerals. FIG. 17 shows a calculation model in which the acoustic transducer 8 and the acoustic transducer 9 are installed near the respective ears of the head model 3, corresponding to so-called headphone listening.
- The transfer function from sounding point 4 to sound receiving point 6 on the acoustic transducer 8 is obtained as transfer function H17, the sound pressure at sound receiving point 6.
- The transfer function from sounding point 5 to sound receiving point 7 on the acoustic transducer 9 is obtained as transfer function H18, the sound pressure at sound receiving point 7.
- FIG. 18 is a diagram showing the basic blocks of a sound image control device using the transfer functions H17 and H18 obtained from the relationship shown in FIG. 17, together with the transfer functions H15 and H16.
- When the characteristics of the correction filter 13 and the correction filter 14 are the characteristic functions E13 and E14, they can be calculated according to the following (Equation 6).
- The transfer function from the target sound source 11 to the listener's eardrums can thus be reproduced, so that the sound image can be accurately localized at the position of the target sound source.
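Equation 6 is likewise not reproduced here. Since headphones drive each ear directly with no crosstalk between channels, each correction characteristic plausibly reduces to a per-frequency division of the target-source path (H15, H16) by the headphone path (H17, H18). The following sketch makes that assumption explicit and adds a small regularization term, also an assumption, to guard against near-zero sound pressures:

```python
import numpy as np

def headphone_correction(H15, H16, H17, H18, eps=1e-12):
    """Hypothetical sketch of the headphone-case characteristics E13, E14.

    With no crosstalk, each characteristic is the target path divided by
    the headphone path. The division is regularized (eps) so that bins
    where the headphone response is near zero do not blow up.
    """
    E13 = H15 * np.conj(H17) / (np.abs(H17) ** 2 + eps)
    E14 = H16 * np.conj(H18) / (np.abs(H18) ** 2 + eps)
    return E13, E14
```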
- Next, a case will be described in which the sound image control apparatus deforms the head model used for calculating transfer functions to the average head size of the listeners in the population to which the apparatus is provided, thereby reducing individual differences in the sound image localization effect among that group of listeners.
- The dummy head of the head model 3 used in the first to fourth embodiments is formed in a predetermined size and shape.
- The shape of every part of the head model, such as the size of the head, the shape and length of the ears, the tragus spacing, and the length of the face, is captured as data at each node. The shape of every part of the head model is therefore reflected in any transfer function calculated using such a head model.
- FIG. 19 (a) is a front view of the head model 30 from which the transfer functions of the sound image control device of the fifth embodiment are obtained, and FIG. 19 (b) is a side view of the head model 30.
- 31 indicates the width of the head
- 32 indicates the height of the head
- 33 indicates the depth of the head.
- Let the head width of the dummy head shown in FIG. 3 (a) be Wd, the head height Hd, and the head depth Dd.
- Let the averages over the population be Wa for head width, Ha for head height, and Da for head depth.
- The dimensions of the head model on the computer shown in FIG. 3 (b) are then deformed by the ratios Wa/Wd for head width, Ha/Hd for head height, and Da/Dd for head depth.
- In other words, even if the size of the dummy head measured first deviates from the average head size of the population to which the present sound image control device is provided, performing such a deformation (hereinafter called the "morphing process") realizes a head model of the population's average head size on the computer.
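The morphing process above, scaling each axis by the ratio of the population average to the dummy-head dimension, can be sketched as a per-axis scaling of the model's node coordinates. The axis assignment (x = width, y = height, z = depth) is an assumption for illustration:

```python
import numpy as np

def morph_head_nodes(nodes, dummy_dims, population_dims):
    """Sketch of the 'morphing process': scale node coordinates per axis.

    nodes: (N, 3) array of node coordinates (x = width, y = height,
    z = depth is an assumed convention). dummy_dims = (Wd, Hd, Dd)
    measured on the dummy head; population_dims = (Wa, Ha, Da) averaged
    over the population. Each axis is stretched by Wa/Wd, Ha/Hd, Da/Dd.
    """
    ratios = np.asarray(population_dims, float) / np.asarray(dummy_dims, float)
    return np.asarray(nodes, float) * ratios  # broadcasts the per-axis ratios
```

A full morphing pipeline would of course preserve mesh connectivity and handle local features (pinna, tragus) separately; this shows only the global dimension scaling the text describes.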
- Using this head model, each transfer function is obtained by numerical calculation, and the characteristics E1a and E2a of the correction filters are obtained in the same manner as in the first embodiment. Individual differences in the sound image control effect among the listeners belonging to the population to which the present sound image control device is provided can thereby be minimized.
- FIG. 20 is a perspective view showing the size of another part of the head model.
- The deformation may also be performed according to the ratio between other dimensions of the dummy head measured first and the corresponding average values of the population.
- the head width 31 may be the tragus spacing
- the head height 32 may be the total head height
- the head depth 33 may be the head length.
- Figure 21 is a graph showing the variation in ear length and tragus spacing between men and women.
- the tragus spacing for men is about 130 mm to 170 mm, while that for women is about 129 mm to 158 mm.
- the ear length of men is about 53 mm to 78 mm, while that of women is about 50 mm to 70 mm.
- Sound image control devices are often designed using values corresponding to the positions of the stars in the figure, but with such average design values the sound image control effect is only about 90%.
- FIG. 22 is a table showing a specific classification of a population to which the sound image control device according to the sixth embodiment is provided.
- Head model 35 represents the male average of the population: the head width is Wm, the head height Hm, and the head depth Dm.
- Head model 36 represents the female average of the population: the head width is Ww, the head height Hw, and the head depth Dw.
- Head model 37 represents the average of the younger age group of the population (for example, children aged 7 to 15): the head width is Wc, the head height Hc, and the head depth Dc.
- With the dummy head shown in FIG. 3 (a) having head width Wd, head height Hd, and head depth Dd, the head model 35 is obtained by deforming the head model 3 by the ratios Wm/Wd in head width, Hm/Hd in head height, and Dm/Dd in head depth.
- Likewise, the head model 36 deforms the head width by Ww/Wd, the head height by Hw/Hd, and the head depth by Dw/Dd, and the head model 37 deforms the head width by Wc/Wd, the head height by Hc/Hd, and the head depth by Dc/Dd.
- Using the head models 35, 36, and 37 thus transformed, the respective transfer functions and correction filter characteristics are obtained by numerical calculation.
- FIG. 23 shows a block diagram configured to switch the correction filter characteristics between the population average and the specific classifications.
- The sound image control apparatus newly comprises a characteristic storage memory 40 storing the correction filter characteristics for the population average and for each specific classification, a switch 41 for selecting one of the population average a, the specific classification (male) m, the specific classification (female) w, and the specific classification (child) c, and a filter setting unit 42 that, depending on the state of the switch 41, selects a correction filter characteristic from the characteristic storage memory 40 and sets it into the correction filter 13 and the correction filter 14.
- Next, a case will be described in which the head model for calculating the transfer functions is deformed to the head sizes of specific classifications of the population to which the sound image control device is provided, and individual differences in the sound image localization effect are reduced by having the listener select the specific classification to which he or she belongs.
- FIG. 24 shows a specific classification of a population to which the sound image control device according to the seventh embodiment is provided.
- the head models are classified into three groups according to the width of the head.
- FIG. 24 (a) is a table showing examples of head models M51 to M59 classified into a group having a head width w1.
- FIG. 24 (b) is a table showing an example of head models M61 to M69 classified into a group whose head width is w2.
- FIG. 24 (c) is a table showing an example of head models M71 to M79 classified into a group whose head width is w3.
- The head models whose head width is w1 are classified into nine types according to the head heights h1, h2, and h3 and the head depths d1, d2, and d3.
- the head model whose head width is w2 is classified into nine types according to the above three head heights and three head depths.
- A head model with a head width of w3 is likewise classified into nine types.
- Head models M51 to M79 are obtained by deforming the head model 3 in advance according to the dimensions of FIGS. 24 (a) to (c). Using these, each transfer function is obtained by numerical calculation, and the correction filter characteristics E1-51, E2-51, ..., E1-79, E2-79 are obtained.
- FIG. 25 shows a block diagram of a configuration in which the correction filter characteristics corresponding to the head model are switched according to the specific classification, classified into 27 types as shown in FIGS. 24 (a) to (c).
- The sound image control device comprises a characteristic storage memory 80 storing the correction filter characteristics E1-51, E2-51, ..., E1-79, E2-79 calculated for the 27 head models of FIGS. 24 (a) to (c), a switch 81 that switches the correction filter according to the three head widths, a switch 82 that switches the correction filter according to the three head heights, a switch 83 that switches the correction filter according to the three head depths, and a filter setting unit 84 that selects a correction filter characteristic from the characteristic storage memory 80 according to the states of the switches 81, 82, and 83 and sets it into the correction filter 13 and the correction filter 14.
- The listener selects the most suitable filter from the combinations of the switches 81, 82, and 83, which makes it possible to reduce individual differences in the sound image control effect caused by the size of the listener's head.
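The switch-and-memory arrangement of FIG. 25 can be sketched as a table lookup. The placeholder strings below stand in for the stored filter coefficients, and the index arithmetic merely mirrors the M51 to M79 numbering of FIG. 24; none of it is taken verbatim from the patent.

```python
# Characteristic storage memory 80, modeled as a dict keyed by the three
# switch positions (each 0, 1, or 2). Model numbers follow FIG. 24:
# width group w gives M51..M59, M61..M69, M71..M79.
CHARACTERISTIC_MEMORY_80 = {
    (w, h, d): (f"E1-{51 + 10 * w + 3 * h + d}", f"E2-{51 + 10 * w + 3 * h + d}")
    for w in range(3)   # head widths  w1, w2, w3
    for h in range(3)   # head heights h1, h2, h3
    for d in range(3)   # head depths  d1, d2, d3
}

def filter_setting_unit_84(switch_81, switch_82, switch_83):
    """Select the (E1, E2) pair for the current switch states, as the
    filter setting unit 84 does before writing to filters 13 and 14."""
    return CHARACTERISTIC_MEMORY_80[(switch_81, switch_82, switch_83)]
```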
- FIG. 26 is a diagram showing the auricle dimensions that define a specific classification of the population to which the sound image control device according to the eighth embodiment is provided.
- FIG. 26 (a) is a front view showing the pinna in detail
- FIG. 26 (b) is a top view showing the pinna in detail.
- In FIG. 26, 90 is the height of the auricle, and 91 is the width of the auricle, expressed as the distance from the head surface to its farthest position.
- FIG. 27 is a table showing still another example of the specific classification of the population to which the sound image control device of the eighth embodiment is provided.
- The head models M91 to M99 are defined by classifying the auricle height into three types eh1, eh2, and eh3, and the auricle width into three types ed1, ed2, and ed3.
- Each transfer function is obtained by numerical calculation using the head models M91 to M99, in which the head model 3 has been deformed in advance according to the dimensions in FIG. 27; the correction filter characteristics E1-91, E2-91, ..., E1-99, E2-99 are calculated and stored in memory.
- FIG. 28 shows a block diagram in which the correction filter characteristic corresponding to the head model is switched according to the specific classification classified into nine types as shown in FIG.
- The sound image control device comprises a characteristic storage memory 93 storing the correction filter characteristics E1-91, E2-91, ..., E1-99, E2-99 calculated for the nine head models of FIG. 27, a switch 94 that switches the correction filter according to the three auricle heights eh1, eh2, and eh3, a switch 95 that switches the correction filter according to the three auricle widths ed1, ed2, and ed3, and a filter setting unit 96 that selects a correction filter characteristic from the characteristic storage memory 93 according to the states of the switches 94 and 95 and sets it into the correction filter 13 and the correction filter 14.
- The listener selects the most suitable correction filter characteristic from the combinations of the switch 94 and the switch 95, thereby reducing individual differences in the sound image control effect caused by the height and width of the auricle.
- In Embodiments 1 to 8 described above, a huge amount of computation is required to calculate the potential of each node of the head model, so the potential data at the nodes is calculated offline in advance. The obtained potentials are stored temporarily in an external database or the like; the transfer functions are then calculated from the potentials, and the characteristic functions of the correction filters are calculated with an external tool. In the sound image control devices described above, therefore, the characteristic functions of the correction filters are simply stored in a memory such as a ROM and used. That is, at the present stage, the computational capacity of a sound image control device mounted on a portable device such as a mobile phone or a headphone stereo cannot keep up with these calculations. In the near future, however, it is conceivable that more of this processing will be performed by the sound image control device built into the portable device itself.
- FIG. 29 is a diagram illustrating a processing procedure in the sound image control device when a set of potential data for a plurality of types of head models is stored in the sound image control device.
- First, the listener selects the most suitable head model, as shown in Embodiments 5 to 8, while looking at the menu screen of the sound image control device.
- detailed conditions such as the positional relationship between the speaker and both ears and the positional relationship between the target sound source and both ears may be input.
- the sound image control device reads the potential data corresponding to the selected head model from the ROM storing the potential data, and generates a predetermined transfer function.
- This transfer function may be generated on the assumption that the positional relationship between the loudspeakers and both ears and the positional relationship between the target sound source and both ears are determined in advance.
- Alternatively, data such as the positional relationship between the target sound source and both ears may be input first, and the transfer function may be calculated based on the input data.
- Next, the parameters (characteristic functions) of the correction filters are calculated from the obtained transfer functions and set into the correction filters. By making it possible to calculate the characteristic functions of the correction filters inside the sound image control device using the potential data held internally, the characteristics of the correction filters can be adjusted flexibly to the conditions at each moment, and the sound image can be localized more accurately.
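A minimal runnable sketch of the on-device procedure of FIG. 29 might look as follows. The ROM contents and the step from stored potential-derived data to a characteristic function are stand-ins, since the patent computes the node potentials offline by numerical analysis and does not specify these interfaces:

```python
import numpy as np

# Stand-in ROM: per head model, transfer-function data derived offline
# from the node potentials. The model name and array values are
# illustrative placeholders only.
ROM = {
    "average_adult": {
        "H_target": np.ones(8),        # target-source path, per frequency bin
        "H_speaker": np.full(8, 2.0),  # speaker path, per frequency bin
    }
}

def configure_correction(selected_model):
    """FIG. 29 flow: read the data for the chosen head model from ROM,
    derive the characteristic function, and return what would be set
    into correction filters 13 and 14."""
    data = ROM[selected_model]
    E = data["H_target"] / data["H_speaker"]  # characteristic function (sketch)
    return E
```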
- FIG. 30 is a diagram showing an example of the characteristic function setting procedure in a case where the sound image control device of the present invention, or an audio device incorporating it, is provided with a setting input unit that receives inputs of a plurality of elements for determining the type of head model. As another example, a case will be described in which inputs such as the listener's age, gender, interaural distance, and ear size, which are elements for determining the type of head model, are received from such a setting input unit.
- The sound image control device holds, in a table in ROM or the like, a set of parameters (characteristic functions) (E1, E2) for each combination of elements such as the listener's age, gender, interaural distance, and ear size. One set of parameters is determined according to the input elements, the determined set of characteristic functions is read out of the ROM and set into the correction filters 13 and 14, and sound image control is then performed.
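The per-element table lookup can be sketched as a dictionary keyed by the input elements. All keys and parameter names below are illustrative placeholders, not values from the patent:

```python
# Stand-in for the ROM table of FIG. 30: one (E1, E2) parameter set per
# combination of listener elements. Real entries would hold filter
# coefficients rather than name strings.
PARAMETER_TABLE = {
    ("20s", "female", "150mm", "medium"): ("E1_a", "E2_a"),
    ("40s", "male",   "160mm", "large"):  ("E1_b", "E2_b"),
}

def set_filters(age, gender, interaural, ear_size):
    """Look up the parameter set for the entered elements; in the device
    the result would be written into correction filters 13 and 14."""
    return PARAMETER_TABLE[(age, gender, interaural, ear_size)]
```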
- FIG. 31 is a diagram illustrating an example of a procedure when a listener performs a setting input while listening to sound from a speaker in the sound image control device including the setting input unit illustrated in FIG. 30.
- Setting inputs are accepted in order of the more influential elements for determining the type of head model, for example age, gender, interaural distance, and then ear size. In this order, the listener enters the settings while listening to the sound from the speakers.
- FIG. 32 is a diagram illustrating an example of assisting the input to the setting input unit of FIG. 31 using an image of a person's face captured by a mobile phone. Completely accurate values cannot be expected from the photograph shown in the figure, but the listener's interaural distance, the distance between the terminal and the user (listener), age, gender, and so on can be determined from it.
- For the data that can be determined from the photograph, the set of parameters may be determined using that data without requiring the listener to enter the setting.
- the sound image controller morphs the head model based on the image captured by the camera built into the mobile phone, calculates the potential at each node, and stores it in memory or the like.
- The sound image control device can then calculate the head-related transfer function using the stored potentials, calculate the characteristic function that is optimal for the person in the photograph, and set the calculated characteristic function into the correction filters.
- FIG. 33 shows an example in which input assistance is performed based on a photograph of the auricle, to compensate for the difficulty of capturing the shape of the ear in an ordinary portrait photograph taken from the front.
- In a photograph of a person taken from the front as in FIG. 32, the shape of the ears (auricles), the length of the ears, the angle of the auricle with respect to the head, and the position of the ears on the head are affected by the person's hair and the photographing angle, and cannot be determined from the photograph.
- Therefore, only the ear of the person may be photographed separately and combined with the data obtained from the frontal photograph of FIG. 32 to assist the setting input for determining the set of correction filter parameters.
- Alternatively, the set of correction filter parameters may be determined only from the data obtained from these two photographs.
- FIG. 34 is a diagram illustrating an example in which the same ear is captured three-dimensionally with a stereo camera or with two photographs.
- Three-dimensional data of the auricle can be acquired with a stereo camera or by taking two photographs, yielding more useful data than the single photograph of the auricle shown in FIG. 33. In this case as well, combined with the data obtained from the frontal photograph of FIG. 32, it can be used to assist the setting input for determining the set of correction filter parameters, or the set of correction filter parameters may be determined only from the data obtained from the two photographs. Of course, still more accurate data can be obtained by taking three or more photographs.
- FIG. 35 is a diagram illustrating an example of the processing procedure in a case where the sound image control device, or an acoustic device incorporating it, holds a correction filter characteristic function for each input element.
- For example, among the sets of parameters (characteristic functions) corresponding to interaural distance, the set corresponding to an interaural distance of 150 mm is read out and set into the "interaural distance filter" of the correction filter. If, after completing the setting inputs so far, the listener feels that the adjustment is already adequate, the setting input can end here, and for the remaining element (setting 4, ear size) the default value originally set in the "ear size filter" is simply used as-is.
- Finally, the sound image control device combines the "age filter", "gender filter", "interaural distance filter", and "ear size filter" set in the correction filter to generate one set of parameters (characteristic functions), which is set into the correction filters 13 and 14. In this way it is not necessary to hold every parameter set determined by the combinations of elements such as age and gender, and the memory capacity of the sound image control device can be saved.
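The patent says the four per-element filters are "combined" into one characteristic but does not specify the operation. Assuming the filters are cascaded, the combination amounts to convolving their impulse responses, which can be sketched as:

```python
import numpy as np

def combine_filters(*impulse_responses):
    """Combine per-element filters (e.g. age, gender, interaural distance,
    ear size) into one characteristic by cascading them, i.e. convolving
    their impulse responses. Cascading is an assumption; the patent does
    not name the combining operation."""
    combined = np.array([1.0])  # identity filter (unit impulse)
    for h in impulse_responses:
        combined = np.convolve(combined, h)
    return combined
```

For single-tap (pure-gain) filters the cascade reduces to multiplying the gains, which makes the behavior easy to check.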
- FIG. 36 is a diagram showing an example in which a mobile phone equipped with a sound image control device transmits the data input from the setting input unit to a server on the Internet and receives the optimal parameters based on the transmitted data.
- In a mobile phone or the like having a sound image control device, values such as age, gender, interaural distance, and ear size are input from the setting input unit.
- The sound image control device connects to a server, such as a vendor's server on the Internet, via a communication line such as the mobile phone network, and uploads the data such as age, gender, interaural distance, and ear size to the server.
- From its database, the server determines the set of parameters judged optimal for a listener with the uploaded settings, and the mobile phone downloads the determined set of parameters.
- In this way the sound image control device does not need to hold many sets of parameters, and the load on its memory can be reduced.
- Moreover, since the server is a large computer system, its database can hold more detailed data for each element.
- For example, instead of settings in 5-year age increments, the database in the server can assign a different parameter set to each individual age. A mobile phone therefore does not require a large amount of memory, yet can obtain a more suitable set of parameters.
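The upload/download exchange of FIG. 36 can be sketched with the server simulated locally as a function; the payload fields, the database contents, and the parameter names are all hypothetical, and a real implementation would use an actual network transport:

```python
import json

def server_lookup(payload_json):
    """Stand-in for the vendor's server: picks a parameter set from its
    (much larger) per-age database for the uploaded listener elements."""
    elements = json.loads(payload_json)
    database = {  # illustrative: one entry per exact age and gender
        (27, "female"): {"E1": "E1_27f", "E2": "E2_27f"},
    }
    key = (elements["age"], elements["gender"])
    result = database.get(key, {"E1": "E1_default", "E2": "E2_default"})
    return json.dumps(result)

def fetch_parameters(age, gender, interaural_mm, ear_size_mm):
    """Phone side: upload the entered elements, download the chosen set."""
    payload = json.dumps({"age": age, "gender": gender,
                          "interaural_mm": interaural_mm,
                          "ear_size_mm": ear_size_mm})
    return json.loads(server_lookup(payload))
```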
- FIG. 37 is a diagram showing an example in which a mobile phone or the like equipped with a sound image control device transmits image data captured by a built-in camera or the like to a server on the Internet and receives the optimal parameters based on the transmitted image data. As shown in FIG. 37, the image data of a photograph taken by the mobile phone may be transmitted to the server instead of inputting settings such as age, gender, and interaural distance. A mobile phone's computer resources, such as memory capacity and CPU processing speed, are inferior to a server's, so even if the same image data were analyzed on the phone, data as detailed and accurate as that from the server's analysis could not be obtained. As in the case of FIG. 36, the computer system on the server side is provided with sufficient software and the like to derive more accurate data from the uploaded image data. By doing so, the mobile phone equipped with the sound image control device can acquire a more accurate set of parameters while saving its resources as a computer, and the sound image can be localized more accurately.
- FIG. 38 is a diagram illustrating an example in which a mobile phone or the like including a sound image control device is provided with a display unit that displays the listener's individual elements used for parameter setting.
- On it, the user may review the setting elements that determined the set of correction filter parameters (characteristic functions).
- FIG. 39 (a) is a graph showing the waveform and phase characteristics of the transfer function obtained by the simulation used in the first to eighth embodiments.
- FIG. 39 (b) is a graph showing the waveform and phase characteristics of a transfer function obtained by conventional actual measurement.
- the input sound for measurement in Figs. 39 (a) and (b) is flat white noise for all frequencies.
- In FIG. 39 (a), as in this simulation, with the original HRTF the sound pressure becomes very small at a certain frequency even for white noise input.
- In the measured graph of FIG. 39 (b), variation is seen around this frequency, which means that such errors are included in actual measurements.
- Moreover, for the characteristic function of the correction filter that outputs the input white noise as white noise at the position of the target sound source, the simulation requires only about 1/4 of the taps needed with actual measurement.
- the sound image control device of the present invention is useful as a portable device such as a mobile phone or a PDA equipped with an audio reproducing device. Further, the sound image control device of the present invention is useful as a sound image control device built in a game machine for performing a virtual game or the like.
Abstract
Description
Claims
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/554,595 US7664272B2 (en) | 2003-09-08 | 2004-09-02 | Sound image control device and design tool therefor |
CN2004800109456A CN1778143B (zh) | 2003-09-08 | 2004-09-02 | 声像控制装置的设计工具及声像控制装置 |
EP04772905A EP1667487A4 (en) | 2003-09-08 | 2004-09-02 | AUDIO PICTURE DEVICE DEVICE TOOL AND AUDIO PICTURE CONTROLLER |
JP2005513719A JPWO2005025270A1 (ja) | 2003-09-08 | 2004-09-02 | 音像制御装置の設計ツールおよび音像制御装置 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2003-315393 | 2003-09-08 | ||
JP2003315393 | 2003-09-08 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2005025270A1 true WO2005025270A1 (ja) | 2005-03-17 |
Family
ID=34269828
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2004/013091 WO2005025270A1 (ja) | 2003-09-08 | 2004-09-02 | 音像制御装置の設計ツールおよび音像制御装置 |
Country Status (6)
Country | Link |
---|---|
US (1) | US7664272B2 (ja) |
EP (1) | EP1667487A4 (ja) |
JP (1) | JPWO2005025270A1 (ja) |
KR (1) | KR20060059866A (ja) |
CN (1) | CN1778143B (ja) |
WO (1) | WO2005025270A1 (ja) |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2009508385A (ja) * | 2005-09-13 | 2009-02-26 | コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ | 3次元音声を生成する方法及び装置 |
US8137836B2 (en) | 2006-10-24 | 2012-03-20 | Samsung Sdi Co., Ltd. | Lithium rechargeable battery with resin molding part and outer covers |
JP2013524711A (ja) * | 2010-04-12 | 2013-06-17 | アルカミス | データベース内の知覚的に最適なhrtfフィルタを形態学的パラメータに従って選択するための方法 |
JP2013168924A (ja) * | 2011-12-30 | 2013-08-29 | Gn Resound As | 頭部伝達関数を決定するシステムおよび方法 |
CN103987002A (zh) * | 2013-03-23 | 2014-08-13 | 卫晟 | 全息录音技术 |
CN104284286A (zh) * | 2013-07-04 | 2015-01-14 | Gn瑞声达A/S | 个体hrtf的确定 |
WO2017047309A1 (ja) * | 2015-09-14 | 2017-03-23 | ヤマハ株式会社 | 耳形状解析方法、耳形状解析装置および耳形状モデル生成方法 |
WO2017047116A1 (ja) * | 2015-09-14 | 2017-03-23 | ヤマハ株式会社 | 耳形状解析装置、情報処理装置、耳形状解析方法、および情報処理方法 |
WO2017195616A1 (ja) * | 2016-05-11 | 2017-11-16 | ソニー株式会社 | 情報処理装置および方法 |
WO2018235880A1 (ja) * | 2017-06-22 | 2018-12-27 | 花王株式会社 | 温熱具 |
KR20190130845A (ko) * | 2018-05-15 | 2019-11-25 | 한국표준과학연구원 | 양이 응답 측정을 위한 더미헤드 장치 및 시스템 |
JP2019536395A (ja) * | 2016-11-13 | 2019-12-12 | エンボディーヴィーアール、インコーポレイテッド | 耳介の画像を撮像し、耳介の画像を用いて人間の聴覚の解剖学的構造の特徴を示すためのシステムおよび方法 |
Families Citing this family (41)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006180467A (ja) * | 2004-11-24 | 2006-07-06 | Matsushita Electric Ind Co Ltd | 音像定位装置 |
FR2880755A1 (fr) * | 2005-01-10 | 2006-07-14 | France Telecom | Procede et dispositif d'individualisation de hrtfs par modelisation |
WO2007048900A1 (fr) * | 2005-10-27 | 2007-05-03 | France Telecom | Individualisation de hrtfs utilisant une modelisation par elements finis couplee a un modele correctif |
US8483416B2 (en) * | 2006-07-12 | 2013-07-09 | Phonak Ag | Methods for manufacturing audible signals |
KR101368859B1 (ko) * | 2006-12-27 | 2014-02-27 | 삼성전자주식회사 | 개인 청각 특성을 고려한 2채널 입체 음향 재생 방법 및장치 |
DK2109934T3 (en) * | 2007-01-04 | 2016-08-15 | Cvf Llc | CUSTOMIZED SELECTION OF AUDIO PROFILE IN SOUND SYSTEM |
KR100954385B1 (ko) * | 2007-12-18 | 2010-04-26 | 한국전자통신연구원 | 개인화된 머리전달함수를 이용한 3차원 오디오 신호 처리장치 및 그 방법과, 그를 이용한 고현장감 멀티미디어 재생시스템 |
US8489371B2 (en) * | 2008-02-29 | 2013-07-16 | France Telecom | Method and device for determining transfer functions of the HRTF type |
US8428269B1 (en) * | 2009-05-20 | 2013-04-23 | The United States Of America As Represented By The Secretary Of The Air Force | Head related transfer function (HRTF) enhancement for improved vertical-polar localization in spatial audio systems |
US20120183161A1 (en) * | 2010-09-03 | 2012-07-19 | Sony Ericsson Mobile Communications Ab | Determining individualized head-related transfer functions |
DE102011003450A1 (de) * | 2011-02-01 | 2012-08-02 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Erzeugung von benutzerangepassten Signalverarbeitungsparametern |
US20130177166A1 (en) * | 2011-05-27 | 2013-07-11 | Sony Ericsson Mobile Communications Ab | Head-related transfer function (hrtf) selection or adaptation based on head size |
US9025779B2 (en) * | 2011-08-08 | 2015-05-05 | Cisco Technology, Inc. | System and method for using endpoints to provide sound monitoring |
EP2611216B1 (en) * | 2011-12-30 | 2015-12-16 | GN Resound A/S | Systems and methods for determining head related transfer functions |
KR101977329B1 (ko) | 2012-07-30 | 2019-05-13 | 삼성전자주식회사 | 음성 신호 출력 제어 방법 및 장치 |
CN103634561A (zh) * | 2012-08-21 | 2014-03-12 | 徐丙川 | 会议通信装置和系统 |
WO2014146668A2 (en) * | 2013-03-18 | 2014-09-25 | Aalborg Universitet | Method and device for modelling room acoustic based on measured geometrical data |
JP5762505B2 (ja) * | 2013-10-23 | 2015-08-12 | 京セラ株式会社 | 耳型部、人工頭部及びこれらを用いた測定システムならびに測定方法 |
CN104064191B (zh) * | 2014-06-10 | 2017-12-15 | 北京音之邦文化科技有限公司 | 混音方法及装置 |
US9424828B2 (en) * | 2014-08-01 | 2016-08-23 | Bose Corporation | System and method of microphone placement for noise attenuation |
CN104135653A (zh) * | 2014-08-15 | 2014-11-05 | 魏鹏 | 一种声源定位与视频联动平台 |
EP3001701B1 (en) | 2014-09-24 | 2018-11-14 | Harman Becker Automotive Systems GmbH | Audio reproduction systems and methods |
CN104581602B (zh) * | 2014-10-27 | 2019-09-27 | 广州酷狗计算机科技有限公司 | 录音数据训练方法、多轨音频环绕方法及装置 |
KR20160122029A (ko) * | 2015-04-13 | 2016-10-21 | 삼성전자주식회사 | 스피커 정보에 기초하여, 오디오 신호를 처리하는 방법 및 장치 |
US9838646B2 (en) * | 2015-09-24 | 2017-12-05 | Cisco Technology, Inc. | Attenuation of loudspeaker in microphone array |
US9591427B1 (en) * | 2016-02-20 | 2017-03-07 | Philip Scott Lyren | Capturing audio impulse responses of a person with a smartphone |
JP6701824B2 (ja) * | 2016-03-10 | 2020-05-27 | JVCKENWOOD Corporation | Measurement device, filter generation device, measurement method, and filter generation method |
CN107786936A (zh) * | 2016-08-25 | 2018-03-09 | ZTE Corporation | Sound signal processing method and terminal |
US20180115854A1 (en) * | 2016-10-26 | 2018-04-26 | Htc Corporation | Sound-reproducing method and sound-reproducing system |
US10701506B2 (en) | 2016-11-13 | 2020-06-30 | EmbodyVR, Inc. | Personalized head related transfer function (HRTF) based on video capture |
EP3588987A4 (en) * | 2017-02-24 | 2020-01-01 | JVC KENWOOD Corporation | FILTER GENERATION DEVICE, FILTER GENERATION METHOD AND PROGRAM |
JP7047850B2 (ja) * | 2017-12-19 | 2022-04-05 | Socionext Inc. | Acoustic system, acoustic control device, and control program |
WO2019138647A1 (ja) * | 2018-01-11 | 2019-07-18 | Sony Corporation | Acoustic processing device, acoustic processing method, and program |
WO2019146254A1 (ja) * | 2018-01-29 | 2019-08-01 | Sony Corporation | Acoustic processing device, acoustic processing method, and program |
CN108430003B (zh) * | 2018-03-30 | 2020-01-14 | Guangdong OPPO Mobile Telecommunications Corp., Ltd. | Sound effect compensation method and device, readable storage medium, and terminal |
EP3827603A1 (en) | 2018-07-25 | 2021-06-02 | Dolby Laboratories Licensing Corporation | Personalized hrtfs via optical capture |
US11115773B1 (en) * | 2018-09-27 | 2021-09-07 | Apple Inc. | Audio system and method of generating an HRTF map |
US10957299B2 (en) * | 2019-04-09 | 2021-03-23 | Facebook Technologies, Llc | Acoustic transfer function personalization using sound scene analysis and beamforming |
JP7342451B2 (ja) * | 2019-06-27 | 2023-09-12 | Yamaha Corporation | Audio processing device and audio processing method |
US11778408B2 (en) | 2021-01-26 | 2023-10-03 | EmbodyVR, Inc. | System and method to virtually mix and audition audio content for vehicles |
WO2023143727A1 (en) * | 2022-01-28 | 2023-08-03 | Huawei Technologies Co., Ltd. | Devices and methods for personalized binaural audio rendering |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2548103B2 (ja) * | 1982-05-26 | 1996-10-30 | Matsushita Electric Industrial Co., Ltd. | Sound reproduction device |
JPH09298800A (ja) * | 1996-05-08 | 1997-11-18 | Hitachi Ltd | Sound and voice reproduction device |
JP2001016697A (ja) * | 1999-05-29 | 2001-01-19 | Central Res Lab Ltd | Method and apparatus for modifying an original head-related transfer function |
JP2001285998A (ja) * | 2000-03-29 | 2001-10-12 | Oki Electric Ind Co Ltd | Out-of-head sound image localization device |
JP2002095098A (ja) * | 2000-09-19 | 2002-03-29 | Matsushita Electric Ind Co Ltd | Sound reproduction device |
JP2002095097A (ja) * | 2000-09-19 | 2002-03-29 | Oki Electric Ind Co Ltd | Adaptive signal processing system |
JP2003102099A (ja) * | 2001-07-19 | 2003-04-04 | Matsushita Electric Ind Co Ltd | Sound image localization device |
JP2003230199A (ja) * | 2002-02-04 | 2003-08-15 | Yamaha Corp | Virtual speaker amplifier |
JP2004526364A (ja) * | 2001-03-05 | 2004-08-26 | Be4 Ltd. | Method and system for simulating a three-dimensional sound environment |
Family Cites Families (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DK0912077T3 (da) * | 1994-02-25 | 2002-02-18 | Henrik Moller | Binaural synthesis, head-related transfer functions, and applications thereof |
JP3266020B2 (ja) * | 1996-12-12 | 2002-03-18 | Yamaha Corporation | Sound image localization method and device |
GB9726338D0 (en) * | 1997-12-13 | 1998-02-11 | Central Research Lab Ltd | A method of processing an audio signal |
GB9805534D0 (en) * | 1998-03-17 | 1998-05-13 | Central Research Lab Ltd | A method of improving 3d sound reproduction |
US6996244B1 (en) * | 1998-08-06 | 2006-02-07 | Vulcan Patents Llc | Estimation of head-related transfer functions for spatial sound representative |
US6223090B1 (en) * | 1998-08-24 | 2001-04-24 | The United States Of America As Represented By The Secretary Of The Air Force | Manikin positioning for acoustic measuring |
EP1028583A1 (en) * | 1999-02-12 | 2000-08-16 | Hewlett-Packard Company | Digital camera with sound recording |
JP2001067463A (ja) * | 1999-06-22 | 2001-03-16 | Nadeisu:Kk | Facial image generation device and method for generating a facial image from a new viewpoint based on a plurality of facial images from different viewpoints, and application device and recording medium therefor |
JP3624771B2 (ja) * | 1999-12-08 | 2005-03-02 | Mitsubishi Fuso Truck and Bus Corporation | Device for measuring the sound absorption and insulation performance of a vehicle body |
JP2002043224A (ja) | 2000-07-18 | 2002-02-08 | Applied Materials Inc | Adapter, chamber, and plasma processing apparatus |
EP1368986B1 (en) | 2001-03-02 | 2011-12-28 | 3Shape A/S | Method for modelling customised earpieces |
US6956955B1 (en) * | 2001-08-06 | 2005-10-18 | The United States Of America As Represented By The Secretary Of The Air Force | Speech-based auditory distance display |
US6961439B2 (en) * | 2001-09-26 | 2005-11-01 | The United States Of America As Represented By The Secretary Of The Navy | Method and apparatus for producing spatialized audio signals |
2004
- 2004-09-02 WO PCT/JP2004/013091 patent/WO2005025270A1/ja active Application Filing
- 2004-09-02 US US10/554,595 patent/US7664272B2/en active Active
- 2004-09-02 JP JP2005513719A patent/JPWO2005025270A1/ja active Pending
- 2004-09-02 CN CN2004800109456A patent/CN1778143B/zh active Active
- 2004-09-02 KR KR1020057018071A patent/KR20060059866A/ko not_active Application Discontinuation
- 2004-09-02 EP EP04772905A patent/EP1667487A4/en not_active Withdrawn
Non-Patent Citations (2)
Title |
---|
"nen shuki kenkyu happyokai koen ronbunshu [Proceedings of the Autumn Research Presentation Meeting] -I- (JP)", THE ACOUSTICAL SOCIETY OF JAPAN (ASJ), 2 October 2001 (2001-10-02), pages 403 - 404, XP002985764 *
See also references of EP1667487A4 * |
Cited By (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2009508385A (ja) * | 2005-09-13 | 2009-02-26 | Koninklijke Philips Electronics N.V. | Method and device for generating three-dimensional audio |
US8137836B2 (en) | 2006-10-24 | 2012-03-20 | Samsung Sdi Co., Ltd. | Lithium rechargeable battery with resin molding part and outer covers |
JP2013524711A (ja) * | 2010-04-12 | 2013-06-17 | Arkamys | Method for selecting perceptually optimal HRTF filters in a database according to morphological parameters |
JP2013168924A (ja) * | 2011-12-30 | 2013-08-29 | Gn Resound As | System and method for determining head-related transfer functions |
CN103987002A (zh) * | 2013-03-23 | 2014-08-13 | Wei Sheng | Holographic sound recording technology |
CN104284286B (zh) * | 2013-07-04 | 2019-01-04 | GN ReSound A/S | Determination of individual HRTFs |
CN104284286A (zh) * | 2013-07-04 | 2015-01-14 | GN ReSound A/S | Determination of individual HRTFs |
WO2017047309A1 (ja) * | 2015-09-14 | 2017-03-23 | Yamaha Corporation | Ear shape analysis method, ear shape analysis device, and ear shape model generation method |
WO2017047116A1 (ja) * | 2015-09-14 | 2017-03-23 | Yamaha Corporation | Ear shape analysis device, information processing device, ear shape analysis method, and information processing method |
US10390167B2 (en) | 2015-09-14 | 2019-08-20 | Yamaha Corporation | Ear shape analysis device and ear shape analysis method |
WO2017195616A1 (ja) * | 2016-05-11 | 2017-11-16 | Sony Corporation | Information processing apparatus and method |
US10798516B2 (en) | 2016-05-11 | 2020-10-06 | Sony Corporation | Information processing apparatus and method |
JP2019536395A (ja) * | 2016-11-13 | 2019-12-12 | EmbodyVR, Inc. | System and method for capturing images of the pinna and using the pinna images to characterize human auditory anatomy |
WO2018235880A1 (ja) * | 2017-06-22 | 2018-12-27 | Kao Corporation | Warming device |
JPWO2018235880A1 (ja) * | 2017-06-22 | 2020-05-21 | Kao Corporation | Warming device |
JP7062650B2 (ja) | 2017-06-22 | 2022-05-06 | Kao Corporation | Warming device |
TWI788374B (zh) * | 2017-06-22 | 2023-01-01 | Kao Corporation | Warming device |
KR20190130845A (ko) * | 2018-05-15 | 2019-11-25 | Korea Research Institute of Standards and Science | Dummy head device and system for measuring binaural responses |
KR102053995B1 (ko) * | 2018-05-15 | 2019-12-12 | Korea Research Institute of Standards and Science | Dummy head device and system for measuring binaural responses |
Also Published As
Publication number | Publication date |
---|---|
US20060274901A1 (en) | 2006-12-07 |
EP1667487A1 (en) | 2006-06-07 |
CN1778143A (zh) | 2006-05-24 |
KR20060059866A (ko) | 2006-06-02 |
EP1667487A4 (en) | 2010-07-14 |
JPWO2005025270A1 (ja) | 2006-11-16 |
US7664272B2 (en) | 2010-02-16 |
CN1778143B (zh) | 2010-11-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2005025270A1 (ja) | Design tool for a sound image control device, and sound image control device | |
US10939225B2 (en) | Calibrating listening devices | |
US11706582B2 (en) | Calibrating listening devices | |
CN108885690B (zh) | Method, device, and computer-readable medium for generating head-related transfer function filters |
US8270616B2 (en) | Virtual surround for headphones and earbuds headphone externalization system | |
CN106664501A (zh) | System, device, and method for consistent acoustic scene reproduction based on informed spatial filtering |
CN111295896B (zh) | Virtual rendering of object-based audio over arbitrary sets of loudspeakers |
WO2006067893A1 (ja) | Sound image localization device |
US7921016B2 (en) | Method and device for providing 3D audio work | |
JP6730568B2 (ja) | Three-dimensional sound reproduction device and program |
JP2021523392A (ja) | Personalization of head-related transfer functions using simulation |
Kates et al. | Externalization of remote microphone signals using a structural binaural model of the head and pinna | |
Harder et al. | A framework for geometry acquisition, 3-D printing, simulation, and measurement of head-related transfer functions with a focus on hearing-assistive devices | |
US20220279304A1 (en) | Method and system for head-related transfer function adaptation | |
Vorländer | Virtual acoustics: opportunities and limits of spatial sound reproduction | |
Duraiswami et al. | Capturing and recreating auditory virtual reality | |
CN111213390B (zh) | Sound transducer |
Dodds et al. | Full Reviewed Paper at ICSA 2019 | |
Otani et al. | Dynamic crosstalk cancellation for spatial audio reproduction | |
WO2023143727A1 (en) | Devices and methods for personalized binaural audio rendering | |
CN117202001A (zh) | Sound image virtual externalization method based on a bone-conduction device
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 2005513719 Country of ref document: JP |
|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
WWE | Wipo information: entry into national phase |
Ref document number: 2004772905 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 1020057018071 Country of ref document: KR |
|
WWE | Wipo information: entry into national phase |
Ref document number: 20048109456 Country of ref document: CN |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2006274901 Country of ref document: US Ref document number: 10554595 Country of ref document: US |
|
WWP | Wipo information: published in national office |
Ref document number: 1020057018071 Country of ref document: KR |
|
WWP | Wipo information: published in national office |
Ref document number: 2004772905 Country of ref document: EP |
|
WWP | Wipo information: published in national office |
Ref document number: 10554595 Country of ref document: US |