WO2017159587A1 - Sound reproduction device, sound reproduction method, and program - Google Patents

Sound reproduction device, sound reproduction method, and program

Info

Publication number
WO2017159587A1
Authority
WO
WIPO (PCT)
Prior art keywords
sound
otoacoustic emission
data
processing means
audio data
Prior art date
Application number
PCT/JP2017/009883
Other languages
English (en)
Japanese (ja)
Inventor
宏之 中島
Original Assignee
合同会社ディメンションワークス
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 合同会社ディメンションワークス filed Critical 合同会社ディメンションワークス
Publication of WO2017159587A1 publication Critical patent/WO2017159587A1/fr

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00Circuits for transducers, loudspeakers or microphones
    • H04R3/04Circuits for transducers, loudspeakers or microphones for correcting frequency response
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04SSTEREOPHONIC SYSTEMS 
    • H04S1/00Two-channel systems
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04SSTEREOPHONIC SYSTEMS 
    • H04S7/00Indicating arrangements; Control arrangements, e.g. balance control

Definitions

  • the present invention relates to a device that reproduces sound with a sense of presence, and more particularly to a sound reproduction device, a sound reproduction method, and a program that enhance the sense of presence using the principle of otoacoustic emission.
  • In a known sound processing apparatus, a gyro sensor provided in an earphone detects the rotation of the user's head, an acceleration sensor detects the inclination of the gyro sensor, and a sound image localization correction unit uses the detection output of the gyro sensor.
  • In that apparatus, sound image localization processing is performed by correcting the detected position of the sound image so that the localization position of the sound image remains constant.
  • However, Patent Document 1 merely adjusts the localization position of the sound image by sound image localization processing; it neither discloses nor suggests the technical idea of incorporating the principle of otoacoustic emission to enhance the sense of presence.
  • the otoacoustic emission includes “evoked otoacoustic emission”, “spontaneous otoacoustic emission”, and “distortion component otoacoustic emission”.
  • Evoked otoacoustic emission refers to an acoustic response in which a signal is detected with a delay of around 10 ms with respect to a stimulus using a click sound.
  • Spontaneous Otoacoustic Emission refers to an acoustic reaction in which a signal emitted spontaneously from the cochlea is detected without external stimulation.
  • Distortion product otoacoustic emission (DPOAE) refers to an acoustic response in which signals are detected at the combination frequencies nf1 ± mf2 (where n and m are integers) of two stimulus tones f1 and f2.
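The combination-frequency rule above can be sketched numerically. The helper below and the primary-tone values 1000 Hz and 1200 Hz are illustrative choices, not taken from this publication.

```python
# Sketch: positive distortion-product frequencies n*f1 +/- m*f2 (n, m integers).
def distortion_products(f1, f2, max_order=2):
    """Return the sorted positive combination frequencies up to max_order."""
    freqs = set()
    for n in range(1, max_order + 1):
        for m in range(1, max_order + 1):
            for f in (n * f1 + m * f2, n * f1 - m * f2):
                if f > 0:
                    freqs.add(f)
    return sorted(freqs)

# In hearing science the strongest distortion product is typically at 2*f1 - f2;
# for f1 = 1000 Hz and f2 = 1200 Hz that component falls at 800 Hz.
print(distortion_products(1000, 1200))
```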
  • The present invention has been made in view of this problem, and its objective is to provide a sound reproducing device, a sound reproducing method, and a program for reproducing stereoscopic sound with an enhanced sense of presence using the principle of otoacoustic emission.
  • the sound reproduction apparatus adds the effects of evoked otoacoustic emission and distortion component otoacoustic emission to input audio data.
  • The sound reproducing device according to the second aspect is the sound reproducing device according to the first aspect, further comprising head-related transmission adjustment processing means for adjusting, based on a predetermined head-related transfer function, the transmission delay of sound to the head for the audio data processed by the first otoacoustic emission processing means; the sound output means converts the audio data processed by the head-related transmission adjustment processing means into a sound signal and outputs the sound signal.
  • The sound reproducing device according to the third aspect is the sound reproducing device according to the first aspect, further comprising second otoacoustic emission processing means for further adding the effect of spontaneous otoacoustic emission to the audio data processed by the first otoacoustic emission processing means; the sound output means converts the audio data processed by the second otoacoustic emission processing means into a sound signal and outputs the sound signal.
  • The sound reproducing device according to the fourth aspect is the sound reproducing device according to any one of the first to third aspects, wherein the first otoacoustic emission processing means adjusts the volume of a predetermined frequency band of the audio data based on distance data to an actual object, adjusts the sound pressure based on the distance data, and adds a 10 ms delay effect.
  • The sound reproducing device according to the fifth aspect is the sound reproducing device according to the fourth aspect, wherein the first otoacoustic emission processing means adjusts the amplitude by a predetermined amount based on the distance data after the volume adjustment, thereby compensating for and amplifying the overall decrease in volume.
  • The sound reproducing device according to a further aspect is the sound reproducing device according to the fifth aspect, configured such that the first otoacoustic emission processing means adjusts the volume of a predetermined frequency band based on heartbeat data, thereby increasing the psychological effect.
  • The sound reproducing device according to a further aspect is the sound reproducing device according to the third aspect, wherein the second otoacoustic emission processing means, after further adding the effect of spontaneous otoacoustic emission, adds a sampling sound based on the latent memory of the user.
  • The sound reproduction method according to the eighth aspect includes a first step of receiving input of audio data and a second step of adding the effects of evoked otoacoustic emission and distortion component otoacoustic emission to the input audio data.
  • The sound reproduction method according to the ninth aspect of the present invention is the sound reproduction method according to the eighth aspect, further including a step of adjusting, based on a predetermined head-related transfer function, the transmission delay of sound to the head for the audio data after the first otoacoustic emission processing in the second step.
  • According to a further aspect, the sound reproduction method is the sound reproduction method according to the ninth aspect, further including a fifth step of performing second otoacoustic emission processing that further adds the effect of spontaneous otoacoustic emission to the audio data after the first otoacoustic emission processing in the second step; in the third step, the audio data after the second otoacoustic emission processing in the fifth step is converted into an audio signal and output.
  • According to a further aspect, the volume of a predetermined frequency band of the audio data is adjusted based on distance data to an actual object, the sound pressure is adjusted based on the distance data, and a 10 ms delay effect is added.
  • The program causes a computer to function as first otoacoustic emission processing means for adding the effects of evoked otoacoustic emission and distortion component otoacoustic emission to input audio data, and as sound output means for converting the audio data processed by the first otoacoustic emission processing means into a sound signal.
  • The first otoacoustic emission processing means adjusts the volume of a predetermined frequency band of the audio data based on distance data to the sound source, adjusts the sound pressure based on the distance data, and adds a 10 ms delay effect.
  • According to the present invention, it is possible to provide a sound reproducing device, a sound reproducing method, and a program for reproducing three-dimensional sound with an enhanced sense of presence using the principle of otoacoustic emission.
  • the sound reproducing apparatus is used for, for example, a head mounted display (HMD), headphones, and the like, and reproduces three-dimensional sound.
  • FIG. 1 shows and describes the configuration of a sound reproducing device according to the first embodiment of the present invention.
  • the sound reproducing device 1 is configured by a computer and includes a control unit 2 including a CPU (Central Processing Unit) or the like.
  • a sound source 3 that outputs a sound source signal is connected to the control unit 2 via an A / D converter 4 or directly.
  • If the sound source signal is an analog signal, it is converted into a digital signal by the A/D converter 4 and then input to the control unit 2.
  • If the sound source signal is a digital signal, it is directly input to the control unit 2.
  • the sound source 3 outputs audio signals related to the left and right stereo sound, and may be a storage medium (HDD, RAM, etc.) provided in the computer or an external storage medium (optical disk, USB memory, etc.).
  • the sound source may be acquired via a communication environment such as the Internet.
  • The acceleration sensor 5 is connected to the control unit 2 via the A/D converter 6, the gyro sensor 7 is connected via the A/D converter 8, and the distance sensor 9 is connected via the A/D converter 10.
  • the geomagnetic sensor 16 is connected via an A / D converter 17.
  • a heart rate sensor 18 is also connected to the control unit 2.
  • the control unit 2 is connected to an input unit 11 including an input device such as a keyboard and a mouse.
  • a storage unit 15 is connected to the control unit 2.
  • the storage unit 15 stores a program 19 executed by the control unit 2, and a database (hereinafter referred to as DB) 15a relating to a head related transfer function is also logically constructed.
  • The control unit 2 reads and executes the program 19 in the storage unit 15, thereby functioning as a main control unit 2a, a noise reduction processing unit (noise reduction) 2b, a reverberant sound invalidation processing unit (reverb reduction) 2c, a frequency averaging processing unit (graphic equalizer) 2d, a first otoacoustic emission processing unit 2e, a head-related transmission adjustment processing unit 2f, a reverberation adjustment processing unit (reverb) 2g, and a second otoacoustic emission processing unit 2h.
  • the output of the control unit 2 is connected to the audio output unit 14L via the D / A converter 12, and is connected to the audio output unit 14R via the D / A converter 13.
  • The acceleration sensor 5 is mounted on, for example, an HMD or headphones, and detects 360-degree acceleration of the user's head; the acceleration signal is converted into digital acceleration data by the A/D converter 6 and then input to the control unit 2.
  • the main control unit 2a calculates the moving direction and moving amount of the head based on the acceleration data.
  • The gyro sensor 7 is mounted on, for example, an HMD or headphones, and detects the rotation angle of the user's head in the vertical and horizontal directions; the angular velocity signal is converted into digital angular velocity data by the A/D converter 8 and then input to the control unit 2. In the control unit 2, the main control unit 2a calculates the rotation angle of the head based on the angular velocity data.
  • the acceleration sensor 5 and the gyro sensor 7 can be selectively used to detect rotation of the user's head, and any one of them may be mounted.
  • the distance sensor 9 measures the distance to the actual object, and the sensor signal is converted into distance data by the A / D converter 10 and then input to the control unit 2.
  • the distance data is used in various processes to be described later in order to enhance the sense of reality.
  • Various sensors such as an infrared sensor, an ultrasonic sensor, a distance measuring sensor, a laser, or a sound wave sensor can be used as the distance sensor 9.
  • data may be input from the input unit 11.
  • The actual object is the entity that generates the sound or the like; for example, in the case of a concert venue, the musician on the stage is the actual object.
  • the geomagnetic sensor 16 outputs azimuth data, and the azimuth data output via the A / D converter 17 is input to the control unit 2.
  • the direction data is used for recognizing the direction of movement of the user's head.
  • the heart rate sensor 18 outputs heart rate data related to the user's heart rate, and the output heart rate data is input to the control unit 2.
  • the input unit 11 is used to input various setting data.
  • Ambience data can be input as setting data.
  • The ambience data determines how much reverberant sound is added; preset data may be selected according to the size of the space, and the user may fine-tune the preset data. Distance data may also be input from the input unit 11. A stereophonic sound is produced based on each sensor output mentioned above and the input data of the input unit 11.
  • each unit operates as follows.
  • the noise reduction processing unit 2b performs noise reduction processing on the audio data related to the input stereo sound.
  • The reverberant sound invalidation processing unit 2c invalidates reverberant components in the audio data related to the stereo sound when such components are present.
  • The frequency averaging processing unit 2d changes the frequency characteristics of the audio data and averages the overall sound quality. That is, in the audio data, protruding portions of the spectrum are lowered and recessed portions are raised, averaging the sound frequency content as a whole. This amounts to averaging the sound pressure by frequency.
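The averaging described here can be sketched as a spectral operation. The pull-toward-the-mean rule and the strength parameter `alpha` below are illustrative assumptions, not the patent's actual algorithm.

```python
import numpy as np

def average_spectrum(x, alpha=0.5):
    """Pull each frequency bin's magnitude partway toward the mean magnitude,
    lowering peaks and raising dips (alpha=1 flattens completely)."""
    spec = np.fft.rfft(np.asarray(x, dtype=float))
    mag, phase = np.abs(spec), np.angle(spec)
    target = mag + alpha * (mag.mean() - mag)
    # Rebuild the signal with the averaged magnitudes and original phases.
    return np.fft.irfft(target * np.exp(1j * phase), n=len(x))
```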
  • the first otoacoustic emission processing unit 2e adds the effects of evoked otoacoustic emission (TEOAE) and distortion component otoacoustic emission (DPOAE) to audio data related to stereo sound.
  • TEOAE evoked otoacoustic emission
  • DPOAE distortion component otoacoustic emission
  • the first otoacoustic emission processing unit 2e includes a frequency adjustment processing unit (parametric equalizer) 20, a sound pressure adjustment processing unit (compressor) 21, an amplitude adjustment processing unit (amplifier). ) 22 and a delay adjustment processing unit (delay) 23.
  • The frequency adjustment processing unit 20 adjusts the volume of a predetermined frequency band (5.28 Hz to 20 KHz) of the audio data related to the stereo sound based on the distance data. This adds the effect of evoked otoacoustic emission: based on the distance data, processing is performed so that the volume increases as the distance decreases.
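The closer-is-louder rule can be sketched with a simple inverse-distance amplitude law. The 1/r mapping and the reference distance are assumptions for illustration, not the patent's actual gain curve.

```python
import math

def distance_gain_db(distance_m, ref_m=1.0):
    """Band gain in dB relative to a reference distance: louder as the object nears."""
    # 1/r amplitude law: roughly -6 dB per doubling of distance.
    return 20.0 * math.log10(ref_m / max(distance_m, 1e-3))
```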
  • a predetermined frequency band 5.28 Hz to 20 KHz
  • The sound pressure adjustment unit 21 adjusts the sound pressure of the audio data related to the stereo sound based on the distance data. For example, when the volume exceeds a threshold, the excess is suppressed at the set compression ratio and released within the set time, reducing the peak volume. This compresses the dynamic range between the maximum and minimum volume. It adds the effect of evoked otoacoustic emission: based on the distance data, the sound pressure adjustment unit 21 lowers the sound pressure as the distance increases and raises it as the distance decreases.
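The threshold-and-ratio behavior can be sketched as a static compressor (attack/release timing omitted). The threshold and ratio values are illustrative, not taken from the patent.

```python
import numpy as np

def compress(x, threshold=0.5, ratio=4.0):
    """Attenuate sample magnitudes above the threshold by the given ratio,
    compressing the dynamic range while preserving sign."""
    x = np.asarray(x, dtype=float)
    out = x.copy()
    over = np.abs(x) > threshold
    # Keep the threshold portion intact and reduce only the excess.
    out[over] = np.sign(x[over]) * (threshold + (np.abs(x[over]) - threshold) / ratio)
    return out
```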
  • The amplitude adjustment processing unit 22 adjusts the amplitude by, in this example, 10 dB to 20 dB based on the distance data. Because the sound pressure adjustment reduces the volume as a whole, the reduced amount is compensated and amplified. This adds the effects of both evoked otoacoustic emission and distortion component otoacoustic emission.
  • The amplitude adjustment processing unit 22 is an optional component.
  • The delay adjustment processing unit 23 adds a 10 ms delay effect to the audio data related to the stereo sound.
  • Evoked otoacoustic emission is an acoustic response in which a signal is detected with a delay of about 10 ms after an input sound stimulus; this step realizes such an effect in a pseudo manner.
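The 10 ms delay can be sketched as a sample shift. The 44.1 kHz sample rate is an assumption for illustration.

```python
import numpy as np

def add_delay(x, sample_rate=44100, delay_ms=10.0):
    """Delay the signal by delay_ms, padding the front with zeros."""
    n = int(round(sample_rate * delay_ms / 1000.0))  # 441 samples at 44.1 kHz
    return np.concatenate([np.zeros(n), np.asarray(x, dtype=float)])
```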
  • the head-related transmission adjustment processing unit 2f adjusts the transmission delay of the sound to the head based on the head-related transfer function.
  • The head-related transfer function represents, as a transfer function, the change a sound undergoes due to peripheral body parts including the pinna, the head, and the shoulders.
  • the right ear (R) and left ear (L) pairs are held in a table format in the DB 15a of the storage unit 15. The reason why the right ear and the left ear are separated from each other is that the sound arrival time differs depending on the head position.
  • The head-related transmission adjustment processing unit 2f performs adaptation by referring to the table and performing a convolution calculation on the audio data through filtering based on the head-related transfer function corresponding to the depth of the auricle.
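The per-ear convolution step can be sketched as below. The table contents and the tiny impulse responses are invented placeholders, not the HRIR pairs held in the DB 15a.

```python
import numpy as np

# Placeholder HRIR pair: extra leading zeros on the left model the longer
# acoustic path (later arrival) for that ear. Values are illustrative only.
hrir_table = {
    "left":  np.array([0.0, 0.0, 1.0, 0.3]),
    "right": np.array([1.0, 0.3]),
}

def apply_hrtf(mono, ear):
    """Filter a mono signal with the impulse response looked up for one ear."""
    return np.convolve(np.asarray(mono, dtype=float), hrir_table[ear])
```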
  • The reverberation adjustment processing unit 2g adds to the audio data a reverberant sound suited to the space designated by the preset. This reflects in the audio data, for example, sound bouncing off the boundaries of the space, and the magnitude of the reverberant sound to be added differs depending on the size of the space. In addition, the smaller the time difference between the left and right audio data, the larger the space the added reverberant sound assumes.
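The reverberant-sound step can be sketched as a single feedback comb filter, where a larger assumed space maps to a longer delay. The delay length, feedback amount, and tail length are illustrative assumptions, not the patent's reverb design.

```python
import numpy as np

def comb_reverb(x, delay_samples=2000, feedback=0.4, tail=3):
    """Add decaying echoes: each sample is fed back delay_samples later,
    scaled by the feedback coefficient."""
    out = np.concatenate([np.asarray(x, dtype=float),
                          np.zeros(delay_samples * tail)])
    for i in range(delay_samples, len(out)):
        out[i] += feedback * out[i - delay_samples]
    return out
```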
  • The second otoacoustic emission processing unit 2h adds the effect of spontaneous otoacoustic emission (SOAE) to the audio data.
  • Sampling sound or colored noise is added to the 1 KHz to 2 KHz frequency band of the audio data. Which sampling sound and colored noise are added may be selected at the preset stage.
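The band-limited noise addition can be sketched with an FFT mask that keeps only the 1-2 kHz band. The mix level, random seed, and sample rate are illustrative assumptions.

```python
import numpy as np

def add_soae_noise(x, sample_rate=44100, level=0.01, seed=0):
    """Mix low-level noise restricted to the 1-2 kHz band into the signal."""
    x = np.asarray(x, dtype=float)
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(len(x))
    spec = np.fft.rfft(noise)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / sample_rate)
    spec[(freqs < 1000) | (freqs > 2000)] = 0.0  # keep only the 1-2 kHz band
    band_noise = np.fft.irfft(spec, n=len(x))
    return x + level * band_noise
```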
  • The audio data relating to the stereophonic sound generated by the above processing is, for the right ear, converted to an analog signal via the D/A converter 12 and then output from the audio output unit 14R; the audio data for the left ear is converted to an analog signal via the D/A converter 13 and then output from the audio output unit 14L. In this way, sound related to the stereophonic sound is reproduced.
  • The noise reduction processing unit 2b performs noise reduction processing on the audio data related to the input stereo sound (S2).
  • The reverberant sound invalidation processing unit 2c invalidates reverberant components in the audio data related to the stereo sound when such components are present (S3).
  • the frequency averaging processing unit 2d changes the frequency characteristics of the audio data and averages the overall sound quality (S4).
  • The head-related transmission adjustment processing unit 2f adjusts the transmission delay of the sound to the head based on the head-related transfer function (S6). More specifically, with reference to the table, adaptation is achieved by performing a convolution calculation on the audio data through filtering based on the head-related transfer function corresponding to the depth of the pinna.
  • The reverberation adjustment processing unit 2g adds to the audio data a reverberant sound suited to the space designated by the preset (S7). Then, the second otoacoustic emission processing unit 2h adds the effect of spontaneous otoacoustic emission (SOAE) to the audio data (S8). More specifically, sampling sound or colored noise is added to the 1 KHz to 2 KHz frequency band of the audio data.
  • the audio data for the right ear is converted to an analog signal via the D / A converter 12 and then output from the audio output unit 14R, and for the left ear is converted to an analog signal via the D / A converter 13. After the conversion, it is output from the audio output unit 14L (S9).
  • FIG. 4 shows and describes the detailed processing procedure of the first otoacoustic emission processing executed in step S5 of FIG. 3.
  • The frequency adjustment processing unit 20 adjusts the volume of a predetermined frequency band (5.28 Hz to 20 KHz) of the audio data related to the stereo sound based on the distance data. This adds the effect of evoked otoacoustic emission: based on the distance data, processing is performed so that the volume increases as the distance decreases (S11).
  • the sound pressure adjusting unit 21 adjusts the sound pressure of the sound data related to the stereo sound based on the distance data (S12). That is, based on the distance data, the sound pressure adjustment unit 21 artificially reflects the effect of evoked otoacoustic emission in the sound data by lowering the sound pressure as the distance is longer and increasing the sound pressure as the distance is closer.
  • the amplitude adjustment processing unit 22 compensates and amplifies the overall decrease in volume by adjusting the amplitude of, for example, 10 dB to 20 dB based on the distance data (S13). That is, the effects of both evoked otoacoustic emission and distortion component otoacoustic emission are reflected in the audio data.
  • The delay adjustment processing unit 23 adds a 10 ms delay effect to the audio data related to the stereo sound (S14). This reflects in the audio data the acoustic response of evoked otoacoustic emission, in which a signal is detected with a delay of around 10 ms after the input sound stimulus. The process then returns to step S6 and subsequent steps in FIG. 3.
  • FIG. 5(a) is a characteristic diagram of the input audio data, and FIG. 5(b) is a characteristic diagram of the output audio data.
  • Physical parameters related to OAE are dynamically changed with respect to the input audio data shown in FIG. 5(a) so as to simulate the auditory structure of human spatial recognition and induce an illusion, and audio data as shown in FIG. 5(b) is output. This makes it possible to perceive more realistic 3D sound than with binaural sound.
  • According to the first embodiment of the present invention, it is possible to reproduce realistic 3D sound by artificially reflecting the effect of otoacoustic emission in the input audio data. Moreover, since the effects of evoked otoacoustic emission (TEOAE), spontaneous otoacoustic emission (SOAE), and distortion component otoacoustic emission (DPOAE) can all be experienced, the sense of presence is further increased. In addition, since adjustment based on head-related transmission is performed on top of the otoacoustic emission effects, even more realistic 3D sound reproduction can be realized.
  • The sound reproducing device according to the second embodiment differs from the first embodiment in the configuration of the first otoacoustic emission processing unit 2e. Although details will be described later, frequency adjustment processing is performed using heartbeat data based on psychoacoustics. Since the other configurations are the same as those of the first embodiment, the same components as those in FIGS. 1 and 2 are denoted by the same reference numerals, and the differing portions will mainly be described.
  • FIG. 6 shows a detailed configuration of the first otoacoustic emission processing unit 2e of the control unit 2 of the sound reproducing device according to the second embodiment of the present invention.
  • the first otoacoustic emission processing unit 2e includes a first frequency adjustment processing unit (parametric equalizer) 30, a second frequency adjustment processing unit (parametric equalizer) 31, and a sound pressure adjustment processing unit (compressor). 32, an amplitude adjustment processing unit (amplifier) 33, and a delay adjustment processing unit (delay) 34.
  • The first frequency adjustment processing unit 30 adjusts the volume of a predetermined frequency band (500 Hz to 20 KHz) of the audio data related to the stereo sound based on the distance data. This adds the effect of evoked otoacoustic emission: based on the distance data, processing is performed so that the volume increases as the distance decreases.
  • The second frequency adjustment processing unit 31 adjusts the volume of a predetermined frequency band (400 Hz to 10 KHz) of the audio data related to the stereo sound based on heartbeat data, following the concept of psychoacoustics. The heart rate data is compared with a reference value; when the heart rate rises, this is recognized as a psychological change in the user, and the volume of the predetermined frequency band is raised so as to further promote that change. In psychoacoustics, changing the physical parameters of a sound has various psychological effects on how humans perceive it.
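The heartbeat-driven boost can be sketched as a mapping from heart-rate excess over a reference to extra band gain. The reference value, gain step, and cap below are invented for illustration and are not specified in the patent.

```python
def band_gain_from_heart_rate(heart_rate, reference=70.0, step_db=1.0, max_db=6.0):
    """Map heart-rate excess over the reference to an extra band gain in dB:
    one step_db per 5 bpm over the reference, capped at max_db."""
    excess = max(0.0, heart_rate - reference)
    return min(max_db, step_db * excess / 5.0)
```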
  • The sound pressure adjustment unit 32 adjusts the sound pressure of the audio data related to the stereo sound based on the distance data. For example, when the volume exceeds a threshold, the excess is suppressed at the set compression ratio and released within the set time, reducing the peak volume. This compresses the dynamic range between the maximum and minimum volume. It adds the effect of evoked otoacoustic emission: based on the distance data, the sound pressure adjustment unit 32 lowers the sound pressure as the distance increases and raises it as the distance decreases.
  • The amplitude adjustment processing unit 33 adjusts the amplitude by, in this example, 10 dB to 20 dB based on the distance data. Because the sound pressure adjustment reduces the volume as a whole, the reduced amount is compensated and amplified. This adds the effects of both evoked otoacoustic emission and distortion component otoacoustic emission.
  • The amplitude adjustment processing unit 33 is an optional component.
  • The delay adjustment processing unit 34 adds a 10 ms delay effect to the audio data related to the stereo sound.
  • Evoked otoacoustic emission is an acoustic response in which a signal is detected with a delay of about 10 ms after an input sound stimulus; this step realizes such an effect in a pseudo manner.
  • The processing procedure by the sound reproducing device according to the second embodiment is substantially the same as that in FIG. 3; only the first otoacoustic emission processing in step S5 of FIG. 3 differs, so only the processing procedure of the first otoacoustic emission processing according to the second embodiment will be described below.
  • The first frequency adjustment processing unit 30 adjusts the volume of a predetermined frequency band (5.28 Hz to 20 KHz) of the audio data related to the stereo sound based on the distance data. This adds the effect of evoked otoacoustic emission: based on the distance data, processing is performed so that the volume increases as the distance decreases (S21).
  • The second frequency adjustment processing unit 31 adjusts the volume of a predetermined frequency band (400 Hz to 10 KHz) of the audio data related to the stereo sound based on the heartbeat data, following the concept of psychoacoustics (S22).
  • the sound pressure adjusting unit 32 adjusts the sound pressure of the sound data related to the stereo sound based on the distance data (S23). That is, the sound pressure adjustment unit 32 reflects the effect of evoked otoacoustic emission on the sound data in a pseudo manner by lowering the sound pressure as the distance is longer and increasing the sound pressure as the distance is closer based on the distance data.
  • the amplitude adjustment processing unit 33 compensates and amplifies the entire decrease in volume by adjusting the amplitude of, for example, 10 dB to 20 dB based on the distance data (S24). That is, the effects of both evoked otoacoustic emission and distortion component otoacoustic emission are reflected in the audio data.
  • The delay adjustment processing unit 34 adds a 10 ms delay effect to the audio data related to the stereo sound (S25). This reflects in the audio data the acoustic response of evoked otoacoustic emission, in which a signal is detected with a delay of around 10 ms after the input sound stimulus. The process then returns to step S6 and subsequent steps in FIG. 3.
  • The sound reproducing device according to the third embodiment differs from the first embodiment in the configuration of the second otoacoustic emission processing unit 2h. Although details will be described later, the addition of a latent memory sound enhances the sense of presence. Since the other configurations are the same as those of the first embodiment, the same components as those in FIGS. 1 and 2 are denoted by the same reference numerals, and the differing portions will mainly be described.
  • The latent memory sound refers to a universal environmental sound that is ordinarily masked in auditory perception; it is added to the audio as a sampling sound. This promotes priming effects in sound perception and encourages psychological changes.
  • FIG. 8 shows a detailed configuration of the second otoacoustic emission processing unit 2h of the control unit 2 of the sound reproducing device according to the third embodiment of the present invention.
  • the second otoacoustic emission processing unit 2h includes a spontaneous otoacoustic emission processing unit 40 and a latent memory sound addition processing unit 41.
  • The spontaneous otoacoustic emission processing unit 40 adds the effect of spontaneous otoacoustic emission (SOAE) to the audio data.
  • Sampling sound or colored noise is added to the 1 KHz to 2 KHz frequency band of the audio data. Which sampling sound and colored noise are added may be selected at the preset stage.
  • The latent memory sound addition processing unit 41 adds a sampling sound based on the latent memory. For example, at an outdoor stage on a plateau, one enjoys the music of the musicians on the stage while feeling the sound of the wind; by adding such subliminally perceived sound as a sampling sound, the sense of presence can be enhanced.
  • The processing procedure by the sound reproducing device according to the third embodiment is substantially the same as that in FIG. 3; only the second otoacoustic emission processing in step S8 of FIG. 3 differs, so only the processing procedure of the second otoacoustic emission processing according to the third embodiment will be described below.
  • The spontaneous otoacoustic emission processing unit 40 adds the effect of spontaneous otoacoustic emission (SOAE) to the audio data (S31). Then, the latent memory sound addition processing unit 41 adds a sampling sound based on the latent memory (S32). The process then returns to step S9 and subsequent steps in FIG. 3.
  • 11 ... input unit, 12 ... D/A converter, 13 ... D/A converter, 14L, 14R ... audio output unit, 15 ... storage unit, 16 ... geomagnetic sensor, 17 ... A/D converter, 18 ... heart rate sensor, 19 ... program, 20 ... frequency adjustment processing unit, 21 ... sound pressure adjustment processing unit, 22 ... amplitude adjustment processing unit, 23 ... delay adjustment processing unit, 30 ... first frequency adjustment processing unit, 31 ... second frequency adjustment processing unit, 32 ... sound pressure adjustment unit, 33 ... amplitude adjustment processing unit, 34 ... delay adjustment unit, 40 ... spontaneous otoacoustic emission processing unit, 41 ... latent memory sound addition processing unit.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Circuit For Audible Band Transducer (AREA)
  • Stereophonic System (AREA)

Abstract

The present invention provides a sound reproduction device, a sound reproduction method, and a program for reproducing stereophonic sound with an enhanced sense of presence using the principles of otoacoustic emission. The sound reproduction device comprises: a first otoacoustic emission processing unit 2e that adds the effects of transient evoked otoacoustic emissions and distortion product otoacoustic emissions to input audio data; and a head-related transfer adjustment processing unit 2f that adjusts a delay in the audio transfer of the processed audio data toward the user's head according to a predefined head-related transfer function, and converts the processed audio data into audio signals to produce stereophonic sound with a sense of presence.
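The delay adjustment that the abstract attributes to the head-related transfer function can be illustrated with a minimal interaural-time-difference sketch: the ear farther from a virtual source receives its signal a few samples later. This integer-sample delay is a simplification of full HRTF filtering; the function name and sign convention are assumptions for illustration.

```python
import numpy as np

def apply_itd(left, right, delay_samples):
    """Delay one ear's signal to mimic the interaural time difference
    encoded by a head-related transfer function.

    A positive delay_samples delays the right channel (source toward the
    listener's left); a negative value delays the left channel.
    """
    d = int(delay_samples)
    if d > 0:
        # Prepend zeros to the right channel, keeping its length fixed.
        right = np.concatenate([np.zeros(d), right])[: len(right)]
    elif d < 0:
        left = np.concatenate([np.zeros(-d), left])[: len(left)]
    return left, right
```

A full implementation would instead convolve each channel with a measured HRTF impulse response, which encodes level and spectral differences as well as delay.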
PCT/JP2017/009883 2016-03-14 2017-03-13 Sound reproduction device, sound reproduction method, and program WO2017159587A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2016-049210 2016-03-14
JP2016049210A JP6094844B1 (ja) 2016-03-14 2016-03-14 Sound reproduction device, sound reproduction method, and program

Publications (1)

Publication Number Publication Date
WO2017159587A1 true WO2017159587A1 (fr) 2017-09-21

Family

ID=58281065

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2017/009883 WO2017159587A1 (fr) 2016-03-14 2017-03-13 Sound reproduction device, sound reproduction method, and program

Country Status (2)

Country Link
JP (1) JP6094844B1 (fr)
WO (1) WO2017159587A1 (fr)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102114052B1 (ko) 2017-12-22 2020-05-25 한국항공우주산업 주식회사 Stereophonic sound device for aircraft and output method thereof
WO2023199746A1 (fr) * 2022-04-14 2023-10-19 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ Sound reproduction method, computer program, and sound reproduction device

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH10153946A (ja) * 1996-11-25 1998-06-09 Mitsubishi Electric Corp Sensory information presentation device
JP2009514313A (ja) * 2005-11-01 2009-04-02 コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ Method for adjusting a hearing aid device using otoacoustic emissions, and corresponding hearing aid system and hearing aid device

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5402493A (en) * 1992-11-02 1995-03-28 Central Institute For The Deaf Electronic simulator of non-linear and active cochlear spectrum analysis
AU5545294A (en) * 1992-11-02 1994-05-24 Central Institute For The Deaf Electronic simulator of non-linear and active cochlear signal processing
JP3385725B2 (ja) * 1994-06-21 2003-03-10 ソニー株式会社 Audio reproduction device accompanied by video
US5601091A (en) * 1995-08-01 1997-02-11 Sonamed Corporation Audiometric apparatus and association screening method
US6406438B1 (en) * 1997-12-19 2002-06-18 Medical Research Council Method and apparatus for obtaining evoked otoacoustic emissions
JP6069830B2 (ja) * 2011-12-08 2017-02-01 ソニー株式会社 Ear-canal-mounted sound pickup device, signal processing device, and sound pickup method
US9654876B2 (en) * 2012-08-06 2017-05-16 Father Flanagan's Boys' Home Multiband audio compression system and method

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH10153946A (ja) * 1996-11-25 1998-06-09 Mitsubishi Electric Corp Sensory information presentation device
JP2009514313A (ja) * 2005-11-01 2009-04-02 コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ Method for adjusting a hearing aid device using otoacoustic emissions, and corresponding hearing aid system and hearing aid device

Also Published As

Publication number Publication date
JP6094844B1 (ja) 2017-03-15
JP2017168887A (ja) 2017-09-21

Similar Documents

Publication Publication Date Title
JP4584416B2 (ja) Multi-channel audio reproduction apparatus for speaker playback using position-adjustable virtual sound images, and method therefor
JP4602621B2 (ja) Sound correction device
JP5499513B2 (ja) Sound processing device, sound image localization processing method, and sound image localization processing program
JP5533248B2 (ja) Audio signal processing device and audio signal processing method
JP4924119B2 (ja) Array speaker device
US10685641B2 (en) Sound output device, sound output method, and sound output system for sound reverberation
Ranjan et al. Natural listening over headphones in augmented reality using adaptive filtering techniques
US8442244B1 (en) Surround sound system
US11902772B1 (en) Own voice reinforcement using extra-aural speakers
JP2007266967A (ja) Sound image localization device and multi-channel audio reproduction device
JP6246922B2 (ja) Acoustic signal processing method
JP5986426B2 (ja) Sound processing device and sound processing method
JP2006303658A (ja) Reproduction device and reproduction method
JP2005223713A (ja) Sound reproduction device and sound reproduction method
JP4150749B2 (ja) Stereophonic sound reproduction system and stereophonic sound reproduction device
US6990210B2 (en) System for headphone-like rear channel speaker and the method of the same
US20170272889A1 (en) Sound reproduction system
JP2006279863A (ja) Head-related transfer function correction method
WO2017159587A1 (fr) Sound reproduction device, sound reproduction method, and program
JP2005157278A (ja) Omnidirectional sound field creation device, omnidirectional sound field creation method, and omnidirectional sound field creation program
KR20180018464A (ko) Stereoscopic image reproduction method, stereophonic sound reproduction method, stereoscopic image reproduction system, and stereophonic sound reproduction system
KR20160136716A (ko) Audio signal processing method and apparatus
US7050596B2 (en) System and headphone-like rear channel speaker and the method of the same
US6999590B2 (en) Stereo sound circuit device for providing three-dimensional surrounding effect
JP2011259299A (ja) Head-related transfer function generation device, head-related transfer function generation method, and audio signal processing device

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17766587

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 17766587

Country of ref document: EP

Kind code of ref document: A1