WO2011158506A1 - Hearing aid, signal processing method, and program - Google Patents
Hearing aid, signal processing method, and program
- Publication number
- WO2011158506A1 (PCT/JP2011/003426)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- sound
- hearing aid
- scene
- unit
- sound source
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R25/00—Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
- H04R25/40—Arrangements for obtaining a desired directivity characteristic
- H04R25/407—Circuits for combining signals of a plurality of transducers
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R25/00—Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
- H04R25/50—Customised settings for obtaining desired overall acoustical characteristics
- H04R25/505—Customised settings for obtaining desired overall acoustical characteristics using digital signal processing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2225/00—Details of deaf aids covered by H04R25/00, not provided for in any of its subgroups
- H04R2225/41—Detection or adaptation of hearing aid parameters or programs to listening situation, e.g. pub, forest
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2225/00—Details of deaf aids covered by H04R25/00, not provided for in any of its subgroups
- H04R2225/43—Signal processing in hearing aids to enhance the speech intelligibility
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2460/00—Details of hearing devices, i.e. of ear- or headphones covered by H04R1/10 or H04R5/033 but not provided for in any of their subgroups, or of hearing aids covered by H04R25/00 but not provided for in any of its subgroups
- H04R2460/13—Hearing devices using bone conduction transducers
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R25/00—Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
- H04R25/40—Arrangements for obtaining a desired directivity characteristic
- H04R25/405—Arrangements for obtaining a desired directivity characteristic by combining a plurality of transducers
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R25/00—Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
- H04R25/55—Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception using an external connection, either wireless or wired
- H04R25/552—Binaural
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R25/00—Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
- H04R25/55—Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception using an external connection, either wireless or wired
- H04R25/554—Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception using an external connection, either wireless or wired using a wireless connection, e.g. between microphone and amplifier or using Tcoils
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R25/00—Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
- H04R25/55—Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception using an external connection, either wireless or wired
- H04R25/558—Remote control, e.g. of amplification, frequency
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S2400/00—Details of stereophonic systems covered by H04S but not provided for in its groups
- H04S2400/13—Aspects of volume control, not necessarily automatic, in stereophonic sound systems
Definitions
- The present invention relates to a hearing aid, a signal processing method, and a program that make it easier for a hearing aid user to hear a desired sound.
- A hearing aid is a device that amplifies small sounds so that even a person whose hearing has declined can hear them easily.
- However, because a hearing aid amplifies not only the desired sound but also noise, it is difficult to hear the voice of a conversation partner or the sound of a TV in a noisy environment.
- Patent Document 1 describes a microphone in which the sound source direction is detected using two or more directional microphones and the directivity is switched toward the detected direction.
- When there is a single sound source, the microphone described in Patent Document 1 can make that source easier to hear by directing the directivity toward it.
- Patent Document 2 describes a hearing aid that controls directivity automatically, rather than having the hearing aid user designate the direction of the desired sound by operation.
- The hearing aid described in Patent Document 2 detects the hearing aid user's line of sight and directs the directivity in that direction.
- An object of the present invention is to provide a hearing aid, a signal processing method, and a program that make the TV sound easy to hear when the hearing aid user wants to watch TV, and a person's voice easy to hear when the user wants to talk with that person.
- The hearing aid of the present invention is a hearing aid worn on both ears and provided with a microphone array, and includes: a sound source direction estimation unit that detects a sound source direction from the sound signal input from the microphone array; a self-speech detection unit that detects the wearer's own voice from the sound signal; a TV sound detection unit that detects TV sound from the sound signal; an other-speaker detection unit that detects the utterance of a speaker other than the wearer based on the detected sound source direction information, the self-speech detection result, and the TV sound detection result; a sound source frequency calculation unit that calculates the frequency for each sound source based on the self-speech detection result, the TV sound detection result, the other-speaker utterance detection result, and the sound source direction information; a scene determination unit that determines the scene based on the sound source direction information and the frequency for each sound source; and an output sound control unit that controls the hearing of the hearing aid according to the determined scene.
- The signal processing method of the present invention is a signal processing method for a hearing aid worn on both ears on which a microphone array is installed, and includes the steps of: detecting a sound source direction from the sound signal input from the microphone array; detecting the wearer's own voice from the sound signal; detecting TV sound from the sound signal; detecting the utterance of a speaker other than the wearer based on the detected sound source direction information, the self-speech detection result, and the TV sound detection result; calculating the frequency for each sound source using the self-speech detection result, the TV sound detection result, the other-speaker utterance detection result, and the sound source direction information; determining the scene based on the sound source direction information and the frequency for each sound source; and controlling the hearing of the hearing aid according to the determined scene.
- The present invention also provides a program for causing a computer to execute each step of the above signal processing method.
- According to the present invention, when there are multiple sound sources, such as a TV and a conversation, the hearing aid user can more easily hear the sound he or she wants to hear in each scene. For example, the TV sound becomes easier to hear when the user wants to watch TV, a person's voice becomes easier to hear when the user wants to talk with that person, and both sounds can be heard when the user talks while watching TV.
- A diagram showing the structure of the hearing aid according to an embodiment of the present invention.
- A block diagram showing the main configuration of the hearing aid according to the embodiment.
- A flowchart showing the processing flow of the hearing aid according to the embodiment.
- A diagram showing the sound source direction estimation experiment results of the hearing aid according to the embodiment.
- A diagram showing the TV sound detection experiment results of the hearing aid according to the embodiment.
- A diagram plotting the discrimination of self-speech, TV-alone sound, and other-speaker utterances against the frame-by-frame sound source direction estimation results of the hearing aid according to the embodiment.
- A diagram showing the frequency for each sound source in the "conversation scene" of the hearing aid according to the embodiment.
- A diagram showing the frequency for each sound source in the "TV scene" of the hearing aid according to the embodiment.
- A diagram showing the frequency for each sound source in the "while-viewing scene" of the hearing aid according to the embodiment.
- A diagram showing a table of scene features of the hearing aid according to the embodiment.
- A diagram showing an example of scene discrimination by the point addition method of the hearing aid according to the embodiment.
- FIG. 1 is a diagram showing the configuration of a hearing aid according to an embodiment of the present invention. This embodiment is an example applied to a remote control type hearing aid (hereinafter simply "hearing aid") in which the hearing aid body and the earphones are separated.
- The hearing aid 100 includes a hearing aid housing 101 worn on the outer ear and a remote control device 105 connected to the hearing aid housing 101 by wire.
- The hearing aid housing 101 consists of two housings of identical configuration, one for the left ear and one for the right. On the upper part of each of the left and right hearing aid housings 101, microphones for picking up surrounding sounds are arranged front and rear, forming a microphone array 102 of four microphones in total.
- The hearing aid housing 101 also includes a speaker 103 that outputs the hearing aid sound or the TV sound, and the speaker 103 is connected by a tube to an ear tip 104 fitted into the ear canal. The hearing aid user hears the sound output from the speaker 103 through the ear tip 104.
- The remote control device 105 includes a CPU 106 that performs control and computation for the hearing aid 100, and a transmission/reception unit 107 that receives radio waves transmitted from the audio transmitter 108.
- The audio transmitter 108 is connected to the TV 109 and transmits the TV sound signal by wireless communication such as Bluetooth.
- The transmission/reception unit 107 receives the radio waves sent from the audio transmitter 108 and passes the received TV sound to the CPU 106.
- The sound collected by the microphone array 102 is also sent to the CPU 106 in the remote control device 105.
- The CPU 106 performs hearing aid processing, such as directivity control and amplifying the gain of the frequency bands in which the user's hearing is reduced, so that the hearing aid user can easily hear the sound input from the microphone array 102, and outputs the result from the speaker 103. The CPU 106 also outputs the received TV sound from the speaker 103 depending on the situation.
- The signal processing method in the CPU 106 will be described in detail with reference to FIGS.
- The remote control device 105 is placed, for example, in the hearing aid user's breast pocket; it processes the sound collected by the microphone array 102 in the hearing aid housing 101 and lets the user wearing the ear tip 104 hear it.
- The hearing aid 100 receives the radio signal transmitted from the audio transmitter 108 connected to the TV 109 at the transmission/reception unit 107 built into the remote control device 105.
- The hearing aid user can switch between listening to the actual surrounding sound acquired by the hearing aid 100 and the sound of the TV 109.
- This switching is not limited to manual operation by the hearing aid user; the hearing aid 100 can also automatically determine the situation so that the user hears the desired sound optimally.
- Although the hearing aid housing 101 and the remote control device 105 are connected by wire here, the connection may be wireless. Also, instead of the CPU 106 in the remote control device 105 performing all of the hearing aid processing, the left and right hearing aid housings 101 may each be provided with a DSP (Digital Signal Processor) that performs part of the signal processing.
- FIG. 2 is a block diagram showing a main configuration of the hearing aid 100 according to the present embodiment.
- The hearing aid 100 includes the microphone array 102, an A/D (Analog to Digital) conversion unit 110, a sound source direction estimation unit 120, a self-speech detection unit 130, a TV sound detection unit 140, an other-speaker detection unit 150, a sound source frequency calculation unit 160, a scene determination unit 170, and an output sound control unit 180.
- The TV sound detection unit 140 includes a microphone input short-time power calculation unit 141, a TV sound short-time power calculation unit 142, and a TV single section detection unit 143.
- The microphone array 102 is a sound collection device in which a plurality of microphones are arranged.
- The hearing aid 100 is worn on both ears, on which the microphone array 102 is installed.
- The A/D conversion unit 110 converts the sound signal input from the microphone array 102 into a digital signal.
- The sound source direction estimation unit 120 detects the sound source direction from the A/D-converted sound signal.
- The self-speech detection unit 130 detects the hearing aid user's own voice from the A/D-converted sound signal.
- The TV sound detection unit 140 detects TV sound from the A/D-converted sound signal.
- Here, a TV is described as an example of a sound source commonly present in the home.
- The signal detected by the TV sound detection unit 140 is not limited to TV sound; it may be the sound signal of various AV devices.
- The various AV devices are, for example, a BD (Blu-ray Disc)/DVD (Digital Versatile Disc) player connected to the TV, or a device that plays streaming data delivered over broadband.
- In this specification, "TV sound" is a collective term for the sounds received from various AV devices, including the TV itself.
- The microphone input short-time power calculation unit 141 calculates the short-time power of the sound signal converted by the A/D conversion unit 110.
- The TV sound short-time power calculation unit 142 calculates the short-time power of the received TV sound.
- The TV single section detection unit 143 determines TV-alone sections using the received TV sound and the sound signal converted by the A/D conversion unit 110. Specifically, the TV single section detection unit 143 compares the TV sound short-time power with the microphone input short-time power and detects sections in which the difference falls within a predetermined range as TV-alone sections.
- The other-speaker detection unit 150 detects the utterances of speakers other than the wearer using the detected sound source direction information, the self-speech detection result, and the TV sound detection result.
- The sound source frequency calculation unit 160 calculates the frequency for each sound source using the self-speech detection result, the TV sound detection result, the other-speaker utterance detection result, and the sound source direction information.
- The scene determination unit 170 determines the scene using the sound source direction information and the frequency for each sound source.
- The scene classification includes the "conversation scene", in which the wearer is talking; the "TV scene", in which the wearer is watching TV; and the "while-viewing scene", in which the wearer talks while watching TV.
- The output sound control unit 180 processes the sound input from the microphones according to the scene determined by the scene determination unit 170 so that it is easy for the user to hear, thereby controlling the hearing of the hearing aid 100.
- The output sound control unit 180 controls the hearing of the hearing aid 100 by directivity control. For example, in the "conversation scene", the output sound control unit 180 directs a directional beam toward the front. In the "TV scene", the output sound control unit 180 directs a directional beam toward the front, where the TV is, or outputs the TV sound received by the transmission/reception unit. In the "while-viewing scene", the output sound control unit 180 applies wide directivity; alternatively, it outputs the TV sound received by the transmission/reception unit to one ear and a sound with wide directivity to the other ear.
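The per-scene control described above can be sketched as a simple dispatch table. This is only an illustration: the patent does not define a data format, and `select_output_control` along with the setting names are hypothetical.

```python
def select_output_control(scene):
    """Map a discriminated scene to an output-sound configuration.

    A sketch of the per-scene control described in the text; the keys
    and setting names are illustrative, not taken from the patent.
    """
    controls = {
        # Conversation scene: directional beam toward the front talker.
        "conversation": {"directivity": "beam_front", "tv_stream": False},
        # TV scene: beam toward the front (the TV), or feed the
        # wirelessly received TV audio directly.
        "tv": {"directivity": "beam_front", "tv_stream": True},
        # While-viewing scene: wide directivity, with the option of TV
        # audio to one ear and wide-directivity sound to the other.
        "while_viewing": {"directivity": "wide", "tv_stream": True},
    }
    # Fall back to wide directivity when no scene was recognized.
    return controls.get(scene, {"directivity": "wide", "tv_stream": False})

print(select_output_control("tv"))
```

A table like this keeps the scene discrimination logic separate from the output control, which matches the block structure of FIG. 2.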
- FIG. 3 shows a usage example of the hearing aid 100.
- FIG. 3 is a diagram showing the positional relationship between a hearing aid user wearing the hearing aid on both ears, a TV, and conversation partners.
- In FIG. 3(a), the TV is on, but the hearing aid user is not particularly watching it and is talking with family.
- This scene is called the "conversation scene".
- TV sound is coming from the TV speaker to the right of the hearing aid user, and the hearing aid user is talking with people in front and diagonally to the front left.
- In this "conversation scene", the TV sound interferes with the conversation and makes it hard to converse, so it is desirable to suppress the TV sound and direct the directivity forward.
- In FIG. 3(b), the positions of the people and the TV are the same as in FIG. 3(a), but the hearing aid user is watching the TV while the family talks among themselves to the user's left.
- This scene is called the "TV scene".
- In the "TV scene", the family conversation gets in the way and makes the TV sound hard to hear as it is, so the hearing aid user would have to manually switch the hearing aid to output the TV sound directly.
- In the "TV scene", it is desirable to perform this switching automatically, or to direct the directivity toward the front, where the TV is.
- In FIG. 3(c), the positions of the people and the TV are the same as in FIGS. 3(a) and 3(b), but the hearing aid user is talking with family about the TV program while watching it.
- This scene is called the "while-viewing scene".
- In this "while-viewing scene", it is necessary to hear not just one of the TV sound and the conversation, but both.
- Conversations about TV content are often held when the TV sound pauses, so by listening with omnidirectional or wide directivity, the user can hear both the TV sound and the voices of the conversation.
- FIG. 4 is a flowchart showing the processing flow of the hearing aid 100. This flow is executed by the CPU 106 at every predetermined timing.
- The sound collected by the microphone array 102 is converted into digital signals by the A/D conversion unit 110 and output to the CPU 106.
- In step S1, the sound source direction estimation unit 120 estimates and outputs the sound source direction by signal processing of the A/D-converted sound signals, using the differences in the arrival time of the sound at each microphone.
- Specifically, the sound source direction estimation unit 120 first obtains the sound source direction every 512 points, with a resolution of 22.5°, for sound signals sampled at a sampling frequency of 48 kHz.
- The sound source direction estimation unit 120 then outputs the direction that appears most frequently within a one-second frame as the estimated direction of that frame.
- In this way, the sound source direction estimation unit 120 obtains a sound source direction estimation result every second.
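The per-frame decision above, taking the most frequent direction among the 512-sample block estimates within one second, can be sketched as follows. The block-level estimates are assumed as input, since the arrival-time-difference processing itself is not detailed in the text.

```python
from collections import Counter

def frame_direction(block_directions):
    """Return the most frequent 22.5-degree direction among the
    block-level estimates obtained within a one-second frame."""
    direction, _count = Counter(block_directions).most_common(1)[0]
    return direction

# At 48 kHz, one second yields about 48000 / 512 = 93 block estimates.
blocks = [0.0] * 50 + [22.5] * 30 + [-45.0] * 13
print(frame_direction(blocks))  # prints 0.0
```

Taking the mode over a one-second frame suppresses spurious block-level estimates caused by reflections or overlapping sources.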
- FIG. 5 shows the results output by the sound source direction estimation unit 120 at this time.
- FIG. 5 is a diagram showing the results of the sound source direction estimation experiment, where the horizontal axis represents time (seconds) and the vertical axis represents the direction.
- Directions are output in steps of 22.5° from -180° to +180°, with the front of the hearing aid user at 0°, the left direction negative, and the right direction positive.
- The sound source direction estimation results include estimation errors, because the sound output from the speaker of the TV in front of the hearing aid user mixes with the voice of the conversation partner on the user's left. For this reason, this information alone does not reveal what kind of sound source lies in which direction.
- In step S2, the self-speech detection unit 130 determines from the A/D-converted sound signal whether the sound signal in frame t is a self-speech section, and outputs the result.
- As a known technique for detecting self-speech, there is, for example, a method of detecting the wearer's own utterance by detecting the speech vibration transmitted by bone conduction, as disclosed in Patent Document 3. Using such a method, the self-speech detection unit 130 takes, for each frame, a section in which the vibration component is equal to or greater than a predetermined threshold as a self-speech section.
- In step S3, the TV sound detection unit 140 uses the A/D-converted sound signal and the external TV sound signal received by the transmission/reception unit 107 (FIG. 1) to judge and output whether only TV sound is present in the surrounding sound environment in frame t.
- As described above, the TV sound detection unit 140 includes the microphone input short-time power calculation unit 141, the TV sound short-time power calculation unit 142, and the TV single section detection unit 143.
- The microphone input short-time power calculation unit 141 calculates the short-time power of the sound signal collected by the microphone array 102.
- The TV sound short-time power calculation unit 142 calculates the short-time power of the received TV sound.
- The TV single section detection unit 143 compares these two outputs and detects sections in which the difference is within a certain range as TV-alone sections.
- The TV sound detection method will now be described.
- The sound output from the TV speaker is not identical to the original TV sound, because a delay occurs and reflected sound mixes in while the sound travels through the room to the hearing aid microphones. The TV sound transmitted by radio also has a delay, so if the correlation between the sound collected by the microphone and the original TV sound were computed, the unknown delay would have to be taken into account and the amount of computation would increase.
- Therefore, the sound collected by the microphone is compared with the original TV sound using short-time power over a section of about one second, for which the delay can be ignored.
- The microphone input short-time power calculation unit 141 calculates the power Pm(t) of the one-second section of frame t from the sound signal of at least one omnidirectional microphone in the microphone array 102 by equation (1).
- Here, xi represents the sound signal and N represents the number of samples per second; since the sampling frequency is 48 kHz, N = 48000.
- The TV sound short-time power calculation unit 142 similarly calculates the power Pt(t) of the one-second section from the external TV sound signal received by the transmission/reception unit 107 by equation (2), where yi represents the TV sound signal.
- FIG. 6 is a diagram showing the results of the TV sound detection experiment, where the horizontal axis represents time (seconds) and the vertical axis represents the power level difference (dB).
- FIG. 6 shows the per-second power level difference Ld between the sound collected by the hearing aid microphone array 102 and the TV sound.
- The shaded areas surrounded by squares in FIG. 6 show the sections labeled by a human listener as TV-alone sections.
- The power level difference Ld(t) fluctuates in sections containing non-stationary sound other than the TV sound, that is, in sections where the conversation partner's voice or the user's own voice is heard.
- In the TV-alone sections, by contrast, the power level difference stays near -20 dB. It follows that sections in which only the TV sound is heard can be identified using the per-second power level difference as a feature. Therefore, the TV sound detection unit 140 detects sections in which the power level difference Ld(t) is within -20 ± θ dB as TV-alone sections.
- Although TV sound includes human voices, a feature that merely indicates voice-likeness (as opposed to noise or music) cannot distinguish TV voices from the voices of people actually present.
- With the above method, TV-alone sections can be detected with a small amount of computation, without depending on the distance from the TV or the room environment.
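A sketch of this detection follows. Equations (1) and (2) are not reproduced in this text, so the mean-square power in dB used below is an assumption consistent with the level difference of about -20 dB described above, and the tolerance θ is a placeholder value.

```python
import math

N = 48000  # samples per second at the 48 kHz sampling frequency

def short_time_power_db(samples):
    """One-second short-time power in dB (assumed form of
    equations (1) and (2))."""
    mean_square = sum(s * s for s in samples) / len(samples)
    return 10.0 * math.log10(mean_square + 1e-12)

def is_tv_single_section(mic_samples, tv_samples, theta_db=3.0):
    """Detect a TV-alone section: the level difference Ld between the
    microphone input and the received TV sound stays near -20 dB.
    theta_db (the tolerance) is an illustrative value."""
    ld = short_time_power_db(mic_samples) - short_time_power_db(tv_samples)
    return abs(ld + 20.0) <= theta_db

# TV sound reaching the microphone attenuated by about -20 dB.
tv = [math.sin(0.01 * i) for i in range(N)]
mic = [0.1 * s for s in tv]
print(is_tv_single_section(mic, tv))  # prints True
```

Because only per-second powers are compared, the unknown propagation delay never enters the computation, which is the point of the method.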
- In step S4, the other-speaker detection unit 150 excludes, from the per-direction output results of the sound source direction estimation unit 120, the self-speech sections detected by the self-speech detection unit 130 and the TV-alone sections detected by the TV single section detection unit 143. From the remaining sections, the other-speaker detection unit 150 outputs as other-speaker utterance sections those in which the voice-band power of at least one omnidirectional microphone is equal to or greater than a predetermined threshold. Restricting other-speaker sections to those with high voice-band power removes noises other than human voices.
- Here, voice-likeness is detected on the basis of voice-band power, but other methods may be used.
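Per frame, this exclusion-and-threshold logic can be sketched as follows. The frame labels and voice-band power values are assumed to come from the preceding detection steps, and the threshold value is illustrative.

```python
def detect_other_speech(frames, power_threshold=40.0):
    """Mark each frame as an other-speaker utterance when it is neither
    a self-speech section nor a TV-alone section and its voice-band
    power is at or above the threshold (removing non-voice noise)."""
    return [
        (not f["self_speech"]
         and not f["tv_single"]
         and f["voice_band_power"] >= power_threshold)
        for f in frames
    ]

frames = [
    {"self_speech": True,  "tv_single": False, "voice_band_power": 55.0},
    {"self_speech": False, "tv_single": True,  "voice_band_power": 50.0},
    {"self_speech": False, "tv_single": False, "voice_band_power": 60.0},
    {"self_speech": False, "tv_single": False, "voice_band_power": 10.0},
]
print(detect_other_speech(frames))  # prints [False, False, True, False]
```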
- FIG. 7 is a graph plotting the discrimination results for self-speech, TV-alone sound, and other-speaker utterances against the per-frame sound source direction estimation results shown in FIG. 5.
- It can be seen that self-speech is detected mainly around 0°, and that TV sound is often detected around 22.5° to the right of the hearing aid user.
- This is a recording of a hearing aid user viewing, at a distance of 1 to 2 meters, a 42-inch TV with stereo speakers on its left and right sides. The experiment simulates an actual home environment.
- The sound source direction estimation result is also detected in the 0° direction.
- In step S5, the sound source frequency calculation unit 160 calculates and outputs the long-time frequency for each sound source using the output results of the self-speech detection unit 130, the TV single section detection unit 143, and the other-speaker detection unit 150.
- FIGS. 8 to 10 show, for each of the scenes of FIGS. 3(a), (b), and (c), the appearance frequency over 10 minutes for each sound source, obtained by performing self-speech detection, TV-alone section detection, and other-speaker detection on the ambient sound picked up by a hearing aid microphone array actually worn on both ears together with the TV source sound recorded at the same time.
- FIG. 8 is a frequency graph for each sound source in the "conversation scene".
- FIG. 9 is a frequency graph for each sound source in the "TV scene".
- FIG. 10 is a frequency graph for each sound source in the "while-viewing scene".
- In the "conversation scene", since the hearing aid user participates in the conversation, much self-speech is detected in the front direction. In the "TV scene", since the hearing aid user faces the TV to view the screen, the TV sound is detected in directions near the front, while other-speaker utterances are detected in directions away from the front. In the "while-viewing scene", the user and the others tend to spend some time silently watching TV together and to talk about the content when the TV sound pauses; for this reason, the TV-alone time is longer.
- FIG. 11 summarizes these features.
- FIG. 11 is a diagram showing a table of the features of each scene.
- By using the characteristics shown in the table of FIG. 11, the scene can be discriminated from the sound environment.
- The shaded portions of the table show parameters that are particularly characteristic of each scene.
- For example, the frequency over the past 10 minutes, counted back from frame t, is obtained.
- A shorter section may be used instead, so as to follow actual changes in the situation more quickly.
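The per-source frequency over such a sliding window can be sketched as follows. This is a minimal illustration, not the patent's implementation: the class and label names are our own, and the 600-frame default simply corresponds to a 10-minute window of one-second frames.

```python
from collections import deque

class SourceFrequencyCounter:
    """Tracks how often each sound source was detected in the most recent
    `window` one-second frames (600 frames ~ the 10-minute window above)."""

    def __init__(self, window=600):
        self.frames = deque(maxlen=window)  # one set of detected labels per frame

    def push(self, labels):
        """Record the sources detected in the current frame, e.g. {'self'}."""
        self.frames.append(frozenset(labels))

    def frequency(self, label):
        """Fraction of windowed frames in which `label` was detected."""
        if not self.frames:
            return 0.0
        return sum(label in f for f in self.frames) / len(self.frames)

counter = SourceFrequencyCounter(window=600)
for _ in range(300):
    counter.push({'tv_only'})
for _ in range(300):
    counter.push({'self', 'other'})
```

Because the deque discards the oldest frame automatically, choosing a shorter window simply makes the frequencies adapt faster, matching the remark above.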
- In step S6, the scene discrimination unit 170 discriminates the scene using the per-source frequency information and the direction information of each sound source.
- Whether the TV is powered on can be determined simply by whether TV sound is being received. However, the scene discrimination unit 170 must automatically determine whether the hearing aid user is watching TV, is conversing without watching TV, or is conversing with family while watching TV.
- Scene discrimination is performed, for example, by scoring with the following point-addition method.
- FIG. 12 is a diagram showing an example of scene discrimination by a point addition method.
- Fs is the frequency of self-speech detected in the 0° direction within a fixed past period from frame t.
- Dt is the TV direction, i.e., the direction in which the frequency of TV-only sound is highest, and Ft is the frequency in that direction.
- Likewise, the direction in which the frequency of other-speaker speech is highest is defined as the other-speaker direction Dp, and Fp is the frequency in that direction.
- the frequency determination threshold is ⁇ .
- The “conversation scene” score, the “TV scene” score, and the “while-watching scene” score are computed, and the scene whose score is highest and equal to or greater than the predetermined threshold λ is output as the determination result.
- If every score is less than λ, it is output that the sound environment corresponds to no scene.
- Scoring is designed so that parameters that clearly characterize a scene contribute large scores.
- Points are not deducted for erroneous detections, so that a scene can still be detected even when not all feature quantities are detected correctly.
- The scene discrimination unit 170 outputs “conversation scene” when the highest score is the “conversation scene” score of 20 and that score is equal to or greater than the predetermined threshold λ.
- The scene discrimination unit 170 outputs “TV scene” when the highest score is the “TV scene” score of 25 and that score is equal to or greater than the predetermined threshold λ.
- The scene discrimination unit 170 outputs “while-watching scene” when the highest score is the “while-watching scene” score of 25 and that score is equal to or greater than the predetermined threshold λ.
- Scoring for scene discrimination is not limited to the point-addition method described above.
- The threshold may be changed for each feature amount, or thresholds may be applied in several stages.
- Instead of adding points to a score based on thresholds, the scene discrimination unit 170 may design and evaluate a function of the frequencies, or may make the determination based on rules.
- FIG. 13 shows an example of a determination method based on rules.
- FIG. 13 is a diagram illustrating an example of scene discrimination based on rules.
- In step S7, the output sound control unit 180 controls the output sound according to the scene determined by the scene discrimination unit 170.
- In the “TV viewing scene”, the output of the hearing aid speaker is switched to the externally received TV sound.
- In the “conversation scene”, directivity control may be performed toward the front.
- In the “while-watching scene”, control is performed so that the directivity is wide.
- In addition, the output sound control unit 180 performs hearing aid processing, such as amplifying the sound pressure in frequency bands that are hard to hear according to the degree of the hearing aid user's hearing loss, and outputs the result from the speaker.
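As a rough illustration of the band-wise amplification mentioned here (not the patent's actual hearing aid processing; the band edges and the 20 dB gain below are made-up example values), sound pressure in hard-to-hear bands can be raised in the frequency domain:

```python
import numpy as np

def apply_band_gains(signal, fs, band_gains_db):
    """Amplify given frequency bands of `signal` (sampled at `fs` Hz).
    `band_gains_db` is a list of ((f_lo_hz, f_hi_hz), gain_db) pairs."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    for (f_lo, f_hi), gain_db in band_gains_db:
        mask = (freqs >= f_lo) & (freqs < f_hi)
        spectrum[mask] *= 10.0 ** (gain_db / 20.0)  # dB -> linear amplitude
    return np.fft.irfft(spectrum, n=len(signal))

# Example: boost 2-6 kHz by 20 dB for a user with high-frequency loss.
fs = 16000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 4000 * t)          # 4 kHz tone, amplitude 1
boosted = apply_band_gains(tone, fs, [((2000, 6000), 20.0)])
```

A real device would apply such gains on short overlapping blocks rather than on a whole recording, but the per-band dB-to-linear scaling is the same idea.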
- As described above, the hearing aid 100 of the present embodiment includes an A/D conversion unit 110 that converts the sound signals input from the microphone array 102 into digital signals, and a sound source direction estimation unit 120 that detects the sound source direction from the sound signals.
- The hearing aid 100 also includes an other-speaker detection unit 150 that detects utterances of speakers other than the wearer using the detected sound source direction information, the self-speech detection result, and the TV sound detection result,
- and a per-source frequency calculation unit 160 that calculates the frequency of each sound source using these detection results and the sound source direction information.
- The scene discrimination unit 170 discriminates the “conversation scene”, the “TV viewing scene”, and the “while-watching TV scene” using the sound source direction information and the per-source frequencies.
- The output sound control unit 180 controls the hearing of the hearing aid 100 according to the determined scene.
- In the “conversation scene”, the surrounding TV sound is suppressed and the directivity is narrowed toward the front, making it easy to talk with the person in front.
- When the hearing aid user is concentrating on the TV, the output of the hearing aid is automatically switched to the TV sound, so the TV sound can be heard easily without any troublesome operation.
- In the “while-watching scene”, the directivity becomes wide, so that when everyone is silent the TV sound can be heard, and when someone speaks both sounds can be heard without suppression.
- In this way, the present embodiment can discriminate the scene appropriately by using not only the direction of each sound source but also its type (TV sound, self-speech, or another person's voice), frequency information, and time information.
- Furthermore, by discriminating the “while-watching TV scene”, the present embodiment can handle the case where the user wants to hear both the TV sound and the conversation.
- the present invention can also be applied to a hearing aid that controls the volume of a TV.
- FIG. 14 is a diagram showing the configuration of a hearing aid that controls the volume of the TV.
- the same components as those in FIG. 2 are denoted by the same reference numerals.
- A hearing aid 100A that controls the volume of a TV includes a microphone array 102, an A/D conversion unit 110, a sound source direction estimation unit 120, a self-speech detection unit 130, a TV sound detection unit 140, and an other-speaker detection unit 150.
- the output sound control unit 180A generates a TV sound control signal for controlling the volume of the TV based on the scene determination result determined by the scene determination unit 170.
- the transmission / reception unit 107 transmits the TV sound control signal generated by the output sound control unit 180A to the TV.
- The TV sound control signal is preferably transmitted by wireless communication such as Bluetooth, but may also be transmitted by infrared.
- With this configuration, the TV can output its sound at a volume that matches the scene determined by the hearing aid 100A.
- the present invention can also be applied to devices other than TV.
- Devices other than a TV include, for example, a radio, an audio system, and a personal computer.
- In that case, the present invention receives the sound information transmitted from the device and determines whether the user is listening to the sound emitted from the device, is conversing, or is in a scene of listening while conversing. Furthermore, the present invention may control the output sound according to the determined scene.
- the present invention can also be realized as application software for a mobile device.
- For example, the present invention can discriminate the scene from the sound input to a microphone array mounted on a smartphone and the sound information transmitted from a TV, and control the output sound according to the scene so that the user can hear it.
- In the above description, the names “hearing aid” and “signal processing method” are used.
- However, the device may also be called a hearing aid device or an audio signal processing device, and the method a scene discrimination method or the like.
- The signal processing method described above can also be realized by a program for causing a computer to execute the signal processing method.
- This program is stored in a computer-readable recording medium.
- the hearing aid and the signal processing method according to the present invention are useful for a hearing aid that makes it easier for a hearing aid user to hear a desired sound.
- The present invention is also useful as application software for portable devices such as smartphones.
Description
FIG. 1 is a diagram showing the configuration of a hearing aid according to an embodiment of the present invention. This embodiment is an example in which the invention is applied to a remote-control-type hearing aid in which the hearing aid body and the earphones are separated (hereinafter abbreviated simply as “hearing aid”).
In step S1, the sound source direction estimation unit 120 estimates and outputs the sound source direction from the A/D-converted sound signals by signal processing that uses the differences in arrival time of the sound at each microphone. First, for sound signals sampled at a sampling frequency of 48 kHz, the sound source direction estimation unit 120 obtains the direction of the sound source with a resolution of 22.5° every 512 points. Next, the sound source direction estimation unit 120 outputs the direction that appears most frequently within a one-second frame as the estimated direction for that frame. The sound source direction estimation unit 120 can thus obtain a sound source direction estimation result every second.
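The per-frame majority vote of step S1 can be sketched as follows. The 22.5° grid follows the text; the raw per-block direction estimator itself (e.g. a cross-correlation search over arrival-time differences) is assumed to run upstream and is not modeled here.

```python
from collections import Counter

RESOLUTION_DEG = 22.5            # direction grid from the text
BLOCKS_PER_FRAME = 48000 // 512  # 93 full 512-sample blocks per 1-s frame at 48 kHz

def quantize_direction(angle_deg):
    """Snap a raw angle estimate to the 22.5-degree grid."""
    return round(angle_deg / RESOLUTION_DEG) * RESOLUTION_DEG

def frame_direction(block_estimates_deg):
    """Estimated direction of a one-second frame: the most frequent
    quantized direction among its per-block estimates."""
    votes = Counter(quantize_direction(a) for a in block_estimates_deg)
    return votes.most_common(1)[0][0]
```

For example, a frame whose block estimates cluster near 0° with a minority near 22.5° is reported as 0° for that second.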
In step S2, the self-speech detection unit 130 determines from the A/D-converted sound signal whether the sound signal in frame t is a self-speech section, and outputs the result. As a known technique for self-speech detection, there is, for example, a method of detecting self-speech from voice vibrations transmitted by bone conduction, as in Patent Literature 3. Using such a method, the self-speech detection unit 130 regards a section in which the vibration component is at or above a predetermined threshold in each frame as a self-speech section.
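A minimal sketch of this thresholding follows; the frame length and threshold are arbitrary example values, and the bone-conduction vibration pickup of Patent Literature 3 is not modeled, only the per-frame power comparison.

```python
import numpy as np

def short_time_power(x, frame_len):
    """Mean-square power of non-overlapping frames of `x`."""
    n = len(x) // frame_len
    frames = np.reshape(np.asarray(x[:n * frame_len], dtype=float), (n, frame_len))
    return np.mean(frames ** 2, axis=1)

def self_speech_frames(vibration_signal, frame_len, threshold):
    """Boolean flag per frame: True where the vibration-component power
    is at or above the threshold, i.e. a self-speech section."""
    return short_time_power(vibration_signal, frame_len) >= threshold
```

Vibration sensed at the wearer's own head is strong only while the wearer speaks, which is why a simple power threshold suffices in this sketch.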
In step S3, the TV sound detection unit 140 uses the A/D-converted sound signal and the external TV sound signal received by the transmission/reception unit 107 (FIG. 1) to determine and output whether the surrounding sound environment in frame t is a state in which only the TV sound is heard.
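One concrete version of this check, following the power-comparison variant of claim 3, can be sketched as below. The 6 dB tolerance is an assumed value, and a real device would also have to calibrate the fixed level offset between the TV source signal and the microphones, which this sketch ignores.

```python
import numpy as np

def tv_only_frames(mic_power_db, tv_power_db, max_gap_db=6.0):
    """Flag frames as 'TV only' when the microphone short-time power and the
    received TV-source short-time power differ by at most `max_gap_db`,
    i.e. the ambient sound is explained by the TV alone."""
    gap = np.abs(np.asarray(mic_power_db, float) - np.asarray(tv_power_db, float))
    return gap <= max_gap_db
```

When someone talks over the TV, the microphone power rises above what the TV source predicts, the gap exceeds the tolerance, and the frame is not marked as TV-only.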
Next, the results of a TV sound detection experiment will be described for the scene of FIG. 3(c), in which the hearing aid user watches TV while conversing with a person beside him or her. Specifically, in the scene of FIG. 3(c), the ambient sound was picked up by the hearing aid microphone arrays 102 actually worn on both ears, the TV source sound was recorded at the same time, and the TV sound detection experiment was performed.
In step S4, the other-speaker detection unit 150 removes, from the per-direction outputs of the sound source direction estimation unit 120, the self-speech sections detected by the self-speech detection unit 130 and the sections detected by the TV-only section detection unit 143. From the remaining sections, the other-speaker detection unit 150 then outputs, as other-speaker sections, the sections in which the voice-band power of at least one omnidirectional microphone is at or above a predetermined threshold. By limiting other-speaker sections to those where the voice-band power is large, noise other than human voices can be removed. Although voice-likeness is detected here by voice-band power, other methods may be used.
In step S5, the per-source frequency calculation unit 160 uses the output results of the self-speech detection unit 130, the TV-only section detection unit 143, and the other-speaker detection unit 150 to calculate and output a long-term frequency for each sound source.
In the “conversation scene”, since the hearing aid user participates in the conversation, much self-speech is detected in the front direction; and since the user speaks while looking at the conversation partner, the partner's voice is also detected near the front. However, because self-speech is also detected in the front direction, the partner's voice is detected relatively less often. Moreover, since the conversation proceeds independently of the TV content, the participants do not fall silent to watch TV, so the TV-only sections are characteristically short.
In step S6, the scene discrimination unit 170 discriminates the scene using the per-source frequency information and the direction information of each sound source.
If a per-source frequency distribution like that of FIG. 8 is obtained, the score of each scene is as follows.
“Conversation scene” score = 10 + 5 + 5 = 20
“TV scene” score = 0
“While-watching scene” score = 0
Therefore, the scene discrimination unit 170 outputs “conversation scene”, because the highest score, the “conversation scene” score of 20, is equal to or greater than the predetermined threshold λ.
“Conversation scene” score = 0
“TV scene” score = 10 + 5 + 5 + 5 = 25
“While-watching scene” score = 5 + 5 = 10
Therefore, the scene discrimination unit 170 outputs “TV scene”, because the highest score, the “TV scene” score of 25, is equal to or greater than the predetermined threshold λ.
“Conversation scene” score = 10
“TV scene” score = 5 + 5 = 10
“While-watching scene” score = 10 + 5 + 5 + 5 = 25
Therefore, the scene discrimination unit 170 outputs “while-watching scene”, because the highest score, the “while-watching scene” score of 25, is equal to or greater than the predetermined threshold λ.
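The point-addition discrimination above can be sketched as follows. The feature names, thresholds, and point values in `RULES` are illustrative stand-ins for the scoring table of FIG. 12, which is not reproduced in the text; only the add-points-then-threshold mechanism is taken from the description.

```python
RULES = {  # scene -> list of (feature, min_frequency, points); values are made up
    'conversation':   [('self_front', 0.30, 10), ('other_speech', 0.10, 5), ('tv_only_short', 0.50, 5)],
    'tv':             [('tv_front', 0.30, 10), ('tv_only_long', 0.30, 5), ('self_low', 0.50, 5), ('other_low', 0.50, 5)],
    'while_watching': [('tv_front', 0.20, 10), ('self_front', 0.10, 5), ('other_speech', 0.10, 5), ('tv_only_long', 0.30, 5)],
}

def scene_scores(freqs, rules=RULES):
    """Add the listed points for every feature whose frequency clears its threshold."""
    return {scene: sum(pts for feat, thr, pts in feats if freqs.get(feat, 0.0) >= thr)
            for scene, feats in rules.items()}

def discriminate(scores, lam):
    """Return the top-scoring scene if its score reaches the threshold lambda, else None."""
    scene = max(scores, key=scores.get)
    return scene if scores[scene] >= lam else None
```

Note that a frame pattern matching several rules can still only win one scene, and that no points are ever subtracted, mirroring the no-deduction policy described above.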
In step S7, the output sound control unit 180 controls the output sound according to the scene determined by the scene discrimination unit 170.
101 hearing aid housing
102 microphone array
103 speaker
104 ear tip
105 remote control device
106 CPU
107 transmission/reception unit
108 audio transmitter
109 TV
110 A/D conversion unit
120 sound source direction estimation unit
130 self-speech detection unit
140 TV sound detection unit
141 microphone input short-time power calculation unit
142 TV sound short-time power calculation unit
143 TV-only section detection unit
150 other-speaker detection unit
160 per-source frequency calculation unit
170 scene discrimination unit
180, 180A output sound control unit
Claims (13)
- A hearing aid worn on both ears, each side provided with a microphone array, comprising: a sound source direction estimation unit that detects a sound source direction from the sound signals input from the microphone arrays; a self-speech detection unit that detects the voice of the hearing aid wearer from the sound signals; a TV sound detection unit that detects TV sound from the sound signals; an other-speaker detection unit that detects utterances of speakers other than the wearer based on the detected sound source direction information, the self-speech detection result, and the TV sound detection result; a per-source frequency calculation unit that calculates a frequency for each sound source based on the self-speech detection result, the TV sound detection result, the other-speaker detection result, and the sound source direction information; a scene discrimination unit that discriminates a scene using the sound source direction information and the per-source frequencies; and an output sound control unit that controls the hearing of the hearing aid according to the determined scene.
- The hearing aid according to claim 1, wherein the TV sound detection unit comprises: a TV sound reception unit that receives TV sound information transmitted from the TV; and a TV-only section detection unit that detects TV-only sections based on the received TV sound and the sound signals.
- The hearing aid according to claim 1, wherein the TV sound detection unit comprises: a TV sound reception unit that receives TV sound information transmitted from the TV; a TV sound short-time power calculation unit that calculates the short-time power of the received TV sound; a microphone input short-time power calculation unit that calculates the short-time power of the sound signals; and a TV-only section detection unit that compares the TV sound short-time power with the microphone input short-time power and detects, as TV-only sections, sections in which the difference falls within a predetermined range.
- The hearing aid according to claim 1, wherein the scene discrimination unit classifies each scene as a “conversation scene” in which the wearer is conversing, a “TV viewing scene” in which the wearer is watching TV, or a “while-watching TV scene” in which the wearer converses and watches TV at the same time.
- The hearing aid according to claim 1, wherein the output sound control unit performs directivity control.
- The hearing aid according to claim 4, wherein the output sound control unit directs a directional beam toward the front in the “conversation scene”.
- The hearing aid according to claim 4, wherein the output sound control unit directs a directional beam toward the front in the “TV viewing scene”.
- The hearing aid according to claim 4, wherein the output sound control unit outputs the TV sound received by the TV sound reception unit in the “TV viewing scene”.
- The hearing aid according to claim 4, wherein the output sound control unit uses wide directivity in the “while-watching TV scene”.
- The hearing aid according to claim 4, wherein, in the “while-watching TV scene”, the output sound control unit outputs the TV sound received by the TV sound reception unit to one ear and outputs wide-directivity sound to the other ear.
- The hearing aid according to claim 4, further comprising a transmission/reception unit, wherein the output sound control unit generates a TV sound control signal for controlling the TV sound based on the classification result of the scene discrimination unit, and the transmission/reception unit outputs the TV sound control signal.
- A signal processing method for a hearing aid worn on both ears, each side provided with a microphone array, the method comprising the steps of: detecting a sound source direction from the sound signals input from the microphone arrays; detecting the voice of the hearing aid wearer from the sound signals; detecting TV sound from the sound signals; detecting utterances of speakers other than the wearer based on the detected sound source direction information, the self-speech detection result, and the TV sound detection result; calculating a frequency for each sound source using the self-speech detection result, the TV sound detection result, the other-speaker detection result, and the sound source direction information; discriminating a scene based on the sound source direction information and the per-source frequencies; and controlling the hearing of the hearing aid according to the determined scene.
- A program for causing a computer to execute each step of the signal processing method for a hearing aid according to claim 12.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201180002942.8A CN102474697B (zh) | 2010-06-18 | 2011-06-16 | Hearing aid and signal processing method |
US13/388,494 US9124984B2 (en) | 2010-06-18 | 2011-06-16 | Hearing aid, signal processing method, and program |
JP2011535803A JP5740572B2 (ja) | 2010-06-18 | 2011-06-16 | Hearing aid, signal processing method, and program |
EP11795414.9A EP2536170B1 (en) | 2010-06-18 | 2011-06-16 | Hearing aid, signal processing method and program |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2010139726 | 2010-06-18 | ||
JP2010-139726 | 2010-06-18 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2011158506A1 true WO2011158506A1 (ja) | 2011-12-22 |
Family
ID=45347921
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2011/003426 WO2011158506A1 (ja) | 2010-06-18 | 2011-06-16 | 補聴器、信号処理方法及びプログラム |
Country Status (5)
Country | Link |
---|---|
US (1) | US9124984B2 (ja) |
EP (1) | EP2536170B1 (ja) |
JP (1) | JP5740572B2 (ja) |
CN (1) | CN102474697B (ja) |
WO (1) | WO2011158506A1 (ja) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- JP2017037526A (ja) * | 2015-08-11 | 2017-02-16 | Kyocera Corp | Wearable device and output system |
Families Citing this family (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9219964B2 (en) | 2009-04-01 | 2015-12-22 | Starkey Laboratories, Inc. | Hearing assistance system with own voice detection |
US8477973B2 (en) | 2009-04-01 | 2013-07-02 | Starkey Laboratories, Inc. | Hearing assistance system with own voice detection |
US20110288860A1 (en) * | 2010-05-20 | 2011-11-24 | Qualcomm Incorporated | Systems, methods, apparatus, and computer-readable media for processing of speech signals using head-mounted microphone pair |
US9247356B2 (en) * | 2013-08-02 | 2016-01-26 | Starkey Laboratories, Inc. | Music player watch with hearing aid remote control |
- CN103686574A (zh) * | 2013-12-12 | 2014-03-26 | 苏州市峰之火数码科技有限公司 | Stereo electronic hearing aid |
EP3461148B1 (en) * | 2014-08-20 | 2023-03-22 | Starkey Laboratories, Inc. | Hearing assistance system with own voice detection |
WO2016050312A1 (en) * | 2014-10-02 | 2016-04-07 | Sonova Ag | Method of providing hearing assistance between users in an ad hoc network and corresponding system |
US10181328B2 (en) * | 2014-10-21 | 2019-01-15 | Oticon A/S | Hearing system |
US9734845B1 (en) * | 2015-06-26 | 2017-08-15 | Amazon Technologies, Inc. | Mitigating effects of electronic audio sources in expression detection |
DE102015212613B3 (de) * | 2015-07-06 | 2016-12-08 | Sivantos Pte. Ltd. | Verfahren zum Betrieb eines Hörgerätesystems und Hörgerätesystem |
EP3116239B1 (en) * | 2015-07-08 | 2018-10-03 | Oticon A/s | Method for selecting transmission direction in a binaural hearing aid |
US9747814B2 (en) * | 2015-10-20 | 2017-08-29 | International Business Machines Corporation | General purpose device to assist the hard of hearing |
- CN106782625B (zh) * | 2016-11-29 | 2019-07-02 | Beijing Xiaomi Mobile Software Co Ltd | Audio processing method and apparatus |
DK3396978T3 (da) | 2017-04-26 | 2020-06-08 | Sivantos Pte Ltd | Fremgangsmåde til drift af en høreindretning og en høreindretning |
US10349122B2 (en) | 2017-12-11 | 2019-07-09 | Sony Corporation | Accessibility for the hearing-impaired using keyword to establish audio settings |
- JP7163035B2 (ja) * | 2018-02-19 | 2022-10-31 | Toshiba Corp | Acoustic output system, acoustic output method, and program |
DE102018216667B3 (de) * | 2018-09-27 | 2020-01-16 | Sivantos Pte. Ltd. | Verfahren zur Verarbeitung von Mikrofonsignalen in einem Hörsystem sowie Hörsystem |
US11089402B2 (en) * | 2018-10-19 | 2021-08-10 | Bose Corporation | Conversation assistance audio device control |
US10795638B2 (en) | 2018-10-19 | 2020-10-06 | Bose Corporation | Conversation assistance audio device personalization |
US11368776B1 (en) * | 2019-06-01 | 2022-06-21 | Apple Inc. | Audio signal processing for sound compensation |
- CN114007177B (zh) * | 2021-10-25 | 2024-01-26 | 北京亮亮视野科技有限公司 | Hearing aid control method and apparatus, hearing aid device, and storage medium |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- JPS5888996A | 1981-11-20 | 1983-05-27 | Matsushita Electric Ind Co Ltd | Bone conduction microphone |
- JPS62150464A | 1985-12-24 | 1987-07-04 | Fujitsu Ltd | Automatic ticketing system |
- JPH0686399A * | 1992-08-31 | 1994-03-25 | Daiichi Fueezu Kk | Hearing aid |
- JPH09327097A | 1996-06-07 | 1997-12-16 | Nec Corp | Hearing aid |
- JP2007028610A * | 2005-07-11 | 2007-02-01 | Siemens Audiologische Technik Gmbh | Hearing device and method of operating the same |
- JP2007515830A * | 2003-09-19 | 2007-06-14 | Widex A/S | Method for controlling the directivity of the sound receiving characteristic of a hearing aid, and signal processing apparatus for a hearing aid with a controllable directional characteristic |
- WO2009001559A1 * | 2007-06-28 | 2008-12-31 | Panasonic Corporation | Environment-adaptive hearing aid |
- JP2009512372A * | 2005-10-17 | 2009-03-19 | Widex A/S | Hearing aid with selectable programs, and method for changing the program in a hearing aid |
- JP2009528802A * | 2006-03-03 | 2009-08-06 | GN Resound A/S | Automatic switching between omnidirectional and directional microphone modes of a hearing aid |
- JP2010139726A | 2008-12-11 | 2010-06-24 | Canon Inc | Optical apparatus |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS6455793U (ja) | 1987-10-02 | 1989-04-06 | ||
- JPH03245699A (ja) | 1990-02-23 | 1991-11-01 | Matsushita Electric Ind Co Ltd | Hearing aid |
US6072884A (en) * | 1997-11-18 | 2000-06-06 | Audiologic Hearing Systems Lp | Feedback cancellation apparatus and methods |
DE50115802D1 (de) * | 2001-01-05 | 2011-04-07 | Phonak Ag | Dafür |
US6910013B2 (en) | 2001-01-05 | 2005-06-21 | Phonak Ag | Method for identifying a momentary acoustic scene, application of said method, and a hearing device |
DE10236167B3 (de) * | 2002-08-07 | 2004-02-12 | Siemens Audiologische Technik Gmbh | Hörhilfegerät mit automatischer Situtaionserkennung |
EP2081405B1 (en) | 2008-01-21 | 2012-05-16 | Bernafon AG | A hearing aid adapted to a specific type of voice in an acoustical environment, a method and use |
- JP4355359B1 (ja) * | 2008-05-27 | 2009-10-28 | Panasonic Corp | Behind-the-ear hearing aid with a microphone placed at the opening of the ear canal |
EP2579620A1 (en) * | 2009-06-24 | 2013-04-10 | Panasonic Corporation | Hearing aid |
2011
- 2011-06-16 CN CN201180002942.8A patent/CN102474697B/zh not_active Expired - Fee Related
- 2011-06-16 WO PCT/JP2011/003426 patent/WO2011158506A1/ja active Application Filing
- 2011-06-16 US US13/388,494 patent/US9124984B2/en not_active Expired - Fee Related
- 2011-06-16 EP EP11795414.9A patent/EP2536170B1/en not_active Not-in-force
- 2011-06-16 JP JP2011535803A patent/JP5740572B2/ja not_active Expired - Fee Related
Non-Patent Citations (1)
Title |
---|
See also references of EP2536170A4 |
Also Published As
Publication number | Publication date |
---|---|
JPWO2011158506A1 (ja) | 2013-08-19 |
EP2536170A1 (en) | 2012-12-19 |
EP2536170B1 (en) | 2014-12-31 |
JP5740572B2 (ja) | 2015-06-24 |
US20120128187A1 (en) | 2012-05-24 |
US9124984B2 (en) | 2015-09-01 |
CN102474697B (zh) | 2015-01-14 |
CN102474697A (zh) | 2012-05-23 |
EP2536170A4 (en) | 2013-03-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
- JP5740572B2 (ja) | Hearing aid, signal processing method, and program | |
US10810989B2 (en) | Method and device for acute sound detection and reproduction | |
US12045542B2 (en) | Earphone software and hardware | |
KR102449230B1 (ko) | 마이크로폰의 기회주의적 사용을 통한 오디오 향상 | |
CN110447073B (zh) | 用于降噪的音频信号处理 | |
- JP5581329B2 (ja) | Conversation detection device, hearing aid, and conversation detection method | |
US8744100B2 (en) | Hearing aid in which signal processing is controlled based on a correlation between multiple input signals | |
US20170345408A1 (en) | Active Noise Reduction Headset Device with Hearing Aid Features | |
- WO2010140358A1 (ja) | Hearing aid, hearing aid system, walking detection method, and hearing aid method | |
- WO2012042768A1 (ja) | Speech processing device and speech processing method | |
- JP2011097268A (ja) | Playback device, headphones, and playback method | |
- JP2017063419A (ja) | Method for determining an objective perceptual quantity of a speech signal subjected to noise | |
EP3777114B1 (en) | Dynamically adjustable sidetone generation | |
KR20150018727A (ko) | 청각 기기의 저전력 운용 방법 및 장치 | |
KR20170058320A (ko) | 오디오 신호 처리 장치 및 방법 | |
JP2010506526A (ja) | 補聴器の動作方法、および補聴器 | |
EP3072314B1 (en) | A method of operating a hearing system for conducting telephone calls and a corresponding hearing system | |
CN115866474A (zh) | 无线耳机的透传降噪控制方法、系统及无线耳机 | |
- JP7350092B2 (ja) | Microphone placement for eyeglass devices, systems, apparatuses, and methods | |
- WO2022254834A1 (ja) | Signal processing device, signal processing method, and program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 201180002942.8 Country of ref document: CN |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2011535803 Country of ref document: JP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 13388494 Country of ref document: US Ref document number: 2011795414 Country of ref document: EP |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 11795414 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |