WO2011001694A1 - Hearing aid adjustment device, method, and program
- Publication number: WO2011001694A1 (international application PCT/JP2010/004359)
- Authority: WO (WIPO / PCT)
- Prior art keywords: phoneme, syllable, hearing, unit, hear
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/24—Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
- A61B5/316—Modalities, i.e. specific diagnostic methods
- A61B5/369—Electroencephalography [EEG]
- A61B5/377—Electroencephalography [EEG] using evoked responses
- A61B5/38—Acoustic or auditory stimuli
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R25/00—Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
- H04R25/70—Adaptation of deaf aid to hearing loss, e.g. initial electronic fitting
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7225—Details of analog processing, e.g. isolation amplifier, gain or sensitivity adjustment, filtering, baseline or drift compensation
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/06—Transformation of speech into a non-audible representation, e.g. speech visualisation or speech processing for tactile aids
- G10L2021/065—Aids for the handicapped in understanding
Definitions
- The present invention relates to a technique for adjusting a hearing aid. More specifically, it relates to a technique for using a user's brain waves to identify phonemes that the user finds difficult to hear while wearing the hearing aid, and for adjusting the hearing aid's correction processing so that the user can hear those phonemes better.
- A hearing aid is a device that compensates for a user's decreased hearing by amplifying sound.
- The amount of amplification each user needs from a hearing aid depends on the degree of hearing loss and varies by frequency band. Therefore, before a hearing aid can be used, fitting, that is, adjusting the amount of amplification for each frequency according to each user's hearing, is essential.
- the fitting is generally performed based on an audiogram for each user.
- An “audiogram” is the result of evaluating hearing with pure tones at several frequencies: for each frequency, the lowest sound pressure level (in decibels) that the user can hear is plotted against that frequency. Audiograms are created at hearing aid stores and hospitals.
- Hearing aid stores and hospitals first create an audiogram for each user. An amplification amount is then determined from the audiogram based on a fitting method, an adjustment procedure for amplifying sound to a comfortably audible sound pressure level, and an initial adjustment is performed.
- Next, single-syllable speech is presented to the user orally or from a CD, and a speech intelligibility evaluation is performed to assess whether the speech is actually heard correctly.
- Through these steps, a hearing aid with characteristics matched to the user's hearing is obtained.
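The audiogram-based initial adjustment described above can be sketched in code. The half-gain rule used here is a classic textbook fitting heuristic, offered only as an illustration; the patent does not prescribe a specific formula, and the audiogram values are invented.

```python
# Hedged sketch: deriving per-frequency amplification from an audiogram.
# The "half-gain rule" (gain = half the hearing loss) is a well-known
# fitting heuristic, not necessarily the method referenced in this patent.

def half_gain_fitting(audiogram_db):
    """audiogram_db maps frequency (Hz) -> hearing threshold level (dB HL).
    Returns a gain (dB) per frequency: half of the hearing loss."""
    return {freq: hl / 2.0 for freq, hl in audiogram_db.items()}

# Example: a typical high-frequency (presbycusis-like) loss pattern.
audiogram = {250: 20, 500: 25, 1000: 30, 2000: 45, 4000: 60}
gains = half_gain_fitting(audiogram)
print(gains[4000])  # 30.0 dB of gain at 4 kHz
```

In a real fitting this initial gain is then refined per user via the speech intelligibility evaluation described above.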
- In practice, the hearing aid user wears the hearing aid in everyday life, and the optimal adjustment is thought to differ for each situation: at home, watching TV, going out, and so on.
- Conventionally, the user would report such situations to a specialist at the hearing aid store, for example that conversation was fine but the TV sounded noisy, or that dialogue with family was still hard to hear, and the specialist would readjust the hearing aid based on those reports.
- An approach to automatically readjust in daily life situations can be considered as a solution to such problems.
- As conventional related techniques for this approach, there are known: a technique for evaluating hearing based on an objective index such as an electroencephalogram rather than on oral reports (Patent Document 1), a technique for adjusting reproduced sound according to changes in external environmental sound (Patent Document 2), and a technique for holding and switching among a plurality of fitting parameters (Patent Document 3).
- In Patent Document 1, the auditory characteristic at each frequency for pure tones is evaluated using an electroencephalogram, specifically the ASSR (Auditory Steady-State Response). This makes evaluation possible without verbal reports, which vary greatly between users.
- Patent Document 2 makes it possible to always reproduce music having the same sound quality with respect to fluctuations in external environmental sounds, and thus can adapt to fluctuations in external environmental sounds to some extent.
- Patent Document 3 holds a plurality of fitting parameters in advance and switches the fitting parameters in accordance with the sound environment of the living place.
- However, Patent Document 1 can evaluate a user's hearing of pure tones but not of conversational speech.
- Patent Document 2 allows adjustment according to external sound to some extent, but not adjustment according to how the user actually heard.
- Patent Document 3 retains a plurality of adjustment parameters, but parameters suitable for every situation are not necessarily prepared.
- For the user, whether the hearing aid needs adjustment comes down to whether the sound currently heard through the hearing aid is easy to hear, regardless of the sound environment. If the phonemes that are particularly difficult to hear can be identified, adjustments can be made that improve the audibility of only those phonemes.
- A problem with adjusting a hearing aid for individual phonemes is that an adjustment effective for one phoneme can adversely affect others. With conventional adjustment methods, adjustment must cover all sounds, which is difficult. An adjustment method that targets the phonemes that are hard to hear without adversely affecting other phonemes is therefore effective.
- The object of the present invention is to realize a hearing aid readjustment device that can automatically identify phonemes requiring adjustment and improve their audibility on the hearing aid side, without oral reports or manual adjustment, for the various sound environments users encounter in daily life.
- The apparatus for adjusting a hearing aid comprises: a sound collection unit that collects ambient sounds and outputs a sound signal; a voice cutout unit that, using information on the phonemes or syllables included in the sound signal, outputs time information specifying the time at which each phoneme or syllable was uttered; an electroencephalogram measurement unit that measures the user's electroencephalogram signal; a hearing determination unit that determines difficulty in hearing each phoneme or syllable based on the event-related potential, obtained from the measured electroencephalogram signal, starting from the time at which that phoneme or syllable was uttered; a phoneme specifying unit that, when the hearing determination unit determines that a plurality of phonemes or syllables are difficult to hear, specifies the phoneme or syllable that appeared earliest in time among them as difficult to hear; and a gain adjustment unit that adjusts the gain for the phoneme specified by the phoneme specifying unit.
- The hearing determination unit may determine whether the phoneme or syllable is difficult to hear based on whether a predetermined characteristic component is included in the event-related potential at 800 ms ± 100 ms from the time at which the phoneme or syllable was uttered.
- the electroencephalogram measurement unit may measure the electroencephalogram signal by using an electrode installed around Pz in the international 10-20 method of the user.
- the hearing determination unit may determine that the phoneme or syllable is difficult to hear when a positive component is included in the event-related potential.
- the electroencephalogram measurement unit may measure the electroencephalogram signal by using an electrode installed around Cz in the international 10-20 method of the user.
- the hearing determination unit may determine that the phoneme or syllable is difficult to hear when a negative component is included in the event-related potential.
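A minimal sketch of the determination described above: average the event-related potential in the 700 ms to 900 ms window (800 ms ± 100 ms) after phoneme onset and look for the characteristic component, with the polarity convention reversed between Pz and Cz. The threshold value, the array layout, and the synthetic epoch are illustrative assumptions.

```python
import numpy as np

# Hedged sketch of the hearing-difficulty decision: a positive interval
# mean around 800 ms flags difficulty at Pz; at Cz the polarity is
# inverted, so a negative mean flags difficulty. Threshold is assumed.

FS = 200  # sampling rate (Hz), as in the patent's experiment
EPOCH_START_MS = -100  # epochs run from -100 ms to 1000 ms around onset

def interval_mean(epoch_uv, start_ms=700, end_ms=900):
    """Mean potential (uV) in [start_ms, end_ms) of an epoch sampled at FS,
    where sample 0 corresponds to EPOCH_START_MS."""
    i0 = int((start_ms - EPOCH_START_MS) * FS / 1000)
    i1 = int((end_ms - EPOCH_START_MS) * FS / 1000)
    return float(np.mean(epoch_uv[i0:i1]))

def is_hard_to_hear(epoch_uv, electrode="Pz", threshold_uv=0.0):
    m = interval_mean(epoch_uv)
    return m > threshold_uv if electrode == "Pz" else m < -threshold_uv

# Synthetic epoch: a positive bump in the 700-900 ms window at Pz.
t_ms = np.arange(EPOCH_START_MS, 1000, 1000 / FS)
epoch = np.where((t_ms >= 700) & (t_ms < 900), 1.0, 0.0)
print(is_hard_to_hear(epoch, "Pz"))  # True
```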
- the gain adjustment unit may select a gain adjustment method according to the phoneme type from any of a plurality of types of gain adjustment methods according to the phoneme type specified by the phoneme specifying unit.
- Another adjusting device comprises: a voice cutout unit that, using information on the phonemes or syllables included in a sound signal of ambient sounds collected by a sound collection unit, outputs time information specifying the time at which each phoneme or syllable was uttered; a hearing determination unit that determines difficulty in hearing each specified phoneme or syllable based on the event-related potential, acquired from an electroencephalogram signal measured by an electroencephalogram measurement unit, starting from the time at which that phoneme or syllable was uttered; and a phoneme specifying unit that, when the hearing determination unit determines that a plurality of phonemes or syllables are difficult to hear, specifies the phoneme or syllable that appeared earliest in time among them as difficult to hear. The device outputs information on the phonemes specified by the phoneme specifying unit.
- the adjusting device may output the information on the phoneme specified by the phoneme specifying unit to a gain adjusting unit that adjusts the gain of the phoneme.
- The hearing aid evaluation apparatus comprises: a sound collection unit that collects ambient sounds and outputs a sound signal; a voice cutout unit that, using information on the phonemes or syllables included in the sound signal, outputs time information specifying the time at which each phoneme or syllable was uttered; an electroencephalogram measurement unit that measures the user's electroencephalogram signal; a hearing determination unit that determines difficulty in hearing each phoneme or syllable based on the event-related potential, obtained from the measured electroencephalogram signal, starting from the time at which that phoneme or syllable was uttered; and a phoneme specifying unit that, when the hearing determination unit determines that a plurality of phonemes or syllables are difficult to hear, specifies the phoneme or syllable that appeared earliest in time among them as difficult to hear and stores the specified results.
- The method for adjusting a hearing aid includes the steps of: collecting ambient sounds and outputting a sound signal; using the phoneme or syllable information included in the sound signal, outputting time information specifying the time at which each phoneme or syllable was uttered; measuring the user's brain wave signal; determining difficulty in hearing each phoneme or syllable based on the event-related potential, obtained from the measured brain wave signal, starting from the time at which that phoneme or syllable was uttered; and, when a plurality of phonemes or syllables are determined to be difficult to hear, specifying the phoneme or syllable that appeared earliest in time among them as difficult to hear.
- The computer program is a computer program for adjusting a hearing aid that, when executed by a computer, causes the computer to perform the steps of: receiving an audio signal of collected ambient sounds; using the phoneme or syllable information included in it, outputting time information identifying the time at which each phoneme or syllable was uttered; receiving the measured brain wave signal of the user; determining difficulty in hearing each phoneme or syllable based on the event-related potential, acquired from the brain wave signal, starting from the time at which that phoneme or syllable was uttered; when a plurality of phonemes or syllables are determined to be difficult to hear, specifying the phoneme or syllable that appeared earliest in time among them as difficult to hear; determining a gain adjustment method for the specified phoneme or syllable according to its type; and adjusting the gain using the determined gain adjustment method.
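Among the steps above, the rule of selecting the earliest of several hard-to-hear phonemes as the adjustment target is simple enough to sketch directly. The data layout and names below are illustrative, not taken from the patent.

```python
# Hedged sketch of the selection rule: when several phonemes or
# syllables in a stretch of speech are judged hard to hear, the one
# that appeared earliest in time becomes the adjustment target.

def pick_adjustment_target(judgements):
    """judgements: list of (onset_ms, phoneme, hard_to_hear_flag).
    Returns the earliest phoneme flagged as hard to hear, or None."""
    flagged = [(t, p) for t, p, hard in judgements if hard]
    if not flagged:
        return None
    return min(flagged)[1]

trials = [(1200, "ka", False), (1850, "ta", True), (2400, "na", True)]
print(pick_adjustment_target(trials))  # "ta" (earlier of the two flagged)
```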
- According to the present invention, the timing at which a user wearing the hearing aid feels difficulty in hearing, and the phoneme concerned, are specified by electroencephalogram analysis, and the user's hearing is estimated. Based on the resulting information, an adjustment suited to the phoneme identified as difficult to hear is performed.
- As a result, the hearing aid's processing can be adjusted on the spot, at the moment the user has difficulty hearing. It is therefore unnecessary, for example, for the user to memorize the situation in which hearing was difficult, visit a hearing aid store, explain it to a specialist, and receive readjustment.
- (a) is a figure showing the procedure of processing performed by the hearing aid alone;
- (b) is a figure showing the outline.
- The configuration of the hearing aid readjustment device involves two technical matters. The first is evaluating ease of hearing (confidence of discrimination) by EEG measurement; the second is specifying, when evaluating ease of hearing by EEG measurement for continuous speech, which phoneme was difficult to hear.
- For the first technical matter, the inventors of the present application independently devised and conducted two experiments to realize speech intelligibility evaluation without requiring answer input from the user. As a result, they discovered an index that enables evaluation of speech sounds, rather than the conventional evaluation of pure tones.
- The second technical matter was inspired by the first experimental result and by the inventors' knowledge of auditory research regarding the case where a plurality of speech sounds occur in succession.
- The second technical matter will be described in detail in the description of the embodiments.
- the inventors of the present application conducted the following behavioral experiment and electroencephalogram measurement experiment in order to realize speech intelligibility evaluation that does not require verbal reporting by the user.
- The inventors of the present application first conducted a behavioral experiment to examine the relationship between the confidence of speech discrimination and the probability of mishearing. Specifically, single-syllable speech was presented as a voice followed by a character (hiragana), the user was asked to confirm whether the voice and character matched, and the confidence of hearing the voice was answered with a button press. As a result, the inventors confirmed that the probability of mishearing was as low as 10% or less when the confidence of discrimination was high, and as high as 40% or more when the confidence was low.
- Next, the inventors conducted an electroencephalogram experiment in which single-syllable speech was presented and the response to the voice presentation was examined. Event-related potentials, one of the signal components of the electroencephalogram, were averaged according to the confidence of discrimination obtained in the behavioral experiment. As a result, they discovered that in the event-related potential starting from the voice stimulus, when the confidence of discrimination was low compared to when it was high, a positive component was evoked at a latency of 700 ms to 900 ms around the center of the head.
- the experiment participants were 6 university / graduate students with normal hearing.
- Fig. 1 shows the outline of the experimental procedure for behavioral experiments.
- The stimulus speech sounds were selected with reference to “Concept of Hearing Aid Fitting” (Kodera, Diagnosis and Treatment Co., 1999, p. 172): pairs were chosen from the Na/Ma rows, the Ra/Ya rows, and the Ka/Ta rows. The participants were instructed to listen to the voice and think of the corresponding hiragana. Because the participants had normal hearing, voices processed under three frequency-gain conditions were presented so that the confidence of discrimination of each voice would vary. (1) 0 dB condition: the frequency gain was not processed, giving an easily audible voice.
- FIG. 2 shows the gain adjustment amount for each frequency in each of the conditions (1) to (3).
- The reason for reducing the high-frequency gain is to reproduce a typical hearing-loss pattern in elderly people. Elderly people with hearing loss generally have difficulty hearing high-frequency sounds. By reducing the high-frequency gain, normal-hearing participants can experience simulated hearing comparable to the hearing difficulty of an elderly hearing-impaired person.
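The frequency-gain manipulation described above can be sketched as follows. The cutoff frequency and the flat attenuation are illustrative assumptions, not the exact gain curves of the experiment's conditions, and the test signal is synthetic.

```python
import numpy as np

# Hedged sketch: attenuate the high-frequency content of a signal to
# simulate presbycusis-like hearing for normal-hearing listeners.

def reduce_high_frequency_gain(signal, fs, cutoff_hz=1000, atten_db=25):
    """Apply a flat attenuation (atten_db) to all FFT bins above cutoff_hz."""
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    gain = np.where(freqs > cutoff_hz, 10 ** (-atten_db / 20.0), 1.0)
    return np.fft.irfft(spec * gain, n=len(signal))

fs = 8000
t = np.arange(fs) / fs
low = np.sin(2 * np.pi * 500 * t)    # below cutoff: left untouched
high = np.sin(2 * np.pi * 3000 * t)  # above cutoff: attenuated by 25 dB
out = reduce_high_frequency_gain(low + high, fs)
```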
- Procedure B is a button press for proceeding to Procedure C, and was added in order to present the text stimulus of Procedure C at the participant's pace in the experiment. This button is also referred to as the “Next” button.
- In Procedure C, a single hiragana character was presented on the display. A character matching the voice presented in Procedure A (matching trial) or a hiragana character not matching the voice (mismatching trial) was shown, each with probability 0.5. The mismatching hiragana was taken from the paired row: Na and Ma, Ra and Ya, or Ka and Ta. For example, when the voice "na" was presented in Procedure A, "na" was presented in Procedure C in a matching trial and "ma" in a mismatching trial.
- Procedure D is a button press (numbers 1 to 5 on the keyboard) with which the participant reports how strongly they felt a match or mismatch between the voice presented in Procedure A and the character presented in Procedure C: 5 for an absolute match, 4 for a probable match, 3 for not sure, 2 for a probable mismatch, and 1 for an absolute mismatch. When 5 or 1 was pressed, the trials divide into correct and incorrect (mishearing) answers at the stage of Procedure C, but it can be said that the participant was confident in their discrimination at the time of listening to the voice in Procedure A. Similarly, when 2 to 4 were pressed, it can be said that the participant was not confident in hearing the voice.
- FIG. 3 is a flowchart showing the procedure for one trial. In this flowchart, for the convenience of explanation, both the operation of the apparatus and the operation of the experiment participant are described.
- Step S11 is a step of presenting single syllable speech to the experiment participants.
- Step S12 is a step in which the participant hears a single syllable voice and thinks of a corresponding hiragana.
- Hiragana is a character (phonetic character) representing pronunciation in Japanese.
- Step S13 is a step in which the participant presses the space key as the next button (procedure B).
- Step S14 is a step in which Hiragana characters that match or do not match the voice are presented on the display with a probability of 50% starting from Step S13 (Procedure C).
- Step S15 is a step of confirming whether the hiragana conceived by the participant in step S12 matches the hiragana presented in step S14.
- Step S16 is a step in which the number of 1 to 5 keys is used to answer how much the participant feels the match / mismatch in Step S15 (Procedure D).
- FIG. 4 is a diagram showing the participants' confidence of speech discrimination, classified by the button-press results, and the probability of a correct or incorrect button press.
- The confidence of discrimination was classified as follows. When 5 (absolute match) or 1 (absolute mismatch) was pressed, the confidence was classified as "high"; this occurred in 60.4% of all trials (522 of 864 trials). When 4 (probable match), 3 (not sure), or 2 (probable mismatch) was pressed, the confidence was classified as "low"; this occurred in 39.6% of all trials (342 of 864 trials).
- The correctness of the button press was determined from the match/mismatch between the voice and the character and the button pressed: pressing 5 (absolute match) or 4 (probable match) in a matching trial, or 1 (absolute mismatch) or 2 (probable mismatch) in a mismatching trial, was counted as correct.
- Fig. 4(a) shows the correctness of button presses in trials with high discrimination confidence. The correct button was selected in almost all such trials (92%), indicating that when discrimination confidence is high, the voice is correctly recognized. From this result, it can be said that high discrimination confidence corresponds to high speech intelligibility.
- Fig. 4(b) shows the correctness of button presses in trials with low discrimination confidence. The wrong button was pressed with high probability (42%), indicating that mishearing is likely when discrimination confidence is low. From this result, it can be said that low discrimination confidence corresponds to low speech intelligibility.
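A sketch of the tabulation behind FIG. 4: trials are split by the pressed button into high confidence (5 or 1) and low confidence (2 to 4), and the error rate is computed per group. The trial records here are invented for illustration; only the grouping rule follows the text.

```python
# Hedged sketch of the behavioral analysis: confidence grouping by
# button press, then per-group error (mishearing) rate.

def confidence_of(button):
    return "high" if button in (5, 1) else "low"

def error_rates(trials):
    """trials: list of (button_pressed, was_correct). Returns
    {confidence_level: fraction of incorrect trials}."""
    counts = {"high": [0, 0], "low": [0, 0]}  # [errors, total]
    for button, correct in trials:
        c = counts[confidence_of(button)]
        c[0] += 0 if correct else 1
        c[1] += 1
    return {k: e / n for k, (e, n) in counts.items() if n}

trials = [(5, True), (5, True), (1, True), (5, False),
          (3, False), (2, False), (4, True), (3, False)]
print(error_rates(trials))  # {'high': 0.25, 'low': 0.75}
```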
- Electroencephalogram measurement experiment: The inventors of the present application conducted an electroencephalogram measurement experiment to examine the relationship between the confidence of speech discrimination and the event-related potential after voice presentation.
- The experimental settings and results of the electroencephalogram measurement experiment are described with reference to FIGS. 5 to 9.
- the experiment participants were 6 university / graduate students who were the same as those in the behavioral experiment.
- FIG. 5 is a diagram showing electrode positions in the international 10-20 method.
- the sampling frequency was 200 Hz and the time constant was 1 second.
- a 1-6 Hz digital bandpass filter was applied off-line.
- As the event-related potential for each voice presentation, a waveform from -100 ms to 1000 ms relative to the onset of the voice presentation was cut out.
- Event-related potentials were averaged based on the confidence of hearing, for each participant and each speech sound, across all conditions of the behavioral experiment (0 dB, -25 dB, -50 dB).
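The preprocessing described above (200 Hz sampling, off-line 1 to 6 Hz band-pass filtering, epochs from -100 ms to 1000 ms around each voice onset, baseline alignment to the pre-stimulus mean, then averaging) can be sketched as follows. The filter order, the zero-phase filtfilt choice, and the synthetic data are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 200  # Hz, as in the experiment

def bandpass_1_6(x):
    # 1-6 Hz band-pass, applied off-line (zero-phase; order is assumed).
    b, a = butter(4, [1 / (FS / 2), 6 / (FS / 2)], btype="band")
    return filtfilt(b, a, x)

def epoch_and_average(eeg, onset_samples):
    """Cut [-100 ms, 1000 ms) epochs at each onset, baseline-correct to
    the -100..0 ms mean, and return the averaged waveform."""
    pre, post = int(0.1 * FS), int(1.0 * FS)  # 20 and 200 samples
    epochs = []
    for s in onset_samples:
        ep = eeg[s - pre:s + post].astype(float)
        ep -= ep[:pre].mean()  # baseline: -100 ms to 0 ms
        epochs.append(ep)
    return np.mean(epochs, axis=0)

rng = np.random.default_rng(0)
eeg = bandpass_1_6(rng.standard_normal(FS * 60))  # 60 s of noise
erp = epoch_and_average(eeg, [1000, 3000, 5000])
print(erp.shape)  # (220,) = 1100 ms at 200 Hz
```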
- FIG. 6 shows an outline of the experimental procedure of the electroencephalogram measurement experiment.
- In Procedure X, single-syllable speech was presented. As in the behavioral experiment, the stimulus speech sounds were selected with reference to “Concept of Hearing Aid Fitting” (Kodera, Diagnosis and Treatment Co., 1999, p. 172): pairs were chosen from the Na/Ma rows, the Ra/Ya rows, and the Ka/Ta rows. The participants were instructed to listen to the voice and think of the corresponding hiragana. Also as in the behavioral experiment, voices under the following three conditions were presented so that the confidence of discrimination would vary for the normal-hearing participants. (1) 0 dB condition: the frequency gain was not processed, giving an easily audible voice.
- FIG. 7 is a flowchart showing the procedure for one trial.
- the same blocks as those in FIG. 3 are denoted by the same reference numerals, and the description thereof is omitted.
- The difference from FIG. 3 is that steps S13 to S16 are absent; the experiment participants were not required to perform any explicit action.
- FIG. 8 shows waveforms obtained by averaging the event-related potentials at Pz, time-locked to voice presentation, separately for each level of discrimination confidence. The averaging was performed based on the confidence of hearing, for each participant and each speech sound, across all conditions of the behavioral experiment (0 dB, -25 dB, -50 dB).
- The horizontal axis is time in ms, and the vertical axis is potential in μV. Following EEG convention, the downward direction of the graph corresponds to positive and the upward direction to negative. The baseline was aligned so that the average potential from -100 ms to 0 ms was zero.
- The broken line in FIG. 8 is the average waveform of the event-related potential at electrode position Pz when discrimination confidence was high in the behavioral experiment, and the solid line is that when it was low. FIG. 8 shows that a positive component appears at a latency of 700 ms to 900 ms in the solid line (low discrimination confidence) compared to the broken line (high discrimination confidence).
- The interval average potential from 700 ms to 900 ms, computed for each participant, was -0.47 μV when discrimination confidence was high and 0.13 μV when it was low; the interval average potential was significantly larger when discrimination confidence was low (p < 0.05).
- From this, the inventors of the present application concluded that the event-related potential at a latency of 700 ms to 900 ms from voice presentation reflects the confidence of discrimination, and that the positive component can be used as an index of discrimination confidence. As a result of performing a t-test at every sampling point from 0 ms to 1000 ms, the only time periods in which the significant difference due to discrimination confidence persisted for 30 ms or more were 730 ms to 770 ms and 840 ms to 915 ms.
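The pointwise t-test with a 30 ms persistence criterion can be sketched as follows; at 200 Hz, 30 ms corresponds to 6 consecutive samples. The use of a paired t-test (`ttest_rel`), the synthetic data, and the run-detection details are assumptions made for illustration.

```python
import numpy as np
from scipy.stats import ttest_rel

FS = 200  # Hz

def significant_runs(high, low, alpha=0.05, min_ms=30):
    """high, low: (n_participants, n_samples) ERP arrays.
    Returns [(start_sample, end_sample)] of runs where p < alpha
    persists for at least min_ms."""
    _, p = ttest_rel(high, low, axis=0)  # t-test at every sample
    sig = p < alpha
    min_len = int(min_ms * FS / 1000)  # 6 samples for 30 ms
    runs, start = [], None
    for i, s in enumerate(np.append(sig, False)):  # False closes a final run
        if s and start is None:
            start = i
        elif not s and start is not None:
            if i - start >= min_len:
                runs.append((start, i))
            start = None
    return runs

rng = np.random.default_rng(1)
n, t = 6, 200  # 6 participants, 1000 ms at 200 Hz
high = rng.standard_normal((n, t))
low = rng.standard_normal((n, t))
low[:, 150:180] += 5.0  # inject a reliable difference at 750-900 ms
print(significant_runs(high, low))
```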
- FIG. 9 is a diagram showing, for each confidence level, the segment average potential from 700 ms to 900 ms of the event-related potential starting from voice presentation at the electrode positions C3, Cz, C4.
- the black circle line shown in FIG. 9 indicates the section average potential when the discrimination confidence level is high, and the white circle line indicates the section average potential when the discrimination confidence level is low.
- at these positions, the event-related potential is positive when the confidence level of discrimination is high and negative when it is low. Focusing on the polarity, it can be seen that the polarity is inverted between the measurement at the electrode position Pz (FIG. 8) and the measurement at the electrode position Cz (FIG. 9).
- the P300 component is known as a general event-related potential for auditory stimulation. The polarity of the P300 component hardly reverses at the electrode positions Cz and Pz.
- since the latency of the component obtained in this experiment (700 ms to 900 ms) differs from the roughly 300 ms latency of the P300 component, the positive component evoked at electrode position Pz when the discrimination confidence level is low is considered to be a component distinct from the P300 component.
- in the following, an example using an electroencephalogram signal measured at the electrode position Pz will be described.
- when the electrode position is Cz, the description may be read with the polarity reversed, as noted above.
- here, the “P300 component” is, according to page 14 of “New Physiological Psychology Vol. 2” (supervised by Mr. Miyata, Kitaoji Shobo, 1997), a positive component of the event-related potential with a latency of around 300 ms induced by a target stimulus in an oddball task.
- at the electrode positions C3, Cz, and C4, the black circle line (the section average potential when the discrimination confidence level is high) and the white circle line (the section average potential when it is low) show different potential distributions (magnitude relationships). Multiple comparison confirmed that the potential distributions differ significantly (p < 0.05). It can therefore be said that the confidence level can be determined from the potential distribution at the electrode positions C3, Cz, and C4.
- the positive component with a latency of 700 ms to 900 ms at the electrode position Pz (FIG. 8) and the characteristic component with a latency of 700 ms to 900 ms at the electrode positions C3, C4, and Cz (FIG. 9) can be identified by various methods. For example, a method of applying a threshold to the magnitude of the peak amplitude near a latency of about 700 ms, or a method of creating a template from a typical waveform of the component and calculating the similarity to that template, can be used.
- the threshold or template may be one stored in advance for a typical user, or may be created for each individual.
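Both identification methods mentioned above can be sketched briefly. The threshold value and the function names below are illustrative placeholders, not values from the experiment; a 1 kHz sampling rate and an epoch spanning −100 ms to 1000 ms are assumed.

```python
import numpy as np

def has_positive_component(erp, lo_ms=700, hi_ms=900, offset_ms=100,
                           threshold_uv=0.0):
    """Threshold method: compare the 700-900 ms interval-average potential
    with a predetermined threshold (the value 0.0 here is illustrative).
    `erp` is assumed to span -100..1000 ms at 1 sample/ms."""
    segment = erp[offset_ms + lo_ms: offset_ms + hi_ms]
    return segment.mean() > threshold_uv

def similarity_to_template(erp, template):
    """Template method: Pearson correlation between the measured waveform
    and a stored typical waveform of the positive component."""
    return float(np.corrcoef(erp, template)[0, 1])
```

In practice, the template would be the stored typical-user or per-individual waveform described above, and the similarity would itself be compared against a threshold.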
- in this specification, a range starting a predetermined time after a certain time point, used to define an event-related potential component, is expressed as, for example, a “latency of 700 ms to 900 ms”. This can include a range centered on a specific time between 700 ms and 900 ms, and the boundary values of 700 ms and 900 ms are themselves included in the “latency of 700 ms to 900 ms”.
- a width of 30 ms to 50 ms is a typical example of the individual difference of the P300 component. Since the positive component at a latency of 700 ms to 900 ms occurs later than the P300, individual differences among users appear even larger. It is therefore preferable to treat it with a wider width, for example, about 100 ms.
- in summary, the inventors of the present application found that (1) speech intelligibility can be evaluated based on the user's confidence in discriminating speech, and (2) the positive component of the event-related potential at a latency of 700 ms to 900 ms after voice presentation reflects that confidence level. Combining the two, the positive component of the event-related potential can be used as an index for evaluating difficulty in hearing, via the degree of confidence in hearing the voice.
- FIG. 10 shows the correspondence between the presence / absence of a positive component, the degree of confidence of discrimination, and the ease of hearing, which are summarized by the inventors of the present application. This correspondence is created by taking the positive component at the electrode position Pz as an example.
- the hearing aid readjustment device measures the brain waves evoked by conversational speech picked up by the sound collection unit while the hearing aid is used in daily life, and evaluates the ease of hearing of each phoneme using the presence or absence of a positive component at a latency of 700 ms to 900 ms in the event-related potential time-locked to that phoneme. If a phoneme is found to be difficult to hear, the readjustment device readjusts the hearing aid.
- FIG. 11 shows the configuration and usage environment of the hearing aid readjustment system 100.
- the hearing aid readjustment system 100 includes two parts, a hearing aid unit 101 and a hearing aid adjustment unit 102.
- the hearing aid unit 101 is a part that functions as a hearing aid, and includes a sound collection unit 2, a hearing aid processing unit 3, and an output unit 4.
- the hearing aid unit 101 collects sounds from the outside by the sound collecting unit 2, performs hearing aid processing according to the hearing condition of the user 1 by the hearing aid processing unit 3, and outputs the result from the output unit 4 to the user.
- FIG. 12 is an example of a scene where the readjustment system 100 is used.
- a user wears a hearing aid readjustment system 100 in which a hearing aid 101 and a readjustment device 102 are integrated.
- the correspondence between the components in FIG. 12 and the components in FIG. 11 is as follows.
- the sound collection unit 2 in FIG. 11 corresponds to the microphone 2 attached to the hearing aid.
- the hearing aid processing unit 3 in FIG. 11 corresponds to a signal processing circuit (chip circuit, not shown) inside the hearing aid.
- the hearing aid adjustment unit 102 in FIG. 11 performs additional processing outside the hearing aid unit 101.
- the hearing aid adjustment unit 102 includes an electroencephalogram measurement unit 6, a sound extraction unit 5, a hearing determination unit 7, a phoneme identification unit 8, and a gain adjustment unit 9.
- the electroencephalogram measurement unit 6 measures the electroencephalogram of the user 1.
- the sound extraction unit 5 extracts a sound part from the sound information collected by the sound collection unit 2.
- the hearing determination unit 7 determines ease of hearing using the electroencephalogram characteristics relating to ease of hearing (the experiments and data described above).
- the phoneme identification unit 8 performs processing for resolving ambiguity about which phoneme was difficult to hear.
- the gain adjustment unit 9 performs the adjustment corresponding to each identified difficulty in hearing. This adjustment is applied to the hearing aid processing unit 3 and is reflected in the subsequent hearing aid processing of the hearing aid unit 101.
- the hearing aid adjustment unit 102 in FIG. 11 corresponds to the circuit 102 shown in FIG. 12. More specifically, the electroencephalogram measurement unit 6 of the hearing aid adjustment unit 102 in FIG. 11 includes an electroencephalograph main body 6a, which is a circuit for amplifying biological signals, and electrodes 6b and 6c.
- the electroencephalogram is measured as a potential difference between at least two electrodes mounted on the head or its surroundings. In this case, the electrodes 6b and 6c are installed at portions where the hearing aid main body 101 contacts the user's ear. Recently, to improve performance and usability, hearing aids are sometimes worn in both ears at the same time. In this case, the electroencephalogram measurement can use the potential between the two ears, making it easier to measure brain activity.
- in the experiment, the electrodes were placed on the scalp, but it is considered possible to arrange them at other positions. As shown in FIG. 9, the potential distribution pattern across C3-Cz-C4 is reversed depending on whether the discrimination confidence level is high or low. Therefore, it is considered that the discrimination confidence level can be determined even when electrodes are arranged at ear positions further outside the electrode positions C3 and C4.
- the components of the hearing aid adjustment unit 102 are functional blocks that mainly perform signal processing. They are realized as components built into the hearing aid main body as shown in FIG. 12, for example a DSP and a memory. This is described in more detail below.
- FIG. 13 shows a hardware configuration of the hearing aid readjustment system 100 according to the present embodiment.
- a CPU 101a, a RAM 101b, and a ROM 101d that perform signal processing of the hearing aid unit 101 are provided.
- a processing program 101c is stored in the RAM 101b.
- a CPU 102a, a RAM 102b, and a ROM 102d that perform signal processing of the hearing aid adjustment unit 102 are provided.
- a processing program 102c is stored in the RAM 102b.
- a microphone 2a and an audio input circuit 2b are provided as the sound collection unit 2
- a speaker (receiver) 4a and an audio output circuit 4b are provided as the output unit 4.
- an electroencephalograph 6a, an electrode 6b, and an electrode 6c are provided.
- Each device is connected to each other by a bus 100a and can exchange data.
- the audio signal collected by the sound collection unit 2 is subjected to hearing aid processing by the CPU 101a according to the program 101c stored in the RAM 101b, and is sent to the output unit 4.
- the hearing aid readjustment system 100 may be composed of a set of CPU, RAM, and ROM, or may be realized as hardware such as a DSP in which a computer program is incorporated in a semiconductor circuit. Such a DSP can realize all the functions of the above-described CPU, RAM, ROM, audio input / output circuit, and the like with a single integrated circuit.
- the computer programs 101c and 102c described above can be recorded on a recording medium such as a CD-ROM and distributed as a product to the market, or can be transmitted through an electric communication line such as the Internet.
- FIG. 14A shows a procedure of processing performed by a normal hearing aid
- FIG. 14B shows an overview of the procedure when the processing of the hearing aid readjustment system 100 according to the present embodiment is combined. Steps whose outline description is insufficient will be explained later using more detailed flowcharts.
- FIG. 14 (a) is an outline of the processing flow of the hearing aid.
- step S20 the sound collection unit 2 collects external sounds.
- step S30 the hearing aid processing unit 3 performs hearing aid processing.
- the hearing aid process decomposes the sound recorded in step S20 into power for each frequency, applies a predetermined amplification to each frequency, and reconstructs the sound.
- performing predetermined amplification for each frequency is referred to as “hearing process”
- changing a predetermined value for how much gain should be adjusted for each frequency is referred to as “gain adjustment”.
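The distinction between the hearing process and gain adjustment can be illustrated with a minimal FFT-based sketch. Real hearing aid processing would use frame-wise filter banks; the band edges and gain values here are hypothetical.

```python
import numpy as np

def hearing_process(signal, fs, band_gains_db):
    """Decompose the sound into per-frequency components, apply the
    predetermined gain of each frequency band, and resynthesize the sound."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    for (lo_hz, hi_hz), gain_db in band_gains_db.items():
        mask = (freqs >= lo_hz) & (freqs < hi_hz)
        spectrum[mask] *= 10.0 ** (gain_db / 20.0)
    return np.fft.irfft(spectrum, n=len(signal))

# "Gain adjustment" then corresponds to changing the values in this table,
# while "hearing process" is the application of the table to the sound.
band_gains_db = {(250.0, 1000.0): 0.0, (1000.0, 4000.0): 6.0}
```

For example, a 1 kHz tone passed through this process with a +6 dB band gain comes out with roughly double the amplitude.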
- step S40 the output unit 4 outputs the result of the hearing aid process to the user. Specifically, the output unit 4 outputs the adjusted sound, so that the user 1 can hear as a sound that is easier to hear than before the adjustment.
- FIG. 14 (b) shows an outline of the flow of processing of the readjustment system for the processing described above.
- the same number as the number used in FIG. 14A is assigned to the step where the same process as the process of the hearing aid is performed, and the description thereof is omitted.
- the difference from the hearing aid processing is a portion of steps S50 to S90 sandwiched between processing steps S20 and S30 of the hearing aid, and here, the hearing aid adjustment processing is performed.
- step S50 the voice extraction unit 5 cuts out a voice signal.
- in the experiments described above, voices were presented one by one. In the user's daily scenes, however, continuous speech is heard, so the voice signal must be cut out.
- step S60 the electroencephalogram measurement unit 6 measures an electroencephalogram. Since electroencephalographs have been miniaturized and their power consumption reduced in recent years, a device combining an electroencephalograph with a hearing aid can be realized. If the hearing aid readjustment system 100 is of a type worn on one ear, the electrodes of the miniaturized electroencephalograph may be installed, for example, at portions where the hearing aid contacts the skin of the head. If the system is of a type worn on both ears, electrodes can be placed at both ears; in the latter case, the electroencephalogram between the two ears can also be used, and an electroencephalogram across the head can be measured.
- the measured electroencephalogram contains various kinds of information, but by relating it to stimulus presentation, as is done for event-related potentials, the tendency of the evoked potential with respect to voice presentation can be grasped.
- step S70 the hearing determination unit 7 extracts an electroencephalogram signal corresponding to the audio signal extracted by the audio extraction unit 5. By extracting each brain wave component, the hearing determination unit 7 determines the degree of hearing.
- step S80 the phoneme specifying unit 8 specifies the portion that is truly difficult to hear when the output of the hearing determination unit 7 contains a plurality of speech candidates judged difficult to hear.
- step S90 the gain adjusting unit 9 adjusts the gain for a phoneme or syllable that is difficult to hear.
- an individual adjustment of the hearing aid may be effective for a specific phoneme while adversely affecting other phonemes. Since adjusting for all sounds at once is difficult, adjustment targeted at the phoneme that is actually difficult to hear is effective.
- the following steps are described in detail below: the speech signal extraction process (step S50), the hearing determination process (step S70), the phoneme identification process (step S80), and the gain adjustment process (step S90).
- FIG. 15 shows the details of the processing flow of the voice extraction unit 5
- FIG. 16 shows an explanatory diagram of the processing. Hereinafter, description will be made along the flowchart of FIG.
- step S51 the sound extraction unit 5 acquires the sound signal recorded by the sound collection unit 2.
- the recorded sound is captured into the sound extraction unit 5 in segments of a fixed length at fixed intervals.
- the sound extraction unit 5 captures a recorded sound signal 51 indicated by “recorded sound”.
- step S52 the sound extraction unit 5 converts the recorded sound signal 51 into a phoneme sequence by the acoustic processing 52 (FIG. 16).
- the extraction process in step S53 is performed on the converted phoneme sequence.
- the acoustic processing in step S52 is processing for detecting what phonemes and syllables are included in the speech data, and is used in preprocessing in the field of speech recognition. Specifically, the acoustic processing in the present embodiment performs a comparison operation with the current data based on the stored acoustic data of each phoneme and syllable (for example, a standard speech waveform and its feature amount), Is a process of recognizing the utterance content.
- step S53 the voice extraction unit 5 extracts and outputs a phoneme or syllable series in response to the result of the acoustic processing in step S52.
- FIG. 16 shows an example in which a phoneme string 53 of [hai / do / ti / ra / de / mo / i / i] is extracted as a result of the acoustic processing 52.
- here, units segmented at the syllable level are extracted.
- the granularity of extraction may be changed as appropriate; for example, the sequence may instead be divided at the phoneme level, as in [h / a / i / d / o / u / m / o].
- word recognition may be realized, for example, by having the voice extraction unit 5 store dictionary data that associates phoneme string sequences with words and by referring to that dictionary data based on the phoneme string 53.
- step S54 the voice extraction unit 5 associates each syllable of the output phoneme string 53 with the time at which it was uttered, and stores them as pairs.
- FIG. 17 shows a procedure of processing performed by the hearing determination unit 7.
- FIG. 18 shows an example of data processing of the hearing determination processing.
- step S71 in FIG. 17 the hearing determination unit 7 receives syllable information and time information 71 (FIG. 18) corresponding to the syllable information from the sound extraction unit 5. According to the time information 71, the utterance point of each phoneme can be specified.
- the hearing determination unit 7 receives the electroencephalogram data from the electroencephalogram measurement unit 6, and then extracts the event-related potential from the time included in the correspondence information 71 between the syllable and the time.
- the event-related potential is electroencephalogram information measured in association with a certain event (here, the utterance of a syllable); an electroencephalogram segment in a predetermined section (for example, the section 72a from −100 ms to 1000 ms) is cut out starting from the time the syllable was uttered, yielding the event-related potential. An electroencephalogram segment is cut out for each syllable.
- FIG. 18 shows the extracted event-related potential 72b.
- step S73 the hearing determination unit 7 extracts an electroencephalogram feature for analysis with respect to the extracted event-related potential 72b.
- the electroencephalogram feature of interest here is the positive component around 800 ms ± 100 ms; as analysis feature quantities, for example, the maximum amplitude at a latency of 700 ms to 900 ms, the section average potential, or wavelet coefficients can be used.
- next, the hearing determination unit 7 determines whether the electroencephalogram features obtained in step S73 include a component related to inaudibility (for example, a late positive component, also called LPP (Late Positive Potential), when the electroencephalogram is measured at Pz). If it is determined that the LPP is included, the process proceeds to step S75; if not, the process proceeds to step S76.
- as the determination method, whether or not the LPP is included may be determined by comparing the maximum amplitude or the section average potential with a predetermined threshold. Alternatively, the determination may be made from the similarity (for example, a correlation coefficient) between the electroencephalogram features and a predetermined template created from the waveform of a typical positive component with a latency of 700 ms to 900 ms.
- FIG. 18 schematically shows a comparison between the extracted event-related potential 72b and the LPP waveform 73 observed when hearing is difficult. Based on the comparison, it may be determined that the positive component is present if they are similar and absent if they are not.
- the predetermined threshold or template may be calculated or created from the positive-component waveform of a typical user held in advance, or from the positive-component waveform of each individual.
- step S75 the hearing determination unit 7 determines that “it is difficult to hear”.
- step S76 the hearing determination unit 7 determines that “easy to hear”.
- step S77 the hearing determination unit 7 stores the result of hearing.
- the result of hearing is stored in a table, for example, as a determination result 77 (FIG. 18).
- syllables are arranged on the horizontal axis, and the determination result for each syllable is stored in the table. As illustrated in FIG. 18, for example, a result of easy to hear is stored for hai and ra, and a result of difficult to hear is stored for do and ti.
- through such processing, hearing is evaluated for each syllable even for ordinary continuous speech.
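The hearing determination flow of steps S71 to S77 can be sketched as follows, assuming a 1 kHz sampling rate (1 sample per ms) and a simple interval-average threshold for LPP detection; the threshold value and function name are illustrative, not from the embodiment.

```python
import numpy as np

def judge_syllables(eeg, syllable_times, lpp_threshold_uv=0.0):
    """For each (syllable, utterance time in ms) pair, cut out the
    event-related potential from -100 ms to 1000 ms, baseline-correct it
    on -100..0 ms, and judge "difficult" when the 700-900 ms
    interval-average potential exceeds the threshold (value illustrative).
    Assumes `eeg` is sampled at 1 sample/ms."""
    results = []
    for syllable, t_ms in syllable_times:
        epoch = eeg[t_ms - 100: t_ms + 1000].astype(float)
        epoch = epoch - epoch[:100].mean()            # baseline -100..0 ms
        lpp = epoch[100 + 700: 100 + 900].mean()      # latency 700..900 ms
        results.append((syllable, "difficult" if lpp > lpp_threshold_uv
                        else "easy"))
    return results
```

The returned list of (syllable, judgment) pairs corresponds to the determination result 77 of FIG. 18.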
- the standard utterance speed is about 8 to 12 mora (roughly, characters) per second.
- a Japanese utterance speed of 10 mora per second can therefore be taken as standard.
- one syllable corresponds to almost one character.
- the time from when a syllable is uttered until the next syllable is uttered is about 100 ms.
- the characteristics of the electroencephalogram related to inaudibility appear at 700 ms to 900 ms, which is a rather late latency zone in the field of event-related potentials.
- the latency error of an event-related potential becomes larger as the latency becomes longer.
- the error of the target event-related potential is assumed to be about ±100 ms.
- the section in which a significant difference was confirmed was characterized by a wide range of 730 ms to 915 ms (FIG. 8).
- therefore, as in the hearing determination result 77, the processing may yield a result in which a plurality of consecutive syllables (do, ti) are difficult to hear. Identifying which of these was actually difficult makes it possible to adjust the hearing aid more effectively. When multiple phonemes or syllables judged difficult to hear are detected, the sound that appeared earlier in time (the sound that appeared first) may be treated as the one that was difficult to hear and as the target of the adjustment process described later.
- the individual adjustment method in the final stage adjustment of the hearing aid is effective for a specific phoneme because the pattern of frequency characteristics differs for each phoneme.
- the phoneme specifying unit 8 takes charge of this; its processing, which makes use of the auditory characteristics the inventors of the present application focused on, is described below.
- the phoneme specifying process of step S80 in FIG. 14B is performed by the phoneme specifying unit 8 (FIG. 11).
- FIG. 19 shows a procedure of processing performed by the phoneme identification unit 8.
- FIG. 20 shows auditory characteristic data which is the principle of phoneme identification processing. A description will be given below along the flowchart of FIG. 19 while being associated with the auditory principle of FIG.
- step S81 the phoneme identification unit 8 receives the syllable and the evaluation result of hearing from the hearing determination unit 7.
- step S82 the phoneme identification unit 8 first determines whether “difficult to hear” exists in the hearing evaluation results. If “difficult to hear” does not exist, the process proceeds to step S83, outputs that there is no “difficult to hear” syllable, and ends. If “difficult to hear” exists, the process proceeds to step S84.
- step S84 the phoneme specifying unit 8 determines whether or not the evaluation result “difficult to hear” continues. If the evaluation results of “difficult to hear” are not consecutive, the process proceeds to step S85, and the “difficult to hear” syllable is output as a result, and the process ends. If the evaluation result of “difficult to hear” continues, the process proceeds to step S86. In the case of proceeding to step S86, it is determined that ambiguity remains in specifying the syllable from the relation between the latency of the electroencephalogram and the speaking speed.
- step S86 the phoneme identification unit 8 selects the syllable closest to the beginning among the consecutive “difficult to hear” syllables as the most difficult to hear, outputs it as the result, and ends. For example, in the hearing determination result 77 of FIG. 18, do and ti are candidates, and the phoneme identification unit 8 determines do as the final result.
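The selection rule of steps S82 to S86 reduces to choosing the first syllable of the first run judged difficult to hear, which covers both the single-candidate and the consecutive-candidate cases. A minimal sketch (the function name is hypothetical):

```python
def identify_hardest_syllable(judgments):
    """Resolve ambiguity when several consecutive syllables are judged
    "difficult": per the intelligibility curves of FIG. 20, the syllable
    closest to the beginning of the run is taken as the one that was
    actually difficult to hear.

    `judgments` is a list of (syllable, "easy"/"difficult") pairs in
    utterance order, as produced by the hearing determination step.
    Returns None when no syllable was judged difficult."""
    for syllable, label in judgments:
        if label == "difficult":
            return syllable  # first syllable of the first difficult run
    return None
```

For the determination result 77 of FIG. 18, with do and ti both judged difficult, this rule selects do.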
- FIG. 20 shows the intelligibility curves of the first, second, and third sounds of three meaningless words.
- This intelligibility curve was quoted from Kazuko Kodera, “Progress of hearing aids and social applications”, Diagnosis and Treatment, 2006, p. 67.
- This intelligibility curve is an evaluation result of speech intelligibility for eight normal hearing persons.
- the horizontal axis indicates the level of the sound presented to the subject (unit: dB SL, the test sound level above the hearing threshold), and the vertical axis indicates the speech intelligibility (unit: %).
- the inspection sound level is divided into four stages of 10 to 40 decibels, and the clarity of the first sound, the second sound, and the third sound is plotted for each level.
- the first sound has the lowest speech intelligibility, and thereafter the intelligibility improves in the order of the second sound and the third sound.
- an object of the present invention is to specify syllables that are difficult to hear in utterances in daily conversation scenes. To this end as well, the inventors arrived at the idea that, when brain-wave processing yields a plurality of candidates judged difficult to hear, it is effective to determine that the syllable closest to the beginning is the most difficult to hear. Although the experimental conditions used meaningless words, in daily conversation the first sound is the hardest to infer from context, since words are estimated from the relationship with the surrounding characters; it is therefore considered appropriate to select the syllable close to the beginning.
- the candidates that are difficult to hear may or may not include the word-initial sound. Considering the knowledge shown in FIG. 20, it is reasonable to select the sound closest to the beginning of the word, because sounds near the beginning tend to be harder to hear.
- the relationship between these experimental results and continuous speech can be considered as follows. Even in dialogue, speech data contains many silent sections, however short; if speech is segmented with these silent sections as breaks, continuous speech can be regarded as a repetition of single utterances at the word level. Considering that the latency error of the electroencephalogram is about ±100 ms and that silent sections of several hundred milliseconds also occur in continuous speech, this supports treating continuous speech as a sequence of word-like units.
- the gain adjustment process of step S90 is performed by the gain adjustment unit 9 (FIG. 11). Specifically, when the phoneme specifying unit 8 has specified a “difficult to hear” syllable, the gain adjustment unit 9 refers to a table such as that shown in FIG. 21 and introduces a specific hearing aid process, thereby improving only the sound that is hard to hear.
- the gain adjustment unit 9 holds this table in an internal memory or buffer (not shown) and applies consonant part expansion or consonant part expansion and compression based on it.
- this table stores in advance the results of research on hearing aid processing for consonants that are difficult to hear and the corresponding readjustment processes (see, for example, Kazuko Kodera, “Progress and Social Application of Hearing Aids”, Diagnosis and Treatment, 2006, p. 78).
- consonant part expansion is effective for the unvoiced consonant h and the voiced consonants d and g
- consonant part expansion and compression is effective for the unvoiced consonant ts.
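The table lookup in the gain adjustment step can be sketched as follows. The table entries reproduce only the examples named above, not the full published table, and the table and function names are hypothetical.

```python
# Hypothetical readjustment table in the style of FIG. 21: maps the
# consonant of a syllable judged difficult to hear to a readjustment
# process reported effective for it (entries are the examples above only).
READJUSTMENT_TABLE = {
    "h": "consonant part expansion",                    # unvoiced consonant
    "d": "consonant part expansion",                    # voiced consonant
    "g": "consonant part expansion",                    # voiced consonant
    "ts": "consonant part expansion and compression",   # unvoiced consonant
}

def select_readjustment(syllable):
    """Pick the readjustment process for the consonant of the syllable
    specified as difficult to hear; longer consonant keys match first so
    that "ts" is not shadowed by a single-letter key."""
    for consonant in sorted(READJUSTMENT_TABLE, key=len, reverse=True):
        if syllable.startswith(consonant):
            return READJUSTMENT_TABLE[consonant]
    return None  # no entry: leave the current hearing aid setting as is
```

For the syllable do identified in the earlier example, this lookup selects consonant part expansion.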
- with the above configuration, the timing at which the user feels difficulty hearing and the syllable concerned are specified by electroencephalogram analysis, and readjustment suited to the specified syllable can be performed. This eliminates the need for the user to remember the difficulty, go to a hearing aid store, explain it to an expert, and receive readjustment, reducing the burden on the user.
- the electrode position is assumed to be, for example, Pz in the international 10-20 method.
- it is difficult to specify exactly the electrode position corresponding to Pz for each user. Therefore, a position that appears to be Pz (a position around Pz) may be used; event-related potentials are measured correctly even at positions around Pz. The same applies to the electrode position Cz and the like.
- the hearing aid readjustment device 100 may be provided in a form that does not have a hearing aid function. Specifically, only the hearing aid adjustment unit 102 in FIG. 11 may be provided. At this time, the hearing aid unit 101 is a normal hearing aid. However, the hearing aid unit 101 includes an interface for performing gain adjustment from an external PC or the like. The audio extraction unit 5 of the hearing aid adjustment unit 102 receives audio information (audio signal) from the sound collection unit 2 of the hearing aid unit 101 via this interface. Then, a gain adjustment instruction is passed from the hearing aid adjustment unit 102 to the hearing aid processing unit 3 of the hearing aid unit 101. Note that the hearing aid unit 101 may receive only gain adjustment. In this case, the hearing aid adjustment unit 102 may have a function equivalent to that of the sound collection unit.
- the hearing aid adjustment unit 102 may output the evaluation result to a gain adjuster (not shown) provided outside.
- the gain adjuster only needs to have the same function as the gain adjuster 9.
- the electroencephalogram measurement unit 6 may be omitted from the hearing aid adjustment unit 102.
- the electroencephalogram measurement unit 6 may be provided outside the hearing aid adjustment unit 102 and connected to the hearing aid adjustment unit 102.
- FIG. 22 shows a configuration of the hearing aid evaluation apparatus 112 according to the modification.
- the differences between the hearing aid evaluation device 112 and the hearing aid adjustment unit 102 (FIG. 11) are that the hearing aid evaluation device 112 does not include the gain adjustment unit 9 of FIG. 11 and that it includes the sound collection unit 2. Since the other components are common, their descriptions are omitted.
- the output unit 4 is not necessarily required for the hearing aid evaluation device 112, and the hearing aid evaluation device 112 does not need to be downsized like the hearing aid adjustment unit 102.
- the sound collection unit 2 and / or the electroencephalogram measurement unit 6 may be omitted from the configuration of the hearing aid evaluation apparatus 112 shown in FIG. It is also possible to realize the same operation as the hearing aid evaluation device 112 shown in FIG. 22 by providing the sound collection unit 2 and / or the electroencephalogram measurement unit 6 outside and connecting them to the hearing aid evaluation device 112.
- the hearing aid evaluation apparatus 112 can be configured from a high-performance microphone, a larger electroencephalograph for medical or research use, and a PC. No technical development for downsizing is necessary, and since it can be realized using existing microphones, electroencephalographs, PCs, and computer programs, it can be realized at low cost.
- a fitter who adjusts hearing aids at a hearing aid store fits this hearing aid evaluation device 112 on the user and evaluates hearing from the electroencephalogram during a conversation.
- the phoneme identification unit 8 can store evaluation-result information, indicating which parts of the dialogue the user found difficult to hear, in an internal memory (not shown).
- the recorded information is output to a monitor (not shown) and to a separately provided gain adjuster for reviewing the gain.
- the hearing aid evaluation apparatus 112 may also output the evaluation-result information in real time, and the fitter can perform an appropriate gain adjustment based on it. With this method, the hearing aid processing can be adjusted according to the inaudibility the user actually experiences, even before an automatic gain adjustment method covering every kind of hearing difficulty has been established.
- the hearing aid evaluation apparatus 112 can also be used in the user's everyday environments, for example the workplace, public places, or the home, in addition to adjustment at a hearing aid store.
- the hearing aid evaluation device 112 may be lent to the user at a hearing aid store. As the user goes about daily life as usual, the hearing aid evaluation apparatus 112 performs the EEG measurement and records and accumulates which dialogues, and in which situations, the user found difficult to hear.
- a hearing aid store fitter can then refer to the accumulated data and recommend an optimal type of hearing aid or gain adjustment.
- the hearing aid readjustment apparatus enables on-the-spot readjustment by detecting, from an electroencephalogram, the inaudibility that a hearing aid user is expected to encounter in daily life. It can therefore be used widely wherever hearing aids are worn.
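The gain adjustment according to phoneme type described above amounts to a dispatch from the identified phoneme to one of several adjustment methods. Below is a minimal Python sketch of such a dispatch; the phoneme groupings, band names, and dB values are illustrative assumptions, not taken from the patent (which leaves the concrete adjustment methods to the embodiment):

```python
def select_gain_adjustment(phoneme: str):
    """Pick a gain adjustment method for a phoneme judged hard to hear.

    Hypothetical grouping: unvoiced fricatives (e.g. "sa", "ha") carry
    energy at high frequencies, nasals (e.g. "ma", "na") at low
    frequencies; other phonemes get a broadband fallback.
    Returns (method_name, gain_db).
    """
    if phoneme and phoneme[0] in "sh":    # fricative-like syllables
        return ("boost_high_band", 6.0)   # e.g. +6 dB above ~2 kHz
    if phoneme and phoneme[0] in "mn":    # nasal-like syllables
        return ("boost_low_band", 4.0)    # e.g. +4 dB below ~1 kHz
    return ("boost_overall", 3.0)         # fallback: broadband gain
```

A real fitter or gain adjuster would replace this table with the store's fitting rules; the point is only that the phoneme type, not the raw audio level, selects the method.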
Landscapes
- Health & Medical Sciences (AREA)
- Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Engineering & Computer Science (AREA)
- General Health & Medical Sciences (AREA)
- Acoustics & Sound (AREA)
- Neurosurgery (AREA)
- Signal Processing (AREA)
- Otolaryngology (AREA)
- Pathology (AREA)
- Molecular Biology (AREA)
- Biophysics (AREA)
- Psychiatry (AREA)
- Biomedical Technology (AREA)
- Heart & Thoracic Surgery (AREA)
- Medical Informatics (AREA)
- Psychology (AREA)
- Surgery (AREA)
- Animal Behavior & Ethology (AREA)
- Public Health (AREA)
- Veterinary Medicine (AREA)
- Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
- Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)
Abstract
Description
The present inventors conducted a behavioral experiment to investigate the relationship between confidence in speech discrimination and the probability of mishearing. The experimental setup and results of this behavioral experiment are described below with reference to FIGS. 1 to 3.
The present inventors also conducted an electroencephalogram measurement experiment to investigate the relationship between confidence in speech discrimination and the event-related potential following speech presentation. The experimental setup and results of this EEG measurement experiment are described below with reference to FIGS. 5 to 9.
(1) 0 dB condition: the frequency gain was left unmodified, producing easily distinguishable speech.
(2) -25 dB condition: the gain at frequencies from 250 Hz to 16 kHz was gradually adjusted (reduced) down to -25 dB.
(3) -50 dB condition: the gain at frequencies from 250 Hz to 16 kHz was gradually adjusted (reduced) down to -50 dB.
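The three stimulus conditions above can be viewed as a frequency-dependent gain curve applied to the speech. Below is a minimal sketch; the log-frequency linear ramp is an assumption for illustration, since the text only says the gain was "gradually" reduced between 250 Hz and 16 kHz:

```python
import math

def condition_gain_db(freq_hz: float, floor_db: float) -> float:
    """Gain (dB) applied at freq_hz for one experimental condition.

    floor_db is 0.0, -25.0, or -50.0 for the three conditions. The ramp
    shape (linear in log-frequency from 0 dB at 250 Hz down to floor_db
    at 16 kHz) is a hypothetical choice, not specified in the text.
    """
    lo, hi = 250.0, 16000.0
    if floor_db == 0.0 or freq_hz <= lo:
        return 0.0                      # 0 dB condition, or below the ramp
    if freq_hz >= hi:
        return floor_db                 # full attenuation above 16 kHz
    frac = (math.log10(freq_hz) - math.log10(lo)) / (math.log10(hi) - math.log10(lo))
    return floor_db * frac              # gradual reduction in between
```

Under this assumption the -50 dB condition attenuates each frequency exactly twice as much as the -25 dB condition.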
Embodiments of the hearing aid readjustment apparatus are described below with reference to the drawings.
Modifications of the hearing aid readjustment system 100 according to the above embodiment are described below.
3 hearing aid processing unit
4 output unit
5 audio extraction unit
6 electroencephalogram measurement unit
7 hearing judgment unit
8 phoneme identification unit
9 gain adjustment unit
10 accumulation unit
100 hearing aid readjustment system
101 hearing aid unit
102 hearing aid adjustment unit
Claims (12)
- a sound collection unit that collects ambient sound and outputs an audio signal;
an audio extraction unit that, using information on a phoneme or syllable contained in the audio signal, outputs time information identifying the time at which the phoneme or syllable was uttered;
an electroencephalogram measurement unit that measures an electroencephalogram signal of a user;
a hearing judgment unit that judges difficulty in hearing the phoneme or syllable based on an event-related potential obtained from the electroencephalogram signal measured by the electroencephalogram measurement unit, with the identified time at which the phoneme or syllable was uttered as its starting point;
a phoneme identification unit that, when the hearing judgment unit judges a plurality of phonemes or syllables to be difficult to hear, identifies the phoneme or syllable that appeared earlier in time among the plurality of phonemes or syllables as having been difficult to hear; and
a gain adjustment unit that determines, for the phoneme or syllable identified by the phoneme identification unit, a gain adjustment method according to the type of the phoneme, and adjusts the gain of the phoneme or syllable with the determined gain adjustment method,
a hearing aid adjustment device comprising the above. - The adjustment device according to claim 1, wherein the hearing judgment unit judges difficulty in hearing the phoneme or syllable based on whether or not a predetermined characteristic component is contained in the event-related potential at 800 ms ± 100 ms with the time at which the phoneme or syllable was uttered as the starting point.
- The adjustment device according to claim 2, wherein the electroencephalogram measurement unit measures the electroencephalogram signal using an electrode placed around position Pz of the International 10-20 system on the user.
- The adjustment device according to claim 3, wherein the hearing judgment unit judges that the phoneme or syllable is difficult to hear when the event-related potential contains a positive component.
- The adjustment device according to claim 2, wherein the electroencephalogram measurement unit measures the electroencephalogram signal using an electrode placed around position Cz of the International 10-20 system on the user.
- The adjustment device according to claim 5, wherein the hearing judgment unit judges that the phoneme or syllable is difficult to hear when the event-related potential contains a negative component.
- The adjustment device according to claim 1, wherein the gain adjustment unit selects, from among a plurality of kinds of gain adjustment methods, the gain adjustment method corresponding to the type of the phoneme identified by the phoneme identification unit.
- an audio extraction unit that, using information on a phoneme or syllable contained in an audio signal of ambient sound collected by a sound collection unit that collects ambient sound, outputs time information identifying the time at which the phoneme or syllable was uttered;
a hearing judgment unit that judges difficulty in hearing the speech based on an event-related potential obtained from an electroencephalogram signal measured by an electroencephalogram measurement unit that measures an electroencephalogram signal of a user, with the identified time at which the phoneme or syllable was uttered as its starting point; and
a phoneme identification unit that, when the hearing judgment unit judges a plurality of phonemes or syllables to be difficult to hear, identifies the phoneme or syllable that appeared earlier in time among the plurality of phonemes or syllables as having been difficult to hear,
a hearing aid adjustment device comprising the above, which outputs information on the phoneme identified by the phoneme identification unit. - The adjustment device according to claim 8, which outputs the information on the phoneme identified by the phoneme identification unit to a gain adjustment unit that adjusts the gain of the phoneme.
- a sound collection unit that collects ambient sound and outputs an audio signal;
an audio extraction unit that, using information on a phoneme or syllable contained in the audio signal, outputs time information identifying the time at which the phoneme or syllable was uttered;
an electroencephalogram measurement unit that measures an electroencephalogram signal of a user;
a hearing judgment unit that judges difficulty in hearing the phoneme or syllable based on an event-related potential obtained from the electroencephalogram signal measured by the electroencephalogram measurement unit, with the identified time at which the phoneme or syllable was uttered as its starting point; and
a phoneme identification unit that, when the hearing judgment unit judges a plurality of phonemes or syllables to be difficult to hear, identifies the phoneme or syllable that appeared earlier in time among the plurality of phonemes or syllables as having been difficult to hear, and accumulates the identified results,
a hearing aid evaluation device comprising the above. - collecting ambient sound and outputting an audio signal;
outputting, using information on a phoneme or syllable contained in the audio signal, time information identifying the time at which the phoneme or syllable was uttered;
measuring an electroencephalogram signal of a user;
judging difficulty in hearing the phoneme or syllable based on an event-related potential obtained from the measured electroencephalogram signal, with the identified time at which the phoneme or syllable was uttered as its starting point;
when the judging step judges a plurality of phonemes or syllables to be difficult to hear, identifying the phoneme or syllable that appeared earlier in time among the plurality of phonemes or syllables as having been difficult to hear; and
determining, for the identified phoneme or syllable, a gain adjustment method according to the type of the phoneme or syllable, and adjusting the gain of the phoneme or syllable with the determined gain adjustment method,
a hearing aid adjustment method comprising the above steps. - A computer program executed by a computer,
the computer program causing the computer to perform:
receiving an audio signal of collected ambient sound;
outputting, using information on a phoneme or syllable contained in the audio signal, time information identifying the time at which the phoneme or syllable was uttered;
receiving a measured electroencephalogram signal of a user;
judging difficulty in hearing the phoneme or syllable based on an event-related potential obtained from the electroencephalogram signal, with the identified time at which the phoneme or syllable was uttered as its starting point;
when the judging step judges a plurality of phonemes or syllables to be difficult to hear, identifying the phoneme or syllable that appeared earlier in time among the plurality of phonemes or syllables as having been difficult to hear; and
determining, for the identified phoneme or syllable, a gain adjustment method according to the type of the phoneme or syllable, and adjusting the gain of the phoneme or syllable with the determined gain adjustment method,
a computer program for hearing aid adjustment that causes the computer to perform the above steps.
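The judgment and identification steps in the claims above can be sketched as follows. This is a hypothetical illustration, not the patented implementation: the "predetermined characteristic component" at 800 ms ± 100 ms is approximated here as the mean potential in a 700-900 ms post-onset window exceeding a positive threshold (the Pz/positive-component case), and when several phonemes are judged hard to hear, the earliest one in time is identified:

```python
from dataclasses import dataclass

@dataclass
class PhonemeEvent:
    phoneme: str      # e.g. "ka", from the audio extraction step
    onset_ms: float   # utterance time identified by the time information

def hard_to_hear(erp, threshold_uv: float = 5.0) -> bool:
    """Hearing judgment sketch. `erp` maps latency in ms (relative to
    the phoneme onset) to amplitude in microvolts; the window bounds
    and threshold are illustrative assumptions."""
    window = [v for t, v in erp.items() if 700 <= t <= 900]
    return bool(window) and sum(window) / len(window) > threshold_uv

def identify_phoneme(events, judgments):
    """Phoneme identification sketch: among the phonemes or syllables
    judged difficult to hear, return the one that appeared earliest in
    time, or None if none were judged difficult."""
    hard = [e for e, judged in zip(events, judgments) if judged]
    return min(hard, key=lambda e: e.onset_ms) if hard else None
```

A gain adjuster would then apply an adjustment method chosen by the type of the returned phoneme.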
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2010543732A JP4769336B2 (ja) | 2009-07-03 | 2010-07-02 | 補聴器の調整装置、方法およびプログラム |
CN2010800037241A CN102265335B (zh) | 2009-07-03 | 2010-07-02 | 助听器的调整装置和方法 |
US13/085,806 US9149202B2 (en) | 2009-07-03 | 2011-04-13 | Device, method, and program for adjustment of hearing aid |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2009159115 | 2009-07-03 | ||
JP2009-159115 | 2009-07-03 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/085,806 Continuation US9149202B2 (en) | 2009-07-03 | 2011-04-13 | Device, method, and program for adjustment of hearing aid |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2011001694A1 true WO2011001694A1 (ja) | 2011-01-06 |
Family
ID=43410780
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2010/004359 WO2011001694A1 (ja) | 2009-07-03 | 2010-07-02 | 補聴器の調整装置、方法およびプログラム |
Country Status (4)
Country | Link |
---|---|
US (1) | US9149202B2 (ja) |
JP (1) | JP4769336B2 (ja) |
CN (1) | CN102265335B (ja) |
WO (1) | WO2011001694A1 (ja) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2012108128A1 (ja) * | 2011-02-10 | 2012-08-16 | パナソニック株式会社 | 脳波記録装置、補聴器、脳波記録方法およびそのプログラム |
WO2013017169A1 (en) * | 2011-08-03 | 2013-02-07 | Widex A/S | Hearing aid with self fitting capabilities |
WO2013161189A1 (ja) * | 2012-04-24 | 2013-10-31 | パナソニック株式会社 | 補聴器利得決定システム、補聴器利得決定方法、およびコンピュータプログラム |
Families Citing this family (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DK2581038T3 (en) * | 2011-10-14 | 2018-02-19 | Oticon As | Automatic real-time hearing aid fitting based on auditory evoked potentials |
KR101368927B1 (ko) * | 2012-01-03 | 2014-02-28 | (주)가온다 | 오디오 신호 출력 방법 및 장치, 오디오 신호의 볼륨 조정 방법 |
WO2013161235A1 (ja) * | 2012-04-24 | 2013-10-31 | パナソニック株式会社 | 語音弁別能力判定装置、語音弁別能力判定システム、補聴器利得決定装置、語音弁別能力判定方法およびそのプログラム |
EP2560412A1 (en) | 2012-10-08 | 2013-02-20 | Oticon A/s | Hearing device with brain-wave dependent audio processing |
US9933990B1 (en) * | 2013-03-15 | 2018-04-03 | Sonitum Inc. | Topological mapping of control parameters |
CN103816007B (zh) * | 2013-11-22 | 2016-04-06 | 刘志勇 | 一种基于脑电频域特征指标化算法的耳鸣治疗设备及方法 |
US10037712B2 (en) * | 2015-01-30 | 2018-07-31 | Toyota Motor Engineering & Manufacturing North America, Inc. | Vision-assist devices and methods of detecting a classification of an object |
US10157607B2 (en) | 2016-10-20 | 2018-12-18 | International Business Machines Corporation | Real time speech output speed adjustment |
US11412333B2 (en) * | 2017-11-15 | 2022-08-09 | Starkey Laboratories, Inc. | Interactive system for hearing devices |
TWI669709B (zh) * | 2018-07-17 | 2019-08-21 | 宏碁股份有限公司 | 電子系統及音訊處理方法 |
US11228849B2 (en) | 2018-12-29 | 2022-01-18 | Gn Hearing A/S | Hearing aids with self-adjustment capability based on electro-encephalogram (EEG) signals |
EP3675525B1 (en) * | 2018-12-29 | 2023-05-24 | GN Hearing A/S | Hearing aids with self-adjustment capability based on electro-encephalogram (eeg) signals |
JP7189033B2 (ja) * | 2019-01-23 | 2022-12-13 | ラピスセミコンダクタ株式会社 | 半導体装置及び音出力装置 |
CN113286243A (zh) * | 2021-04-29 | 2021-08-20 | 佛山博智医疗科技有限公司 | 一种自测言语识别的纠错系统及方法 |
CN113286242A (zh) * | 2021-04-29 | 2021-08-20 | 佛山博智医疗科技有限公司 | 分解言语信号修饰音节提升语音信号清晰度的装置 |
CN116156401B (zh) * | 2023-04-17 | 2023-06-27 | 深圳市英唐数码科技有限公司 | 基于大数据监测的助听设备智能检测方法、系统和介质 |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH06105399A (ja) * | 1992-09-24 | 1994-04-15 | Hitachi Ltd | 聴覚補償装置 |
JPH09182193A (ja) * | 1995-12-27 | 1997-07-11 | Nec Corp | 補聴器 |
JP2007202619A (ja) * | 2006-01-31 | 2007-08-16 | National Institute Of Advanced Industrial & Technology | 脳活動解析方法および装置 |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6225282B1 (en) * | 1996-01-05 | 2001-05-01 | Genentech, Inc. | Treatment of hearing impairments |
JP3961616B2 (ja) | 1996-05-22 | 2007-08-22 | ヤマハ株式会社 | 話速変換方法および話速変換機能付補聴器 |
JP2904272B2 (ja) | 1996-12-10 | 1999-06-14 | 日本電気株式会社 | ディジタル補聴器、及びその補聴処理方法 |
AU2001261946A1 (en) | 2000-05-19 | 2001-11-26 | Michael Sasha John | System and method for objective evaluation of hearing using auditory steady-state responses |
JP3482465B2 (ja) | 2001-01-25 | 2003-12-22 | 独立行政法人産業技術総合研究所 | モバイルフィッティングシステム |
JP4145507B2 (ja) | 2001-06-07 | 2008-09-03 | 松下電器産業株式会社 | 音質音量制御装置 |
CA2452945C (en) * | 2003-09-23 | 2016-05-10 | Mcmaster University | Binaural adaptive hearing system |
WO2006123539A1 (ja) * | 2005-05-18 | 2006-11-23 | Matsushita Electric Industrial Co., Ltd. | 音声合成装置 |
CN101223571B (zh) * | 2005-07-20 | 2011-05-18 | 松下电器产业株式会社 | 音质变化部位确定装置及音质变化部位确定方法 |
JP4064446B2 (ja) | 2005-12-09 | 2008-03-19 | 松下電器産業株式会社 | 情報処理システム、情報処理装置および方法 |
2010
- 2010-07-02 WO PCT/JP2010/004359 patent/WO2011001694A1/ja active Application Filing
- 2010-07-02 JP JP2010543732A patent/JP4769336B2/ja not_active Expired - Fee Related
- 2010-07-02 CN CN2010800037241A patent/CN102265335B/zh not_active Expired - Fee Related
2011
- 2011-04-13 US US13/085,806 patent/US9149202B2/en not_active Expired - Fee Related
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH06105399A (ja) * | 1992-09-24 | 1994-04-15 | Hitachi Ltd | 聴覚補償装置 |
JPH09182193A (ja) * | 1995-12-27 | 1997-07-11 | Nec Corp | 補聴器 |
JP2007202619A (ja) * | 2006-01-31 | 2007-08-16 | National Institute Of Advanced Industrial & Technology | 脳活動解析方法および装置 |
Non-Patent Citations (1)
Title |
---|
KOTA TAKANO ET AL.: "The Study of Auditory Recognition and Event-Related Potentials", IEICE TECHNICAL REPORT, vol. 96, no. 501, 25 January 1997 (1997-01-25), pages 155 - 161 * |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2012108128A1 (ja) * | 2011-02-10 | 2012-08-16 | パナソニック株式会社 | 脳波記録装置、補聴器、脳波記録方法およびそのプログラム |
JP5042398B1 (ja) * | 2011-02-10 | 2012-10-03 | パナソニック株式会社 | 脳波記録装置、補聴器、脳波記録方法およびそのプログラム |
CN103270779A (zh) * | 2011-02-10 | 2013-08-28 | 松下电器产业株式会社 | 脑电波记录装置、助听器、脑电波记录方法以及其程序 |
US9232904B2 (en) | 2011-02-10 | 2016-01-12 | Panasonic Intellectual Property Management Co., Ltd. | Electroencephalogram recording apparatus, hearing aid, electroencephalogram recording method, and program thereof |
WO2013017169A1 (en) * | 2011-08-03 | 2013-02-07 | Widex A/S | Hearing aid with self fitting capabilities |
WO2013161189A1 (ja) * | 2012-04-24 | 2013-10-31 | パナソニック株式会社 | 補聴器利得決定システム、補聴器利得決定方法、およびコンピュータプログラム |
JPWO2013161189A1 (ja) * | 2012-04-24 | 2015-12-21 | パナソニックIpマネジメント株式会社 | 補聴器利得決定システム、補聴器利得決定方法、およびコンピュータプログラム |
US9712931B2 (en) | 2012-04-24 | 2017-07-18 | Panasonic Intellectual Property Management Co., Ltd. | Hearing aid gain determination system, hearing aid gain determination method, and computer program |
Also Published As
Publication number | Publication date |
---|---|
US9149202B2 (en) | 2015-10-06 |
US20110188664A1 (en) | 2011-08-04 |
JPWO2011001694A1 (ja) | 2012-12-13 |
CN102265335A (zh) | 2011-11-30 |
CN102265335B (zh) | 2013-11-06 |
JP4769336B2 (ja) | 2011-09-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP4769336B2 (ja) | 補聴器の調整装置、方法およびプログラム | |
JP4690507B2 (ja) | 語音明瞭度評価システム、その方法およびそのプログラム | |
JP4638558B2 (ja) | 語音明瞭度評価システム、その方法およびそのコンピュータプログラム | |
JP5144835B2 (ja) | うるささ判定システム、装置、方法およびプログラム | |
Divenyi et al. | Audiological correlates of speech understanding deficits in elderly listeners with mild-to-moderate hearing loss. I. Age and lateral asymmetry effects | |
JP5002739B2 (ja) | 聴力判定システム、その方法およびそのプログラム | |
JP5042398B1 (ja) | 脳波記録装置、補聴器、脳波記録方法およびそのプログラム | |
Lawson et al. | Speech audiometry | |
JP5144836B2 (ja) | 語音聴取の評価システム、その方法およびそのプログラム | |
US8849391B2 (en) | Speech sound intelligibility assessment system, and method and program therefor | |
WO2021035067A1 (en) | Measuring language proficiency from electroencephelography data | |
RU2743049C1 (ru) | Способ доврачебной оценки качества распознавания речи, скрининговой аудиометрии и программно-аппаратный комплекс, его реализующий | |
JP2007114631A (ja) | 情報処理装置、情報処理方法、およびプログラム | |
Plante-Hébert et al. | Electrophysiological Correlates of Familiar Voice Recognition. | |
Liu et al. | Psychometric functions of vowel detection and identification in long-term speech-shaped noise | |
Kurkowski et al. | Phonetic Audiometry and its Application in the Diagnosis of People with Speech Disorders | |
Yamamoto et al. | GESI: Gammachirp Envelope Similarity Index for Predicting Intelligibility of Simulated Hearing Loss Sounds | |
Faulkner et al. | The TIDE project OSCAR | |
Kumar et al. | Speech Identification Test in Telugu: Considerations for Sloping High Frequency Hearing Loss | |
Isa et al. | Auditory Perception and Comprehensibility Among Selected Hard of Hearing English as Second Language Speakers in Kano State, Nigeria.: Auditory Perception and Comprehensibility Among Selected Hard of Hearing English as Second Language Speakers in Kano State, Nigeria. | |
Burgdorf | Ling-6 sounds as a hearing screening tool | |
Gengel et al. | Research on Frequency Transposition for Hearing Aids. Final Report. | |
Hiroya et al. | Japanese native speakers discriminate English vowel formant frequencies better than English native speakers |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 201080003724.1 Country of ref document: CN |
ENP | Entry into the national phase |
Ref document number: 2010543732 Country of ref document: JP Kind code of ref document: A |
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 10793866 Country of ref document: EP Kind code of ref document: A1 |
NENP | Non-entry into the national phase |
Ref country code: DE |
122 | Ep: pct application non-entry in european phase |
Ref document number: 10793866 Country of ref document: EP Kind code of ref document: A1 |