WO2009156145A1 - Dispositif et procédé d'aide auditive (Hearing aid device and method) - Google Patents


Publication number
WO2009156145A1
Authority
WO
WIPO (PCT)
Prior art keywords
wearer
acoustic signal
hearing aid
optical information
acoustic
Application number
PCT/EP2009/004567
Other languages
German (de)
English (en)
Inventor
Simon Thiele
Original Assignee
Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V.
Application filed by Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. filed Critical Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V.
Publication of WO2009156145A1

Classifications

    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 21/00 Teaching, or communicating with, the blind, deaf or mute
    • G09B 21/009 Teaching or communicating with deaf persons
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61F FILTERS IMPLANTABLE INTO BLOOD VESSELS; PROSTHESES; DEVICES PROVIDING PATENCY TO, OR PREVENTING COLLAPSING OF, TUBULAR STRUCTURES OF THE BODY, e.g. STENTS; ORTHOPAEDIC, NURSING OR CONTRACEPTIVE DEVICES; FOMENTATION; TREATMENT OR PROTECTION OF EYES OR EARS; BANDAGES, DRESSINGS OR ABSORBENT PADS; FIRST-AID KITS
    • A61F 11/00 Methods or devices for treatment of the ears or hearing sense; Non-electric hearing aids; Methods or devices for enabling ear patients to achieve auditory perception through physiological senses other than hearing sense; Protective devices for the ears, carried on the body or in the hand
    • A61F 11/04 Methods or devices for enabling ear patients to achieve auditory perception through physiological senses other than hearing sense, e.g. through the touch sense
    • G PHYSICS
    • G02 OPTICS
    • G02C SPECTACLES; SUNGLASSES OR GOGGLES INSOFAR AS THEY HAVE THE SAME FEATURES AS SPECTACLES; CONTACT LENSES
    • G02C 11/00 Non-optical adjuncts; Attachment thereof
    • G02C 11/06 Hearing aids
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R 25/00 Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R 25/40 Arrangements for obtaining a desired directivity characteristic
    • H04R 25/407 Circuits for combining signals of a plurality of transducers
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R 29/00 Monitoring arrangements; Testing arrangements
    • H04R 29/008 Visual indication of individual signal levels

Definitions

  • The present invention relates to a hearing aid device, e.g. a hearing aid for the deaf or hearing impaired.
  • A visual hearing aid in the form of spectacles has been described: an aid to speech perception for deaf and hard-of-hearing people that stimulates the visual channel with a visualization of the sound, so that the wearer of the glasses could learn sound and speech perception via the visual channel.
  • A representation is favored in which the volume is mapped to brightness and a frequency-to-location assignment is used.
  • Two high-performance microphones are mounted, preferably at the rear of the temples, so that when the spectacles are worn they are as close as possible to the ear canal, i.e. at the person's natural sound-receiving opening. A stereo mode in connection with a spectacle-shaped display is also described, in which the sound from the right and left microphones is processed separately for each eye, i.e. right for right and left for left.
  • US 4,972,486 also describes a pair of glasses in which icons representing automatically recognized phonemes are displayed. It also shows that the display can be superimposed on the normal field of view.
  • The former approach has the disadvantage that the stereo operation proposed there requires the brain to analyze the visualized sound and speech perception so precisely in real time that a spatial impression is also created. Even if such an externalization is at all achievable, it obviously demands a high effort from the wearer of the glasses, which is likely to tire the wearer quickly, so that a visualized "directional hearing" is scarcely possible, if at all.
  • The latter two solutions do not address visualized or tactile "directional hearing" at all. It would therefore also be desirable to have a hearing aid scheme that allows or facilitates direction-sensitive "listening".
  • The object of the present invention is to provide a hearing aid device that can be used by a wider group of people, can be used with less effort and/or fewer dangers, and/or enables a more cost-effective implementation, while also facilitating directional sensitivity for sound or making it possible in the first place.
  • One finding of the present invention is that a hearing improvement can be achieved with less effort, with less danger in application and possibly with broader applicability if no attempt is made to push the residual hearing ability of the person concerned to its limits, or to address the otherwise unused nerve endings of the auditory sense via an implant, but instead acoustic signals are detected and used to address one and/or more other perceptual senses of the person concerned, such as the visual, haptic, olfactory or gustatory perception, and thus to help the user to improved hearing via this "detour".
  • A further realization of the present invention is that the human brain is able to learn complex relationships and to form associations between the most diverse perceptions, so that it is in particular able to acquire speech information in a way other than via the auditory organ of perception. Especially under permanent stimulation of one or more of the other, non-auditory senses as a function of acoustic signals, the brain may be able to extract speech information from this stimulation.
  • A further realization of the present invention is that one possibility for the non-invasive but permanent stimulation of a perception organ other than the auditory one is to use a pair of spectacles as the basis for stimulating visual perception as a function of acoustic signals.
  • The wearing of glasses is accepted in society, and a wearer of glasses therefore does not stand out unpleasantly.
  • The acoustically dependent stimulation of visual perception via a pair of spectacles therefore permits permanent application and thus makes it possible, in particular, for the human brain to adapt to the acquisition of language by means of the optical stimulation.
  • The absence of any need for an invasive procedure opens up the possibility of use in a larger group of people, and does so without possible side effects or dangers.
  • In embodiments, the device for detecting the acoustic signal comprises a plurality of acoustic sensors which, projected into the horizontal plane while the user is standing upright, are offset from one another. A direction from which the acoustic signal originates is determined from the output signals of the acoustic sensors, and this direction is used to vary the stimulation of the other senses, so that the user can infer the direction of the acoustic signal from the variation. In this way it is possible for the user to experience the direction of acoustic signals despite hearing impairment or even deafness.
  • In embodiments, the output signals of a plurality of acoustic sensors are evaluated in order to determine the direction from which the acoustic signal originates, and a directional filtering is then performed depending on the detected direction, for example to increase the signal-to-noise ratio of the detected acoustic signal, the correspondingly filtered acoustic signal being used to stimulate the other senses of perception.
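The direction determination from offset sensors can be illustrated as a time-difference-of-arrival estimate between two microphone signals. The following is a minimal sketch only; the patent does not prescribe a concrete algorithm, and the function name, sampling rate and microphone spacing used here are assumptions:

```python
import numpy as np

def estimate_direction(left, right, fs, mic_distance, c=343.0):
    """Estimate the angle (degrees, relative to the sagittal axis) of a
    sound source from two microphone signals via the lag of the
    cross-correlation peak (time difference of arrival).
    Illustrative only; names and parameters are assumptions."""
    corr = np.correlate(left, right, mode="full")
    lag = int(np.argmax(corr)) - (len(right) - 1)  # delay in samples
    tau = lag / fs                                 # delay in seconds
    # clip to the physically possible range before taking the arcsine
    s = np.clip(c * tau / mic_distance, -1.0, 1.0)
    return float(np.degrees(np.arcsin(s)))
```

With microphones spaced, say, 15 cm apart at the lateral ends of the spectacle frame, a source 30 degrees off the sagittal axis produces a delay of roughly 0.2 ms, which such an estimator can resolve at common audio sampling rates.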
  • In embodiments, the optical information faded into the view of the wearer is color-coded depending on the detected direction; the acoustic sensors are arranged, for example, at the lateral ends of the spectacle frame.
  • A further realization of the present invention is that, using one or more of the above-mentioned approaches, directional sensitivity for sound can also be facilitated, such as for a police officer, avalanche searcher or the like during an operation, or even made possible in the first place, as in the case of a deaf person.
  • Direction-indicating information is therefore superimposed into the view of a spectacle wearer.
  • The direction determination can, as mentioned above, be made possible by an appropriate arrangement of microphones.
  • The insertion of the direction-indicating information can also take place in addition to the insertion of the optical information dependent on the detected acoustic signal.
  • In embodiments, vibrators or other haptic actuators are arranged on earrings or on hearing aids for binaural use and are driven differently depending on the detected direction from which a detected sound signal comes, e.g. more strongly on the side from which the signal comes. For example, the frequency of a pulse-like drive could be used to indicate an angle relative to the sagittal axis.
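The direction-dependent drive of a left and a right haptic actuator described above can be sketched as a simple mapping from the detected angle to drive levels and a pulse rate. This is a hypothetical coding; the value ranges are invented, as the patent leaves the concrete coding open:

```python
def haptic_drive(angle_deg):
    """Map a detected direction (angle to the sagittal axis in degrees,
    -90 = hard left, +90 = hard right) to drive levels for a left and a
    right vibrator plus a pulse rate that encodes the angle magnitude.
    Purely illustrative; the patent does not specify this mapping."""
    angle = max(-90.0, min(90.0, angle_deg))
    right = (angle + 90.0) / 180.0            # stronger on the source side
    left = 1.0 - right
    pulse_hz = 1.0 + 9.0 * abs(angle) / 90.0  # 1..10 Hz encodes |angle|
    return left, right, pulse_hz
```

A source straight ahead then drives both vibrators equally at the slowest pulse rate, while a source to one side shifts the intensity toward that side and raises the pulse rate.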
  • FIG. 2 shows a block diagram of a hearing aid device according to a further exemplary embodiment
  • FIG. 3 shows a hearing aid device according to another exemplary embodiment
  • Figure 4 is a schematic representation of a hearing aid glasses according to an embodiment
  • FIG. 5 shows an exemplary representation of a spectrogram of an acoustic signal
  • Figure 6 is an exemplary representation of a temporal intensity curve of an acoustic signal
  • FIG. 7 shows a schematic drawing of a field of view of a spectacle wearer with faded-in optical information dependent on detected acoustic signals according to an embodiment
  • FIG. 8 shows a three-panel image of a hearing aid with multiple acoustic sensors according to an embodiment
  • FIG. 9 shows a block diagram of a hearing aid device with a plurality of acoustic sensors according to an exemplary embodiment
  • FIG. 10 shows a block diagram of a hearing aid device with a plurality of acoustic sensors according to a further exemplary embodiment.
  • FIGS. 11a and 11b are schematic drawings of a field of view of a spectacle wearer with direction-indicating information displayed according to various embodiments;
  • FIG. 12 shows a schematic illustration of a hearing aid device that is integrated in earrings, according to an exemplary embodiment
  • FIG. 13 shows a schematic illustration of a hearing aid device which allows directional sensitivity for hearing aids, according to an exemplary embodiment.
  • FIG. 1 shows a hearing aid device 10 with a detection device 12 for detecting acoustic signals and a stimulation device 14.
  • the stimulation device 14 is coupled to the detection device 12 in order to receive the detected acoustic signals from the same.
  • The stimulation device 14 uses the detected acoustic signals to stimulate perception senses 16 of a user 20 other than the auditory perception 18, depending on the detected acoustic signals.
  • The person 20 is thereby allowed to "experience" an acoustic signal 22 originating from an acoustic source 24, namely via the detour of the acoustic detection device 12 and the stimulation device 14, in that the latter converts the acoustic signal 22 detected by the detection device 12 into a stimulation of one or more of the other perception senses 16 in a suitable manner.
  • The brain of the user 20 may learn speech recognition on the basis of this "detour stimulation".
  • The other perception senses 16 differ from the auditory perception 18 and include, for example, the visual, haptic, olfactory or gustatory perception of the user 20.
  • the detection device 12 and the stimulation device 14 can be accommodated in a pair of glasses which the user 20 can wear on a daily basis.
  • the stimulation device 14 comprises, for example, a piezo-element which is intended to be worn or fastened to the skin of the user.
  • an implantation would be possible.
  • the use of combinations of detours is also conceivable.
  • the detection device 12 may include one or more acoustic sensors.
  • The detection device 12 can be communicatively connected to the stimulation device 14 by wire or wirelessly.
  • In embodiments, the detection device 12 is carried by the user 20 elsewhere, e.g. on the ear or in the ear canal itself.
  • FIG. 2 shows that the detection device 12 can comprise one or more acoustic sensors 12a or 12b and that the stimulation device can optionally also include a processing device 14b in addition to an actuator 14a.
  • The actuator 14a comprises, for example, an LCD, OLED, LED or CCD display whose image is imaged onto the retina of the user 20 via suitable optics, or which is driven in such a way that, when the user's eye is accommodated to a distant object, a sharp image of the optical information dependent on the detected acoustic signal results on the user's retina despite the display's positioning in the lens plane.
  • In the case of stimulating the user's haptic perception, the actuator 14a includes, for example, a probe tip, e.g. a piezoelectrically operated probe tip.
  • The actuator 14a may, for example, consist of an array of piezo elements. Electrical stimulation of the skin is also possible.
  • FIG. 2 shows by way of example also the possibility that both the output signal of each acoustic sensor 12a or 12b and the input signal or control signal of the actuator 14a are digital.
  • The actuator 14a is designed so that it allows a maximum number of actuator adjustment options per unit time, so that overall it can be driven at a maximum uncompressed data rate R2.
  • In embodiments, the two data rates are the same.
  • The data rates can also be different.
  • The data rate R2 is, for example, more than 10 percent of the sensor data rate R1.
  • a processing device 14b can be arranged between the actuator 14a and the detection device 12.
  • This processing device 14b can be a spectrally weighting filter which, for example, attenuates less important acoustic frequencies.
  • the actuator 14a may be adjusted with a control signal corresponding to the temporal intensity profile of the acoustic signal 22 or a filtered version thereof.
  • the processing device 14b can be designed such that the mapping of possible sensor output signals of the sensor or sensors 12 to the maximum number of actuator setting options is surjective.
  • the processing means 14b may also be arranged to convert the amount of information contained in the acoustic signal 22 into a more compact signal based on semantic analysis.
  • the processing means comprises a feature extractor 30, a database 32, and optionally a processor 34.
  • The feature extractor 30 is adapted to extract features from the incoming detected acoustic signal and to look up these features in a table 36 of the database 32, which has entries 38 associating features 40 with information 42.
  • The features 40 may be phonemes, in which case the information 42 associated via the look-up table 36 may be the corresponding phoneme indices.
  • A further look-up table could be provided in which the sequence of phonemes thus obtained is looked up and which assigns phoneme sequences a sequence of words in alphanumeric representation.
  • In embodiments, the feature extractor 30 forms a fingerprint from the detected acoustic signal, such as a piece of music or an alarm signal, looks up the fingerprint in the look-up table, and thereby obtains information 42 indicating what kind of audio signal it is, such as a siren, etc., or which piece of music the acoustic signal is.
  • the optionally downstream processor 34 may be provided to appropriately translate the information 42 obtained by the lookup into the stimulation of the non-auditory sense of perception.
  • the processor 34 may comprise, for example, a graphics processor which, for example, fades in an alphanumeric character which has been read from the database 32 onto the retina of the hearing impaired person.
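The look-up chain of Fig. 3 (features 40 mapped to information 42 via entries 38, with a second table turning phoneme sequences into words) can be sketched as follows. All table entries and names here are invented examples, not data from the patent:

```python
# Hypothetical tables: a phoneme-to-index table (entries 38 associating
# features 40 with information 42) and a second table mapping phoneme
# index sequences to words in alphanumeric representation.
PHONEME_INDEX = {"d": 0, "a": 1, "s": 2}
WORD_TABLE = {(0, 1, 2): "das"}

def features_to_word(phonemes):
    """Look up extracted phoneme features and return the matching word,
    or None if the sequence has no entry in the second table."""
    indices = tuple(PHONEME_INDEX[p] for p in phonemes)
    return WORD_TABLE.get(indices)
```

A graphics processor as in the description would then render the returned word as alphanumeric characters in the wearer's view.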
  • FIG. 4 shows a hearing aid device which has a pair of spectacles 50, an acoustic detection device 52 with, for example, a plurality of acoustic sensors 52a, 52b and 52c and an insertion device 54.
  • The acoustic sensors 52a-52c are distributed over the spectacles 50.
  • The insertion device 54 is designed such that, when an eye of the user (not shown) is accommodated through the spectacle lenses 56 of the spectacles 50 onto a focal plane 58 behind the spectacle lenses 56, it is capable of fading in for the user's eye optical information 60 that suitably depends on the acoustic signal detected by the acoustic sensors 52a-52c.
  • FIG. 4 shows by way of example that the optical information 60 can be a temporal intensity profile of the acoustic signal, this and alternative exemplary embodiments being explained in more detail below.
  • For this purpose, the fader 54 uses the detected acoustic signal.
  • the hearing aid glasses according to FIG. 4 make it possible to perceive acoustic signals 22 with the help of the eye.
  • The insertion device optically displays the acoustic signal 22, as recorded by the device 52 or the acoustic sensors 52a-52c, on an inner side of the lenses 56, for example with the aid of an intensity-over-time signal, as explained with reference to Fig. 5.
  • the acoustic sensors 52a-52c record an acoustic signal and pass it on to the insertion device, which then reproduces the signal in a manner that makes sense for the recipient or wearer of the spectacles 50 on the inside of the spectacles 50.
  • This is shown by way of example for a time-over-intensity signal for the German-language phrase "The Free Encyclopedia".
  • Suitable as a basis are spectacles such as those described on the website www.lumos-optical.com.
  • The various possible variations described above with reference to the preceding figures also apply to the embodiment of Figure 4.
  • Instead of the three acoustic sensors, only one acoustic sensor may be used, such as only the centrally located acoustic sensor 52b, or alternatively only two acoustic sensors, such as the acoustic sensors 52a and 52c located at the lateral ends of the spectacle frame.
  • the acoustic sensors 52a-52c may be wired or wirelessly coupled to the fader 54.
  • the insertion device 54 can be completely accommodated in the spectacles 50 or only partially.
  • The fader 54 includes a display and, for example, an optic which images the image of the display sharply onto the retina of the wearer.
  • The fade-in device 54 may have a display which is transparent, for example in the non-active state, and is arranged in the spectacle-lens plane.
  • the fader 54 includes a display on one of the major sides of the lenses 56, the display being suitably driven to display a crisp image of the optical information that depends on the detected acoustic signals.
  • The fader 54 may optionally include a processing device, a look-up table and/or a processor, the latter also being able to be arranged outside the glasses in order to be worn by the user elsewhere.
  • The processor may be configured to execute software that implements the functionality described herein.
  • The software may be stored in an associated memory, optionally together with data such as the aforementioned look-up tables or the reference representations mentioned below; this memory may likewise be located in or external to the spectacles and may be implemented as rewritable, write-once or read-only.
  • For the visual representation of detected acoustic signals by the insertion device 54 there are various possibilities, embodiments of which are presented below.
  • One possible representation would be the use of speech recognition, such as by means of speech-recognition software, in the fade-in device 54, with the aid of which words obtained by speech analysis and reflecting the content of the acoustic signal are projected onto the spectacles or imaged onto the retina of the spectacle wearer.
  • Fig. 5 shows an example of such a time-over-intensity depiction.
  • From the temporal representation of the intensity profile of the acoustic signal, the brain of the spectacle wearer is able, through a learning process, to recognize different linguistic patterns in the acoustic signals and finally to form them into words and sentences. With this depiction, the brain of the user or spectacle wearer would thus learn to recognize words on the basis of the illustrated temporal intensity representation, just as one learns a new language.
  • Fig. 5 shows, for example, the temporal intensity profile diagram for four consecutively articulated, similar-sounding German words (rendered in translation as /delusion/, /course/, /tooth/ and /Lahn/), where it can clearly be seen that the individual temporal intensity progressions differ clearly despite the similarity of the phonemes.
  • Figure 5 thus illustrates that similar-sounding words can be distinguished in this approach.
  • An advantage of the temporal intensity profile representation according to Fig. 5 is also that the spectacle wearer can perceive sounds other than words, such as a bang or a horn. For hearing-impaired persons who use glasses according to Figure 4, this means a higher level of safety in dangerous situations.
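The time-over-intensity curve of Fig. 5 corresponds to a short-time intensity envelope of the detected signal. A minimal sketch, assuming an RMS measure and a freely chosen frame length (neither is specified by the patent):

```python
import numpy as np

def intensity_profile(signal, frame_len):
    """Short-time RMS intensity of an acoustic signal: one value per
    non-overlapping frame, yielding the kind of time-over-intensity
    curve that could be faded into the wearer's view."""
    n = len(signal) // frame_len
    frames = np.reshape(signal[:n * frame_len], (n, frame_len))
    return np.sqrt(np.mean(frames ** 2, axis=1))
```

The resulting one-dimensional curve is cheap to compute and to render, which fits the low-information-density display discussed here.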
  • Another example of a representation which the insertion device 54 superimposes into the view of the spectacle wearer depending on the detected acoustic signals is shown in Fig. 6.
  • Here, for example, a frequency-over-time representation of the acoustic signal is superimposed on the user's view, where, for example, the degree of shading or the color at a specific position of the spectrogram reflects the intensity of the signal.
  • The advantage of the spectrogram over the temporal intensity profile representation according to Fig. 5 is its higher information density, which may allow the recipient a more accurate decoding of the semantic content of the acoustic signal than the mere display of the temporal intensity profile according to Fig. 5.
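The spectrogram of Fig. 6 (frequency over time, with shading or color encoding intensity) is in essence a magnitude short-time Fourier transform. A minimal sketch with assumed frame and hop sizes, as the patent does not fix these parameters:

```python
import numpy as np

def spectrogram(signal, frame_len, hop):
    """Magnitude spectrogram via a Hann-windowed short-time Fourier
    transform; rows are frequency bins, columns are time frames.
    The display would map these magnitudes to shading or color."""
    window = np.hanning(frame_len)
    frames = [signal[i:i + frame_len] * window
              for i in range(0, len(signal) - frame_len + 1, hop)]
    return np.abs(np.fft.rfft(np.asarray(frames), axis=1)).T
```

For a 100 Hz tone sampled at 1 kHz with 100-sample frames, the energy concentrates in frequency bin 10, i.e. the row corresponding to 100 Hz.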
  • the insertion device fades the visual representation of the detected acoustic signal into the view of the spectacle wearer in such a way that the smallest possible influence on the visual field of the spectacle wearer is ensured.
  • For example, the insertion device places the optical information signal at the bottom of the field of view, as indicated in Fig. 7, which shows an exemplary field of view limited by the spectacle frame, with a field 70 in the lower section into which the fader 54 fades the optical information.
  • The fader 54, for example, inserts into the field 70 alphanumeric characters representing the content of the acoustic signal, a temporal intensity profile of the acoustic signal as in Fig. 5, or a spectrogram of the acoustic signal as in Fig. 6, either statically with an intermittent abrupt change of content in the field 70, or scrolling in from one side of the field 70 so that the optical representation disappears at the opposite end of the field 70, the scrolling speed possibly corresponding to the normal progression of time.
  • the fade-in device 54 could be designed to display optical information obtained continuously from the detected audio signal in the field 70.
  • The fader 54 could be configured to allow the user to switch between the fade-in mode of Fig. 7 and a standard glasses mode in which the field of view is not affected by the field 70 and the fade-in of the optical information is inhibited, for example by manual actuation of a corresponding input device on the spectacles, such as a button or the like (not shown).
  • the fader 54 could be configured to automatically switch between such a normal and fade mode, for example by analyzing the detected acoustic signal itself. Further, a fade-in mode other than that shown in FIG. 7 would also be possible.
  • Alternatively, the optical information representing the content of the acoustic signal may be superimposed so as to occupy the entire field of view of the spectacle wearer, in which case the spectacle wearer sees nothing in fade-in mode except the optical representation of the acoustic signal, but can, for example, manually switch to normal glasses mode whenever he wants.
  • With the hearing-aid glasses according to Figure 4, any sounds of nature, road traffic and other potentially dangerous environments could be perceived and the danger thus averted.
  • the device of Figure 4 could also be used, for example, to facilitate the learning of languages.
  • For this purpose, the insertion device 54 can, for example, be operated in a slightly modified operating mode.
  • The fade-in device fades in, for example, an optical representation of a previously detected word, such as a word articulated by a mother, until the child or spectacle wearer has spoken the word sufficiently precisely, e.g. in terms of form and intensity.
  • the user could learn language despite possible hearing damage or hearing impairment.
  • The learning mode just indicated could, for example, be configured as follows.
  • the insertion device 54 could be designed to visually represent acoustic signals detected in the normal operating mode in one of the manners described above. In particular, it could be designed to use one of the time-resolved representations according to, for example, FIGS. 5 and 6 in order to optically convert the acoustic signals.
  • The fader 54 could then be switched into a learning mode in which it visually displays acoustic utterances of the user next to, or superimposed on, pre-stored optical representations, so that the user can recognize with his eye deviations between the pre-stored representation and the visual representation of what he has voiced or spoken.
  • The pre-stored optical representation may have been obtained in a recording mode in which the fader 54 is, for example, operable in a similar manner as in the learning mode.
  • A one-time factory storage of such reference representations could also be provided.
  • a temporal reference intensity profile could thus be displayed, for example in the view of the user during the learning mode, for example above or below the currently recorded temporal intensity profile.
  • the temporal alignment of the two representations could be realized by the fader 54 by correlation of the two intensity profiles, in which, for example, it selects the temporal relative position which leads to the best correlation.
  • In embodiments, the superimposition device 54 superimposes the two intensity profiles and, for example, highlights the differences in color or otherwise. In this way, the user can tell whether he spoke too loudly or too indistinctly.
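The correlation-based temporal alignment of the recorded profile with the stored reference, as described above, can be sketched as picking the relative shift with the highest cross-correlation. The function name and profile format are assumptions:

```python
import numpy as np

def best_shift(reference, spoken):
    """Return the relative shift (in frames) of the user's intensity
    profile that correlates best with the stored reference, as the
    fade-in device could determine before overlaying the two curves.
    Positive means the utterance starts later than the reference."""
    corr = np.correlate(spoken, reference, mode="full")
    return int(np.argmax(corr)) - (len(reference) - 1)
```

The device would then shift one curve by this amount before superimposing the two profiles and highlighting their differences.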
  • In embodiments, the fade-in device 54 first displays a word to be repeated in alphanumeric characters and then, as just described, displays the visual representation of the user's utterance simultaneously with the previously stored reference representation. In this case, the fade-in device 54 provides, for example, for a number of repetitions before proceeding in this way to the next word.
  • In a slightly modified learning mode, it would also be possible for the fader 54 to present the user with various visual representations of different acoustic signals which the user is then to recognize.
  • For example, the fade-in device 54 successively fades in, optically, different semantically meaningful but not word-wise articulated signals, such as signal horns, car horns, sirens or other characteristic unspoken audio signals, or words, which the user is to recognize, the user then expressing his guess.
  • The fade-in device 54 could then subsequently fade the correctness or incorrectness of the guess into the user's view, for example by semantically analyzing, e.g. in accordance with Fig. 3, the user's spoken answer and comparing it with the word or signal actually shown in the optical representation.
  • A user could also first learn "optical hearing" by having the fade-in device sequentially fade in pre-stored representations of words which the user should then recognize, similar to vocabulary learning.
  • The user could thus be helped, by means of software running on the hearing aid device, to learn how to use the hearing-aid glasses.
  • A learning process could be realized which guides the user to learn words, to learn to estimate volume, and to detect danger signals in, for example, background noise.
  • In Fig. 8 this is illustrated, for example, for the case of two acoustic sensors, indicated by the reference symbol 86, which are arranged on the spectacle frame 80, consisting of the frame front 82 and the temples 84, at the lateral ends of the frame front 82.
  • the acoustic sensors 86 could also be arranged differently than shown in FIG.
  • The acoustic sensors 86 are preferably arranged as shown, so that, when projected into the horizontal plane of the spectacle wearer (standing upright), they are offset from one another, as can be seen from the top view at the bottom of the three-panel image of Fig. 8.
  • In this projection, the acoustic sensors 86 are arranged offset from one another along the transverse axis 88, so that acoustic signals originating from a direction 90 inclined to the sagittal axis 92 reach the acoustic sensors 86 with different phase offsets.
  • Alternatively, the acoustic sensors 86 could also be arranged in the region of those ends of the temples 84 which face away from the frame front 82 and are thus closer to the ears of the spectacle wearer.
  • However, the shadowing there also means an increased effort for direction detection, since this shadowing must in any case be taken into account in the direction detection.
  • The mounting on the spectacle frame shown in Fig. 8 can therefore be a simplification in terms of the computing power necessary for direction detection, which can provide cost and/or power-consumption advantages, so that a battery charge can last longer.
  • Figure 9 shows one way in which the output signals of several acoustic sensors could be exploited.
  • the insertion device 54 comprises a direction detector 100 and a processing device 102. Both are connected to the acoustic sensors 86.
  • The direction detector 100 is configured to determine, from the output signals of the acoustic sensors 86, the direction 90 from which the acoustic signal detected by the acoustic sensors 86 originates, the processing device 102 using the determined direction to drive an actuator 104, such as the aforementioned display, depending on the determined direction.
  • In this way, the spectacle wearer obtains information about where, or from which direction, the acoustic signal originates that is described by the optical information which the insertion device 54 inserts into the view of the spectacle wearer.
  • Also possible is the insertion of a dedicated symbol for indicating the direction from which the detected audio signal, or e.g. its loudest portion, originates, such as an arrow or a color-coded background in the optical information indicating the detected acoustic signal.
  • Fig. 11a shows an exemplary embodiment of an insertion into the field of view of the spectacle wearer, in which the detected direction is indicated by an arrow 140 that is superimposed on the user's view and points in the direction from which the detected acoustic sound, such as its loudest contribution, comes.
  • in Fig. 11a it is indicated by way of example that the arrow 140 points in the direction of a door 142 that the spectacle wearer sees.
  • Fig. 11a further indicates that, according to the exemplary embodiment of FIG. 3, the acoustic signal may have been analyzed semantically in order to detect which type of noise is involved, for example distinguishing between rippling, crackling, whistling, and other noises.
  • the detected type of noise can be indicated by a corresponding symbol 144 and/or by a corresponding label 146 on the arrow 140, as illustrated for both cases in Fig. 11a.
  • Fig. 11b shows yet another alternative, according to which the detected direction is indicated to the spectacle wearer not by an arrow but by highlighting or marking the object or point in the wearer's view from which the acoustic signal comes.
  • Fig. 11b illustratively shows a case where the spectacle wearer sees in front of him a meadow 150 followed by bushes 152, a bush 154 being highlighted by a mark 156, such as a border or a flashing portion of the field of view corresponding to the bush 154, to indicate that the acoustic signal or noise detected by the acoustic sensors originates from this direction.
  • FIG. 11b again shows that the type of noise could be indicated by a symbol 158, here illustratively for a crackling noise.
  • FIGS. 11a and 11b further make it clear that spectacles according to, for example, FIG. 8 with acoustic sensors 86 can also be used merely to facilitate, or to enable in the first place, a directional sensitivity for sound in the spectacle wearer, without visually passing on further-analyzed sound information to the wearer.
  • in other words, the spectacle wearer does not necessarily have to be offered speech perception.
  • even a mere indication of the direction from which a particular sound comes is helpful. This applies to deaf or hard-of-hearing people as well as to people deployed in dangerous, life-threatening, or similar situations, such as police officers, avalanche searchers, rescue workers, or the like.
  • the additional semantic indication of the type of sound is likewise optional, but it helps a user who, even if able to hear, may need his concentration for things other than distinguishing different sounds.
  • FIG. 9 thus shows that it is possible, with a plurality of acoustic sensors mounted on a pair of glasses, to detect the phase shift of the acoustic signal and thus to direction-code incoming acoustic signals; for example, acoustic signals arriving from the front could be coded blue, those arriving from behind red, those arriving from the left green, and those arriving from the right yellow, whereby the recipient or spectacle wearer would always know from which direction the noises represented in the visual field are coming.
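The four-color scheme just described amounts to a small mapping from the detected arrival angle to a display color. The function name and the exact 90° sector boundaries are illustrative assumptions; the text only names the front/back/left/right color assignment.

```python
def direction_to_color(angle_deg):
    """Map an arrival direction to the display colour of the fade-in:
    front = blue, right = yellow, behind = red, left = green.
    0 degrees = straight ahead; angles increase clockwise."""
    a = angle_deg % 360.0
    if a < 45.0 or a >= 315.0:
        return "blue"    # arriving from the front
    if a < 135.0:
        return "yellow"  # arriving from the right
    if a < 225.0:
        return "red"     # arriving from behind
    return "green"       # arriving from the left
```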
  • FIG. 10 shows by way of example a further alternative and/or additional way to benefit from the use of a plurality of acoustic sensors 86.
  • here, the insertion device 54 may comprise a direction detector 110, a directional filter 112, and optionally a processing device 114.
  • direction detector 110 and directional filter 112 are coupled to the outputs of the acoustic sensors 86, the direction detector 110 detecting the direction from which the acoustic signal originates, while the directional filter 112 filters the output signals of the acoustic sensors 86 so that the signal-to-noise ratio of the acoustic signal originating from the detected direction is increased or even optimized. Based on this signal, an actuator 116 of the insertion device 54, such as a display, could optionally be driven by means of the optional processing device 114. In this way, selective noise filtering could be carried out, which would be advantageous, for example, in a noisy classroom or a pub.
  • the direction-dependent, variable noise amplification implemented by the directional filter 112 could, for example, allow the recipient or spectacle wearer to better understand a teacher, or generally a counterpart, despite background noise.
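One classical way to realize a directional filter such as 112 is a delay-and-sum beamformer: each sensor channel is time-shifted so that sound from the detected direction adds coherently while sound from other directions partially cancels. The sketch below is a simplified whole-sample illustration under assumed names and a 2-D geometry, not the patent's concrete filter.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def delay_and_sum(signals, sensor_positions, steer_angle_deg, sample_rate):
    """Steer a simple delay-and-sum beamformer toward `steer_angle_deg`.

    `signals` has shape (channels, samples); positions are 2-D
    coordinates in metres (x = transverse axis, y = sagittal axis).
    Shifts are rounded to whole samples for simplicity.
    """
    theta = np.radians(steer_angle_deg)
    toward_source = np.array([np.sin(theta), np.cos(theta)])
    out = np.zeros(signals.shape[1])
    for sig, pos in zip(signals, sensor_positions):
        # a wave from the steered direction reaches this sensor earlier
        # (or later) than the origin by (pos . toward_source) / c
        lead = np.dot(pos, toward_source) / SPEED_OF_SOUND
        shift = int(round(lead * sample_rate))
        out += np.roll(sig, shift)  # delay early channels to realign
    return out / len(signals)
```

Sound from the steered direction is reconstructed at full level, while off-axis sound is averaged with misaligned copies of itself, raising the signal-to-noise ratio in the way the description attributes to the directional filter.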
  • the embodiments described above provide, for example for deaf-mute people, a hitherto unprecedented possibility of hearing and speech. With the above embodiments, hearing-impaired people could lead a virtually normal life, which underlines the educational and integrative element of the above exemplary embodiments. Moreover, the embodiments illustrated above are also superior, for people with limited hearing, to the hearing-amplifying methods described in the introduction to the description, since complications that may arise due to an implant do not occur in most of the embodiments mentioned above. In road traffic and other dangerous situations, the above-described exemplary embodiments usually enable a considerable reduction of the potential danger. In background-noise situations, the embodiments described with respect to FIGS. 8 to 10 provide, for example, the possibility of selective noise filtering as described above.
  • the above embodiments show a manner in which acoustic signals, such as e.g. sounds, words, or noises, are translated into visual information, giving rise to the advantages mentioned above.
  • directional stimulation is not limited to the stimulation of visual perception, but may of course also be applied to the stimulation of other non-auditory perceptions, such as vibrations via an earring.
  • the stimulation of haptic perception at different skin sites could be performed, depending on the direction from which the acoustic signal originated.
  • two earrings, each with an acoustic detection device, could for example be coupled to one another via a wireless interface, so that the direction-dependent processing according to FIGS. 9 and/or 10 would be made possible in connection with the acoustically dependent stimulation of the earlobes.
  • each of the earrings 160a and 160b could be provided with a haptic actuator, such as a vibrator 166a and 166b, respectively, which are capable of haptically stimulating the ear 162a or 162b or at least the earlobe.
  • the stimulation could be dependent on the detected direction.
  • the ear facing away from the sound source (not shown) could, for example, be haptically stimulated with a lower level than the ear facing the sound source.
  • a frequency of pulsed excitation of the occluded and/or unoccluded ear could be used to indicate to the earring wearer 164 the relative angle to the sagittal axis at which the noise arrives.
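The level-and-pulse-rate scheme sketched in the last two points could look like the following; the concrete panning law, the 1-10 Hz pulse range, and the function name are assumptions for illustration only.

```python
import numpy as np

def haptic_cue(angle_deg):
    """Translate an arrival angle into drive levels for the left and
    right vibrators plus a pulse rate encoding the angle off the
    sagittal axis. 0 degrees = straight ahead, positive = to the right.
    The ear facing the sound source is stimulated more strongly."""
    pan = np.sin(np.radians(angle_deg))   # -1 (left) .. +1 (right)
    left_level = 0.5 * (1.0 - pan)
    right_level = 0.5 * (1.0 + pan)
    pulse_hz = 1.0 + 9.0 * abs(pan)       # 1 Hz ahead, 10 Hz fully lateral
    return left_level, right_level, pulse_hz
```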
  • FIG. 13 shows that hearing aids 180a and 180b, to be worn on the left and right ears 182a and 182b of a user 184 for binaural stimulation of the auditory nerve, may each be provided with an acoustic sensor 86 and, in addition, with haptic actuators such as vibrators 186a and 186b, respectively.
  • the user, whose damaged auditory nerve is binaurally excited via the hearing aids 180a and 180b, is very unlikely to gain directional sensitivity to sound via this auditory-nerve excitation.
  • by means of the actuators 186a, 186b, however, the user 184 could be shown the direction of the noise or acoustic signal currently present, namely bypassing the auditory nerve.
  • a corresponding processing device or a corresponding direction detector can be integrated in the earrings or hearing devices, as can the acoustic sensors and the haptic actuators.
  • the hearing aids may be in-the-ear, behind-the-ear or implantable hearing aids.
  • the above spectacle embodiments could optionally be coupled with a device that stimulates human haptic perception as a function of the detected acoustic signals. This is illustrated by way of example in FIG. 8, which shows that vibrators or other devices stimulating the haptic perception of the spectacle wearer could be arranged at that part of the temple 84 of the glasses which touches the wearer's head in the vicinity of the ears, and could be controlled depending on the acoustic signal detected by the acoustic sensors.
  • Such devices can also be arranged on nose support surfaces of the spectacle frame, as shown by reference numerals 122a and 122b.
  • such devices 120a-122b arranged on the inside of the spectacle frame, such as vibration pads, could also be controlled as a function of the detected acoustic signal in a different way than the optical insertion device.
  • for example, the processing device 54 evaluates the phase shift and/or loudness of the signals arriving from the noise source to obtain a measure of its speed and/or its distance to the recipient, or simply a measure of its loudness. Upon exceeding a loudness threshold and/or dropping below a minimum distance and/or exceeding a speed limit, the devices 120a-122b could then warn the recipient, in fact in a direction-dependent manner.
  • even a pair of glasses serving as a hearing aid with only the devices 120a-122b, that is, with haptic perception excitation alone and without optical insertion, would be an aid to the wearer and thus represents a possible embodiment.
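The threshold logic just described (loudness limit, minimum distance, speed limit) amounts to a simple predicate; the function name and threshold values below are illustrative assumptions, not values from the patent.

```python
def should_warn(loudness_db, distance_m, approach_speed_ms,
                loudness_limit_db=80.0, min_distance_m=5.0,
                speed_limit_ms=10.0):
    """Return True when the noise source crosses any configured limit,
    so that the haptic devices (e.g. 120a-122b) can warn the wearer."""
    return (loudness_db > loudness_limit_db
            or distance_m < min_distance_m
            or approach_speed_ms > speed_limit_ms)
```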
  • the scheme according to the invention can also be implemented in software.
  • the implementation can be carried out on a digital storage medium, in particular a floppy disk or a CD with electronically readable control signals, which can cooperate with a programmable computer system so that the corresponding method is performed.
  • the invention thus also consists in a computer program product with program code stored on a machine-readable carrier for carrying out the method according to the invention when the computer program product runs on a computer.
  • the invention can thus be realized as a computer program with a program code for carrying out the method when the computer program runs on a computer.

Abstract

An improvement in hearing, and even an improvement in directional sensitivity to sound, can be obtained at low cost, with lower application risk, and possibly with a broader range of indications, if no attempt is made to "excite" the residual hearing capacity of the person concerned, but instead acoustic signals are captured and used to address one or more other perceptual senses of the person concerned, for example the optical, haptic, olfactory, or gustatory sense, so as to give the user, via this "detour", an improved hearing result. In order to stimulate, durably yet non-invasively, a perceptual organ other than that of auditory perception, a pair of glasses can be used as the basis for stimulating optical perception as a function of acoustic signals.
PCT/EP2009/004567 2008-06-26 2009-06-24 Dispositif et procédé d'aide auditive WO2009156145A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE200810030404 DE102008030404A1 (de) 2008-06-26 2008-06-26 Hörhilfevorrichtung und -verfahren
DE102008030404.2 2008-06-26

Publications (1)

Publication Number Publication Date
WO2009156145A1 true WO2009156145A1 (fr) 2009-12-30

Family

ID=41020784

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2009/004567 WO2009156145A1 (fr) 2008-06-26 2009-06-24 Dispositif et procédé d'aide auditive

Country Status (2)

Country Link
DE (1) DE102008030404A1 (fr)
WO (1) WO2009156145A1 (fr)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2469323A1 (fr) * 2010-12-24 2012-06-27 Sony Corporation Dispositif d'affichage d'informations sonores, procédé d'affichage des informations sonores et programme
CN104966433A (zh) * 2015-07-17 2015-10-07 江西洪都航空工业集团有限责任公司 一种辅助聋哑人对话的智能眼镜
US9161113B1 (en) 2012-02-17 2015-10-13 Elvin Fenton Transparent lens microphone
US9980054B2 (en) 2012-02-17 2018-05-22 Acoustic Vision, Llc Stereophonic focused hearing
DE102017005129A1 (de) 2017-05-30 2018-12-06 Frank Lochmann Vorrichtung zur Konvertierung einer akustischen Richtungsinformation in eine wahrnehmbare Form
US10768738B1 (en) 2017-09-27 2020-09-08 Apple Inc. Electronic device having a haptic actuator with magnetic augmentation
US10768747B2 (en) 2017-08-31 2020-09-08 Apple Inc. Haptic realignment cues for touch-input displays
US10890978B2 (en) 2016-05-10 2021-01-12 Apple Inc. Electronic device with an input device having a haptic engine
US10936071B2 (en) 2018-08-30 2021-03-02 Apple Inc. Wearable electronic device with haptic rotatable input
US10942571B2 (en) 2018-06-29 2021-03-09 Apple Inc. Laptop computing device with discrete haptic regions
US10966007B1 (en) 2018-09-25 2021-03-30 Apple Inc. Haptic output system
US11024135B1 (en) 2020-06-17 2021-06-01 Apple Inc. Portable electronic device having a haptic button assembly
US11054932B2 (en) 2017-09-06 2021-07-06 Apple Inc. Electronic device having a touch sensor, force sensor, and haptic actuator in an integrated module
WO2023101802A1 (fr) * 2021-12-01 2023-06-08 Snap Inc. Lunettes avec détection de direction d'arrivée de son

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012113646A1 (fr) * 2011-02-22 2012-08-30 Siemens Medical Instruments Pte. Ltd. Système auditif
FR3066634A1 (fr) * 2017-05-16 2018-11-23 Orange Procede et equipement d'assistance auditive
EP3432606A1 (fr) * 2018-03-09 2019-01-23 Oticon A/s Système d'aide auditive

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5029216A (en) * 1989-06-09 1991-07-02 The United States Of America As Represented By The Administrator Of The National Aeronautics & Space Administration Visual aid for the hearing impaired
WO1999021400A1 (fr) * 1997-10-20 1999-04-29 Technische Universiteit Delft Prothese auditive comprenant un ensemble microphones
EP1083769A1 (fr) * 1999-02-16 2001-03-14 Yugen Kaisha GM & M Dispositif de conversion de la parole et procede correspondant
US20020103649A1 (en) * 2001-01-31 2002-08-01 International Business Machines Corporation Wearable display system with indicators of speakers
US20080010068A1 (en) * 2006-07-10 2008-01-10 Yukifusa Seita Method and apparatus for language training

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4972486A (en) * 1980-10-17 1990-11-20 Research Triangle Institute Method and apparatus for automatic cuing
DE3510508A1 (de) * 1985-03-22 1986-10-02 Siemens AG, 1000 Berlin und 8000 München Taktiles hoergeraet
DE10339027A1 (de) * 2003-08-25 2005-04-07 Dietmar Kremer Visuelles Hörgerät


Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10353198B2 (en) 2010-12-24 2019-07-16 Sony Corporation Head-mounted display with sound source detection
CN102543099A (zh) * 2010-12-24 2012-07-04 索尼公司 声音信息显示装置、声音信息显示方法和程序
EP2469323A1 (fr) * 2010-12-24 2012-06-27 Sony Corporation Dispositif d'affichage d'informations sonores, procédé d'affichage des informations sonores et programme
US9161113B1 (en) 2012-02-17 2015-10-13 Elvin Fenton Transparent lens microphone
US9470910B2 (en) 2012-02-17 2016-10-18 Acoustic Vision, Llc Transparent lens microphone
US9980054B2 (en) 2012-02-17 2018-05-22 Acoustic Vision, Llc Stereophonic focused hearing
CN104966433A (zh) * 2015-07-17 2015-10-07 江西洪都航空工业集团有限责任公司 一种辅助聋哑人对话的智能眼镜
US10890978B2 (en) 2016-05-10 2021-01-12 Apple Inc. Electronic device with an input device having a haptic engine
US11762470B2 (en) 2016-05-10 2023-09-19 Apple Inc. Electronic device with an input device having a haptic engine
DE102017005129A1 (de) 2017-05-30 2018-12-06 Frank Lochmann Vorrichtung zur Konvertierung einer akustischen Richtungsinformation in eine wahrnehmbare Form
US10768747B2 (en) 2017-08-31 2020-09-08 Apple Inc. Haptic realignment cues for touch-input displays
US11054932B2 (en) 2017-09-06 2021-07-06 Apple Inc. Electronic device having a touch sensor, force sensor, and haptic actuator in an integrated module
US11460946B2 (en) 2017-09-06 2022-10-04 Apple Inc. Electronic device having a touch sensor, force sensor, and haptic actuator in an integrated module
US10768738B1 (en) 2017-09-27 2020-09-08 Apple Inc. Electronic device having a haptic actuator with magnetic augmentation
US10942571B2 (en) 2018-06-29 2021-03-09 Apple Inc. Laptop computing device with discrete haptic regions
US10936071B2 (en) 2018-08-30 2021-03-02 Apple Inc. Wearable electronic device with haptic rotatable input
US10966007B1 (en) 2018-09-25 2021-03-30 Apple Inc. Haptic output system
US11805345B2 (en) 2018-09-25 2023-10-31 Apple Inc. Haptic output system
US11024135B1 (en) 2020-06-17 2021-06-01 Apple Inc. Portable electronic device having a haptic button assembly
US11756392B2 (en) 2020-06-17 2023-09-12 Apple Inc. Portable electronic device having a haptic button assembly
WO2023101802A1 (fr) * 2021-12-01 2023-06-08 Snap Inc. Lunettes avec détection de direction d'arrivée de son

Also Published As

Publication number Publication date
DE102008030404A1 (de) 2009-12-31

Similar Documents

Publication Publication Date Title
WO2009156145A1 (fr) Dispositif et procédé d'aide auditive
DE102008010515B4 (de) Schlafwarnvorrichtung
DE19549297C2 (de) Verfahren und Vorrichtung zur Beeinflussung der menschlichen Psyche
Corso Presbyacusis, hearing aids and aging
Mussoi et al. Age-related changes in temporal resolution revisited: Electrophysiological and behavioral findings from cochlear implant users
Moradi et al. Visual cues contribute differentially to audiovisual perception of consonants and vowels in improving recognition and reducing cognitive demands in listeners with hearing impairment using hearing aids
Humes Factors underlying individual differences in speech-recognition threshold (SRT) in noise among older adults
Ertmer et al. A comparison of vowel production by children with multichannel cochlear implants or tactile aids: Perceptual evidence
AT518441B1 (de) Vorrichtung zum Unterstützen des Sprach- und/oder Hörtrainings nach einer Cochlear Implantation
Billings et al. Acoustic change complex in background noise: phoneme level and timing effects
Saija et al. Visual and auditory temporal integration in healthy younger and older adults
CH709873A2 (de) Hörsystem mit anwenderspezifischer Programmierung.
Reed et al. Analytic study of the Tadoma method: improving performance through the use of supplementary tactual displays
Ifukube Sound-based assistive technology
Velmans Speech Imitation in Simulated Deafness, Using Visual Cues and Recoded'Auditory Information
Gagne et al. Simulation of sensorineural hearing impairment
DE102019218802A1 (de) System und Verfahren zum Betrieb eines Systems
Carney Vibrotactile perception of segmental features of speech: A comparison of single-channel and multichannel instruments
Coez et al. Hearing loss severity: Impaired processing of formant transition duration
CH712635A2 (de) Verfahren zum Anpassen eines Hörgerätes sowie Computerprogrammprodukt zur Durchführung des Verfahrens.
Morris et al. Effects of simulated cataracts on speech intelligibility
DE10231406A1 (de) Visuelle oder audiomäßige Wiedergabe eines Audiogrammes
Dietze Psychophysical Measurements of Temporal Integration Effects in Cochlear Implant Users
DE10063123B4 (de) Biofeedback-Verfahren zur Entspannungstherapie und Vorrichtung zur Durchführung des Verfahrens
DE102011116983A1 (de) Vorrichtung zur Abgabe eines wahrnehmbaren Erinnerungssignals an einen Patienten

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 09768994

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 09768994

Country of ref document: EP

Kind code of ref document: A1