WO2011100802A1 - Hearing apparatus and method of modifying or improving hearing - Google Patents

Hearing apparatus and method of modifying or improving hearing

Info

Publication number
WO2011100802A1
WO2011100802A1 (application PCT/AU2011/000176)
Authority
WO
WIPO (PCT)
Prior art keywords
subcomponent
electrical audio
audio signals
hearing
signal
Prior art date
Application number
PCT/AU2011/000176
Other languages
English (en)
Inventor
Jeremy Marozeau
Peter John Blamey
Hamish Innes-Brown
Original Assignee
The Bionic Ear Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from AU2010900709A0
Application filed by The Bionic Ear Institute
Publication of WO2011100802A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R25/00Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R25/60Mounting or interconnection of hearing aid parts, e.g. inside tips, housings or to ossicles
    • H04R25/604Mounting or interconnection of hearing aid parts, e.g. inside tips, housings or to ossicles of acoustic or vibrational transducers
    • H04R25/606Mounting or interconnection of hearing aid parts, e.g. inside tips, housings or to ossicles of acoustic or vibrational transducers acting directly on the eardrum, the ossicles or the skull, e.g. mastoid, tooth, maxillary or mandibular bone, or mechanically stimulating the cochlea, e.g. at the oval window
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61NELECTROTHERAPY; MAGNETOTHERAPY; RADIATION THERAPY; ULTRASOUND THERAPY
    • A61N1/00Electrotherapy; Circuits therefor
    • A61N1/18Applying electric currents by contact electrodes
    • A61N1/32Applying electric currents by contact electrodes alternating or intermittent currents
    • A61N1/36Applying electric currents by contact electrodes alternating or intermittent currents for stimulation
    • A61N1/36036Applying electric currents by contact electrodes alternating or intermittent currents for stimulation of the outer, middle or inner ear
    • A61N1/36038Cochlear stimulation

Definitions

  • the present invention relates to hearing apparatus for modifying or improving hearing.
  • while hearing devices such as hearing aids (HA) and cochlear implants (CI) can improve a recipient's ability to understand speech remarkably well, the improvement can be limited when the speech is accompanied by other types of sounds.
  • HA and CI devices may distort music and other non-speech sounds in various ways. Accordingly, listening to music, or a single voice in a crowded room, are two activities that most people with hearing devices (hearing device recipients) still find difficult.
  • Auditory stream segregation is a perceptual process by which the human auditory system organises sounds from different sources into perceptually meaningful elements. For example, in social gatherings or noisy workplaces, people with normal hearing are able to separate the speech signal of a target speaker from other speech or non-speech sources. As another example, while listening to music, people with normal hearing are able to separate lines of melody played by the same or different instruments.
  • perceptual cues such as pitch, loudness, localisation and timbre can induce auditory streaming.
  • Pitch is related to the fundamental frequency (F0) and is strongly related to the musical scale.
  • Loudness is correlated with sound pressure level (or intensity), and allows the judgement of a sound on a scale from soft to loud.
  • Localisation is the ability to identify the position of an auditory source, and is correlated with both the interaural level difference (ILD) and the interaural time difference (ITD), caused respectively by the shielding of the head and outer ear, and the distance to the source of one ear relative to the other.
  • Timbre is a complex perceptual quality that can be decomposed into multiple dimensions such as brightness and impulsiveness.
  • these cues are not always as salient for people with hearing impairment, and a larger physical difference for a given cue is needed to reach the same perceptual difference as someone with normal hearing. For example, most people with normal hearing can easily discriminate between notes with an F0 a semitone apart, while cochlear implant recipients may require a difference in F0 of up to four semitones before perceiving a difference in pitch.
  • auditory stream segregation performance is reduced for people with hearing impairment, and in turn, this reduces their ability to extract target speech from noisy backgrounds and to appreciate music.
  • Cochlear implants have been designed specifically to convey speech information to the recipient by selecting peaks of specific frequencies in the spectrum. Those frequencies correspond to regions where important speech information can be found. Complex signals such as music contain a broader spectral content and a wider dynamic range than speech alone. For the recipient to perceive pitch and timbre in music requires fine spectral detail in the signal provided. Unfortunately, much of the spectral detail is discarded by the sound processor of the cochlear implant, reducing the quality of the music perceived by the recipient.
  • hearing apparatus comprising:
  • one or more audio input devices; and signal processing apparatus configured to: receive an electrical audio signal from the audio input devices, the electrical audio signal corresponding to a mixed acoustic signal generated by a plurality of sound sources;
  • separate the electrical audio signal into a plurality of subcomponent electrical audio signals, each subcomponent electrical audio signal corresponding to a part of the mixed acoustic signal generated by a subset of the plurality of sound sources; and deliver each subcomponent electrical audio signal to a selected one of a plurality of hearing devices of a recipient, the selection of the hearing device for each subcomponent electrical audio signal being based on one or more characteristics of the subcomponent electrical audio signal.
  • a method of modifying hearing of a recipient of two or more hearing devices comprising:
  • receiving an electrical audio signal, the electrical audio signal corresponding to a mixed acoustic signal generated by a plurality of sound sources;
  • separating the electrical audio signal into a plurality of subcomponent electrical audio signals, each subcomponent electrical audio signal corresponding to a part of the mixed acoustic signal generated by a subset of the plurality of sound sources;
  • and delivering each subcomponent electrical audio signal to a selected one of the hearing devices, the selection of the hearing device for each subcomponent electrical audio signal being based on one or more characteristics of the subcomponent electrical audio signal.
  • the audio input devices may comprise one or more microphones for receiving acoustic signals and converting the acoustic signals into electrical audio signals.
  • the audio input devices may comprise a direct input connection to receive electrical audio signals directly from a device such as a CD or MP3 player.
  • Each subset of the sound sources may comprise one or more, but may not comprise all, of the plurality of sound sources that generate the mixed acoustic signal.
  • the mixed acoustic signal may be a musical acoustic signal generated by one or more musical instruments and/or singers, for example. Additionally or alternatively, the mixed acoustic signal may be a mixed voice acoustic signal generated by one or more speakers, for example.
  • a sound source may be a musical instrument, e.g. a piano, violin or guitar etc.
  • a subset of the plurality of sound sources may therefore be a particular combination of musical instruments, e.g. the woodwind, brass or percussion section of an orchestra, etc.
  • a sound source may be a singer.
  • a subset of the plurality of sound sources may therefore be multiple singers such as the sopranos, basses and tenors of a choir, etc.
  • the electrical audio signal may be separated into subcomponent electrical audio signals corresponding to the vocals and accompanying music respectively of a musical work.
  • the electrical audio signal may be separated into subcomponent electrical audio signals corresponding to the music generated by different sections, or combinations of sections, of an orchestra.
  • the electrical audio signal may be separated into subcomponent electrical audio signals corresponding to the voices of different speakers or different singers.
  • the electrical audio signal may be separated into subcomponent electrical audio signals corresponding to different musical elements of a musical work.
  • one or more of the subsets of sound sources may comprise the musical instruments playing the melody, harmony, rhythm (e.g. beat) or bass of a musical work.
  • the electrical audio signal may be separated into subcomponent electrical audio signals corresponding to the melody and harmony respectively of a musical work.
  • a characteristic of each subcomponent electrical audio signal, on which the selection of the hearing devices is based, may be one or more types of sound sources that generate the corresponding part of the mixed acoustic signal, such as different instruments or voices.
  • a characteristic of each subcomponent electrical audio signal may be one or more of its physical parameters (or corresponding perceptual properties), such as its frequency (pitch), amplitude (loudness), or spectral and/or temporal qualities (timbre). These characteristics may be related, however, as the frequency, amplitude, and spectral and/or temporal qualities can be indicative of the types of sound sources.
  • Hearing devices can exhibit different audio playback qualities dependent on the characteristics of the electrical audio signals they process. For example, it has been found that a cochlear implant may be better at reproducing vocals or sounds from higher frequency sound sources or conveying the temporal envelope of sound, whereas a hearing aid may be better at reproducing lower frequency sound sources.
  • each subcomponent electrical audio signal may be at least partially routed to the hearing device that will give the better reproduction of sound for that recipient, in consideration of the particular hearing devices worn by the recipient.
  • the various sounds making up a mixed acoustic signal may be better defined and/or more easily distinguishable from each other, offering more salient perceptual cues for the recipient. Consequently, the recipient's audio system may realise improved auditory streaming, enhancing the recipient's listening experience.
  • a subcomponent electrical audio signal may not necessarily be delivered to the hearing device that would give the best reproduction of sound for that subcomponent signal, in favour of delivering the subcomponent electrical audio signals to hearing devices of different ears.
  • subcomponent electrical audio signals may correspond to different voices, and it may be preferred in some embodiments to deliver these subcomponent electrical audio signals to different hearing devices worn in different ears of the recipient, even if one of the hearing devices does not offer as good reproduction of voices as the other.
  • subcomponent signals corresponding to voices may be delivered to each of these hearing devices, even if the cochlear implant is considered to provide better vocal reproduction.
  • improved auditory streaming may be achieved nonetheless.
  • hearing device may refer to stand alone devices such as cochlear implants, BTE (behind the ear) hearing aids, ITE (in the ear) hearing aids, BAHAs (bone-anchored hearing aids), hybrid cochlear implants and tactile aids, or the term may refer only to parts of such devices, e.g., the loudspeaker componentry of BTE and ITE hearing aids, the vibrational component of BAHAs or the electrode array of cochlear implants.
  • the hearing devices may be integrated.
  • the subcomponent electrical audio signals may be delivered to a cochlear implant electrode array and loudspeaker components of a hybrid cochlear implant, the cochlear implant array and loudspeaker components optionally being considered as two separate cochlear implant and hearing aid hearing devices, despite being integrated in a single appliance.
  • the recipient of the hearing devices may have more than one of a particular type of hearing device.
  • the recipient may have a hybrid cochlear implant in each ear, effectively providing a hearing aid in both ears and a cochlear implant in both ears.
  • a separated subcomponent electrical audio signal may be delivered to one hearing device in one ear and to the same type of hearing device in the other ear. This may be used to vary the perceived localisation of sound sources, as discussed further below, whilst ensuring that type of hearing device is used for optimum reproduction of that subcomponent sound.
  • hearing apparatus comprising:
  • one or more audio input devices; and signal processing apparatus configured to: receive an electrical audio signal from the audio input devices, the electrical audio signal corresponding to a mixed acoustic signal generated by a plurality of sound sources;
  • separate the electrical audio signal into a plurality of subcomponent electrical audio signals, each subcomponent electrical audio signal corresponding to a part of the mixed acoustic signal generated by a subset of the plurality of sound sources; enhance one or more perceptual properties of one or more of the subcomponent electrical audio signals; and deliver the subcomponent electrical audio signals to one or more hearing devices of a recipient.
  • a method of modifying hearing of a recipient of one or more hearing devices comprising:
  • receiving an electrical audio signal, the electrical audio signal corresponding to a mixed acoustic signal generated by a plurality of sound sources;
  • separating the electrical audio signal into a plurality of subcomponent electrical audio signals, each subcomponent electrical audio signal corresponding to a part of the mixed acoustic signal generated by a subset of the plurality of sound sources; enhancing one or more perceptual properties of one or more of the subcomponent electrical audio signals; and delivering the subcomponent electrical audio signals to the one or more hearing devices.
  • Enhanced subcomponent signals may be delivered exclusively to the one or more hearing devices. Alternatively, subcomponent signals that have been separated but have not been enhanced may also be delivered to the one or more hearing devices.
  • the perceptual properties may be modified such that the salience of the perceptual cues offered by the subcomponent electrical audio signals to the recipient, when reproduced as acoustic signals by the one or more hearing devices, may be improved.
  • the pitch, loudness, localisation, and/or timbre (e.g. impulsiveness) of one or more of the subcomponent electrical audio signals may be modified.
  • reference to modification of perceptual properties of the subcomponent electrical audio signals is intended to indicate a modification of properties of these signals, independently and/or relative to each other, which, when the signals are reproduced and ultimately perceived as sound by the hearing device recipient, causes a change in the perceptual cues offered to the recipient.
  • one or more of the subcomponent electrical audio signals may undergo frequency shift and/or expansion of the frequency range. This may include frequency shifting one or more harmonics of the subcomponent electrical audio signals.
  • one or more of the subcomponent electrical audio signals may undergo amplitude processing such as modifying its gain.
  • one or more of the subcomponent electrical audio signals may undergo directionality processing such as changing the ILD or ITD, e.g. using a standard or bespoke head related transfer function.
  • one or more of the subcomponent electrical audio signals may undergo spectral shift processing or temporal envelope modulation.
  • the recipient's auditory system may achieve improved auditory streaming of the subcomponent acoustic signals, enhancing the listening experience.
  • the perceptual cues for segregation of two or more subcomponent signals may be enhanced by increasing, exaggerating, or introducing a difference between two or more of the subcomponent signals.
  • this type of enhancement may include any one or more of the following:
  • shifting the F0 of one or more of the subcomponent signals in an octave step may be advantageous, particularly when processing a mixed musical acoustic signal, as it may ensure that a musical relationship between the separated signals is substantially maintained.
  • the separated sounds should avoid any F0 overlap.
  • the ILD and/or the ITD may be adjusted, using a bespoke or standard head related transfer function, for example.
  • a 180 degree localisation difference may be achieved between two subcomponent signals by delivering one of the subcomponent signals to one ear only, and the other subcomponent signal to the other ear only.
  • one subcomponent signal may effectively be split substantially evenly between hearing devices of both ears, whereas the other subcomponent signal may be delivered to a hearing device of one ear only.
  • many alternative arrangements are possible to achieve a 90 degree difference in localisation or other differences in localisation.
  • those hearing devices may be the same type of hearing device, to ensure that the best sound reproduction of that type of subcomponent signal is achieved.
  • localisation may be adjusted for one subcomponent signal that is best reproduced at a cochlear implant for example, through the application of that signal at different intensities to the cochlear implant of each ear.
  • the dominant hearing device may be selected as the hearing device that is configured to provide the best signal reproduction.
  • the temporal envelope and attack time are key features of timbre identity. The quality of temporal variation can be quantified through a single descriptor of impulsiveness, defined as the Full Duration at Half Maximum (FDHM) and sketched below. It has been found that introducing a difference of about 60% of FDHM between subcomponent signals may significantly improve the ability of the hearing impaired to segregate these subcomponent signals.
  • the overall spectral envelope influences the perceptual brightness of a sound and can be predicted by a measure of spectral centroid (the first moment of the spectrum). It has been found that introducing a difference of about 6 ERB (equivalent rectangular bandwidths) between the spectral centroids of two subcomponent signals may significantly improve the ability of the hearing impaired to separate these subcomponent signals.
  • Altering the relative intensity of harmonics may be used to change the spectral shape and thus move the spectral centroid. Additionally or alternatively an FIR (finite impulse response) or IIR (Infinite impulse response) filter may be applied to the subcomponent waveform in the time domain to change its spectral shape.
  • At least two of the subcomponent electrical audio signals may be enhanced and delivered to the one or more hearing devices.
  • the signals may be delivered directly to the hearing devices.
  • the audio input devices and/or the approach taken to separation of subcomponent electrical audio signals may be the same as disclosed above with respect to the first and second aspects of the invention.
  • the first to fourth aspects may be combined.
  • the perceptual properties of each subcomponent electrical audio signal may be enhanced, as set out with respect to the third and fourth aspects, further improving the listening experience for the recipient.
  • a variety of different techniques may be employed to separate the electrical audio signal into subcomponent audio signals.
  • one or more techniques selected from computer assisted scene analysis (CASA), blind source separation (BSS), independent component analysis (ICA), frequency analysis, temporal analysis, and neural network analysis may be employed.
  • the separation approach may include multiple steps, which may use different separation techniques. For example, as a first step, one separation technique may be used to separate the electrical audio signal into subcomponent electrical audio signals corresponding to the electrical audio signals derived from each respective audio input device that is used, and subsequently the same technique or another separation technique may be used to separate these subcomponent signals further, into subcomponent electrical audio signals corresponding to subsets of the sound sources.
  • in one embodiment, Fast-ICA is used as a first step and a Nonnegative Matrix Factorisation technique employing the Itakura-Saito divergence method is used as a second step, as sketched below.
  • These techniques may be used independently of each other, however, or other separation techniques may be used, such as Nonnegative Matrix Factorisation 2-D Deconvolution (NMF2D), Multichannel Nonnegative Matrix Factorisation (MNMF), or the Statistically Sparse Decomposition Principle via Local Gaussian Modelling (SSDP).
  • the electrical audio signal may be categorised. For example, dependent on the acoustic signal received at the audio input device, or dependent on the types of subcomponent signals that are separated, the electrical audio signal may be categorised as corresponding to a particular music genre, such as rock, jazz, vocal, electronic, classical, etc., or as a combination of voices, for example.
  • techniques such as multi-dimensional scaling (MDS), cluster analysis, rhythm detection and neural network analysis may be used to categorise the signal.
  • manual categorisation of the signal may be employed, through the recipient hearing the sound, determining the appropriate category, and using a switch or an alternative recipient interface to input the categorisation data to the processing apparatus. Additionally or alternatively, if the audio input devices receive electrical audio signals directly (through a direct input), the electrical audio signals may be tagged with information about the music type, allowing for automatic categorisation of the signal without requiring any potentially complex categorisation techniques.
  • the approach to separation may be adapted according to the categorisation of the signal. Adapting may include changing the criteria for separation of subcomponent signals and/or changing the separation technique used.
  • the processing apparatus may be configured to separate the signal into subcomponent electrical audio signals corresponding to the sounds generated by the vocal, bass and rhythm sound sources respectively, and the processing apparatus may employ one or more techniques to separate the subcomponent electrical audio signals selected from CASA, BSS, frequency analysis, temporal analysis and neural network analysis.
  • the processing apparatus may be configured to separate the signal into subcomponent electrical audio signals corresponding to the sounds generated by the vocal, bass and rhythm sound sources respectively, and the processing apparatus may employ one or more techniques to separate the subcomponent electrical audio signals selected from frequency analysis, temporal analysis, dynamic analysis and neural network analysis.
  • the approach taken to subcomponent signal enhancement and/or the selection of the hearing device for a particular subcomponent signal may take into account the categorisation. If, for example, the sound is separated into subcomponent electrical audio signals corresponding to a voice and an instrumental accompaniment respectively, and these subcomponent signals are appropriately categorised as such, the signal processor may present the voice to a cochlear implant and instrumental accompaniment to a hearing aid automatically.
  • the processing apparatus may be adapted, nonetheless, to separate the electrical audio signal into subcomponent electrical audio signals that have a predetermined set of properties.
  • a left audio input device and a right audio input device may be used, positioned to the left and right sides of the recipient's body respectively.
  • Each of the audio input devices may include one or more microphones.
  • the audio input devices may include two or more microphones, the microphones being offset from each other so that directionality in an acoustic signal received by the input devices is evident.
  • the recipient may have at least one hearing device in each ear, and, in combination with the left and right input devices, may hear sound in stereo.
  • the recipient may have, for example, a cochlear implant in one ear and a hearing aid in the other ear, or both a cochlear implant and a hearing aid in each ear.
  • the processing apparatus may include a plurality of separate processors, one for each audio input device.
  • the audio input devices may be integral with the hearing devices or separate from the hearing devices.
  • the processors may process the electrical audio signals from audio input devices independently of each other.
  • the processors may be located conveniently on left and right sides of the body respectively, to connect to the audio input devices on that side of the body only.
  • each processor may connect to the hearing devices on the left and right sides of the body respectively.
  • this may allow the one or more audio input devices, processors and hearing devices on each side of the body to be integrated into respective unitary devices.
  • whether or not the processors are connected to each other directly, the type of processing carried out by each individual processor may be dependent on the type of processing carried out by the other. For example, the processing carried out for a cochlear implant on the left ear may be different depending on whether there is a hearing aid or a cochlear implant on the right ear.
  • the processing apparatus may comprise a single processor configured to process simultaneously the electrical audio signals obtained from different audio input devices.
  • the processor may be connected remotely to the hearing devices.
  • the processor may be integrated with one or more hearing devices of one ear and may be configured to communicate remotely with one or more hearing devices of the other ear, via wires or wirelessly.
  • Suitable computer readable media may include volatile (e.g. RAM) and/or nonvolatile (e.g. ROM, disk) memory, carrier waves and transmission media (e.g. copper wire, coaxial cable, fibre optic media).
  • carrier waves may take the form of electrical, electromagnetic and/or optical signals.
  • the audio input devices may be wired or wirelessly connected to the one or more processors.
  • the one or more processors may be wired or wirelessly connected to the one or more hearing devices.
  • signal processing apparatus configured to:
  • receive an electrical audio signal, the electrical audio signal corresponding to a mixed acoustic signal generated by a plurality of sound sources; separate the electrical audio signal into a plurality of subcomponent electrical audio signals, each subcomponent electrical audio signal corresponding to a part of the mixed acoustic signal generated by a subset of the plurality of sound sources; and deliver each subcomponent electrical audio signal to a selected one of a plurality of hearing devices of a recipient, the selection of the hearing device for each subcomponent electrical audio signal being based on one or more characteristics of the subcomponent electrical audio signal.
  • signal processing apparatus configured to:
  • receive an electrical audio signal, the electrical audio signal corresponding to a mixed acoustic signal generated by a plurality of sound sources; separate the electrical audio signal into a plurality of subcomponent electrical audio signals, each subcomponent electrical audio signal corresponding to a part of the mixed acoustic signal generated by a subset of the plurality of sound sources; enhance one or more perceptual properties of one or more of the subcomponent electrical audio signals; and deliver the subcomponent electrical audio signals to one or more hearing devices of a recipient.
  • the signal processing apparatus of the fifth or sixth aspects may be configured as the signal processing apparatus described above with respect to any of the first to fourth aspects.
  • a hearing apparatus, a method of modifying hearing, or signal processing apparatus may be provided substantially as set forth in any of the first to sixth aspects of the invention, but wherein the electrical audio signal corresponds to a mixed acoustic signal comprising a plurality of musical elements of a musical work and separation of the electrical audio signal into a plurality of subcomponent electrical audio signals is performed such that each subcomponent electrical audio signal corresponds to a subset of the plurality of musical elements of the musical work.
  • the plurality of musical elements may be two or more of the melody, harmony, rhythm (e.g. beat) or bass of a musical work.
  • the separation of the audio acoustic signal may be substantially independent of the sound sources producing the musical elements. Accordingly, in these alternative aspects, separation of musical elements of the musical work is still possible, even if a single sound source is producing more than one musical element of the musical work, or a musical element is shifting quickly between different sound sources. For example, a melody and a rhythm or harmony may be played simultaneously on a piano or guitar, and these elements may be separated. As another example, the melody and accompaniment of an orchestral work may shift quickly between different instruments of an orchestra and these elements may be separated.
  • Fig. 1 shows a schematic diagram of components of a hearing assembly according to a first embodiment of the present invention
  • Fig. 2 shows a flow-chart of processing steps performed by a processor of the hearing assembly of Fig. 1;
  • Fig. 3 shows a flow-chart of alternative processing steps performed by a processor of the hearing assembly of Fig. 1;
  • Fig. 4 shows a layout of loudspeakers used in testing of separation techniques
  • Figs. 5a and 5b show simulation layouts of loudspeakers used in testing of separation techniques, showing, respectively, the distances/directions between the loudspeakers and a listener's head, and between the loudspeakers and the microphones;
  • Fig. 6 shows representations of test scenarios used in the testing of separation techniques
  • Fig. 7 shows a plot of MAP divergence for different separation techniques
  • Fig. 8 shows another plot of MAP divergence for different separation techniques;
  • Fig. 9 shows a plot of audio extraction time for different separation techniques
  • Fig. 10 shows a layout of loudspeakers around a listener used in testing of localisation cues
  • Figs. 11a to 11d show detection hit rates by non-musicians of deviant melodies for different distracter locations relative to a target melody location;
  • Figs. 12a to 12d show detection hit rates by musicians of deviant melodies for different distracter locations relative to a target melody location
  • Figs. 13a to 13d show detection hit rates by a hearing impaired person of deviant melodies for different distracter locations relative to a target melody location.

Detailed description of embodiments
  • the hearing apparatus comprises left and right audio input devices 10a, 10b, each audio input device 10a, 10b comprising two directional microphones 11 with different orientations, and a direct input socket 12.
  • the audio input devices 10a, 10b are configured to convert ambient mixed acoustic signals 1 into electrical audio signals, or receive electrical audio signals directly via the direct input socket 12.
  • Each audio input device 10a, 10b is connected to a processor 2 via wires 13, or wirelessly.
  • the processor in this embodiment comprises a memory, and a series of computer executable instructions residing on the memory.
  • the processor comprises one or more arithmetic and logic units (ALU) configured to execute the instructions to perform calculations, filtering, categorization, and sound source separation algorithms.
  • Electrical audio signals generated or received by the audio input devices 10a, 10b are transmitted to the processor 2, which is configured to process the electrical audio signals based on the executable instructions and transmit the processed electrical audio signals to a plurality of hearing devices 31, 32, where they are converted back into sound, or signals that give the perception of sound.
  • two hearing devices 31, 32 are provided for each of the left and right ears of the recipient, one of the devices being a cochlear implant electrode array 31, and the other an ITE (in-the-ear) hearing aid loudspeaker 32.
  • the two devices 31, 32 are integrated into left and right hybrid hearing appliances 33, 34.
  • other combinations of electric, acoustic and/or tactile hearing devices may be employed. Processing of the electrical audio signals by the processor 2 is now described in more detail with reference to Fig. 2.
  • the processor 2 is configured to perform a categorization step 22 on a received electrical audio signal 21.
  • the electrical audio signal may be categorized by the processor 2 as one of a number of music genres, such as rock, jazz, vocal, electronic, classical, etc., or as multiple speakers.
  • the categorization step can be used to determine an appropriate type of signal separation technique to be used, or appropriate signal enhancements to be performed, for example.
  • Categorization can be performed manually, using a recipient-controlled switch.
  • the recipient can choose to rely instead on automatic categorization of the electrical audio signal by the processor 2, using analysis techniques such as multi-dimensional scaling, cluster analysis and rhythm detection.
  • the recipient can choose to rely on information contained in data tags 22b that may be associated with the electrical audio signal 21, if the electrical audio signal is delivered to the audio input devices 10a, 10b through the direct input socket (e.g., in the form of an MP3 data signal), for example.
  • the processor is configured to perform a separation step 23, where it separates the electrical audio signal 21 into a plurality of subcomponent electrical audio signals 21a, 21b.
  • the electrical audio signal 21 can correspond to a mixed acoustic signal and the separation step 23 can split the electrical audio signal into subcomponent electrical audio signals that each correspond to different components of the mixed acoustic signal, each component created from one or more different sound sources, for example, and/or corresponding to different musical elements of a musical work, such as the melody and harmony, for example.
  • the processor 2 may use techniques such as computer auditory scene analysis (CASA), blind source separation (BSS), neural network analysis, frequency analysis and temporal analysis.
  • fast-ICA is used by the processor 2 to separate the electrical audio signal into subcomponent electrical audio signals corresponding to the electrical audio signals derived from each respective audio input device 10a, 10b, and subsequently a nonnegative matrix factorisation technique employing the Itakura-Saito divergence method is used to separate these subcomponent signals further, into subcomponent electrical audio signals corresponding to subsets of the sound sources.
  • each subcomponent electrical signal undergoes an enhancement step 24, 25.
  • The processor 2 can enhance, for example, the directional properties, amplitude, frequency or timbre of one or more of the subcomponent electrical audio signals.
  • the salience of the perceptual cues offered by the subcomponent signals when reproduced as acoustic signals by the hearing devices 31, 32 may be increased.
  • the processor 2 is configured to adjust the ILD and the ITD between different subcomponent signals using a head related transfer function such as to adjust the localisation between the different subcomponent signals to about 60 degrees or greater.
  • a difference of about 60% or greater of FDHM between notes of different subcomponent signals is introduced by the processor 2.
  • the F0 of one or more of the subcomponent signals is adjusted by an octave step (i.e. doubled or halved in frequency etc.) such as to introduce a greater than 4 semitone difference in F0 between notes of different subcomponent signals.
  • a difference in amplitude of about 12 dB or greater is introduced between notes of different subcomponent signals.
  • a difference of about 6 ERB (equivalent-rectangular bandwidth) between the spectral centroids of notes of different subcomponent signals is introduced.
  • Enhancement may be carried out on a note-by-note basis, e.g. when adjusting the F0 of a subcomponent signal or its duration (see the octave-shift sketch below). Additionally or alternatively, enhancement may be carried out on an entire subcomponent signal, or a part of the subcomponent signal including more than one note, e.g. by amplification, filtering, or introduction of an ITD or ILD between two subcomponents.
  • each of the enhanced subcomponent electrical audio signals 21a', 21b' is subject to a routing step 26, where it is routed to a hearing device 31, 32 dependent on one or more of its characteristics.
  • subcomponent signals associated with sound sources of a higher frequency or more melodic nature are routed to the cochlear implants 31 and subcomponent signals associated with sound sources of lower frequency or more harmonious nature are routed to the hearing aids 32.
  • the selection of different hearing devices for subcomponent signals with different characteristics is performed to ensure that the hearing device most suited to reproduction of each signal is used. Again, this may improve the salience of the perceptual cues offered by the subcomponent electrical audio signals when reproduced and improve the listening experience for the recipient.
  • the selection of hearing devices may take into account the location of the hearing devices. For example, it may be determined that different subcomponent signals should be delivered to hearing devices of different ears of the recipient, optionally regardless of the type of hearing devices present in the different ears, to improve the listening experience for the recipient. This may be carried out, for example, when subcomponent electrical audio signals correspond to different voices, or otherwise.
  • Another embodiment is shown in Fig. 3.
  • the signal processing is substantially identical to the signal processing described with respect to Fig. 2, except that no categorization step is carried out. Signal processing is therefore carried out without knowledge of the musical genre, for example, to which the processed sounds belong.
  • categorization is carried out after separating the signal into subcomponent signals. Once the electrical audio signal has been separated into subcomponents, categorization may be used to identify the subcomponents in order to direct them to the appropriate hearing devices.
  • left and right behind-the-ear hearing appliances are provided, and a processor is provided in each hearing appliance. Each processor comprises an acoustic and an electric output to communicate with the hearing aid and cochlear implant components of each hearing appliance.
  • a wireless link is provided to permit control and signal data to be transmitted in both directions between the processors in each appliance.
  • Nonnegative Matrix Factorisation 2-D Deconvolution was tested, which is a nonnegative matrix factorisation based method that extends the approach set forth in Smaragdis (Independent Component Analysis and Blind Signal Separation, 2004). In particular, it allows representation of both temporal structure and pitch change which occurs when an instrument plays different notes.
  • the technique was carried out with both least squares divergence (NMF2D) and with the Kullback-Leibler divergence (NMF2D-KL).
  • Nonnegative Matrix Factorisation with the Itakura-Saito divergence was tested, which is a method of source separation using nonnegative matrix factorisation with the Itakura-Saito divergence, and is described by Gray et al. (Transactions On Acoustics, Speech, And Signal Processing, 28, 1980).
  • the technique was tested with both 'Multiplicative Update' (MU) and 'Expectation Maximisation' (EM) algorithms.
  • the MU method is a rescaled gradient descent algorithm where the step size is chosen to cancel out terms and therefore simplify the update equation (as discussed in Schmidt, M., Master's thesis, Technical University of Denmark, Informatics and Mathematical Modelling).
  • the EM method is an iterative procedure for solving the maximum likelihood divergence with a statistical formalism on the two nonnegative matrices A and B.
  • Multichannel Nonnegative Matrix Factorisation (MNMF) and the Statistically Sparse Decomposition Principle via Local Gaussian Modelling (SSDP) were also tested.
  • FastICA was tested, which is an independent component analysis method with fast convergence (cubic) (Hyvarinen, A. and Oja, E., Independent component analysis: algorithms and applications. Neural Networks, 13, 2000).
  • Various five second musical audio files sampled at 44.1 kHz were played through loudspeakers and recorded by microphones from a cochlear implant and hearing aid fitted to a simulated human head (KEMAR). Recordings were taken in an acoustic chamber, three metres deep and three metres wide.
  • the hearing aid contained two microphones, one facing forward and one facing backwards.
  • the cochlear implant unit contained a single forward facing microphone.
  • a hearing aid was used as well as a cochlear implant unit so that the quality of the microphones and their effect could be evaluated, and so that the effect of forward and rear facing microphones could be considered.
  • the arrangement of the apparatus is shown in Fig. 4. In Fig. 4, the numbered squares represent loudspeakers, and the solid square, cross and circle represent the front hearing aid, rear hearing aid and cochlear implant microphones respectively.
  • In Fig. 6, the boxes with numbers represent loudspeakers and boxes with letters represent microphones. The boxes with lighter borders are not in use. Arrows represent locations of sources; if an arrow is not directed at a particular loudspeaker it means that the source is mixed across the loudspeakers.
  • the scenarios increased in difficulty up to a scenario similar to commercially recorded and produced music coming from two loudspeakers, which is considered the most common way of listening to music.
  • the scenarios can be summarised as follows: Scenarios 1.1, 1.2: two loudspeakers each playing a single distinct source, the two scenarios differing in the distance between the sources;
  • Scenario 1.3: three loudspeakers each playing a single distinct source;
  • Scenario 1.4: four loudspeakers each playing a single distinct source;
  • Scenarios 1.5-1.7: one loudspeaker playing multiple different sources (two, three and four sources, respectively);
  • Scenario 1.8: two loudspeakers each exclusively playing two distinct sources, the sources being different in each loudspeaker;
  • Scenario 1.9: two loudspeakers non-exclusively playing four distinct sources;
  • Scenario 1.10: two loudspeakers playing professionally recorded and produced music;
  • Scenario 2.1: two loudspeakers each playing one distinct source;
  • Scenario 2.2: three loudspeakers each playing one distinct source;
  • Scenario 2.3: four loudspeakers each playing one distinct source.
  • [Table 1, giving the optimal number of extracted sources (K) and iterations (R) for each technique, did not survive extraction; only a fragment of the NMF2D row remains.]
  • each of the source separation techniques was used to extract sources from the recorded mixture for each scenario. These extracted sources were matched to an original source and the average error was calculated. If more sources were extracted than there were real sources, then the extra sources were ignored and any extracted source that was matched to two original sources was penalised by a factor of 100.
  • For each scenario, analysis was run for a varying number of extracted sources, and the number of iterations for each technique was varied. An optimal number of extracted sources (K) and iterations (R) for each scenario were determined for each technique, as presented in Table 1, and, using the optimal parameters, the MAP divergence for all scenarios was calculated. The results of the MAP divergence calculations are shown graphically in Fig. 7. In Fig. 7, the centre line within the rectangular box represents the median value, the upper and lower values of the box represent the upper and lower quartile values, the upper and lower bars represent the maximum and minimum values, and the crosses represent outliers.
  • the MAP divergence applies an Itakura-Saito divergence measure with the Frequency-Time matrices of original and separated sources.
  • a low average MAP divergence is desired with consistency across all scenarios, and this is represented by a low median score (low central line) and a small spread of results (close together upper and lower quartile values).
  • Experiment 3 was carried out to investigate the effect of localisation cues in musical stream segregation, to determine the minimum angle necessary to start to segregate two melodies by various listeners.
  • loudspeakers were positioned on a semi-circle with radius of 1.25 m from each listener's head at the listener's ear height, each loudspeaker being separated from its adjacent loudspeaker(s) by a 30 degree angle.
  • the location of a four-note target melody was fixed at one loudspeaker, and the location of a set of four distracter notes was gradually changed between the loudspeakers.
  • two of the notes of the target melody were inverted, in order to create a deviant melody.
  • the listeners were tasked with detecting the deviant melodies while trying to ignore the distracters.
  • Figs. 11a to 11d show plots of average hit rates for non-musicians for each distracter location, the hit rates (as a percentage) being represented by the position of markers along radial lines defined between the centre of the listener's head and each loudspeaker (see scale in Fig. 11b, for example). Figs. 11c and 11d show hit rates when the target melody was fixed at a loudspeaker in front of the listener and the distracter was varied from 0 degrees (where the distracter and target melody were at the same loudspeaker) to 180 degrees (where the distracter was at the loudspeaker directly behind the listener), via the left and right sides of the listener respectively. Figs. 11a and 11b show plots of hit rates when the target melody was fixed at a loudspeaker at the left and right sides, respectively, of the listener, and the distracter was varied from 0 degrees (where the distracter and target melody were at the same loudspeaker) to 180 degrees (where the distracter was at the loudspeaker directly opposite the loudspeaker where the melody was fixed).
  • Figs. 12a to 12d show plots of average hit rates equivalent to Figs. 11a to 11d, but for musicians.
  • Figs. 13a to 13d show plots of hit rates equivalent to Figs. 11a to 11d, but for one exemplary hearing impaired listener of the six that were tested.
  • Figs. 11a to 12d suggest that, for normal hearing listeners to reliably distinguish between the target melody and the distracter (which two elements may be perceived as equivalent to sounds from two different sound sources), a separation of 30 degrees is generally needed when the two sounds are presented in front of the listeners, and 60 degrees when the sounds are presented on the side.
  • a minimum separation of 60 degrees may be needed when the two sounds are presented in front of the hearing impaired listeners, and about 90 degrees when the two sounds are presented to the side of the listeners.
  • a 50% or greater hit rate in separation between the melody and the distracter was achieved at an intensity difference between the melody and distracter notes of approximately 10 to 12 dB or greater.
  • the results indicate that enhancing loudness differences between separated subcomponent electrical audio signals can improve stream segregation and thus the listening experience for hearing impaired listeners.

Landscapes

  • Health & Medical Sciences (AREA)
  • Otolaryngology (AREA)
  • General Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Neurosurgery (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Animal Behavior & Ethology (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Circuit For Audible Band Transducer (AREA)

Abstract

A hearing apparatus is disclosed comprising one or more audio input devices (10a, 10b) and signal processing apparatus (2) configured to: receive an audio signal from the audio input devices, the audio signal corresponding to a mixed acoustic signal generated by a plurality of sound sources; separate the audio signal into a plurality of subcomponent electrical audio signals, each subcomponent electrical audio signal corresponding to a part of the mixed acoustic signal generated by a subset of the plurality of sound sources; and deliver each subcomponent electrical audio signal to a hearing device selected from a plurality of hearing devices (31, 32) of a recipient. The selection of the hearing device for each subcomponent electrical audio signal is based on one or more characteristics of the subcomponent electrical audio signal. One or more perceptual properties of the subcomponent electrical audio signals may be enhanced to promote better perceptual segregation by the listener.
PCT/AU2011/000176 2010-02-19 2011-02-18 Hearing apparatus and method of modifying or improving hearing WO2011100802A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
AU2010900709A AU2010900709A0 (en) 2010-02-19 Hearing apparatus and method of modifying or improving hearing
AU2010900709 2010-02-19

Publications (1)

Publication Number Publication Date
WO2011100802A1 (fr) 2011-08-25

Family

ID=44482408

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/AU2011/000176 WO2011100802A1 (fr) 2010-02-19 2011-02-18 Hearing apparatus and method of modifying or improving hearing

Country Status (1)

Country Link
WO (1) WO2011100802A1 (fr)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6231604B1 (en) * 1998-02-26 2001-05-15 Med-El Elektromedizinische Gerate Ges.M.B.H Apparatus and method for combined acoustic mechanical and electrical auditory stimulation
US20060233409A1 (en) * 2005-04-15 2006-10-19 Siemens Audiologische Technik Gmbh Hearing aid
EP2140908A2 (fr) * 2008-07-02 2010-01-06 Cochlear Limited Dispositifs pour personnes dont l'audition est altérée

Non-Patent Citations (11)

* Cited by examiner, † Cited by third party
Title
BLAMEY, P.: "Adaptive Dynamic Range Optimization (ADRO): A Digital Amplification Strategy for Hearing Aids and Cochlear Implants", TRENDS IN AMPLIFICATION, vol. 9, no. 2, 2005 *
DRAKE, L. ET AL.: "A Computational Auditory Scene Analysis-Enhanced Beamforming Approach for Sound Source Separation", EURASIP JOURNAL ON APPLIED SIGNAL PROCESSING, SPECIAL JOURNAL TITLE "DIGITAL SIGNAL PROCESSING FOR HEARING INSTRUMENTS", 12 August 2009 (2009-08-12), pages 139 - 155 *
DUNN, C. ET AL.: "Benefits of Localization and Speech Perception with Multiple Noise Sources in Listeners with a Short-electrode Cochlear Implant", J. AM. ACAD. AUDIOL., vol. 21, no. 1, January 2010 (2010-01-01), pages 44 - 51 *
FRANCART ET AL.: "Sensitivity to Interaural Time Differences with Combined Cochlear Implant and Acoustic Stimulation", JOURNAL FOR THE ASSOCIATION FOR RESEARCH IN OTOLARYNGOLOGY, 2 December 2008 (2008-12-02), pages 131 - 138, 140 *
GOCKLER, H. ET AL.: "Editorial "Digital Signal Processing for Hearing Instruments"", EURASIP JOURNAL ON ADVANCES IN SIGNAL PROCESSING, vol. 2009, 26 October 2009 (2009-10-26), pages 34 - 36 *
KONG, Y.Y. ET AL.: "Speech and melody recognition in binaurally combined acoustic and electric hearing", JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA, vol. 117, no. 3, 1 March 2005 (2005-03-01), pages 1351 - 1361, XP012072830, DOI: doi:10.1121/1.1857526 *
LOMBARD, A. ET AL.: "Combination of Adaptive Feedback Cancellation and Binaural Adaptive Filtering in Hearing Aids", EURASIP JOURNAL ON APPLIED SIGNAL PROCESSING, SPECIAL JOURNAL TITLE "DIGITAL SIGNAL PROCESSING FOR HEARING INSTRUMENTS", 17 March 2009 (2009-03-17), pages 165 - 179 *
MAROZEAU, J. ET AL.: "THE EFFECT OF TEMPORAL ENVELOPE ON MELODY SEGREGATION", THE 2ND INT. CONF. ON MUSIC COMMUNICATION SCIENCE 3-4 DECEMBER 2009 *
QUADRIZIUS, S.: "Effects of combined electric and acoustic hearing on speech perception of a pediatric cochlear implant user", INDEPENDENT STUDIES AND CAPSTONES. PAPER 330. PROGRAM IN AUDIOLOGY AND COMMUNICATION SCIENCES, 2008, WASHINGTON UNIVERSITY, Retrieved from the Internet <URL:http://digitalcommons.wustl.edu/pacs_capstones/330> *
SUCHER, C. ET AL.: "BIMODAL STIMULATION: Benefits for music perception and sound quality", COCHLEAR IMPLANTS INT., vol. 10, no. S1, 20 February 2009 (2009-02-20), pages 96 - 99 *
WILSON, B. ET AL.: "Cochlear Implants: Current designs and future possibilities", JOURNAL OF REHABILITATION RESEARCH AND DEVELOPMENT JRRD, vol. 45, no. 5, 2008 *

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9485589B2 (en) 2008-06-02 2016-11-01 Starkey Laboratories, Inc. Enhanced dynamics processing of streaming audio by source separation and remixing
US9332360B2 (en) 2008-06-02 2016-05-03 Starkey Laboratories, Inc. Compression and mixing for hearing assistance devices
US9924283B2 (en) 2008-06-02 2018-03-20 Starkey Laboratories, Inc. Enhanced dynamics processing of streaming audio by source separation and remixing
US9185500B2 (en) 2008-06-02 2015-11-10 Starkey Laboratories, Inc. Compression of spaced sources for hearing assistance devices
JP2013088771A (ja) * 2011-10-21 2013-05-13 Dainippon Printing Co Ltd Device for embedding an interfering signal in an acoustic signal
EP2670050A3 (fr) * 2012-05-29 2015-08-26 Samsung Electronics Co., Ltd Method and apparatus for processing an audio signal
CN103456311A (zh) * 2012-05-29 2013-12-18 Samsung Electronics Co., Ltd Method and device for processing audio signals
EP2747458A1 (fr) * 2012-12-21 2014-06-25 Starkey Laboratories, Inc. Enhanced dynamics processing of streaming audio by source separation and remixing
US20160022991A1 (en) * 2013-03-11 2016-01-28 Ohio State Innovation Foundation Multi-carrier processing in auditory prosthetic devices
US10137301B2 (en) 2013-03-11 2018-11-27 Ohio State Innovation Foundation Multi-carrier processing in auditory prosthetic devices
WO2014164814A1 (fr) * 2013-03-11 2014-10-09 Ohio State Innovation Foundation Systems and methods for multi-carrier processing in auditory prosthetic devices
US9848266B2 (en) 2013-07-12 2017-12-19 Cochlear Limited Pre-processing of a channelized music signal
EP3020212A4 (fr) * 2013-07-12 2017-03-22 Cochlear Limited Pre-processing of a channelized music signal
CN105489227A (zh) * 2014-10-06 2016-04-13 Oticon A/S Hearing device comprising a low-latency sound source separation unit
EP3007467A1 (fr) * 2014-10-06 2016-04-13 Oticon A/s Hearing device comprising a low-latency sound source separation unit
US10341785B2 (en) 2014-10-06 2019-07-02 Oticon A/S Hearing device comprising a low-latency sound source separation unit
CN107071674A (zh) * 2015-10-12 2017-08-18 Oticon A/S Hearing device and hearing system configured to localize a sound source
CN107071674B (zh) * 2015-10-12 2020-09-11 Oticon A/S Hearing device and hearing system configured to localize a sound source
WO2020152324A1 (fr) * 2019-01-25 2020-07-30 Sonova Ag Signal processing device, system and method for processing audio signals
CN113366861A (zh) * 2019-01-25 2021-09-07 Sonova AG Signal processing device, system and method for processing audio signals
CN113647119A (zh) * 2019-01-25 2021-11-12 Sonova AG Signal processing device, system and method for processing audio signals
US11910163B2 (en) 2019-01-25 2024-02-20 Sonova Ag Signal processing device, system and method for processing audio signals

Similar Documents

Publication Publication Date Title
WO2011100802A1 (fr) Hearing apparatus and method of modifying or improving hearing
US9848266B2 (en) Pre-processing of a channelized music signal
Ternström Preferred self-to-other ratios in choir singing
Monson et al. Detection of high-frequency energy changes in sustained vowels produced by singers
Büchler Algorithms for sound classification in hearing instruments
Kato et al. Effect of room acoustics on musicians' performance. Part II: Audio analysis of the variations in performed sound signals
Martellotta Subjective study of preferred listening conditions in Italian Catholic churches
Nagathil et al. Spectral complexity reduction of music signals based on frequency-domain reduced-rank approximations: An evaluation with cochlear implant listeners
Best et al. Spatial unmasking of birdsong in human listeners: Energetic and informational factors
KR101919508B1 (ko) Method and apparatus for providing stereophonic sound through sound signal generation in a virtual space
Nemer et al. Reduction of the harmonic series influences musical enjoyment with cochlear implants
Kates et al. The hearing-aid audio quality index (HAAQI)
Macherey et al. Perception of musical timbre by cochlear implant listeners: a multidimensional scaling study
Chung et al. Effects of directional microphone and adaptive multichannel noise reduction algorithm on cochlear implant performance
Buyens et al. A stereo music preprocessing scheme for cochlear implant users
KR101406398B1 (ko) Apparatus, method and recording medium for evaluating a user sound source
US10149068B2 (en) Hearing prosthesis sound processing
Zhang Psychoacoustics
Luizard et al. How singers adapt to room acoustical conditions
US20180116565A1 (en) Method and Device for Administering a Hearing Test
Nagathil et al. Music complexity prediction for cochlear implant listeners based on a feature-based linear regression model
Terrell et al. An offline, automatic mixing method for live music, incorporating multiple sources, loudspeakers, and room effects
Zakis Music perception and hearing aids
KR20110065972A (ko) Method and system for implementing content-adaptive stereophonic sound
Hermes Towards Measuring Music Mix Quality: the factors contributing to the spectral clarity of single sounds

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 11744199

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 11744199

Country of ref document: EP

Kind code of ref document: A1