CN114040308B - Skin hearing aid device based on emotion gain - Google Patents


Info

Publication number
CN114040308B
CN114040308B (application CN202111358689.1A)
Authority
CN
China
Prior art keywords
emotion
module
digital signals
hearing aid
voice
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111358689.1A
Other languages
Chinese (zh)
Other versions
CN114040308A (en)
Inventor
付永华
张文欣
李其林
张策
李韵辞
刘森
李建文
吕安童
李亚珂
王洁优
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhengzhou University of Aeronautics
Original Assignee
Zhengzhou University of Aeronautics
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhengzhou University of Aeronautics filed Critical Zhengzhou University of Aeronautics
Priority to CN202111358689.1A priority Critical patent/CN114040308B/en
Publication of CN114040308A publication Critical patent/CN114040308A/en
Application granted granted Critical
Publication of CN114040308B publication Critical patent/CN114040308B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R25/00Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R25/50Customised settings for obtaining desired overall acoustical characteristics
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61FFILTERS IMPLANTABLE INTO BLOOD VESSELS; PROSTHESES; DEVICES PROVIDING PATENCY TO, OR PREVENTING COLLAPSING OF, TUBULAR STRUCTURES OF THE BODY, e.g. STENTS; ORTHOPAEDIC, NURSING OR CONTRACEPTIVE DEVICES; FOMENTATION; TREATMENT OR PROTECTION OF EYES OR EARS; BANDAGES, DRESSINGS OR ABSORBENT PADS; FIRST-AID KITS
    • A61F11/00Methods or devices for treatment of the ears or hearing sense; Non-electric hearing aids; Methods or devices for enabling ear patients to achieve auditory perception through physiological senses other than hearing sense; Protective devices for the ears, carried on the body or in the hand
    • A61F11/04Methods or devices for enabling ear patients to achieve auditory perception through physiological senses other than hearing sense, e.g. through the touch sense
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/02Feature extraction for speech recognition; Selection of recognition unit
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/06Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
    • G10L15/063Training
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/08Speech classification or search
    • G10L15/14Speech classification or search using statistical models, e.g. Hidden Markov Models [HMMs]
    • G10L15/142Hidden Markov Models [HMMs]
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0272Voice signal separating
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/51Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • G10L25/63Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for estimating an emotional state

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Computational Linguistics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Health & Medical Sciences (AREA)
  • Otolaryngology (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Veterinary Medicine (AREA)
  • Neurosurgery (AREA)
  • Psychology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Vascular Medicine (AREA)
  • Animal Behavior & Ethology (AREA)
  • Biophysics (AREA)
  • Public Health (AREA)
  • Physiology (AREA)
  • Neurology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Quality & Reliability (AREA)
  • Child & Adolescent Psychology (AREA)
  • Hospice & Palliative Care (AREA)
  • Psychiatry (AREA)
  • Probability & Statistics with Applications (AREA)
  • Circuit For Audible Band Transducer (AREA)

Abstract

Provided is a skin hearing aid device based on emotion gain, belonging to the technical field of hearing aids. The device comprises a microphone, a hearing aid host, and electrode patches. The microphone may be set up independently and connected to the hearing aid host by a wire, or mounted on the host itself. The electrode patches are composite planar electrodes, one or two connected in parallel, attached to the skin surface behind the human ear and connected to the hearing aid host by wires; a power supply is built into the host. The invention has a novel structure and an ingenious conception: the skin hearing effect is gained by emotion features recognized by a computer, so the hearing effect is better and the device's receiving capability is strong, while the learning difficulty and threshold for hearing-impaired people are effectively reduced and the overall effect is remarkable.

Description

Skin hearing aid device based on emotion gain
Technical Field
The invention relates to the technical field of hearing aids, in particular to skin hearing, and specifically to a skin hearing aid technology based on emotion gain and its application.
Background
A 2018 report by the World Health Organization shows that there are about 360 million people with hearing impairment worldwide, accounting for 5.3% of the global population. In China, the hearing-impaired population currently stands at 27.8 million, 1.679% of the national population, with roughly 30,000 new cases added each year for various reasons.
Hearing impairment refers to organic or functional abnormalities in the sound-conducting and sound-sensing structures of the auditory system, or in the neural centres at its various levels, resulting in some degree of hearing decline, commonly called deafness. Strictly, only severe impairment is called deafness, in which the patient cannot hear any speech in either ear; cases where the loss is less severe are called impaired hearing. Human hearing begins to decline irreversibly, year by year, once growth and development are complete, and external factors such as drugs, noisy environments, and trauma can accelerate the loss. Most adults cannot avoid the hearing loss caused by long phone calls, long hours wearing earphones or earplugs, and prolonged exposure to noisy environments at work. Since no effective method currently exists to restore or improve impaired hearing, the population with mild and moderate hearing impairment in China is growing very quickly, far exceeding the statistics.
To improve this situation, on the one hand good hearing habits should be actively cultivated to slow the rate of hearing loss; on the other hand, the loss of severely and profoundly hearing-impaired people can be compensated by a hearing aid device. For the latter there are two existing approaches: the hearing aid and the cochlear implant. A hearing aid can partly compensate the loss of people with severe or milder impairment and is relatively cheap, but existing hearing aids carry noise that is difficult to eliminate, which can cause further hearing loss even as it compensates, and they are completely ineffective for totally deaf patients. A cochlear implant, fitted surgically, can restore a patient's hearing to a fully normal state, but it is expensive, the surgical success rate is low, and the operation irreversibly destroys and replaces the patient's original hearing organ, which cannot be repaired afterwards whether or not the surgery succeeds. Research into better hearing aid schemes is therefore imperative, and skin hearing technology is one promising idea.
Research shows that human skin can sensitively perceive a variety of stimuli such as pressure, vibration, and electricity, and the nervous system sends these stimuli from the skin to the brain for recognition and processing. Since the ear's hearing likewise works by amplifying sound through a series of vibrations, converting it into an electrical signal, and transmitting it to the brain through the nervous system, external sound can in principle be heard entirely through the skin. This is very good news for people who have lost hearing completely through damage to part of the auditory organ. A hearing aid device built on skin hearing technology was already disclosed in the earlier patent "Variable-pressure skin hearing aid" (application number 200410026265.5), through which a totally deaf person can obtain hearing.
However, the living environment is full of sounds: besides the speech people converse in, there are all kinds of background noises. Hearing people undergo long training and learning in which the brain learns to analyse and classify sounds, distinguishing human voices from the various environmental noises, then focuses attention on the kind of sound it wants to hear and ignores the rest, so the sound world in a hearing person's ears is full of layering. For a person with congenital total hearing loss, this is entirely unfamiliar territory: directly facing a surge of every kind of sound, and having lost the efficient learning ability of early childhood, such a person needs a long time to train and learn. The scheme proposed in that patent is therefore unfriendly for adults, with high difficulty of use and a high threshold.
The basis of human-computer emotional interaction comes from computer applications: through algorithms and large amounts of learning, a computer simulates the capabilities of "artificial psychology" and "artificial emotion", builds emotion models by analysing the model features of various emotions, performs computer emotion recognition, and thereby completes emotional interaction. Research by experts at home and abroad has shown that this is feasible, and on this basis Chinese researchers have further applied the theory to the understanding and synthesis of emotional speech, advancing human-computer emotional interaction by a large step. On the one hand, a computer can accurately judge a person's emotion through speech recognition; on the other, it can convert text into speech, or other signals, carrying emotion.
Applying this technology to a hearing aid device can greatly improve the hearing effect experienced by hearing-impaired people, with an obvious hearing-compensation effect.
Disclosure of Invention
In order to solve the defects in the prior art, the invention provides a skin hearing aid device based on emotion gain, which is used for providing hearing compensation for hearing impaired people.
The technical problems to be solved by the invention are realized by the following technical scheme:
the invention discloses a skin hearing aid device based on emotion gain, which comprises a microphone, an electrode patch and a hearing aid host connected with the microphone and the electrode patch.
In the invention, the information processing module separates the preprocessed digital signals and processes the human voice and the background sound separately; the human voice, after adjustment by the emotion adding module, is recombined with the background sound and sent to the sound information output module. The information processing module is connected with the emotion adding module: after compiling the human-voice digital signal into an easily recognisable text digital signal, it adjusts syllables and speech rate through the emotion adding module and outputs a digital signal that can be expressed as analogue speech. Through this re-reading, discordant components in the speech, such as dialect, are removed and the speech is converted into analogue speech that is easier to accept and recognise, generally Mandarin, which aids understanding and lowers the learning difficulty and threshold.
Further, the emotion analysis module is connected with an emotion feature database. It extracts the emotion features in the digital signal produced by the digitizing module, compares them with the emotion feature data, determines the emotion contained in the speech those features represent, and sends that emotion to the emotion adding module, which adds the corresponding emotion features to the digital signal processed by the information processing module.
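The patent does not disclose how the comparison against the emotion feature database is performed. A minimal illustrative sketch, assuming the database stores reference points in the PAD (pleasure-activation-dominance) space mentioned later in the description, is a nearest-neighbour lookup; all values below are hypothetical.

```python
import math

# Hypothetical PAD reference points; the patent's actual emotion feature
# database contents are not disclosed.
EMOTION_DB = {
    "happy":   (0.8, 0.6, 0.4),
    "angry":   (-0.5, 0.8, 0.3),
    "sad":     (-0.6, -0.4, -0.3),
    "neutral": (0.0, 0.0, 0.0),
}

def classify_emotion(pad):
    """Return the database emotion nearest (Euclidean) to the extracted PAD vector."""
    return min(EMOTION_DB, key=lambda name: math.dist(pad, EMOTION_DB[name]))

print(classify_emotion((0.7, 0.5, 0.5)))  # -> happy
```

A real implementation would use trained statistical models rather than fixed points, but the lookup shape (extract features, compare, emit an emotion label for the adding module) matches the module description.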
In the invention, the sound information output module comprises filters and boosters; the information processing module, filter, booster, and electrode patch are connected in sequence, and the filters and boosters are arranged in several parallel groups.
Further, the filters are multichannel band-pass filters; each group of filters selects a different centre frequency, in the range 15 Hz to 15 kHz.
Further, the filters and boosters preferably form 48 groups, outputting to the electrode patch through 48 channels at different frequencies.
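The filter-plus-booster bank described above can be sketched as follows. The patent only specifies 48 channels spanning 15 Hz to 15 kHz; the sampling rate, logarithmic spacing, filter order, and bandwidth factor `q` below are all assumptions for illustration.

```python
import numpy as np
from scipy.signal import butter, sosfilt

FS = 44100                     # sampling rate (assumed; not given in the patent)
N_CHANNELS = 48
F_LO, F_HI = 15.0, 15000.0     # range stated in the patent

# Log-spaced centre frequencies covering 15 Hz - 15 kHz (spacing is an assumption).
centres = np.geomspace(F_LO, F_HI, N_CHANNELS)

def make_bank(q=4.0):
    """One band-pass section per channel; q sets the fractional bandwidth."""
    bank = []
    for fc in centres:
        lo = fc / (2 ** (1 / (2 * q)))
        hi = min(fc * (2 ** (1 / (2 * q))), FS / 2 * 0.99)  # stay below Nyquist
        bank.append(butter(2, [lo, hi], btype="bandpass", fs=FS, output="sos"))
    return bank

def filter_and_boost(signal, bank, gains):
    """Filter each channel, apply its booster gain, and sum the 48 outputs."""
    return sum(g * sosfilt(sos, signal) for g, sos in zip(gains, bank))
```

In the device the 48 channel outputs drive the electrode patch rather than being summed digitally; the summation here is only to show the parallel filter/booster structure end to end.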
In the invention, the electrode patch is connected with the hearing aid host through a wire and is attached to skin where human nerves are densely distributed.
Further, the electrode paste is attached to the skin surface layer behind the human ear.
In the invention, the sound preprocessing module comprises a denoising module and a syllable verification module.
In the invention, a power supply is arranged in the hearing aid host.
In the invention, the electrode patch adopts a composite planar electrode.
In the invention, the extraction of emotion features and the establishment of the emotion feature database are based on Chinese speech. Chinese researchers, including a group at Beihang University (Beijing University of Aeronautics and Astronautics), have established a method for extracting and modelling Chinese speech emotion features, together with the resulting database. The extraction method includes: specifying emotion feature database specifications, including speaker specifications, recording-script design specifications, audio-file naming specifications, and so on; and collecting emotion feature data, in which the affective dimensions pleasure, activation, and dominance (PAD) are evaluated, i.e. at least ten evaluators, distinct from the speakers, perform subjective PAD listening tests on the emotion feature data. The modelling method comprises: first, training a gender-recognition Support Vector Machine (SVM) on speech features selected by Fisher ratio; second, establishing emotion feature Hidden Markov Models (HMMs) separately for male and female voices, and selecting the corresponding HMM according to the SVM gender-recognition result to classify the emotion features.
Compared with the prior art, the invention has the following advantages:
1) Based on the principle of skin hearing, a hearing-impaired person can hear sound, and the emotion gain adjustment makes the sound heard more true to life;
2) The whole process converts the analogue signal output by the microphone into a digital signal, which is converted back into an analogue signal after processing and output by the electrode patch; digital signals are easier to process, so emotion-gain editing and enhancement of the sound signal is realised effectively;
3) During processing, the sound signal is separated into human voice and background sound, and only the human voice receives further gain processing, so the hearing-impaired person can identify the voice easily without losing perception of environmental sound;
4) Processing includes recognising and converting the voice; the re-reading converts speech containing discordant components such as dialect into Mandarin, which is easier to accept and recognise, aiding understanding and lowering the learning difficulty and threshold;
5) The sound information output module filters through 48 channels to cover sound signals across the whole spectrum, so the frequency range perceived by the hearing-impaired person matches that of a normal-hearing person, avoiding additional discrimination or rejection.
Thus the invention has a novel structure and an ingenious conception: by gaining the skin hearing effect with computer-recognised emotion features, it achieves a better hearing effect and strong receiving capability, while effectively reducing the learning difficulty and threshold for hearing-impaired people, with a remarkable overall effect.
Drawings
FIG. 1 is a schematic diagram of the overall structure of the present invention;
FIG. 2 is a schematic diagram of the sound preprocessing module of FIG. 1;
FIG. 3 is a schematic diagram of the information processing module of FIG. 1;
fig. 4 is a schematic diagram of the audio information output module of fig. 1.
Detailed Description
The invention is further described below in connection with the drawings and the specific preferred embodiments, but the scope of protection of the invention is not limited thereby.
A skin hearing aid device based on emotion gain, as shown in FIGS. 1-4, includes a microphone, a hearing aid host, and electrode patches. The microphone may be set up independently, connected to the hearing aid host by a wire, or mounted on the host itself. The electrode patches are composite planar electrodes, one or two connected in parallel, attached to the skin surface behind the human ear and connected to the hearing aid host by wires; a power supply is built into the host.
The hearing aid host comprises a sound preprocessing module, an emotion analysis module, an emotion adding module, an information processing module, and a sound information output module. The sound preprocessing module is connected with the microphone and comprises a digitizing module, a denoising module, and a sound correction module: the digitizing module converts the analogue signal output by the microphone into a digital signal, the denoising module removes some non-harmonic clutter so that it does not affect subsequent processing, and the sound correction module adjusts the waveform. The emotion analysis module is connected with the sound preprocessing module and the emotion adding module respectively, and with an emotion feature database. The information processing module is connected with the sound preprocessing module, the emotion adding module, and the sound information output module respectively; it processes the digital signal from the sound preprocessing module so that it is convenient for the hearing-impaired person to learn and train on, and sends it to the sound information output module. The sound information output module is connected with the information processing module and the electrode patch respectively; it converts the digital signal into an analogue signal and transmits it to the person's skin through the electrode patch. The sound information output module comprises filters and boosters, with the information processing module, filter, booster, and electrode patch connected in sequence and the filters and boosters arranged in parallel groups. The filters are multichannel band-pass filters; each group selects a different centre frequency in the range 15 Hz to 15 kHz. Preferably there are 48 groups of filters and boosters, outputting to the electrode patch through 48 channels at different frequencies.
The processing the information processing module performs on the digital signal from the sound preprocessing module comprises five links carried out in sequence: separation, recognition, addition, conversion, and merging. The separation link separates the input sound digital signal from the sound preprocessing module into human voice and background sound. The recognition link recognises the human-voice digital signal and converts it into a text digital signal. The addition link receives the specific emotion features sent by the emotion adding module and attaches them to the text digital signal. The conversion link re-reads the text digital signal into a digital signal of analogue speech, generally Mandarin, adjusting syllables and speech rate according to the specific emotion features, and outputs an analogue-voice digital signal carrying the specific emotion. The merging link merges this signal with the background-sound digital signal and outputs the processed sound digital signal.
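The five-link sequence can be sketched as a pipeline of stubbed stages. The patent discloses only the data flow, not the algorithms, so every function body below is a labelled placeholder (the 0.8/0.2 split and the dummy synthesis output are invented purely so the sketch runs).

```python
from dataclasses import dataclass

@dataclass
class Frame:
    samples: list                     # digitised audio samples

def separate(mixed):
    """Separation link: split into (voice, background). Placeholder split."""
    voice = Frame([s * 0.8 for s in mixed.samples])
    background = Frame([s * 0.2 for s in mixed.samples])
    return voice, background

def recognise(voice):
    """Recognition link: voice digital signal -> text digital signal. Stub."""
    return "<recognised text>"

def add_emotion(text, emotion):
    """Addition link: attach the emotion chosen by the analysis module."""
    return (text, emotion)

def convert(tagged):
    """Conversion link: re-synthesise Mandarin speech, adjusting syllables
    and speech rate for the attached emotion. Stub output."""
    _text, _emotion = tagged
    return Frame([0.0])

def merge(voice, background):
    """Merging link: recombine gained voice with the untouched background."""
    n = max(len(voice.samples), len(background.samples))
    v = voice.samples + [0.0] * (n - len(voice.samples))
    b = background.samples + [0.0] * (n - len(background.samples))
    return Frame([x + y for x, y in zip(v, b)])

def process(mixed, emotion):
    """Run the five links in the order the description gives them."""
    voice, background = separate(mixed)
    tagged = add_emotion(recognise(voice), emotion)
    return merge(convert(tagged), background)
```

The sketch is only meant to make the stage ordering and the voice/background split concrete; each stub would be replaced by a real source-separation, speech-recognition, or synthesis component.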
When the invention is applied to daily hearing assistance, the microphone receives sound, converts it into an analogue signal, and inputs it to the hearing aid host. The digitizing module converts the analogue sound signal into a digital one, which passes through denoising and correction to the information processing module; meanwhile, the emotion analysis module extracts the emotion features in the sound digital signal, compares them with the emotion feature database, determines the relevant emotion, and sends the corresponding emotion features to the emotion adding module. The information processing module, on receiving the sound digital signal, separates the human voice from the background sound; after the emotion features from the emotion adding module are attached, the voice is re-read into a digital signal of analogue speech, merged with the background-sound digital signal, and the processed sound digital signal is output to the sound information output module. That module filters the signal through the 48 filters of different frequencies, amplifies it with the boosters, and outputs it to the electrode patch. The electrodes stimulate the human skin, delivering electrical stimulation that the nerves convert into neural pulse signals, which travel through the nervous system and project onto the auditory cortex of the brain, producing the sensation of hearing.
In the invention, what the user hears is not the original voice but an electronic voice processed by AI, from which components such as dialect can easily be removed and converted into Mandarin, which is easier to accept and recognise. For people with congenital total hearing loss, this effectively reduces the learning difficulty and threshold; people who lost their hearing after acquiring speech have already learned spoken language, so they need only a shorter period of adaptation. In summary, the device can effectively solve the problem of total hearing loss, allowing sounds from the world to be heard accurately through the hearing aid device.
Therefore, combining the construction and steps above, the emotion-gain-based skin hearing device disclosed by the invention gains the skin hearing effect through computer-recognised emotion features, achieving a better hearing effect and strong receiving capability across sound frequencies, while effectively reducing the learning difficulty and threshold for hearing-impaired people, with a remarkable overall effect.

Claims (8)

1. A skin hearing aid device based on emotion gain, comprising a microphone, an electrode patch, and a hearing aid host connecting the microphone and the electrode patch, characterized in that: the hearing aid host comprises a sound preprocessing module, an emotion analysis module, an emotion adding module, an information processing module, and a sound information output module; the sound preprocessing module is connected with the microphone and comprises a digitizing module, which converts the analogue signal output by the microphone into a digital signal; the emotion analysis module is connected with the sound preprocessing module and the emotion adding module respectively; the information processing module is connected with the sound preprocessing module, the emotion adding module, and the sound information output module respectively, and processes the digital signal from the sound preprocessing module so that the hearing-impaired person can learn and train on it conveniently, sending it to the sound information output module; the sound information output module is connected with the information processing module and the electrode patch respectively, converts the digital signal into an analogue signal, and transmits it to the person's skin through the electrode patch;
the information processing module processes the digital signals from the voice preprocessing module and comprises a plurality of links which are sequentially carried out, namely separation, identification, addition, conversion and combination, wherein the separation link is used for separating the input voice digital signals of the voice preprocessing module and separating the voice and background voice, the identification link is used for identifying and converting the voice digital signals into the voice digital signals, the addition link is used for receiving the emotion characteristics sent by the emotion addition module from the voice digital signals, the conversion link is used for rereading the voice digital signals and converting the voice digital signals into Mandarin analog voice, the emotion characteristics are combined, syllables and speech speed are adjusted, the voice digital signals are output as analog voice digital signals with emotion, and the combination link is used for combining the analog voice digital signals with emotion with the background voice digital signals and outputting the voice digital signals as processed voice digital signals.
2. The emotion gain based skin hearing aid device of claim 1, wherein: the emotion analysis module is connected with an emotion feature database, extracts emotion features in the digital signals obtained by the digitizing module, compares the emotion features with emotion feature data, determines emotion contained in a language represented by the emotion features expressed by the digital signals, sends the emotion to the emotion adding module, and adds the corresponding emotion features to the digital signals processed by the information processing module through the emotion adding module.
3. The emotion gain based skin hearing aid device of claim 1, wherein: the sound information output module comprises a filter and a booster, wherein the information processing module, the filter, the booster and the electrode paste are sequentially connected, and the filter and the booster are arranged in parallel in a plurality of groups.
4. A skin hearing aid device based on emotional gain according to claim 3, characterized in that: the filters are multichannel band-pass filters, and each group of filters respectively select different center frequencies, and the selection range is 15 Hz-15 kHz.
5. The emotion-gain-based skin hearing aid device of claim 4, wherein: there are 48 groups of filters and boosters which, at their different frequencies, output to the electrode patches through 48 channels.
6. The emotion-gain-based skin hearing aid device of claim 1, wherein: the electrode patches are connected to the hearing-aid host by leads and are attached to areas of human skin where nerves are densely distributed.
7. The emotion-gain-based skin hearing aid device of claim 5, wherein: the electrode patches are attached to the surface of the skin behind the human ear.
8. The emotion gain based skin hearing aid device of claim 1, wherein: the sound preprocessing module comprises a denoising module and a syllable verification module.
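The five sequential links in claim 1 (separation, recognition, emotion addition, conversion, combination) can be sketched as a toy pipeline. Everything below is illustrative: `separate`, `recognize`, and `convert` are placeholders standing in for real source-separation, speech-recognition, and speech-synthesis components, which the patent does not specify.

```python
import numpy as np

def separate(mixed):
    """Separation link: split the digitized input into speech and
    background components. Placeholder rule: samples at or above the
    median magnitude count as speech."""
    mask = np.abs(mixed) >= np.median(np.abs(mixed))
    return np.where(mask, mixed, 0.0), np.where(mask, 0.0, mixed)

def recognize(speech):
    """Recognition link: stand-in for converting speech to text,
    here just one symbolic token per sample."""
    return np.sign(speech)

def add_emotion(tokens, emotion_gain):
    """Addition link: weight the recognized content by a gain supplied
    by the emotion-adding module (hypothetical representation)."""
    return tokens * emotion_gain

def convert(tagged, speech):
    """Conversion link: re-synthesize 'simulated Mandarin speech' by
    re-applying the original envelope with the emotion weighting."""
    return tagged * np.abs(speech)

def combine(emotional_speech, background):
    """Combination link: merge emotional speech with the background."""
    return emotional_speech + background

def pipeline(mixed, emotion_gain=1.5):
    speech, background = separate(mixed)
    emotional = convert(add_emotion(recognize(speech), emotion_gain), speech)
    return combine(emotional, background)

x = np.array([0.9, -0.1, 0.8, 0.05, -0.7])
print(pipeline(x).shape)  # (5,)
```

The separation step here is lossless (`speech + background` reconstructs the input), so the combination link outputs a full signal in which only the speech portion has been re-weighted.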
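Claim 2's emotion-analysis step, extracting emotion features and comparing them against an emotion feature database, can be read as a nearest-neighbor lookup. The two features (RMS energy and zero-crossing rate) and the reference vectors below are invented purely for illustration.

```python
import numpy as np

# Hypothetical emotion feature database: emotion -> reference feature vector.
EMOTION_DB = {
    "neutral": np.array([0.3, 0.5]),
    "happy":   np.array([0.8, 0.9]),
    "sad":     np.array([0.1, 0.2]),
}

def extract_features(signal):
    """Toy feature vector: RMS energy and zero-crossing rate."""
    rms = float(np.sqrt(np.mean(signal ** 2)))
    zcr = float(np.mean(np.abs(np.diff(np.sign(signal))) > 0))
    return np.array([rms, zcr])

def classify_emotion(signal):
    """Return the database emotion whose reference features are closest."""
    feats = extract_features(signal)
    return min(EMOTION_DB, key=lambda e: np.linalg.norm(feats - EMOTION_DB[e]))

print(classify_emotion(np.full(16, 0.05)))  # quiet, flat signal -> "sad"
```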
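Claims 3 to 5 describe 48 parallel filter/booster groups, each a band-pass filter with a distinct center frequency in the 15 Hz to 15 kHz range. A sketch using SciPy Butterworth band-pass sections follows; the logarithmic spacing of the centers and the bandwidth (center/4) are assumptions, since the patent only requires that the center frequencies differ.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 44100          # assumed sampling rate
N_CHANNELS = 48     # claim 5: 48 filter/booster groups

# Log-spaced center frequencies over the claimed 15 Hz - 15 kHz range.
centers = np.logspace(np.log10(15.0), np.log10(15000.0), N_CHANNELS)

def make_bandpass(fc, fs=FS, q=4.0, order=2):
    """One band-pass section around fc with bandwidth fc/q (assumed Q)."""
    low = max(fc * (1.0 - 1.0 / (2.0 * q)), 1.0)
    high = min(fc * (1.0 + 1.0 / (2.0 * q)), fs / 2.0 - 1.0)
    return butter(order, [low, high], btype="bandpass", fs=fs, output="sos")

FILTER_BANK = [make_bandpass(fc) for fc in centers]

def split_channels(signal):
    """Produce one filtered signal per electrode channel."""
    return np.stack([sosfiltfilt(sos, signal) for sos in FILTER_BANK])

t = np.arange(2048) / FS
tone = np.sin(2.0 * np.pi * 1000.0 * t)      # 1 kHz test tone
channels = split_channels(tone)
print(channels.shape)  # (48, 2048)
```

Most of the tone's energy survives only in the channel whose center frequency is nearest 1 kHz; the other channels attenuate it, which is how such a bank distributes frequency content across the 48 electrode channels.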
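Claim 8 names two sub-modules of the preprocessing stage: denoising and syllable verification. A toy rendering might pair a simple noise gate with an energy-burst counter; the threshold, frame size, and burst heuristic are all invented for illustration, not taken from the patent.

```python
import numpy as np

def denoise(signal, threshold=0.05):
    """Denoising module, sketched as a noise gate: zero out samples
    whose magnitude falls below the threshold."""
    return np.where(np.abs(signal) > threshold, signal, 0.0)

def count_syllables(signal, frame=4, energy_floor=0.05):
    """Syllable-verification module, sketched as counting bursts of
    frame energy: each rise from silence to activity counts as one."""
    usable = signal[: len(signal) // frame * frame].reshape(-1, frame)
    energy = np.sqrt(np.mean(usable ** 2, axis=1))
    active = (energy > energy_floor).astype(int)
    return int(np.sum(np.diff(active) == 1) + active[0])

x = np.concatenate([np.full(8, 0.5), np.zeros(8), np.full(8, 0.4), np.zeros(8)])
print(count_syllables(denoise(x)))  # 2
```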
CN202111358689.1A 2021-11-17 2021-11-17 Skin hearing aid device based on emotion gain Active CN114040308B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111358689.1A CN114040308B (en) 2021-11-17 2021-11-17 Skin hearing aid device based on emotion gain

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111358689.1A CN114040308B (en) 2021-11-17 2021-11-17 Skin hearing aid device based on emotion gain

Publications (2)

Publication Number Publication Date
CN114040308A CN114040308A (en) 2022-02-11
CN114040308B true CN114040308B (en) 2023-06-30

Family

ID=80144656

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111358689.1A Active CN114040308B (en) 2021-11-17 2021-11-17 Skin hearing aid device based on emotion gain

Country Status (1)

Country Link
CN (1) CN114040308B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6985594B1 (en) * 1999-06-15 2006-01-10 Hearing Enhancement Co., Llc. Voice-to-remaining audio (VRA) interactive hearing aid and auxiliary equipment
CN1748250A (en) * 2002-12-11 2006-03-15 Softmax, Inc. System and method for speech processing using independent component analysis under stability constraints
JP2008122729A (en) * 2006-11-14 2008-05-29 Sony Corp Noise reducing device, noise reducing method, noise reducing program, and noise reducing audio outputting device
KR20180125393A (en) * 2017-05-15 2018-11-23 한국전기연구원 Environment feature extract method and hearing aid operation method using thereof
CN212381404U (en) * 2020-07-07 2021-01-19 昆山快乐岛运动电子科技有限公司 Glasses with hearing aid function

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5256119B2 (en) * 2008-05-27 2013-08-07 パナソニック株式会社 Hearing aid, hearing aid processing method and integrated circuit used for hearing aid
US7843337B2 (en) * 2009-03-09 2010-11-30 Panasonic Corporation Hearing aid
CN102222500A (en) * 2011-05-11 2011-10-19 北京航空航天大学 Extracting method and modeling method for Chinese speech emotion combining emotion points
CN104053107B (en) * 2014-06-06 2018-06-05 Chongqing University Sound separation and localization method in noisy environments
CN105310826B (en) * 2015-03-12 2017-10-24 Wang Yong Skin hearing device and hearing method thereof
CN104902423A (en) * 2015-05-04 2015-09-09 Shanghai Jiao Tong University Implantable hearing aid device and implementation method thereof
WO2018071630A1 (en) * 2016-10-12 2018-04-19 Elwha Llc Multi-factor control of ear stimulation
EP3373603B1 (en) * 2017-03-09 2020-07-08 Oticon A/s A hearing device comprising a wireless receiver of sound
DE102017207581A1 (en) * 2017-05-05 2018-11-08 Sivantos Pte. Ltd. Hearing system and hearing device
CN110798789A (en) * 2018-08-03 2020-02-14 Zhang Weiming Hearing aid and method of use
EP3641344B1 (en) * 2018-10-16 2023-12-06 Sivantos Pte. Ltd. A method for operating a hearing instrument and a hearing system comprising a hearing instrument
EP3641345B1 (en) * 2018-10-16 2024-03-20 Sivantos Pte. Ltd. A method for operating a hearing instrument and a hearing system comprising a hearing instrument
CN110008481B (en) * 2019-04-10 2023-04-28 南京魔盒信息科技有限公司 Translated voice generating method, device, computer equipment and storage medium
CN112714390B (en) * 2019-11-17 2021-12-14 Jiangsu Oubai Household Articles Co., Ltd. Hearing aid based on electronic skin technology
WO2021127228A1 (en) * 2019-12-17 2021-06-24 Starkey Laboratories, Inc. Hearing assistance systems and methods for monitoring emotional state

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Research on communication app for deaf and mute people based on face emotion recognition technology; Y Tao; 2020 IEEE 2nd ICCASIT; full text *
Comparison of emotional prosody recognition between preschool hearing-impaired children using different hearing-aid modes and normal-hearing children; Zhang Fang; Journal of Audiology and Speech Pathology; full text *
Research on acoustic scene classification in intelligent digital hearing aids; Ding Yikun; China Masters' Theses Full-text Database, Engineering Science and Technology II; full text *

Also Published As

Publication number Publication date
CN114040308A (en) 2022-02-11

Similar Documents

Publication Publication Date Title
US5737719A (en) Method and apparatus for enhancement of telephonic speech signals
Vongphoe et al. Speaker recognition with temporal cues in acoustic and electric hearing
Yao et al. The application of bionic wavelet transform to speech signal processing in cochlear implants using neural network simulations
CN102973277B (en) Frequency following response signal test system
Lan et al. A novel speech-processing strategy incorporating tonal information for cochlear implants
CN115153563B (en) Mandarin hearing attention decoding method and device based on EEG
CN100502819C (en) Artificial cochlea manufacture method suitable for Chinese voice coding strategy
Alain et al. Hearing two things at once: neurophysiological indices of speech segregation and identification
CN104661700A (en) Reduction of transient sounds in hearing implants
CN102579159A (en) Electrical cochlea speech processor and processing method with signal compression in wide dynamic range
Kleczkowski et al. Lombard effect in Polish speech and its comparison in English speech
CN113178195B (en) Speaker identification method based on sound-induced electroencephalogram signals
Tinnemore et al. The recognition of time-compressed speech as a function of age in listeners with cochlear implants or normal hearing
CN114040308B (en) Skin hearing aid device based on emotion gain
Huang et al. Combination and comparison of sound coding strategies using cochlear implant simulation with mandarin speech
CN114550701A (en) Deep neural network-based Chinese electronic larynx voice conversion device and method
CN102426839B (en) Voice recognition method for deaf people
CN203001232U (en) Earplug type sound sensing assisting device
Firszt HiResolution sound processing
Barda et al. CODING AND ANALYSIS OF SPEECH IN COCHLEAR IMPLANT: A REVIEW.
Zhu et al. Important role of temporal cues in speaker identification for simulated cochlear implants
Luo et al. Vocal emotion recognition with cochlear implants.
Johnson Improving speech intelligibility without sacrificing environmental sound recognition
Harczos et al. An auditory model based vowel classification
CN113763783A (en) Deaf-mute auxiliary system and method based on brain-computer interface

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant